Exercise best anti-aging treatment, study suggests

Canadian scientists appear to have proven that you can, in fact, run away from old age.

In what could stand up as the most powerful evidence yet that exercise prolongs life, a study by McMaster University researchers in Hamilton found that signs of premature aging were halted — and even reversed — in virtually every tissue and organ in the bodies of exercised mice.

The finding, which could be a turning point in anti-aging medicine, suggests the proverbial fountain of youth won’t come from a pill or from an exotic berry from the Amazon, but rather plain old exercise.

Mice genetically altered to age faster were forced to run on treadmills for 45 minutes, three times a week.

Five months later, the mice looked as young, healthy and active as wild-type mice — mice that didn’t have the genetic mutation — while their sedentary and same-aged siblings were balding, greying and shrinking.

While the exercised mice scampered and scurried about their cages, the aging non-runners huddled in a corner, barely moving.

Not only did the treadmill-running mice look as sleek-coated, bright-eyed and bushy-tailed as wild mice, but the researchers also saw “huge recovery” in age-related damage to practically every tissue they could analyze.

The study’s beauty lies in its simplicity, says principal investigator Dr. Mark Tarnopolsky.

“What’s neat about our study is that this is something that is conceivably so simple. We purposely exercised them three times a week for 45 minutes at a moderate-intensity exercise, which is something that any human — provided they don’t have . . . (an illness) — can do.”

What’s more, the exercise did more than just protect the muscles and heart, as might have been expected.

The team found “unprecedented” anti-aging effects of endurance exercise on the brain, skin, hair, gonads (ovaries and testicles), kidneys, spleen and liver.

“Every part of the body was protected by exercise,” said Tarnopolsky, a professor of pediatrics and medicine at McMaster’s Michael G. DeGroote School of Medicine. “I think that exercise is the most potent anti-aging therapy available today and likely forever.”

Death is inevitable, “but exercise is the only way to stay healthy and free of disease for a longer period of time,” he added.

“We know that exercise has benefits even when humans start over the age of 65. But this study clearly shows that we can get closer to the fountain of youth if we start when we’re young and do moderate exercise our whole life.”

The findings, published Monday in the journal Proceedings of the National Academy of Sciences, represent “one of the most striking rescues yet reported in aging models without gene therapy or a pharmaceutical intervention,” said lead author Adeel Safdar, a senior PhD student working with Tarnopolsky.

At the crux of the experiments lies the mitochondrial hypothesis.

The mice were genetically manipulated to age twice as fast as normal because of a defect in the repair system of their mitochondria, the powerhouses or furnaces inside each cell that give our body energy.

Evidence has been mounting for decades that the older we get, the more mutations we accumulate in mitochondrial DNA. The furnaces start to break down, resulting in a steady decline in tissue and organ function, Safdar said.

That not only leads to aging, he said, but also to all the diseases associated with getting older, including cancer, Alzheimer’s, diabetes and Parkinson’s.

“And that’s really the premise upon which these mice were created,” Tarnopolsky said. “The people who created the mice essentially said, ‘Why don’t we create a mouse that has mitochondrial dysfunction and see if they age prematurely?’ ”

Lo and behold, the mice die about twice as fast as normal mice and show many features of human aging, he said, including hair loss, hearing loss, cataracts, brain atrophy or shrinkage, enlarged hearts and smaller muscles.

Epidemiological studies in humans have shown that people who are physically active or exercise regularly have fewer chronic diseases and tend to live longer — runners especially.

“So we thought this was a nice model that would allow us to really test how effective exercise really is in human aging,” Tarnopolsky said.

The experiments began when the mice were three months old — about 20 in human years — and ended five months later when the mice were eight months old — in their late 60s by the human equivalent.

The mice were randomly assigned to running three times per week or to just being sedentary in their cages.

Five months later, the sedentary mice showed major signs of aging: Their ovaries and testes were small and their hearts were enlarged, compared to the running mice, whose hearts were essentially normal. “The brain was atrophic, or small in the non-runners, but it was back to normal size in the runners,” Tarnopolsky added.

It’s not clear exactly what’s happening. But exercise is a physiological stressor — a good one — that causes the body to produce more energy.

“In our study, we saw huge recovery in mitochondrial function (in the exercised mice),” Safdar said.

Bigger studies involving more mice are needed to determine just how strong the life-extending effect of exercise might be.

But, said Tarnopolsky, the message is “it’s never too late” to start exercising.

“I really think we have to start when people are young. We have to encourage our children and people throughout their life to maintain healthy levels of physical activity.”

Ottawa Citizen

__________

Full article: http://www.ottawacitizen.com/health/Exercise+best+anti+aging+treatment+study+suggests/4324434/story.html

Doctors Mystified by Case of World’s Thinnest Woman

8,000 Calories a Day

Lizzie Velasquez is a mystery to doctors.

Texas native Lizzie Velasquez, 21, is thinner than anyone thought possible. She spends her days wolfing down burgers, fries and cake, consuming more than three times the normal calorie requirements. Doctors can’t explain how she can be so underweight and still alive.

She starts the day with corn flakes or a burrito. An hour later, 21-year-old Lizzie Velasquez is already snacking on potato chips or cookies. Soon afterwards, she eats fried chicken with French fries or a pizza. By lunchtime, Velasquez has already consumed about 4,000 calories, as much as the average road worker or miner burns in an entire day.

What 8,000 calories means

The same pattern continues throughout the rest of the day. Velasquez likes it when whatever she has on her plate is covered with plenty of melted cheese. By the time the native Texan goes to bed, the caloric value of the food she has eaten that day corresponds to about 8,000 calories.

The same procedure has repeated itself day after day for years. With that kind of diet, one would imagine the young woman would be so obese that she could barely leave her home. But the opposite is true. Lizzie Velasquez is so thin that strangers sometimes knock on the door of the family home to angrily inform her parents that they should feed their daughter properly.

Of course, these people have no way of knowing that Velasquez has probably already consumed as much food in her young life as her mother, who is twice her age.

Zero Body Fat

Nevertheless, Velasquez has no fat at all on many parts of her body — which, in her case, literally means zero fat. That is in contrast to, say, bodybuilders who claim to not have a single gram of fat on their body when they still have about 6 to 8 percent body fat.

But because Velasquez, unlike bodybuilders, has hardly any muscle mass either, she looks as if her skin were stretched directly across her skeleton. She walks on stilt-like legs and her handshake is as light as can be. But apart from her extremely low body weight — about 62 pounds (28 kilograms) at a height of 5 foot 2 inches (157 centimeters) — Lizzie Velasquez is doing well. Her condition will not deteriorate as long as she continues to eat enough.

Her metabolism is a mystery. What happens to all the energy from the fast food Lizzie consumes? Doctors don’t know the answer. All they know is that Velasquez is part of a tiny minority on the planet, probably only a handful of people, who can eat as much of whatever they want without gaining weight.

Is the mysterious anomaly a disease, a syndrome, a genetic defect — or even a gift, as Velasquez calls it? Some have already speculated that the body of this young woman from Texas could hold some sort of magical formula — a “thinness gene,” if you will — that many an overweight person would love to have.

Cheeseburgers without Regret

Human metabolism has in fact been thoroughly studied, and nutrition science yields new revelations week after week. They fill the pages of glossy women’s magazines in the form of diet tips, some of which are controversial. Nevertheless, experts still cannot offer a satisfactory explanation of why some gluttons stay thin while less fortunate people gain weight even if they are relatively modest eaters.

Velasquez has girlfriends who envy her for her ability to eat several cheeseburgers in a row without regret. But this form of recognition is relatively new. For most of her life, Velasquez was either ridiculed or pitied because of the way she looks.

Faced with such adversity, she developed a defiant sense of pride. She insists that she wouldn’t want to change anything about her condition, even if there were the prospect of a cure. “The syndrome is worth every negative experience,” she says. “I don’t want to look like everyone else.”

The mysterious ailment has never occurred in her family before. Velasquez’s younger siblings — her brother Chris and her sister Marina — have developed normally. Her parents Lupe and Rita, who are religious, allowed Lizzie to grow up with the knowledge that fate had dealt her a special hand.

This outlook is reflected in the title of a book Velasquez has written: “Lizzie Beautiful.” Not surprisingly, the book’s publication triggered media interest in the emaciated woman.

Too Strong to Die

As a young girl, Velasquez appeared as a guest on several television programs. Some audience members reacted to the hyper-thin child, with her thick glasses, the way visitors to a fair in Victorian London once must have gawked at Joseph Merrick, the severely deformed man known as the Elephant Man. Unable to bear the horrifying otherness they were witnessing, many visitors, then and now, tried to compensate for their discomfort by making absurdly vulgar remarks.

Velasquez already attracted attention at her birth. She weighed 2 pounds, 10 ounces (1,190 grams) and was only 16 inches (40 centimeters) long. “I fit into a small shoebox,” she says. Far more disconcerting was the fact that the newborn had no fatty tissue at all. Her arteries were clearly visible under her skin, and her head resembled that of a crudely carved wooden doll.

Doctors did not think that the little girl would survive. But then, to everyone’s surprise, it turned out that all of her internal organs — lungs, heart, liver and intestines — were fully functional. Apparently Velasquez was too strong to die.

Surprising the Doctors

A detective-like search for the essence of her mysterious ailment began. But the effort was in vain. Doctors couldn’t figure out what the girl lacked.

They told the parents that their daughter would never be able to walk or talk. When Velasquez was four, doctors discovered that she was blind in her right eye. Her vision was also significantly restricted in her left eye.

But Lizzie could walk — and talk. And she did grow. The only problem was that she was unable to gain any weight. The taller she became, the more emaciated she looked.

Lacking answers, the doctors had only one piece of advice for the parents: “Keep an eye on your daughter, and get in touch with us if anything seems strange.”

But everything about her was already strange.

Too Thin to Be Alive

When Lizzie was 13, her mother wrote an account of her daughter’s condition in a medical newsletter, which attracted the attention of Abhimanyu Garg at the University of Texas Southwestern Medical Center in Dallas, who contacted the Velasquez family. Garg, an internist, specializes in the study of diseases relating to human metabolism. Since then, he has paid regular visits to the Velasquez home, keeping track of Lizzie Velasquez’s progress from a medical perspective. 

Garg examined the then-adolescent more extensively than any other doctor before him. Velasquez’s bone density was measured using a scanning method called dual-energy X-ray absorptiometry. Garg performed a biochemical analysis of her metabolism and examined her entire body using magnetic resonance imaging. The results showed that Lizzie Velasquez is surprisingly healthy for a young woman who in theory is too thin to be alive.

A body mass index (BMI) value of 20 to 25 is considered normal. Someone with a BMI of less than 16 is considered critically underweight. Velasquez has a BMI of 10.9.
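As a rough check, BMI is body weight in kilograms divided by the square of height in meters. Plugging in the rounded figures quoted earlier (28 kilograms, 1.57 meters) gives a value in the same range as the reported 10.9; the small gap presumably reflects rounding in the unit conversions:

\[ \mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2} = \frac{28}{1.57^2} \approx 11.4 \]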

Normal Development

Garg was the first to come up with a name for the strange disorder, calling it neonatal progeroid syndrome (NPS). It is an extremely rare condition that was first described in the mid-1970s by Thomas Rautenstrauch, a German pediatrician.

Rautenstrauch had reported on babies that were severely underweight and had prematurely aged faces, beak-like noses, thin hair and growth disorders. Longer-term observation of the small, horribly disfigured patients was often impossible, because most died in infancy. The few who did grow older soon exhibited a pronounced mental deficiency.

Even though the external symptoms of NPS apply to Velasquez, her brain has developed normally. And at 5 foot 2 inches, she isn’t even particularly short for a woman of Mexican descent.

Taste for Junk Food

Estimates of the number of NPS cases worldwide range from 30 to 60. Velasquez is unusual even within this small group. There are only two other known cases of females in whom the condition has taken a similarly atypical course. Coincidentally, one of them lives in Austin, like Velasquez, and is about 14 years old. The second is a woman in her 30s who lives in Great Britain. Unlike Velasquez, these two women have decided to avoid the public eye.

Probably no other scientist has illuminated the rare syndrome as thoroughly as Abhimanyu Garg. He believes that the disease is genetically determined, although he is unable to name a specific gene that’s involved. Garg also has no hopes for a cure. He doesn’t even know what advice to give a patient like Velasquez to keep herself reasonably healthy.

Meanwhile, her only option is to keep on feasting. “I’m extremely picky when it comes to food,” Velasquez confesses. She never eats salad and doesn’t touch fruit, either. Her eating behavior corresponds to the clichéd image of the US teenager who eats nothing but junk food.

“Thank God I can get away with it,” she says, neglecting to mention the fact that her body, like anyone else’s, also suffers from the effects of unhealthy eating. For example, she recently had to stop drinking soft drinks because her blood sugar levels had become so high.

‘Make Sure She Eats’

Velasquez also largely dismisses the notion that it must be a burden to have to eat three to four times as much as a normal person throughout the day. But sometimes she can’t hide the fact that her high-calorie diet can be tiresome.

“She’s a typical 21-year-old who doesn’t always do what she should,” says Joe Caruso, who helps her with media inquiries. When the two travel together, he has a special responsibility. “This morning Lizzie’s mother said to me: ‘Make sure she eats,’” he says.

Caruso occasionally disappears for a minute, only to return with a piece of cake, which he hands to Lizzie as if it were medicine. She chews without pleasure — as if it really were her medication.

No one can say what actually happens to all the nutrients in Velasquez’s body. In healthy individuals, some of the nutrients would be converted into fat deposits. But the energy in the food she eats apparently does have some effect. When she doesn’t eat, she becomes tired quickly and her immune resistance declines rapidly.

Hunger Pangs

A reporter once wrote that she has to eat a meal every 15 minutes. Nonsense, says Lizzie. It is true, however, that she feels hungry far more often than normal people do. If she ignores the impulse, her energy level soon plummets. Because she has no reserves at all, a lack of food becomes quickly and seriously noticeable.

As a child she often suffered from ear infections — possibly because she wasn’t able to make people understand the importance of her frequent hunger pangs. She often spent weeks at a time in bed, worn out by an ordinary cold.

Hasn’t she ever dreamed of being strong and powerful? “I never really saw a need for that,” she claims. Her mother once sent her to a gym, hoping that Lizzie could lift weights to strengthen her muscles. Garg intervened. His patient perspires heavily during physical exercise and can easily become dehydrated.

Garg is about to repeat all the tests he has already performed on her once before. They are the helpless attempts of a man who faces a mystery he is unable to solve.

‘A Huge Gift’

Lizzie is amused by the idea that she will go down in medical history as a living miracle. She is also aware that she could become a curiosity handed from one doctor to the next. Or she could become a sort of trophy that brings fame to a particular doctor.

But none of this troubles her. After spending many years in and out of various laboratories and doctors’ offices, she no longer has much faith in anyone solving the mystery. And she repeatedly insists that she isn’t interested in a cure. “This is a gift, a huge gift, an honor,” she says, referring to her disease.

Then it’s time for her to go. She’s tired, and she feels cold.

It’s noon in her native Austin, and the temperature is 40 degrees Celsius (104 degrees Fahrenheit) in the shade.

__________

Full article and photos: http://www.spiegel.de/international/zeitgeist/0,1518,729805,00.html

The Workout Enigma

Recently, researchers in Finland made the discovery that some people’s bodies do not respond as expected to weight training, others don’t respond to endurance exercise and, in some lamentable cases, some don’t respond to either. In other words, there are those who just do not become fitter or stronger, no matter what exercise they undertake.

To reach this conclusion, the researchers enrolled 175 sedentary adults in a 21-week exercise program. Some lifted weights twice a week. Others jogged or walked. Some did both. Before and after the program, the volunteers’ fitness and muscular strength were assessed.

At the end of the 21 weeks, the results, published earlier this year in Medicine and Science in Sports and Exercise, were mixed. In the combined strength-and-endurance-exercise program, the volunteers’ physiological improvement ranged from a negative 8 percent (meaning they became 8 percent less fit) to a positive 42 percent. The results were similar in the groups that undertook only strength or only endurance training. Some improved their strength enormously, some not at all. Others became aerobically fitter but not stronger, while still others showed no improvements in either area. Only a fortunate few became both fitter and more buff. As the researchers from the University of Jyväskylä wrote with some understatement, “large individual differences” exist “in the responses to both endurance and strength training.”

Hidden away in the results of almost any study of exercise programs is the fact that some people do not respond at all, while others respond at an unusually high rate. Averaged, the results may suggest that a certain exercise program reliably will produce certain results — that jogging, say, three times a week for a month will improve VO2max (maximal oxygen capacity) or reduce blood pressure; and for almost any given group of exercisers, those results are likely to hold true. But for outliers, the impacts can be quite different. Their VO2max won’t budge, or it will fall, or it will soar.

The implications of such wide variety in response are huge. In looking at the population as a whole, writes Jamie Timmons, a professor of systems biology at the Royal Veterinary College in London, in a review article published last month in The Journal of Applied Physiology, the findings suggest that “there will be millions of humans that cannot improve their aerobic capacity or their insulin sensitivity, nor reduce their blood pressure” through standard exercise.

But what is it about one person’s body that allows it to react so vigorously to exercise, while for others the reaction is puny at best? One answer, to no one’s surprise, would seem to be genetics, although the actual mechanisms involved are complex, as a recent study by Dr. Timmons and others underscored. In that work, researchers accurately predicted who would respond most to endurance exercise training based on the expression levels of 29 different genes in their muscles before the start of the training. Those 29 genes are not necessarily directly associated with exercise response. They seem to have more to do with the development of new blood vessels in muscles; they may or may not have initiated the response to exercise. Scientists just don’t know yet.
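To make the idea concrete, here is a minimal, self-contained sketch of that kind of analysis: fitting a model that predicts training response from baseline expression of a 29-gene panel. Everything in it (the data, the underlying weights, the choice of a plain linear model) is synthetic and illustrative, not the study’s actual predictor or methodology.

```python
# Illustrative sketch only: a synthetic stand-in for predicting aerobic
# training response from baseline muscle gene expression, in the spirit
# of the 29-gene predictor described above. Not the study's model or data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
n_subjects, n_genes = 120, 29

# Synthetic baseline expression levels, one row per subject.
X = rng.normal(size=(n_subjects, n_genes))

# Hypothetical ground truth: response depends linearly on expression,
# plus noise standing in for everything the genes don't capture
# (diet, epigenetics, measurement error).
true_weights = rng.normal(size=n_genes)
y = X @ true_weights + rng.normal(scale=2.0, size=n_subjects)  # % change in VO2max

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```

With a few dozen training subjects and only 29 predictors, ordinary least squares suffices for the sketch; real expression data are of course far noisier and higher-dimensional than this toy setup.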

In other words, this issue is as intricate as the body itself. There is a collection of compelling data that indicate that about half of our aerobic capacity “is genetic,” Dr. Timmons wrote in an e-mail. “The rest may be diet,” or it could be a result of epigenetics, a complicated process in which the environment (including where you live and what you eat) affects how and when genes are activated. “Or it could be other factors,” he said. Although fewer studies have examined why people respond so variously to strength training, “we have no reason to doubt,” he said, that genetics play a similar role.

But none of this means that if you once took up jogging or weight lifting and didn’t respond, you should take to the couch. It may be that a different exercise regimen would prompt beneficial reactions from your particular genome and physiology, Dr. Timmons said. (Although scientists still have a long way to go before they can say, definitively, who needs what exercise, based on genetic and other differences.) In the meantime, Dr. Timmons stressed, even low responders should continue to sweat. Just as scientists don’t yet understand the complicated underpinnings of the body’s response to exercise, they also don’t necessarily understand the full range of exercise’s impacts. Even if you do not increase your VO2max, Dr. Timmons said, you are likely to be deriving other benefits, both big and small, from working out. Exercise does still remain, “on average,” he said, “one of the best ‘health’ treatments we have.”

Gretchen Reynolds, New York Times

__________

Full article and photo: http://well.blogs.nytimes.com/2010/11/17/phys-ed-the-workout-enigma/

Scourge of Humankind

High-profile efforts to fight malaria confront an ever-changing enemy that has evolved alongside man

Bad Air. Even the word “malaria,” from the Italian for “bad air,” tells us that the disease caused by the Plasmodium pathogen is out of the ordinary. The name “Mal-Aria” didn’t come into common medical usage until about 300 years ago, but for many more centuries “swamp fever” expressed the same thing: a serious disorder that assaulted the human body and especially the brain. Associated with bad drainage, water-logged soil and damp climate, it was the quintessential disease of location.

Location still matters a great deal in the incidence of malaria, but it is no coincidence that the most intensely malarious locations in the world today—sub-Saharan Africa, Southeast Asia, parts of South America—are also some of the poorest. This connection between poverty and malaria is undeniable, but their causal chain is problematic. Does malaria cause poverty through premature death, chronic disability and low productivity? Or does poverty itself cause the social chaos and unhealthy conditions that permit malaria to take a stranglehold on a town, region, country and even a continent? This seemingly straightforward question has been fiercely debated for a century and more.

Siblings in Cambodia take protection beneath insecticide-laced anti-mosquito netting.

Ronald Ross, who won the 1902 Nobel Prize for medicine for his demonstration that malaria is transmitted by Anopheles mosquitoes, firmly believed that malaria causes poverty. Get rid of malaria and malarious areas of the world will begin to prosper. Jeffrey Sachs, the outspoken economist who heads Columbia University’s Earth Institute, has inherited Ross’s modern mantle. Other economists (and malariologists) have not been so certain. In many parts of the world, including Britain and the U.S., malaria lost its hold only as prosperity was gradually achieved. Malaria’s disappearance was a welcome byproduct of better nutrition, schools, roads, health care and methods of agriculture.

These two interpretations translate into two differing medical approaches to the disease—vertical and horizontal. A classic vertical response was the initial campaign by the World Health Organization, in the 1950s and 1960s, to eradicate malaria through the use of the insecticide DDT. Following its abandonment, there were strong calls among international health workers to go down a “horizontal” route: If prosperity brings health along with it, Western aid for developing countries ought to be devoted to helping provide modern infrastructure.

The results of the horizontal approach have been patchy at best. Childhood mortality in sub-Saharan Africa has been staggeringly intransigent. A million deaths is the figure typically reported for malaria mortality world-wide, but that may not have much substance in reality, as investigative journalist Sonia Shah notes in “The Fever: How Malaria Has Ruled Humankind for 500,000 Years.” We don’t know how many people die from malaria, or even die with the disease. What we do know is that current inroads against malaria are piecemeal and may not be sustainable.

Now the arrival of Bill and Melinda Gates on the international health scene has changed everything, placing a renewed emphasis on the vertical approach. The founder of Microsoft admirably wanted to put some of his vast fortune back into society, and “neglected diseases” (including malaria, AIDS, and drug-resistant tuberculosis) seemed to be a good place to start. Melinda Gates, in particular, has placed malaria eradication back on the agenda, and the Gates Foundation funds research toward improved drug treatments, safer insecticides and eventually an effective vaccine. But the lessons of history should give us pause.

Back in the 1930s, thoughtful malariologists such as S.P. James, who had worked in India shortly after Ross, believed that it was ill-advised even to attempt eradication in regions with high incidence of the disease. Despite the high infant mortality that malaria causes, individuals who survive the disease acquire sufficient immunity to cope as adults. Destroying that herd immunity, even for a generation, means risking that the disease will return with a vengeance.

Doctors like James and others who advised the League of Nations Malaria Commission also appreciated how complex malaria actually is. Four different species of parasites can cause it, and each evokes a different response in its human host. More than two dozen species of Anopheles mosquitoes can transmit the parasite, and each species has its own breeding patterns and favored habitat. The consequence, as malariologist Lewis Hackett (1884-1962) wistfully wrote in the 1930s, is that malaria is “so moulded and altered by local conditions that it becomes a thousand different diseases and epidemiological puzzles.” The treatment that works best in one location may not work elsewhere—and “elsewhere” may be only a few miles away.

DDT briefly seemed to make these insights irrelevant. It formed the bedrock of the World Health Organization’s postwar campaign and achieved a good deal more than it is often given credit for. By 1963, when the funding for the DDT drive dried up amid concerns about the insecticide’s ecological effects, malaria had been eliminated from Europe, the U.S. and many other parts of the world. It had been almost eradicated from India and Sri Lanka.

In the rich countries it has stayed away—barring imported cases. In Asia, the gains quickly evaporated. India was down to about 50,000 cases when the spraying ended. By 1969 it had zoomed back to one million. Yet the WHO initiative might have failed even if it had been continued, due to the cunning adaptations of the enemy. The mosquitoes gradually grew resistant to DDT, while the parasites themselves became resistant to anti-malarial drugs such as chloroquine and atebrin.

All these issues, and many others, are brilliantly exposed in Ms. Shah’s book. She has read widely and appreciates the enormous efforts that even the modest goal of better malaria control will entail. Because she understands malaria’s history, she is also skeptical that the present eradication campaign will succeed, at least in the short term. Already some problems that bedeviled previous “vertical” efforts have raised their heads again.

Mosquitoes seem to be getting used to pyrethrum, the insecticide used to impregnate bed nets. (The nets are also not always used for the job they are designed for—having been discovered to be helpful in fishing and worth a bit of money on the black market.) Likewise, resistance to artemisinin-based treatments is beginning to be reported. Drugs are distributed but patients do not finish the course, stopping when they feel better or sharing the drugs with their friends and family. Self-medication is also a serious issue, since many drugs bought privately are fakes or contain a fraction of the medicine required. Inadequate dosing increases the probability that drug resistance will emerge in the parasites.

Supporters of a “horizontal” approach can point to even more basic problems. In many countries, the infrastructure for effective treatment is missing: Hospitals are often little more than makeshift facilities where patients come in hope or despair. The medications themselves are more expensive than most governments can afford, which is why international aid agencies and private philanthropy such as the Gates Foundation are attempting to bridge the gaps.

These groups are also, as we learn in Bill Shore’s “The Imaginations of Unreasonable Men,” backing a more ambitious approach: the development of a malaria vaccine. Mr. Shore, the well-known founder of Share Our Strength, a charity aimed at eliminating childhood hunger in the U.S., here provides an upbeat account of several American scientists researching malaria prevention. He and Sonia Shah sit at opposite ends of the international malaria problem—the vertical visionary and the horizontal historian. I prefer the sober analysis of Ms. Shah to the well-meaning hype of Mr. Shore, but Mr. Shore tells his story well, making a virtue of his own studied naïveté in order to explain current efforts for a general reader.

What comes through very clearly is that the Gates money has transformed American malaria research. “The Gates Foundation very much acts like the general contractor responsible for eradicating malaria,” Mr. Shore writes, “using a wide variety of subcontractors who specialize in vaccines, drugs, diagnostic techniques, and public health systems.” Mr. Shore’s profiles thus include several recipients of Gates’s generosity, including Amyris Biotechnologies and the Institute for OneWorld Health. Sanaria, a start-up developing a malaria vaccine (one of 35 candidates currently tracked by WHO), might not have been able to continue its work without a timely Gates grant.

All the new ideas and energy coming from America have inevitably raised hopes, around the world, that perhaps Yankee ingenuity really can crack the malaria problem in the laboratory. The scientists themselves certainly seem confident. But should they be? Most malariologists agree that malaria cannot be eliminated without a vaccine. But that does not mean that a vaccine will necessarily eliminate malaria.

The depressing fact is that both mosquito and parasite are highly adaptable, and malaria has been central to human life for (to borrow from Ms. Shah’s subtitle) 500,000 years. Our battles with it have been written into the human genome: Sickle cell anaemia and other similar disorders, for instance, are genetic evidence of how humans and malaria have evolved together. Given this history, it is optimistic to think that the disease can be easily stamped out, especially considering that whatever magic solution might be discovered will still need to be delivered via a social infrastructure that doesn’t exist in much of the world.

The danger of the latest eradication attempt is that, with even the best will in the world, private philanthropy may not have staying power. The slog will be long and hard and the results slow in developing. We must not be paralyzed by the past. Nevertheless, the international health industry needs to set attainable goals—or risk repeating the failures of the initial malaria eradication program, which stopped when the money ran dry.

In most of the world today, malaria is a disease of poverty, and any doctor knows that the best way to get rid of a disease is to attack its cause.

Dr. Bynum is professor emeritus of the history of medicine at University College London.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703326204575616452789851046.html

The Headache That Wouldn’t Go Away

“My arm — something is biting my arm!” The 26-year-old woman struggled to sit up in bed. What’s wrong? her husband asked, alarmed and suddenly wide awake. His wife didn’t seem to hear him. Suddenly, her whole body began to jerk. Although he had never seen a seizure, the young man knew immediately that this was one. After a long and terrifying minute the jerking stopped and his wife lay quiet with her eyes closed, as if she were asleep. When he couldn’t wake her, he picked up the phone and dialed 911.

In the emergency room, the young woman was sleepy and confused. She didn’t remember the seizure. All she knew was that she had felt bad earlier that day. Her shoulders ached and she had these strange shooting pains that ran up her neck, into her skull. She had a wicked headache too. Although she’d had this headache for months, it was much worse that day. At home she took a long hot bath and went to bed. She woke up in the ambulance.

She’d had no fever, she told the E.R. doctor, and hadn’t felt sick — just sore. And now she felt fine. Her arm didn’t hurt — in fact she couldn’t remember that it had ever hurt. She still had the headache, though. She didn’t smoke, didn’t drink and took no medications. She had moved to Boston from Bolivia several years earlier to get married and now had a 15-month-old daughter. Other than mild confusion, the patient’s physical exam was normal. The E.R. doctor ordered blood tests to look for evidence of infection along with a CT scan of her head to look for a tumor.

Her headache started the year before, when she was pregnant. Before that she had the occasional headache, but back when her daughter was barely a bump, she got one that simply never went away. She told the midwife, who said it wasn’t unusual to get headaches during pregnancy. But to the patient, this headache seemed different. It was like a vise on her head, just over her eyes. The pressure wasn’t excruciating, just unrelenting. She took Tylenol, and that sometimes helped, but the headache always came back. Sometimes it even woke her up in the middle of the night. Finally the midwife sent her to her primary-care doctor.

Her doctor, a young internist in her first year of training (who asked that her name not be used), was worried about this headache. It had persisted for weeks and woke her patient up from sleep — that was unusual. The doctor recalled how happy the patient was when she called her with the news of her positive pregnancy test. And now barely showing at five months, she looked like the picture of expectant health. Had she had any weakness or numbness? Was there any loss of hearing or blurry vision? No, no and no. Well, she did have blurry vision, but that’s only because she hated wearing her glasses.

The doctor focused her exam to look for any hint that this headache might be because of some kind of brain injury. She looked into the patient’s eyes with the ophthalmoscope, scanning the retina for any signs of increased pressure inside the brain. She checked the patient’s strength, coordination and reflexes. Nothing. Her exam was completely normal.

Headaches are common, accounting for some 18 million doctor visits a year. Most are completely benign, but up to 3 percent of patients with a headache severe enough to send them to the emergency room will have something worth worrying about. Doctors are taught to look for three types of potentially dangerous headaches: the first, the worst and the cursed. The first headache in someone who doesn’t have headaches; the worst headache ever in someone who does; or a headache “cursed” by symptoms like weakness or numbness. A CT scan should be considered for these possibly life-threatening headaches. This headache fit into none of these informal categories.

This patient was woken up from sleep by her headaches — that’s unusual, but the doctor knew that it was not one of the recommended reasons for getting a CT scan. And she was pregnant. A CT scan of the head requires a relatively high dose of radiation. Was the doctor’s concern great enough to risk exposing the fetus based only on this somewhat unusual symptom? Not yet. Especially since there was another possible cause of the persistent headache — eyestrain. The patient was no longer wearing her glasses; she didn’t even own a pair, she confessed. She should get new glasses, the doctor suggested, and see if wearing them helped her headache. If not, she should come back. Perhaps they would get a CT scan at that point.

It was more than a year later when the patient next came to see the doctor. She had gotten glasses, and though the headaches hadn’t stopped, they seemed a bit better. It was no longer a constant pain. She had one maybe three to four times a week, and it lasted for a few hours and went away with a little ibuprofen. Besides, she was really too busy with the baby and her job to worry too much about them.

Then, six months after that last visit, she had the middle-of-the-night seizure. In the E.R., the blood tests were all normal. Not so the CT scan. On the right side of the patient’s brain, just over the eye, there was a bright circle of white, the size of a dime. Not a brain tumor. No, the radiologist said, this was a tiny worm, a larva, the young offspring of a tapeworm. The parasite, known as Taenia solium, is transmitted through undercooked pork contaminated by tapeworm eggs. Once in the body, the eggs hatch and then attach themselves to the intestinal wall and within a few months can grow to up to 15 feet or more. A mature tapeworm will then release hundreds of eggs into the gut every day. If any of these are ingested, they can hatch, enter the bloodstream and, once there, can lodge almost anywhere in the body, although they usually end up in muscle and in the brain.

Although unusual in the United States, pork tapeworm is common in the developing world. And having these larvae in the brain, a condition known as neurocysticercosis, is the most common cause of adult epilepsy in South and Central America. The patient was probably infected with this tapeworm years earlier when she lived in Bolivia. This kind of infection can be asymptomatic for years. Once the doctors saw the CT scan, the patient was treated with an antiparasite medication for 30 days and started on antiseizure medications.

When her primary-care doctor heard that her patient had been diagnosed with neurocysticercosis, she scoured the patient’s hospital chart and then her own notes. How had she missed that? What should she have done differently? She discussed the case with several of her teachers, who assured her that she had done everything properly. One of the frustrating truths in medicine is that it is possible to do everything right and still be wrong and miss the diagnosis.

The young doctor called the patient to see how she was doing and to schedule a follow-up visit. She was disappointed, though not completely surprised, when the patient chose to see a different doctor at the clinic.

In thinking about this case, the doctor’s greatest regret is that she didn’t get the chance to follow up on her patient and find out that her headaches didn’t go away by just wearing glasses. When patients don’t come back, the temptation is to assume they’ve gotten better. That is often not the case. Sometimes they’ve just given up. Now when she has a patient she is worried about, the doctor doesn’t tell them to call her if they don’t get better. Instead she has them make an appointment to come back in a couple of weeks. “If they are all better,” the doctor told me, “they can cancel the appointment. But just in case they aren’t — the way this woman wasn’t — they can come back, and I can have another shot at the whole thing.”

Lisa Sanders is the author of “Every Patient Tells a Story: Medical Mysteries and the Art of Diagnosis.”

__________

Full article and photo: http://www.nytimes.com/2010/11/07/magazine/07FOB-Diagnosist-t.html

With hope, farewell fear

Cancer

The long struggle to understand cancer

The Emperor of All Maladies: A Biography of Cancer. By Siddhartha Mukherjee. Scribner; 541 pages.

Here’s how they look in the lymph

IT IS said that when the good burghers of Amsterdam were first presented with a rhinoceros—armoured, horned, three-toed, with a prehensile lip—spectators shook their heads in disbelief. Cancer provokes a similar bafflement. So protean are its forms and so varied its features that even specialist prognoses of aggressiveness, invasion and response to treatment have typically generated more exceptions than rules. Apparently identical cancers in two patients may behave so unlike as to appear utterly different diseases. Siddhartha Mukherjee’s “The Emperor of All Maladies” tells of the search for a “unifying theory” of cancer, the common attribute of all types of malignant cell growth that might reveal its cure.

The arc of this rich and engrossing book matches Mr Mukherjee’s personal evolution as an oncologist, beginning on the first day of his hospital residency. It seems that the diversity of this implacable, shape-shifting foe will defeat him. He is faced with dead-end discoveries, therapeutic disasters and revelations that lead only to more mysteries. But with the perceptiveness and patience of a true scientist he begins to weave these individual threads into a coherent and engrossing narrative.

The earliest references to cancer, in 2600BC or so by Imhotep, an ancient Egyptian physician, and later by Hippocrates and Galen, were simple clinical descriptions: swelling, ulceration, death. These were attributed to “humours” or blockages of bile. In the mid-19th century, a pioneering German pathologist, Rudolf Virchow, identified one feature of cancers: that they represent an uncontrolled proliferation of cells. The cause may still have been a mystery, but the search began for cures.

Surgeons brandished the knife, cutting ever wider and deeper, but recurrence of the disease suggested that the operations had been too conservative, that even more extensive procedures stood more chance of cure. As the science of pathology advanced and leukaemias and lymphomas became recognised as cancers of blood cells, it became clear that the entire bone marrow or lymphatic system could not be extirpated (although some physicians tried). Chemotherapeutic drugs were used, singly and then in increasingly lethal combinations to try to destroy all abnormal cells. X-rays and other forms of radiation were known to kill cells and these were aimed at lymph glands near and distant, on sites of secondary cancer spread in bone and lung and brain.

Practitioners in each field claimed advances. Individual lives were saved. Scientific optimism after the second world war led a leading American oncologist, Sidney Farber, to talk in 1962 of the underlying “singularity” of cancer, and to postulate a “universal cure”. Screening programmes for breast and cervical cancer promised detection at an earlier stage, with improved outcomes. In 1985 American epidemiologists conducted a review of the benefits of these advances in diagnosis and treatment. In the previous year there had been 211 deaths and 448 new cancer cases diagnosed for every 100,000 Americans. When these were compared with the figures for 1962, it was evident that this war was not being won; cancer-related deaths had increased by 8.7%. Much of this could be traced back to the lung-cancer epidemic that had followed the surge in smoking in the 1950s, but the message was clear. More needed to be understood about cancer’s causes before true advances could be made on the curative front.
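A quick back-calculation from the figures quoted here shows what that 8.7% rise implies about the 1962 baseline, assuming both rates were computed on the same per-100,000 basis:

\[ \text{1962 rate} \approx \frac{211}{1.087} \approx 194 \ \text{cancer deaths per 100,000} \]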

Scientists had not been idle, but the significance of their labours remained obscure. A virus had been found to cause a cancer in chickens. A rare eye tumour sometimes manifested itself in members of the same family. Certain occupations seemed to breed malignancy; chimney sweeps historically got cancer of the scrotum, dye manufacturers bladder cancer and wartime shipyard workers handling asbestos died from aggressive tumours of the membrane lining the chest. Radium, that source of X-rays used to treat some cancers, had induced fatal malignancies in Marie Curie and her fellow researchers. There was the explicit association between cigarettes and lung cancer, while the newest disease, AIDS, sometimes appeared as multiple malignant tumours called Kaposi’s sarcoma. What common process could be understood from these apparently unconnectable discoveries?

It was time to return to fundamentals. Instead of replicating to an organised plan and forming normal tissue, cancer cells multiply madly. They secrete compounds that induce a proliferation of blood vessels that feed their growth and allow them to infiltrate the walls of adjacent organs. These attributes are genetically programmed functions of normal cells that are intended to combat infection or repair injury. They are controlled by molecular triggers that switch on and off on demand. The common aspect of the apparently disparate causes of cancer is that they induced genetic damage, locking these switches full on. Chaotically multiplying, the abnormal cells mutate, evolving myriad new characteristics: resistance to chemotherapy or an ability to establish colonies in distant parts of the body. The unifying theory had been found.

No longer is the topography of cancer formless flesh, unbridled and invading, but a complex scaffold of cancer DNA that affords binding sites for molecular therapies to block or reset aberrant cellular switches. Twenty-four new drugs are already in use, targeting specific mutations of lung, breast, colon, prostate and blood-cell malignancies. Researchers have hundreds of others in trial. Surgery, radiotherapy and chemotherapy treatments advance constantly.

Cancer’s endless mutability, its ruthless adaptation to survive, is being matched by resourcefulness. The epitaph of the emperor of maladies has not been written quite yet, but his all-conquering domain is in perceptible retreat.

__________

Full article and photo: http://www.economist.com/node/17413995

Hospitals Have Hope in Dutch ‘Search and Destroy’ Strategy

Combating Deadly Bacteria

An electron micrograph image of clumps of methicillin-resistant Staphylococcus aureus bacteria, commonly referred to by its acronym, MRSA.

Every day, several people die in German hospitals after being infected with bacteria resistant to most antibiotics. Though the threat is growing, a strategy long-used in the Netherlands is catching on and raising hopes.

When Germans are admitted into Dutch hospitals, they are usually surprised to learn that they will be placed under quarantine. Doctors and nurses will only approach them after donning protective gowns, gloves and surgical masks.

In the Netherlands, Germans are considered an infection risk because their hospitals, nursing homes, rehabilitation centers and dialysis stations back home are full of so-called “multiresistant pathogens” — in other words, bacteria that have grown resistant to almost all antibiotics.

One Dangerous Bacterium

In particular, the “killer bug” MRSA — short for methicillin-resistant Staphylococcus aureus — has become almost commonplace in German hospitals, where it infects one in 70 patients in the average intensive care unit (ICU). And while MRSA infections are starting to level off — albeit at a high level — there has been a marked increase in so-called ESBL-producing intestinal bacteria (with ESBL being an abbreviation of “extended spectrum beta-lactamase”). These bacteria produce an enzyme that can destroy penicillins, cephalosporins and other antibiotics. Indeed, the Robert Koch Institute, Germany’s leading institution for disease control and prevention, estimates that the country sees at least four unnecessary deaths every day as a result of infections acquired in hospitals.

A Life-Threatening Scratch

“At first it was only a little scratch, here, on my lower leg,” says Emma P., 83, who lives near the northwestern German city of Münster. But that seemingly harmless injury was just the beginning of an ordeal that would last for months.

After being admitted to the hospital with a broken leg, P. scraped her ankle on her hospital roommate’s walker. Initially, it didn’t seem like anything serious. “The nurse just stuck a bandage on it,” P. explains.

But the open wound refused to close. Instead, it continued to grow until it had eaten its way deeper into the tissue and halfway around her leg. “At a certain point,” P. says, “it had become a real hole.”

A swab test eventually revealed that MRSA had gotten into the wound. Emma P.’s scratch could now mean death as a result of blood poisoning.

Fears of a ‘Post-Antibiotic Era’

These days, standard antibiotics are almost completely ineffective against pathogens like MRSA. And, to make matters worse, pharmaceutical companies don’t even have any new ones in development. Since new antibiotics must be used in moderation so as to slow the emergence of resistant strains, drug makers have few incentives to develop them. Likewise, the World Health Organization (WHO) is already warning that, if we don’t come up with new ways to fight these kinds of infectious diseases, we might just enter into a “post-antibiotic era” in which they can’t be treated at all.

Although this is an admittedly nightmarish scenario, there have been some studies and projects suggesting that these bacteria can be successfully combatted. But, to get there, we have to develop the right strategy.

“First, we have to create an awareness of the problem,” says Petra Gastmeier, who heads the Institute of Hygiene and Environmental Medicine at Berlin’s Charité Hospital. “Otherwise, a doctor who has worked at a hospital for five years will just think the infection level at that hospital is normal.”

For years, regular and thorough hand disinfection has been regarded as a miracle weapon against every type of hospital infection. But this practice is no longer enough by itself. Indeed, the daily hospital routine has become so complex that an ICU nurse caring for three patients would have to wash his or her hands an unfeasible 150 times a day just to have a good chance of preventing the transmission of multiresistant bacteria.

And then there are other issues. As Matthias Schrappe, director of the University of Bonn’s Institute for Patient Safety, explains, “multiresistant bacteria are also transmitted outside the hospital and brought back into the clinical setting from there. Anyone who hopes to successfully combat these pathogens has to take this into account.”

Holland’s ‘Search and Destroy’ Technique

Holland’s “search and destroy” strategy for specifically fighting MRSA bacteria has precisely this in mind. Every at-risk patient — for example, anyone who has recently been in a hospital — is first placed in quarantine until the results of a nasal swab test indicate that he or she is MRSA-free. Those found carrying the dangerous bacterium, on the other hand, are kept in isolation and treated until the pathogen can no longer be detected.

Emma P. was also quarantined as soon as she tested positive for the bacterium. “I was allowed to have visitors,” she says, “but they all had to wear masks.” And, she adds, most of the time she was alone, “without a soul in sight.”

As P. explains, her recovery was not a pleasant one. Every day, her wound would undergo a cleaning that involved scraping it out with a sharp-edged spoon and using a vacuum dressing to draw out the fluids in her wound. Every day, an ointment containing one of the last few antibiotics to be effective against MRSA was applied inside her nose, where the bacteria tend to accumulate. And, every day, she had to wash her body and her hair with disinfectant soap.

Still, the effort paid off. After a long three weeks, the MRSA pathogen could no longer be found in P.’s wound. Doctors then removed a piece of skin from her upper leg and grafted it onto the gaping hole in her lower leg. Now the wound is starting to heal.

‘Our Only Chance’

“Actively searching for the bacteria and then targeting and destroying them is our only chance,” says Alexander Friedrich, a senior physician at the Institute for Hygiene at UKM hospital in Münster, Germany. Friedrich believes that the search-and-destroy strategy used in the Netherlands should serve as a model for others. With it, the Dutch have managed to keep their MRSA rates extremely low over the last two decades. 

Five years ago, Friedrich launched EurSafety Health-Net, a joint German-Dutch project that fosters close cooperation in fighting hospital bacteria between regions on both sides of the border. Though the collaboration might teach the Germans a lot, it could also benefit the Dutch. Ron Hendrix, a microbiologist and project coordinator, lives in Enschede, five kilometers (three miles) from the German border, and knows what such proximity can mean. “When there’s a fire in Germany,” he says, “we also have a problem.”

Across the border, in the northwestern German region participating in the project, all hospitals are now required to screen at-risk patients for MRSA. As a result, it is only now possible to get a detailed view of what happens when the medical community tries to sweep a problem under the rug for too long. “Here,” says Friedrich, pointing to a map of the region. “In 2004, two patients with an MRSA strain that was new to the region were transferred from southern Germany to here.” By 2008, he recounts, the same strain of the bacterium was found in hospitals in five different regional districts. It had become uncontrollable — but it halted at the Dutch border.

“Here in Germany, patients are often transferred from one hospital to another,” Friedrich explains. “It was only after we began to closely examine this transfer network that we could finally grasp that all hospitals in the German part of the project region actually make up a more-or-less single entity.”

In addition to hospitals, Friedrich has also brought into his network nursing homes, rehabilitation clinics and, most importantly, the offices of physicians in private practice. “The whole thing can only work,” Friedrich explains, “if they continue to perform swab tests in patients after they have been treated for MRSA in the hospital and to complete the treatment that was started there.”

The Hygiene Officer

Friedrich can already report the new strategy’s initial successes. In the project region surrounding Münster, there has been a sharp drop in the number of cases of MRSA-related blood poisoning. Now, he says, the important thing is to transfer responsibility for fighting MRSA from the level of coordinators and public health departments to deep within the hospitals themselves.

“For example,” Friedrich says, “the public health departments need to occasionally send people to the hospitals to check on whether they really have a hygiene officer, whether this person isn’t also the mobile-phone, radiation and genetic-engineering manager who is also standing in the operating room from morning to night, and whether the only reason he was even appointed as the hygiene officer was because he didn’t say ‘no’ fast enough.”

Ulrich Hartenauer is the chief of anesthesiology at Münster’s EVK hospital — and the perfect example of a hygiene officer who takes his job seriously. “As the chief of anesthesiology,” he says, “I play an important role. Anyone who falls out of my good graces is going to have a rough time. I can quickly make them aware of just how limited their options are.”

To provide hospitals with incentives and a structure for implementing a successful strategy against multiresistant bacteria, Friedrich has created five seals of quality that hospitals can earn in the same way that hotels can earn one to five stars.

The first (and lowest) seal, which is awarded for instituting thorough MRSA-screening processes, is already hanging in the lobby of Hartenauer’s hospital. Now he is doing what he needs to do to earn the second seal: expanding the hospital’s screening for ESBL-producing pathogens. (The three other seals — for the training of hygiene personnel, the follow-up treatment of infected patients and the development of care networks — will only start being awarded over the next five years.)

A Different Bacterium, a Different Beast

Unfortunately, earning that second seal will be much harder than the first. “It’s now becoming clear,” says Wolfgang Witte, a division head at the Robert Koch Institute, “that the measures taken against MRSA cannot prevent the occurrence of ESBL-producing bacteria.”

Unlike MRSA, ESBL-producing bacteria do not form colonies in the nose and on the skin, but deeper within the body. And when they are in the intestine, for example, they are practically beyond the reach of doctors. What’s more, a wider range of bacterial species may be involved, including ones that can pass resistance genes among themselves.

As Witte warns, it’s “critical that we start doing something about ESBL-forming bacteria.” The most important measure will be changing the way antibiotics are used in hospitals so as to make it harder for bacteria to develop resistance to them.

Breaking Old Habits

For his part, Hartenauer is already attending a seminar called “Antibiotic Stewardship” to learn how to advise all the doctors in his hospital on how to use antibiotics in a more logical way. But many more people need to follow suit. “There are far too few experts in this field,” says Winfried Kern, head of the Center of Infectious Diseases and Travel Medicine at the University of Freiburg in southern Germany, who launched the seminar. “Doctors just keep using antibiotics in the same way they were first taught to.”

According to Hartenauer, a typical thing he needs to educate doctors about is “perioperative antibiotic prophylaxis.” The surgical practice is used to make sure that all skin bacteria entering a surgical wound are immediately killed off. “But surgeons have a different way of thinking about it,” Hartenauer says. “They say to themselves: ‘Hmm, this was a difficult operation. I removed necrotic tissue, and it took a long time. I don’t want this wound to get infected later on. So let’s just extend the prophylaxis by three days.’ Then, of course, I have to get involved and say: ‘We don’t do this anymore.’”

Still, Hartenauer knows he is fighting an uphill battle. “When it comes to antibiotic therapy, I’m interfering directly with the treatment prerogatives of a fellow physician,” he says. “That’s when I might find myself resorting to psychological tricks.”

Getting hospital management on his side can also be helpful, Hartenauer adds: “I tell them: ‘I’ll help you reduce your annual budget for antibiotics by €20,000 ($28,000).'”

__________

Full article and photos: http://www.spiegel.de/international/germany/0,1518,726781,00.html

Seeking Proof in Near-Death Claims

At 18 hospitals in the U.S. and U.K., researchers have suspended pictures, face up, from the ceilings in emergency-care areas. The reason: to test whether patients brought back to life after cardiac arrest can recall seeing the images during an out-of-body experience.

People who have these near-death experiences often describe leaving their bodies and watching themselves being resuscitated from above, but verifying such accounts is difficult. Because the images face upward, they could be seen only by someone looking down from near the ceiling.

“We’ve added these images as objective markers,” says Sam Parnia, a critical-care physician and lead investigator of the study, which hopes to include 1,500 resuscitated patients. Dr. Parnia declined to say whether any have accurately described the images so far, but says he hopes to report preliminary results next year.

The study, coordinated by Southampton University’s School of Medicine in England, is one of the latest and largest scientific efforts to understand the mystery of near-death experiences.

At least 15 million American adults say they have had a near-death experience, according to a 1997 survey—and the number is thought to be rising with increasingly sophisticated resuscitation techniques.

People often describe moving down a dark tunnel toward a bright light after a near-death experience.
__________

Dead or Alive?

An analysis of 613 near-death experiences gathered by the Near Death Research Foundation found:

  • About 75% included an out-of-body experience
  • 76% reported intense positive emotions
  • 34% described passing through a tunnel
  • 65% described encountering a bright light
  • 22% had a life review
  • 57% encountered deceased relatives or other beings

Note: Patients could report more than one sensation.

__________

In addition to floating above their bodies, people often describe moving down a dark tunnel toward a bright light, feeling intense peace and joy, reviewing life events and seeing long-deceased relatives—only to be told that it’s not time yet and land abruptly back in an ailing body.

The once-taboo topic is getting a lot of talk these days. In the new movie “Hereafter,” directed by Clint Eastwood, a French journalist is haunted by what she experienced while nearly drowning in a tsunami. A spate of new books details other cases and variations on the theme.

Yet the fundamental debate rages on: Are these glimpses of an afterlife, are they hallucinations or are they the random firings of an oxygen-starved brain?

“There are always skeptics, but there are millions of ‘experiencers’ who know what happened to them, and they don’t care what anybody else says,” says Diane Corcoran, president of the International Association for Near-Death Studies, a nonprofit group in Durham, N.C. The organization publishes the Journal of Near-Death Studies and maintains support groups in 47 states.

Dr. Corcoran, a retired Army colonel who, as a nurse in Vietnam, heard wounded soldiers talk of such experiences, says many military veterans have had near-death experiences but are particularly hesitant to talk about them for fear of being branded psychologically disturbed.

Some investigators say the most remarkable thing about near-death reports is that the core elements are the same among people of all cultures, races, religions and age groups, including children as young as 3 years old.

In his new book, “Evidence of the Afterlife,” Jeffrey Long, a radiation oncologist in Louisiana, analyzes 613 cases reported on the website of his Near Death Research Foundation and concludes there is only one plausible explanation: “that people have survived death and traveled to another dimension.”

Skeptics say there is no way to verify such anecdotal reports—and that many of the experiences can be explained by neurobiological changes in the brain as people die.

In the 1980s, British neuroscientist Susan Blackmore theorized that oxygen deprivation was to blame and noted that fighter pilots also encountered tunnel vision and hallucinations at high altitudes and speeds.

This year, a study of 52 cardiac-arrest patients in Slovenia, published in the Journal of Critical Care, found that the 21% who had near-death experiences also had high blood levels of carbon dioxide, which has been associated with visions, bright lights and out-of-body experiences.

A study of seven dying patients at George Washington University Medical Center, published in the Journal of Palliative Medicine, noted that their brainwaves showed a spurt of electrical activity just before they were pronounced dead. Lead investigator Lakhmir Chawla, an intensive-care physician, notes that the activity started in one part of the brain and spread in a cascade, and theorizes that it could give patients vivid mental sensations.

Matt Damon, left, plays a psychic in the movie ‘Hereafter,’ which explores themes of the afterlife.

Some scientists have speculated that the life review some patients experience could be due to random activation of the dying brain’s memory circuits. The sensation of moving down a tunnel could be due to long-buried birth memories suddenly retrieved. The feeling of peace could be endorphins released during extreme stress.

Other researchers say they have produced similar experiences by stimulating neurons in parts of the brain—or by giving patients ketamine, a tranquilizer and sometime party drug.

Yet researchers who have studied near-death experiences note that such experiments tend to produce only fragmentary visions and hallucinations, not the consistent, lucid and detailed accounts of events that many resuscitated patients report. One study found that people who had near-death experiences had higher blood oxygen levels than those who didn’t.

Several follow-up studies have found that people undergo profound personality changes after near-death experiences—becoming more altruistic, less materialistic, more intuitive and no longer fearing death. But some do suffer alienation from spouses or friends who don’t understand their transformation.

Other relatives understand all too well.

Raymond Moody, who coined the term near-death experience in his 1975 book “Life After Life,” explores the even stranger phenomenon of “shared death experiences” in a new book, “Glimpses of Eternity.” He recounts stories of friends, family and even medical personnel who say they also saw the light and the tunnel and accompanied the dying person partway on his or her journey. “It’s fairly common among physicians who are called to resuscitate someone they don’t know—they say they’ve seen a spirit or apparition leave the body,” says Dr. Moody.

Meanwhile, in his book, “Visions, Trips and Crowds,” David Kessler, a veteran writer on grief and dying, reports that hospice patients frequently describe being visited by a deceased relative or having an out-of-body experience weeks before they actually die, a phenomenon called “near-death awareness.”  While some skeptics dismiss such reports as hallucinations or wishful thinking, hospice workers generally report that the patients are otherwise perfectly lucid—and invariably less afraid of death afterward.

Mr. Kessler says his own father was hopeless and very sad as he was dying. “One day, he had an amazing shift and said, ‘Your mother was here—she told me I’d be dying soon and it will be fine—everyone will be there.’”

Dr. Parnia, currently an assistant professor of critical care at State University of New York, Stony Brook, says verifying out-of-body experiences with pictures on the ceiling is only a small part of his study. He is also hoping to better understand whether consciousness exists apart from the brain and what happens to it when the brain shuts down. In near-death experiences, people report vivid memories, feelings and thought processes even when there is no measurable brain activity.

“The self, the soul, the psyche—throughout history, we’ve never managed to figure out what it is and how it relates to the body,” he says. “This is very important for science and fascinating for humankind.”

Melinda Beck, Wall Street Journal

__________

Full article and photos: http://online.wsj.com/article/SB10001424052702304248704575574193494074922.html

Hope for Human Fertility in a Study of a Worm

A new report suggests that the biological clocks of worms, like C. elegans, and humans may wind down over time for similar underlying reasons.

Modern medicine, a healthy lifestyle and a shot or two of Botox might leave a woman feeling youthful for decades, but by her early 30s, her reproductive ability is already in rapid decline.

Now, a study in the journal Cell suggests that one of the favorite animals for experiments in modern science, the tiny worm C. elegans, may provide new insight into female fertility. For the worm, as for women, it is diminished egg quality, not quantity, that marks the first sign of reproductive aging.

Coleen Murphy, a molecular biologist at Princeton University, and colleagues found that as C. elegans ages, its oocytes, or unfertilized eggs, start to degrade because of increased secretion of a protein called transforming growth factor beta, or TGF-beta. The same protein is found in humans and other mammals.

The researchers also experimented with mutant worms that had low TGF-beta activity levels. In these worms, the reproductive span was extended and egg quality did not degrade.

In women, too, aging brings about a decline in the quality of oocytes, leading to a greater chance of birth defects like Down syndrome. Scientists have tested the protein in mice and found a similar effect, but in humans it may not play exactly the same role.

Still, this kind of research may eventually provide the knowledge to extend female fertility, Dr. Murphy said.

“The dream would be that you could give a woman in her early 30s a supplement or a drug to keep her oocytes healthy as long as possible,” she said. “We have treatments now that extend life span, but nothing extends our reproductive span.”

There are limits, as witnessed in some of the mutant worms. Although their reproductive ability increased, their life span did not, and reproducing in old age — 13 days for the worms — was fatal.

Sindya N. Bhanoo, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/10/19/science/19obworm.html

Thesis, antithesis, synthesis

Psychiatric diagnosis

The way diseases of the psyche are diagnosed is changing rapidly. Doctors are struggling to keep up

WHAT good is a diagnostic tool if it is too complicated for doctors to use? This is the dilemma facing psychiatry. In the United States the release back in February of a draft version of the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) has triggered a furious row over whether this tool has become too complex. Meanwhile, the World Health Organisation (WHO) points out that more than three-quarters of people with brain disorders in the developing world are not being treated, and on October 7th it released simplified guidelines for diagnosis and treatment designed especially for use by medicine’s front line: primary-care doctors.

These developments highlight a revolution in psychiatry, the last bastion of symptom-based medicine. In no other medical domain is the symptom (say, anxiety) also the diagnosis. There is a reason for this: the brain is a complex organ and the causes of its disorders remain poorly understood. But thanks to brain imaging and genetics, that is changing fast.

Research into the impact of genetic mutations on brain development has revealed that conditions previously considered distinct often have common genetic underpinnings, so there is bound to be overlap between psychiatric disorders as they have been defined traditionally. One example is a gene called DISC 1 (which stands for disrupted-in-schizophrenia 1). A malfunctioning version of the protein encoded by this gene contributes, as its name suggests, to schizophrenia. But it also contributes to anxiety. Likewise, genes involved in making the myelin sheaths that insulate nerve cells go wrong in both schizophrenic and bipolar patients. In this context it is no surprise that a decade of brain imaging has shown the same neuronal circuits to be involved in many disorders. The reward circuits, for instance, are implicated in addiction, schizophrenia, depression and obsessive-compulsive disorder. According to John Krystal, the editor of Biological Psychiatry, one of the field’s leading journals, what seems to be emerging is a completely new way of looking at psychiatric diagnosis in which there is no one-to-one relationship between genes and symptoms but, rather, genes affect the development of brain circuits and this then produces symptoms.

Mind and matter

The authors of DSM-V attempt to capture this less than black-and-white picture by dispensing with the approach taken in previous DSMs, which was based on cut-and-dried checklists of symptoms. Instead, they have adopted a “dimensional” approach, in which patients are assessed for symptoms besides those that match their principal diagnosis, as well as for the severity of their symptoms.

One of the goals is to help medics identify mild forms of severe illnesses such as schizophrenia before people have experienced their first serious psychotic episode, in the hope of stopping the disease progressing. But DSM-V will also propose new conditions, such as “complicated grief”, which describes bereaved people who may benefit from being treated as if for major depression.

The aim is to help doctors offer patients the most appropriate treatment. But an important by-product will be that researchers working on the psychiatric drugs of the future will be able to test them in genetically engineered animal models that more closely resemble human reality. The importance of this was underlined by Eric Nestler of the Mount Sinai Medical Centre, in New York, and Steven Hyman of Harvard University in this month’s Nature Neuroscience, when they wrote that drug development for schizophrenia, major depression, bipolar disorder and autism “is at a near standstill”.

The proposed changes, however, worry some psychiatrists, who see in them a creeping medicalisation of normal behaviour. They point out that the DSM carries a lot of weight. Pharmaceutical companies devise new drugs for the conditions it defines, lawyers use it to sue doctors, ordinary people use it to diagnose themselves. They fear that by blurring the boundary between health and disease, DSM-V loses sight of a doctor’s first duty: to do no harm.

To overcome this, there have been suggestions in the past that the DSM should be divided into two: a scientific version, for use by researchers and psychiatrists, and a pragmatic version, for everyone else. Writing in the Psychiatric Times in August, Seyyed Nassir Ghaemi of Tufts University in Boston argued that this was not the answer. It would simply lead to the “gerrymandering” of definitions based on outdated and invalid knowledge.

Now the WHO has taken the initiative with its Intervention Guide, in which the organisation’s own classification of brain disorders—itself in the process of being revised to make it simpler for non-specialists to use—has been distilled into 100 pages of easy-to-follow flowcharts. The WHO points out that in the developing world nearly 95m people with depression and more than 25m with epilepsy receive no treatment or care.

In the end, says Dr Krystal, the dichotomy between the valid and the useful may turn out to be a false one. The most commonly prescribed psychiatric drugs are effective for many diagnoses, precisely because those diagnoses have underlying features in common. In his view, society’s demands are not mutually exclusive. Doctors can continue to do no harm, while researchers brace themselves for exciting, and unsettling, times to come.

__________

Full article and photo: http://www.economist.com/node/17248900

The Experiments in Guatemala

A medical historian’s discovery that American researchers in the 1940s deliberately infected hundreds of people in Guatemala with syphilis or gonorrhea has provoked outrage in both countries. President Obama and Secretary of State Hillary Rodham Clinton rightly apologized to President Álvaro Colom of Guatemala. More will be needed to make amends, beginning with a planned investigation of this appalling breach of medical ethics.

The experiments were brought to light by Susan Reverby, a professor at Wellesley College, who found unpublished records in the archives at the University of Pittsburgh. The studies were led by Dr. John C. Cutler, an internationally known expert on sexually transmitted diseases and a former assistant surgeon general.

From 1946 to 1948, American public health doctors under his command infected nearly 700 Guatemalans — prisoners, mental patients and soldiers — without their permission or knowledge. Anyone who became infected was given penicillin and presumed to be cured, although records suggest that many were not adequately treated.

The aim of the research was to test whether penicillin could prevent the transmission of syphilis, whether better blood tests for the disease could be developed, and what dosages could cure syphilis. That cannot justify experimenting on human beings without their consent.

Although the American government, which financed the research, bears the chief responsibility, the studies were carried out in collaboration with Guatemala’s top venereal disease expert and several Guatemalan ministries and institutions.

Top health officials insist that current rules governing federally financed research would prohibit such experiments. They require that subjects be fully told of the risks and give their informed consent. Institutional review boards must approve the research.

The Obama administration has said it will ask the Institute of Medicine to investigate the experiments; a presidential bioethics commission will suggest methods to ensure that all human research around the globe meets rigorous ethical standards. The Guatemalan government plans to conduct its own investigation. The United States should also pay reparations to any survivors who can be found and compensate Guatemala by paying for ethical health projects there.

Editorial, New York Times

__________

Full article: http://www.nytimes.com/2010/10/08/opinion/08fri3.html

Healthy Living, For Two

In “Origins,” Annie Murphy Paul explores current scientific thinking about how our lives are shaped by what happens to us in utero.

Tobacco, heavy drinking, illegal drugs, depression: We seem to grasp that these aren’t healthy for anyone, let alone a pregnant woman. But just what effect do the things that women inhale, consume and experience have on a fetus? In “Origins,” Annie Murphy Paul sets out to discover the answer. Along the way she explodes myths, reviews scientific evidence and explores the new frontier of fetal-origins research, the study of how we are shaped in utero by a combination of genes and environment.

Ms. Paul, who was pregnant with her second child as she embarked on the project, explores possible prenatal influences on everything from physical development to mental health. But “Origins” is most compelling when Ms. Paul discusses the harrowing effects of past ignorance, such as the treatment of morning sickness in the late 1950s with thalidomide, a drug later linked to severe birth deformities, and the use of diethylstilbestrol (DES) from the 1940s to the 1970s to prevent miscarriage—the drug was later found to cause higher rates of cancer and reproductive problems in the daughters of women who took it.

Ghastly though thalidomide and DES turned out to be, the problems they caused provided stark evidence of the intricate biological links between pregnant women and their unborn children. One researcher tells Ms. Paul that fetal-origins research means “extracting otherwise inaccessible scientific knowledge from the harsh soil of human catastrophe.”

A key to the evolving science is a major shift in thinking about the role of the placenta. The organ develops in female mammals during pregnancy, lining the uterine wall and partially enveloping the fetus; the umbilical cord connects the fetus to the placenta, enabling the flow of nutrients and the elimination of waste. Until modern times, the placenta was regarded as the perfect filter, protecting the fetus from harmful substances in the mother’s body and letting through helpful ones.

Such thinking has changed radically as evidence mounts that the placenta is not such a discerning filter after all. It does an impressive job of blocking bacteria from reaching the fetus, but studies show that it does not block other dangers known as teratogens—agents such as radiation, pathogens, drugs and chemicals that can cause malformation of the developing embryo.

But how worried should expectant mothers be? Anyone looking for definitive answers in “Origins” will be disappointed, because there aren’t many. Ms. Paul starts her quest in her first month of pregnancy, and it is a long slog from there to delivery. In each successive month, marked by the start of another chapter, she encounters new questions and potential minefields, and she sifts through real science and junk science. Over and over, she finds, the bottom line about what’s safe and what’s not is: “We just don’t know.” Eat fish? Yes, as long as the species isn’t exposed to mercury. Reduce stress? Yes, but remember that moderate stress levels may actually be good for the fetus, accelerating its maturation.

Even warnings about the ill effects of tobacco, alcohol and drugs taken during pregnancy are not clear-cut. As the series “Mad Men” reminds us, it wasn’t so long ago that pregnant women rarely modified their behavior with a fetus’s well-being in mind. In the heart of the baby-boom era, pregnant women were told it was fine to light up a cigarette and knock back a few drinks. Research has changed that view, of course. Smoking has been tied to miscarriage, stillbirth, premature birth, low birth weight and birth defects. Excessive alcohol use can lead to fetal-alcohol syndrome, which stunts development and can cause birth defects. Heavy drinking, surprisingly, is worse for the fetus than smoking crack cocaine; the “crack-baby epidemic” of terribly damaged infants, widely predicted in the 1980s, never materialized. Children exposed to cocaine in the womb do exhibit behavioral problems later in life, but a mother’s use of alcohol and tobacco is more damaging.

Then again: Pregnant women who smoke early in pregnancy can minimize damage to the fetus if they stop smoking by week 15. As for alcohol, the author cites studies indicating that children of women who were light drinkers during pregnancy—one or two alcoholic beverages a week—are less likely to suffer from emotional problems and hyperactivity than the children of women who shunned alcohol. But researchers also note that lighter drinkers tend to be better educated and from higher-income households than abstainers, and such factors may influence results.

“A reader venturing into this literature is liable to get whiplash,” says Ms. Paul. The readers of “Origins” may at times feel the same way. Yet with all the conflicting evidence, erring on the side of caution seems wise. When Ms. Paul tells one expert that she turned out fine even though her mother drank during pregnancy, he offers the unnerving reply: “Are you sure you’re fine? What could you have been if she didn’t drink?”

Ms. Paul does a commendable job of gathering in one place much of the literature on fetal-origins research. She discusses, for instance, concern about phthalates, synthetic chemicals used to make plastics more pliable. The evidence that they may damage fetal development is not clear, but Ms. Paul chooses a prudent course. She throws away certain recyclables and makes a point of not putting plastics in the microwave or dishwasher, where heat may release the chemicals.

“Origins” also examines such matters as the possible influence of a pregnant mother’s depression on her unborn child’s later mental health. Ms. Paul visits Catherine Monk, a psychiatry professor at Columbia University, whose research suggests that the “intrauterine environment” may be one of the ways, in addition to genetics and parenting, that mental illness is passed from one generation to the next. The science remains unsettled—but on that front and many others Ms. Paul makes a persuasive case that fetal-origins research promises one day to bear valuable fruit.

Ms. Landro writes the Informed Patient column for the Journal.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703447004575448773639436854.html

Think the Answer’s Clear? Look Again

DEBUNKER Dr. Donald A. Redelmeier is an internist-researcher in Toronto.

Presidential elections can be fatal.

Win an Academy Award and you’re likely to live longer than had you been a runner-up.

Interview for medical school on a rainy day, and your chances of being selected could fall.

Such are some of the surprising findings of Dr. Donald A. Redelmeier, a physician-researcher and perhaps the leading debunker of preconceived notions in the medical world.

In his 20 years as a researcher, first at Stanford University, now at the University of Toronto, Dr. Redelmeier, 50, has applied scientific rigor to topics that in lesser hands might have been dismissed as quirky and iconoclastic. In doing so, his work has shattered myths and revealed some deep truths about the predictors of longevity, the organization of health care and the workings of the medical mind.

“He’ll go totally against intuition, and come up with a beautiful finding,” said Eldar Shafir, a professor of psychology and public affairs at Princeton University who has worked with Dr. Redelmeier on research into medical decision-making.

Dr. Redelmeier was the first to study cellphones and automobile crashes. A paper he published in The New England Journal of Medicine in 1997 concluded that talking on a cellphone while driving was as dangerous as driving while intoxicated. His collaborator, Robert Tibshirani, a statistician at Stanford University, said the paper “is likely to dwarf all of my other work in statistics, in terms of its direct impact on public health.”

As an internist who works at Sunnybrook Hospital in Toronto, Canada’s largest trauma center, Dr. Redelmeier sees a large number of patients in the aftermath of crashes. As a result, vehicle crashes are one of his abiding professional preoccupations. He found that about 25 more people die in crashes on presidential Election Days in the United States than on a typical day, which he attributes to increased traffic, rushed drivers and unfamiliar routes.

He also discovered a 41 percent relative increase in fatalities on Super Bowl Sunday, which he attributed to a combination of fatigue, distraction and alcohol. After publication of the findings on the Super Bowl, the National Highway Traffic Safety Administration embarked on a campaign with the slogan “Fans don’t let fans drink and drive.”

In preparation for a recent interview in his modest office in the sprawling hospital complex, Dr. Redelmeier had written on an index card some of his homespun philosophies.

“Life is a marathon, not a sprint,” he read, adding, “A great deal of mischief occurs when people are in a rush.”

To that end, he studied the psychology around changing lanes in traffic. In an article published in Nature in 1999, Dr. Redelmeier and Professor Tibshirani found that while cars in the other lane sometimes appear to be moving faster, they are not.

“Every driver on average thinks he’s in the wrong lane,” Dr. Redelmeier said. “You think more cars are passing you when you’re actually passing them just as quickly. Still, you make a lane change where the benefits are illusory and not real.” Meanwhile, changing lanes increases the chances of collision about threefold.

Often he works from a hunch. In the Canadian Medical Association Journal in December, Dr. Redelmeier examined University of Toronto medical school admission interview reports from 2004 to 2009. After correlating the interview scores with weather archives, he determined that candidates who interviewed on foul-weather days received lower ratings than those who visited on sunny days. In many cases, the difference was significant enough to influence acceptance.
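For readers curious what such an analysis involves, here is a minimal sketch in Python. The numbers are invented, not the study’s data, and the simple two-group comparison stands in for whatever statistical model the paper actually used:

    # Illustrative only: hypothetical interview ratings, not the study's data.
    from scipy import stats

    rainy_scores = [15.2, 14.8, 16.1, 15.5, 14.9]  # candidates seen on foul-weather days
    sunny_scores = [16.4, 16.9, 15.8, 17.0, 16.2]  # candidates seen on sunny days

    # Two-sample t-test: is the rainy-day mean reliably lower?
    result = stats.ttest_ind(rainy_scores, sunny_scores)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")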

Dr. Redelmeier’s work on longevity began 10 years ago, when he was watching the Academy Awards and noticed that the celebrities on stage “don’t look anything like the patients I see in clinic,” he said. “It’s not just the makeup and the plastic surgery and wardrobe. It’s the way they move, it’s their gestures. They seem so much more vivacious. It seemed so much more than skin deep and might go all the way to longevity.”

His findings: Academy Award winners live an average of three years longer than the runners-up. A potential explanation could be an added measure of scrutiny, a public expectation of healthier living.

Dr. Redelmeier did not set out to be a researcher. “For years, I thought I’d be a straight clinician,” he said. But during his years at Stanford in the 1980s and early 1990s, first as a medical resident, then as a fellow, he met Amos Tversky, the cognitive psychologist who helped inspire the field of behavioral economics, which examines the cognitive, social and emotional aspects of people’s financial decisions.

Professor Tversky, who was Dr. Redelmeier’s fellowship supervisor, changed Dr. Redelmeier’s thinking entirely.

In 1990, he and Professor Tversky published a paper in The New England Journal of Medicine showing that when physicians make a medical decision for a single hypothetical patient, they favor more expensive treatments than when making a decision for a group of hypothetical patients with similar symptoms. And in 1996 the two scientists found that increased arthritis pain had nothing to do with the weather. They attributed the misperception to the human tendency to look for patterns even where none may exist.

Dr. Redelmeier credits Professor Tversky, who died in 1996, with shaping his own approach to research in the medical realm. “He provided me with a language and a logic for tackling issues that seemed to be around me all the time, but weren’t so apparent to other people,” he said.

Dr. Redelmeier isn’t one to forget about his past research. With the Academy Award study, for instance, he regularly updates the database.

“It’s important for him to know this wasn’t some statistical blip we happened to stumble across,” said Dr. Sheldon Singh, a cardiologist at Sunnybrook Hospital and assistant professor of medicine at the University of Toronto who co-wrote the Academy Awards paper. And in a paper coming out in the September issue of Chance, a statistics journal, Dr. Redelmeier and Professor Tibshirani show that on Election Day 2008, more fatalities occurred than on two control days, the Tuesdays before and after the election. This paper is a follow-up to an earlier one published in The Journal of the American Medical Association.

“Part of the satisfaction for Don is knowing the results stand the test of time,” Dr. Singh said.

Dr. Redelmeier’s unusual approach goes hand in hand with some pronounced personality quirks. His e-mails, which are legendary among their recipients, are written as lists, with a number assigned to each thought. Dr. Redelmeier does this, he said, in order to focus on the content of a message rather than get distracted by grammar, punctuation and syntax.

“I remember the first time I got one, I was a little offended,” Dr. Singh said. “I’d never gotten an e-mail not written in a paragraphed format. Yet he addressed everything I needed to know.”

Dr. Redelmeier takes the results of his research seriously. He rides his bike to work, and when he does drive, he resists “small temptations to change lanes.”

Dr. Redelmeier said he was currently looking at attention deficit disorder among teenage drivers, and whether, like epilepsy, the disorder should be considered a medically reportable condition.

Not everyone has unconditional admiration for Dr. Redelmeier’s work. Professor Tibshirani, for instance, has reservations about some of Dr. Redelmeier’s choices, and declined to collaborate on the Academy Awards study.

“I honestly thought it was frivolous, and we’ve argued about it,” Professor Tibshirani said. He also questioned the Election Day research. “Of course there’s more traffic, so it seemed self-evident,” he said.

That perspective amuses rather than offends Dr. Redelmeier. When asked about it via e-mail, he responded within one of his numbered missives:

15) I sometimes tell a joke to tackle the issue

16) that is, about people’s ability to judge “frivolity”

17) namely, imagine Charles Darwin 150 years ago

18) at the time he disappointed his father by neglecting medical training

19) and asked, instead, to go on a two-year vacation in the tropics

20) with an emphasis on bird watching (finches)

21) the father was not impressed and thought the son was wasting his time

Another Redelmeier philosophical pearl is “Do not get trapped into prior thoughts. It’s perfectly O.K. to change your mind as you learn more.”

In patient care, he said, he frequently does just that. “I think I know the diagnosis and start the treatment, then follow up and realize I was wrong,” he said. “I intercept a lot of my own errors at a relatively early stage.”

This, not surprisingly, became the basis of some classic Redelmeier research around raising physicians’ awareness of their own thinking — cognitive shortcuts that might lead to a diagnostic error.

Professor Tibshirani said he once accompanied Dr. Redelmeier on rounds at the hospital. “I watched him talk to patients, and they love him,” he recounted.

While Dr. Redelmeier enjoys his patient interactions, he appears incapable of resisting the lure of a good research topic. Several years ago he compared medical school class presidents to a control group of others in the class and found that the presidents died an average 2.5 years earlier than those in the control group. The type who would run for class president, he concluded in the resulting paper, “may also be the type who fails to look after their health or is otherwise prone to early mortality.”

The idea came to him one day in a hallway at the University of Toronto’s Faculty of Medicine, where he had stopped to admire a century’s worth of class photos showing mostly white men.

“Some people might say, ‘What an old boys’ network,’ ” Dr. Redelmeier said. “But I thought, ‘My goodness, what a homogeneous population, akin to identical white mice, which thereby controls for all sorts of differences.’ ” Thus was born another Redelmeier classic.

Katie Hafner, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/08/31/science/31profile.html

Who’s ObamaCare’s Daddy?

Now even liberals are denying paternity.

“I know this is a tough vote,” President Obama told House Democrats at a March pep rally merely hours before they passed national health care. But he added that he was “actually confident” that “it will end up being the smart thing to do politically because I believe that good policy is good politics.” Apparently not, as even some liberal lobbies are now being forced to concede.

On Thursday, Families USA hosted a “messaging” conference call with Democrats and Democratic allies, admitting that ObamaCare has not in fact become more popular since it passed. Families USA called for a wholesale shift in how Democrats now attempt to sell its handiwork to the public, the central theme being that “The law is not perfect, but it does good things and helps many people. Now we’ll work to improve it.”

That’s according to the PowerPoint presentation that accompanied the briefing, as first reported by Ben Smith of Politico.com. “Don’t make grand claims about the law,” another slide added. “Use ‘improve it’ language.” But wait: Wasn’t improvement supposed to be the point of ObamaCare?

The presentation was based on internal polling done by the Herndon Alliance, which was formed in 2005 to lobby for national health care and whose daisy chain of “partners” includes the Center for American Progress, AARP and the unions AFL-CIO, SEIU and AFSCME. That Families USA would endorse this strategic switcheroo is especially notable—make that astonishing—given that a plan like ObamaCare has been the group’s existential goal for two decades. This is like Moses saying that, on second thought and after consulting with his pollster, maybe the land of milk and honey is overrated.

So much, too, for the liberal claim—or delusion—at the time of the ObamaCare votes that failing to pass it would be worse politically. “I think Democrats fully understand they have to pass this legislation,” Families USA president Ron Pollack said in January. “The alternative is an absolute disaster.”

Bill Clinton also importuned reluctant Democrats with the revisionist history that Republicans took the House in 1994 not because of HillaryCare but to punish his party for failing to pass it. In one bit of widely reported advice, Mr. Clinton conceded that ObamaCare was unpopular but told Members to “put the corn where the hogs can get to it.” To translate from the Bubba idiom, he meant that if they explained it simply and clearly, voters would come around.

Not according to the Families USA Herndon presentation. “Keep claims small and credible; don’t overpromise or ‘spin’ what the law delivers,” it advised. Its “to don’t” items include offering “a long list of benefits” or claims that “the law will reduce costs and the deficit,” even as “Voters are concerned about rising health care costs and believe that costs will continue to rise.”

Looping back to Mr. Obama’s political advice, his government takeover of health care is unpopular because it is arguably the worst legislation since the Smoot-Hawley Tariff, and the voters know it. Still, it’s amazing to see Democrats now pretend for election purposes that the bill they said would be an achievement for the liberal ages is something other than what they passed. Voters should demand DNA samples, a.k.a. Democratic voting records.

Editorial, Wall Street Journal

__________

Full article: http://online.wsj.com/article/SB10001424052748703579804575441480012598138.html

Moose Offer Trail of Clues on Arthritis

LONG-RUNNING A study that began in 1958 has linked arthritis to poor early nutrition.

In the 100 years since the first moose swam into Lake Superior and set up shop on an island, they have mostly minded their moosely business, munching balsam fir and trying to evade hungry gray wolves.

But now the moose of Isle Royale have something to say — well, their bones do. Many of the moose, it turns out, have arthritis. And scientists believe their condition’s origin can help explain human osteoarthritis — by far the most common type of arthritis, affecting one of every seven adults 25 and older and becoming increasingly prevalent.

The arthritic Bullwinkles got that way because of poor nutrition early in life, an extraordinary 50-year research project has discovered. That could mean, scientists say, that some people’s arthritis can be linked in part to nutritional deficits, in the womb and possibly throughout childhood.

The moose conclusion bolsters a small but growing body of research connecting early development to chronic conditions like osteoarthritis, which currently affects 27 million Americans, up from 21 million in 1990.

Osteoarthritis’s exact cause remains unknown, but it is generally thought to stem from aging and wear and tear on joints, exacerbated for some by genes. Overweight or obese people have greater arthritis risk, usually attributed to the load their joints carry, and the number of cases is increasing as people live longer and weigh more.

But the moose work, along with some human research, suggests arthritis’s origins are more complex, probably influenced by early exposures to nutrients and other factors while our bodies are developing. Even obesity’s link to arthritis probably goes beyond extra pounds, experts say, to include the impact on the body of eating the wrong things.

Nutrients, experts say, might influence composition or shape of bones, joints or cartilage. Nutrition might also affect hormones, the likelihood of later inflammation or oxidative stress, even how a genetic predisposition for arthritis is expressed or suppressed.

“It makes perfect sense,” said Dr. Joanne Jordan, director of the Thurston Arthritis Research Center at the University of North Carolina. “Osteoarthritis starts way before the person knows it, way before their knee hurts or their hand hurts. It’s very clear that we’re going to have to start looking back” at “things in the early life course.”

Such research could lead to nutritional steps people can take to protect against osteoarthritis, a condition that is often painful or debilitating, and according to federal data, costs billions of dollars annually in knee and hip replacements alone.

“It would be helpful to know if we want to make sure pregnant moms are taking certain vitamins or if you need to supplement with such and such nutrition,” said Dr. David Felson, an arthritis expert at Boston University School of Medicine. “The moose guy is right in that we probably should study weight or some other nutritional factor almost through adolescence when the bones or joints have stopped forming.”

The “moose guy” is Rolf Peterson, a Michigan Technological University scientist on the Isle Royale project, which began in 1958 and is reportedly the longest-running predator-prey study.

For half the year, Dr. Peterson and his colleagues are the only humans allowed on the 45-mile-long island, part of a national park. They stay in yurts, a log cabin or a wood-stove-heated lodge, navigate the wilderness without roads or cars, and share a single staticky phone line. They analyze everything from wolves’ moose-hunting strategies to moose feces. Having collected the bones of more than 4,000 moose, they noticed that more than half of the 1,200 carcasses they analyzed had arthritis, virtually identical to the human kind. It usually attacked the hip and instantly made the moose vulnerable.

“Arthritis is a death sentence around here — you need all four legs,” Dr. Peterson said. “Wolves pick them off so quickly that you don’t even see them limping.”

What is more, the arthritic moose were often small, as measured by the length of the metatarsal bone in the foot. Small metatarsals indicate poor early nutrition, and scientists determined that the arthritic moose were born during times when food was scarce, so their mothers could not produce enough milk.

Dr. Peterson said if the arthritis were caused by excess wear and tear on the moose’s joints, that would have meant that times of food scarcity occurred when the moose were already grown, since the extra wear would have happened to moose walking farther to find edible plants. But the arthritic moose had had plentiful food as adults.

For people, several historical cases may suggest a nutritional link. Bones of 16th-century American Indians in Florida and Georgia showed significant increases in osteoarthritis after Spanish missionaries arrived and tribes adopted farming, increasing their workload but also shifting their diet from fish and wild plants to corn, which “lacks a couple of essential amino acids and is iron deficient,” said Clark Larsen, an Ohio State University anthropologist collaborating with Dr. Peterson. Many children and young adults were smaller and died earlier, Dr. Larsen said, and similar patterns occurred when an earlier American Indian population in the Midwest began farming maize.

British scientists studying people born in the 1940s found low birth weight (indicating poor prenatal nutrition) linked to osteoarthritis in the men’s hands, Dr. Felson said. And Dr. David Barker, a British expert on how nutrition and early development influence cardiac and other conditions, said “studies of people in utero during the Great Chinese Famine” of the late 1950s found that “40, 50 years later, those people have got disabilities.”

Overeating can be as problematic as undereating. Dr. Lisa A. Fortier, a large-animal orthopedist at Cornell University’s College of Veterinary Medicine, said she saw “abnormal joint and tendon development from excessive nutrition” in horses overfed “in utero or in the postnatal life,” probably ingesting “too much of the wrong type of sugar that may cause levels of inflammation.”

Dr. Peter Bales, an orthopedic surgeon affiliated with University of California, Davis, Medical Center, who has written about nutrition and arthritis, sees similar problems in overweight patients. He said the causes were not as “simplistic” as “carrying more weight around,” but might involve nutritional imbalances that could hurt joints and erode cartilage. Much is unknown about nutrition’s relevance. Isle Royale moose, for example, also seem to have genetic predispositions for arthritis, suggesting that nutrition might be amplifying or jump-starting the genes.

“Genes are not Stalinist dictators,” said Dr. Barker, now at Oregon Health and Science University. “What they do, how they’re expressed, is conditional on the rest of the body. The human being is a product of a general recipe, and the specific nutrients you get or don’t get.”

Studying nutrition in people is much more complicated than in moose. Dr. Peterson said the early moosehood developmental window occurred in utero through 28 months, but humans’ developmental time frame lasted into the teens. Some experts say prenatal nutrition is most critical; others see roles for nutrients after birth and beyond.

“Up until the growth plates close, which is through adolescence and even early adulthood, the effects of nutrition are magnified,” said Dr. Constance R. Chu, director of the Cartilage Restoration Center at the University of Pittsburgh, who said nutrients might affect the number of healthy cells in cartilage and its thickness. “But in my opinion, it’s relevant throughout life.”

Pam Belluck, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/08/17/health/research/17moose.html

Phys Ed: Can Exercise Moderate Anger?

For years, researchers have known that exercise can affect certain moods. Running, bike riding and other exercise programs have repeatedly been found to combat clinical depression. Similarly, a study from Germany published in April found that light-duty activity like walking or gardening made participants “happy,” in the estimation of the scientists. Even laboratory rats and mice respond emotionally to exercise; although their precise “moods” are hard to parse, their behavior indicates that exercise makes them more relaxed and confident.

But what about anger, one of the more universal and, in its way, destructive moods? Can exercise influence how angry you become in certain situations?

A study presented at the most recent annual conference of the American College of Sports Medicine provides some provocative if ambiguous answers. For the study, hundreds of undergraduates at the University of Georgia filled out questionnaires about their moods. From that group, researchers chose 16 young men with “high trait anger” or, in less technical terms, a very short fuse. They were, their questionnaires indicated, habitually touchy.

The researchers invited the men to a lab and had them fill out a survey about their moods at that moment. During the two days of the study, the men were each fitted with high-tech hairnets containing multiple sensors that could read electrical activity in the brain. Next, researchers flashed a series of slides across viewing screens set up in front of each young man. The slides, intended to induce anger, depicted upsetting events like Ku Klux Klan rallies and children under fire from soldiers, which were interspersed with more pleasant images. Electrical activity in the men’s brains indicated that they were growing angry during the display. For confirmation, they described to researchers how angry they felt, using a numerical scale from 0 to 9.

On alternate days, after viewing the slides again (though always in a different order), the men either sat quietly or rode a stationary bike for 30 minutes at a moderate pace while their brain patterns and verbal estimations of anger were recorded. Afterward, the researchers examined how angry the volunteers became during each session.

The results showed that when the volunteers hadn’t exercised, their second viewing of the slides aroused significantly more anger than the first. After exercise, conversely, the men’s anger reached a plateau. They still became upset during the slide show — exercise didn’t inure them to what they saw — but the exercise allowed them to end the session no angrier than they began it.
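As a rough illustration of the study’s within-subject logic, in which each man served as his own control on rest and exercise days, here is a minimal Python sketch. The anger ratings are invented, not the study’s data:

    # Illustrative only: invented changes in self-reported anger (0-9 scale),
    # second slide viewing minus first, for the same eight men on both days.
    from scipy import stats

    rest_day_change     = [2, 3, 1, 2, 4, 2, 3, 1]   # anger builds without exercise
    exercise_day_change = [0, 1, 0, -1, 1, 0, 0, 1]  # anger plateaus after cycling

    # Paired t-test, since each volunteer appears in both conditions.
    result = stats.ttest_rel(rest_day_change, exercise_day_change)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")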

What the results of the study suggest is that “exercise, even a single bout of it, can have a robust prophylactic effect” against the buildup of anger, said Nathaniel Thom, a stress physiologist who was the study’s lead researcher.

“It’s like taking aspirin to combat heart disease,” he said. “You reduce your risk.”

When the men did not exercise, they had considerable difficulty controlling their racing emotions. But after exercise, they handled what they saw with more aplomb. Their moods were under firmer control.

The question of just how, physiologically, exercise blunts anger remains open. Mr. Thom and his colleagues did not test levels of stress hormones or brain chemicals in the test subjects. But earlier work by other scientists suggests that serotonin, a neurotransmitter in the brain, probably played a role, Mr. Thom said. “Animal studies have found that low levels of serotonin are associated with aggression, which is our best analogue of anger in animals,” he said. “Exercise increases serotonin levels in the rat brain.” Low serotonin levels in humans are also thought to contribute to mood disorders.

Changes in the activity of certain genes within the brain may also have an impact. In a 2007 experiment at Yale University, researchers found that prolonged running altered the expression of almost three dozen genes associated with mood in the brains of laboratory mice. Mr. Thom says he hopes that future studies by himself and others will help to determine the specific underlying mechanisms that link exercise and a reduction of anger.

But for now, the lesson of his preliminary work, he said, is that “if you know that you’re going to be entering into a situation that is likely to make you angry, go for a run first.”

Gretchen Reynolds, New York Times

__________

Full article: http://well.blogs.nytimes.com/2010/08/11/phys-ed-can-exercise-moderate-anger/

Good Grief

A startling suggestion is buried in the fine print describing proposed changes for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders — perhaps better known as the D.S.M. 5, the book that will set the new boundary between mental disorder and normality. If this suggestion is adopted, many people who experience completely normal grief could be mislabeled as having a psychiatric problem.

Suppose your spouse or child died two weeks ago and now you feel sad, take less interest and pleasure in things, have little appetite or energy, can’t sleep well and don’t feel like going to work. In the proposal for the D.S.M. 5, your condition would be diagnosed as a major depressive disorder.

This would be a wholesale medicalization of normal emotion, and it would result in the overdiagnosis and overtreatment of people who would do just fine if left alone to grieve with family and friends, as people always have. It is also a safe bet that the drug companies would quickly and greedily pounce on the opportunity to mount a marketing blitz targeted to the bereaved and a campaign to “teach” physicians how to treat mourning with a magic pill.

It is not that psychiatrists are in bed with the drug companies, as is often alleged. The proposed change actually grows out of the best of intentions. Researchers point out that, during bereavement, some people develop an enduring case of major depression, and clinicians hope that by identifying such cases early they could reduce the burdens of illness with treatment.

This approach could help those grievers who have severe and potentially dangerous symptoms — for example, delusional guilt over things done to or not done for the deceased, suicidal desires to join the lost loved one, morbid preoccupation with worthlessness, restless agitation, drastic weight loss or a complete inability to function. When things get this bad, the need for a quick diagnosis and immediate treatment is obvious. But people with such symptoms are rare, and their condition can be diagnosed using the criteria for major depression provided in the current manual, the D.S.M. IV.

What is proposed for the D.S.M. 5 is a radical expansion of the boundary for mental illness that would cause psychiatry to intrude in the realm of normal grief. Why is this such a bad idea? First, it would give mentally healthy people the ominous-sounding diagnosis of a major depressive disorder, which in turn could make it harder for them to get a job or health insurance.

Then there would be the expense and the potentially harmful side effects of unnecessary medical treatment. Because almost everyone recovers from grief, given time and support, this treatment would undoubtedly have the highest placebo response rate in medical history. After recovering while taking a useless pill, people would assume it was the drug that made them better and would be reluctant to stop taking it. Consequently, many normal grievers would stay on a useless medication for the long haul, even though it would likely cause them more harm than good.

The bereaved would also lose the benefits that accrue from letting grief take its natural course. What might these be? No one can say exactly. But grieving is an unavoidable part of life — the necessary price we all pay for having the ability to love other people. Our lives consist of a series of attachments and inevitable losses, and evolution has given us the emotional tools to handle both.

In this we are not unique. Chimpanzees, elephants and other mammals have their own ways of mourning. Humans have developed complicated and culturally determined grieving rituals that no doubt date from at least as far back as the Neanderthal burial pits that were consecrated tens of thousands of years ago. It is essential, not unhealthy, for us to grieve when confronted by the death of someone we love.

Turning bereavement into major depression would substitute a shallow, Johnny-come-lately medical ritual for the sacred mourning rites that have survived for millenniums. To slap on a diagnosis and prescribe a pill would be to reduce the dignity of the life lost and the broken heart left behind. Psychiatry should instead tread lightly and only when it is on solid footing.

There is still time to keep the suggested change from entering the D.S.M. 5, which will not be published until May 2013. The task force preparing the new manual could adopt a more cautious and modest estimate of psychiatry's appropriate reach and grasp.

For the few bereaved who are severely impaired or at risk of suicide, doctors can already apply the diagnosis of major depression. But don’t change the rules for everyone else. Let us experience the grief we need to feel without being called sick.

Allen Frances, an emeritus professor and former chairman of psychiatry at Duke University, was the chairman of the task force that created the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders.

__________

Full article and photo: http://www.nytimes.com/2010/08/15/opinion/15frances.html

My Life in Therapy

All those years, all that money, all that unrequited love. It began way back when I was a child, an anxiety-riddled 10-year-old who didn’t want to go to school in the morning and had difficulty falling asleep at night. Even in a family like mine, where there were many siblings (six in all) and little attention paid to dispositional differences, I stood out as a neurotic specimen. And so I was sent to what would prove to be the first of many psychiatrists in the four and a half decades to follow — indeed, I could be said to be a one-person boon to the therapeutic establishment — and was initiated into the curious and slippery business of self-disclosure. I learned, that is, to construct an ongoing narrative of the self, composed of what the psychoanalyst Robert Stoller calls “microdots” (“the consciously experienced moments selected from the whole and arranged to present a point of view”), one that might have been more or less cohesive than my actual self but that at any rate was supposed to illuminate puzzling behavior and onerous symptoms — my behavior and my symptoms.

To this day, I’m not sure that I am in possession of substantially greater self-knowledge than someone who has never been inside a therapist’s office. What I do know, aside from the fact that the unconscious plays strange tricks and that the past stalks the present in ways we can’t begin to imagine, is a certain language, a certain style of thinking that, in its capacity for reframing your life story, becomes — how should I put this? — addictive. Projection. Repression. Acting out. Defenses. Secondary compensation. Transference. Even in these quick-fix, medicated times, when people are more likely to look to Wellbutrin and life coaches than to the mystique-surrounded, intangible promise of psychoanalysis, these words speak to me with all the charged power of poetry, scattering light into opaque depths, interpreting that which lies beneath awareness. Whether they do so rightly or wrongly is almost beside the point.

IT WAS A SNOWY Tuesday afternoon in February, and I was inching along Fifth Avenue in a taxi, my mood as gray as the sky, on my way to a consultation with a therapist in the Village who was recommended to me by Dr. O., another therapist I had seen in consultation, who in turn was referred to me by a friend’s therapist. Once again — how many times have I done this? — I was on a quest for a better therapist, a more intuitive therapist, a therapist I could genuinely call my own, a therapist who could make me happy. I liked Dr. O., a man in his 80s who struck me as having a quick grasp of the essential details, the issues that dragged along with me year after year like a ball and chain. He seemed to get to the heart of the matter — had I ever felt loved? Had I ever loved? — with disarming ease. But then, after several visits, during which I envisioned myself finally and conclusively grappling with things, toppling over the impediments that stood in my way and coming out a winner, Dr. O. suddenly announced that he couldn’t take on any new patients. He said he had given the prospect of working with me a great deal of thought but in the end didn’t think he was prepared to commit himself.

I resisted the impulse to plead on my behalf, which was my impulse around all elusive men, be they shrinks or lovers, and accepted his verdict. (I later found out that he had been very ill and was in the process of bringing his practice to a close.) I wasn't, after all, therapistless; I had been seeing Dr. L. for the past year and a half, an older woman with a practical manner and a radically limited wardrobe (she wore only black and brown pantsuits) that I had begun to view as a symptom of her tunnel vision, so different from my own scattered and unfiltered way of being. Of late, I had been feeling increasingly dissatisfied with the tenor of our sessions; they seemed more like the sort of conversations you would have with a good friend in a coffee shop than the intense, closely examined, hesitation-filled discourse I had come to expect from the therapeutic encounter. The subject of “transference” (the patient's projection of feelings and situations from the past onto the therapist), which is usually at the heart of psychoanalysis, didn't even come up for discussion, much less any examination of possible signs of “countertransference” (the analyst's emotional reactions to the patient stemming from his or her unconscious needs and conflicts). There was the day she used the word “cute” to describe a complicated, even twisted man I was telling her about, and I found myself wondering about her own personal evolution, how much she really understood about relations between men and women. Who was to say whether she had the kind of varied real-world experience that I might benefit from, one that could yield the sort of rich understanding of social texture that the anthropologist Clifford Geertz referred to as “thick description”?

My dissatisfaction led me to wonder whether it was time for a change — or whether, at long last, it might be time to strike out on my own and weather my internal and external vicissitudes alone, perilous as that prospect might appear to a person who hadn't been without a therapist's support in 40-odd years. On the other hand, I couldn't actually picture myself going without a listening ear, someone who attended to my story along with me several days a week, who was ready and waiting to receive news of my life, undramatic and unimportant in the larger scheme of things as it might be. It was one thing to mock therapy and its practitioners, as I regularly did, or to fume at the expense; it was another thing entirely to walk away from the cushioning it offered. Much as I might disparage it, I was convinced it had helped keep me alive, bound to the earth; there were also my antidepressants, of course, steadily elevating my dopamine levels and moderating my moods, but without the benefit of talk therapy, I felt unduly fragile, like a tightrope walker absent a net. Which is why, when Dr. O. conveyed his unavailability, I did what any addict does and asked where else I could get my fix. He said he would think about who might be suitable, and about 10 days later I received a short note from him with the name of Dr. D.

This is how I came to seek out the psychiatrist in the Village, who turned out to be a man in what I guessed to be his early 60s, with saucer eyes that were given to an exaggerated registering of emotion like the eyes of a comic-book character. I arrived at his office magisterially late after the endless cab ride, wet from the snow. I felt immediately on edge, furious at myself for not having taken the subway, silently calculating how much money had evaporated along with the first 20 minutes of the session. I always resented the implacability of the therapeutic “hour” (which translated into a mingy 50, or increasingly these days, 45 minutes at most), the way it commenced at some tediously calibrated moment, like 11:10 or 5:25, instead of satisfyingly on the hour or half-hour. Not to mention the way the end of a session was visible from the start, getting ready to wave goodbye to you and your problems just as you were settling in to reflect more fully upon them. Now, having abbreviated the session even before it began, I felt full of righteous if illogical outrage: who mandated all these carefully preserved professional rituals in the first place — the couch with its flimsy little napkin covering the place where you were to lay your head, the de rigueur box of tissues, the chair (for those who preferred to sit up, like me) placed at a careful distance from the therapist’s own? What was the point of these rules? Did they serve the patient in any way or were they just a means of securing the analyst enough patients to bring in the money to pay for a weekend house?

Needless to say, I didn’t air any of these thoughts and instead went into my skittish, slightly apologetic, pre-emptively self-deprecating patient mode — intent on sounding like someone who was aware of the pathological currents that ran beneath a life that might be viewed as functional, even successful, if looked at from afar. Dr. D. spoke very little, in the manner of a true-blue analyst — the more silent the therapist, it’s safe to say, the more likely he is to favor a strict analytic approach — in a voice that was low and grave, with an almost total absence of inflection. I had to suppress the urge to ask him to speak up for fear of offending him. Still, I was struck by the way he managed to convey a spirit of deep thoughtfulness whenever he did utter a few words. “You have trouble negotiating distance, don’t you?” he asked, after I wandered into my hyperkinetic version of free association, saying what came to my mind without exercising too much editorial control, babbling on about the chilly caretaker from my childhood who never answered me when I tried to engage her, moving on to my father’s obliviousness to my youthful presence even when I was sitting next to him in a car and then to my fear of being overattached to the people in my life that I felt closest to. “It’s a problem for you either way,” he added. “Isn’t it? Too close. Too far. Neither feel entirely comfortable.” I wasn’t completely sure what he meant, but I answered that I saw his point.

I went to Dr. D. to discuss the metaphysics of therapy rather than the logistics — whether, that is, I should be in therapy at all, and if so, what for and what kind. I talked about past therapists and their different styles of treatment, most of them Freudian-derived to a greater or lesser extent, all of which had turned me off in one way or another. In therapy that was more psychoanalytically oriented, I told him, I tended to get trapped in long-ago traumas, identifying with myself as a terrified little girl at the mercy of cruel adult forces. This imaginative position would eventually destabilize me, kicking off feelings of rage and despair that would in turn spiral down into a debilitating depression, in which I couldn’t seem to retrieve the pieces of my contemporary life. I don’t know whether this was because of the therapist’s lack of skill, some essential flaw in the psychoanalytic method or some irreparable injury done to me long ago, but the last time I engaged in this style of therapy for an extended period of time with an analyst who kept coaxing me to dredge up more and more painful, ever earlier memories, I ended up in a hospital. When I got out two summers ago, I reacted to the trauma of hospitalization by seeking out the aforementioned Dr. L., who took a more contained, present-oriented approach, with far less time and energy spent trying to excavate distant hurts and grievances. While she may have had the convictions of a Freudian, she also had the manner of a strategic adviser, cheering me on in my daily life. And yet, after seeing her for 18 months, I felt that I was doing myself an injustice by merely skimming the surface, leaving myself vulnerable to the kind of massive subterranean conflict I feared would sooner or later come out of nowhere and hit me hard once again. Would I ever, I wondered, manage to find the right mix, the style of therapy that fit my particular mold? Did it even exist?

As I mulled it over for Dr. D., I noticed that I was speaking with greater detachment and less gusto than I usually brought to the occasion. Perhaps this was a response to Dr. D.’s own removed demeanor, which made me in turn wonder if patients eventually began to sound like their therapists, much in the way husbands and wives of long standing are said to resemble each other. And then there was my feeling that I better not get in too deep. I was wary by this point of the alacrity with which I attached to shrinks, each and every one of them, as if I suspended my usual vigilant powers of critical judgment in their presence merely because they wore the badge of their profession. The truth of the matter was that in more than 40 years of therapy (the only person I knew who may have been at it longer than me was Woody Allen, who once offered me his own analyst), I never developed a set of criteria by which to assess the skill of a given therapist, the way you would assess a dentist or a plumber. Other than a presentable degree of intelligence and an office that didn’t set off aesthetic alarms — I tended to prefer genteelly shabby interiors to overly well-appointed ones, although I was wary of therapists who exhibited a Collyer Brothers-like inability to throw anything away — I wasn’t sure what made for a good one. I never felt entitled to look at them as members of a service profession, which is what, underneath all the crisscrossing of need and wishfulness, they essentially were. The sense of urgency that generally took me into a new shrink’s office was more conducive to seeing myself as the one being evaluated rather than the evaluator. Was I a good-enough patient? Would this latest psychiatrist (I saw mostly M.D.’s) like me and want to take me on? Or would he/she write me off as impossibly disturbed under my cloak of normalcy?

I knew I wasn’t the most promising candidate — I was, in fact, a prime example of what is referred to within the profession as a “difficult” patient, what with my clamorous ways, disregard for boundaries and serial treatments — but perhaps this time, after so many disappointments, I would get lucky. Somewhere out there, sitting in a smaller or larger office on Central Park West or the Upper East Side, tucked behind a waiting area furnished with a suitably arty poster or two, a couple of chairs and old copies of The New Yorker and National Geographic Traveler, was a practitioner who would not only understand my lifelong sorrow and anger in an empathic (but not unduly soppy) fashion but also be able to relieve me of them. Just as some people believe in the idea of soul mates, I held fast to the conviction that my perfect therapeutic match was out there. If only I looked hard enough I would find this person, and then the demons that haunted me — my love/hate relationship with my difficult mother (who has been dead now for four years), my self-torturing and intransigently avoidant attitude toward my work, my abiding sense of aloneness and seeming inability to sustain a romantic relationship and, above all, my lapses into severe depression — would become, with my therapist’s help, easier to manage.

Therapy, as Freud himself made clear, is never about finding a cure for what ails you. Its aim, despite the lyrical moniker it is known by (“the talking cure” was not actually Freud’s phrase but rather that of Dr. Josef Breuer’s patient Bertha Pappenheim, whom Freud wrote about as Anna O.), was always more modest. Freud described it as an effort to convert “hysterical misery” into “common unhappiness,” which suggests a rather minimalist framework against which to judge progress. There is no absolute goal, no lifetime guarantee, no telling how much therapy is enough therapy, no foolproof way of knowing when you’ve gotten everything out of it that you can and would be better off spending your valuable time and hard-earned money on other pursuits.

All of which raises the question: What exactly is the point? How can you be expected to know when being in therapy is the right choice, to know which treatments are actually helpful and which serve merely to give the false sense of reassurance that comes with being proactive, with doing all that we can? Does anyone, for example, really know what “character change” looks like? That, after all, is what contemporary therapy that is more than chitchat for the so-called worried well aims to promote. More pressing, who can be trusted to answer these questions? Looked at a certain way, the entire enterprise seems geared toward the needs of the therapist rather than the patient to a degree that can feel, after a certain amount of time, undemocratic, if not outright exploitative. With no endpoint in sight, it’s possible to stay in therapy forever without much real progress; at the same time, the weight of responsibility is borne almost entirely by the patient, whose “resistance” or lack of effort-making is often blamed for any stagnancy in treatment before the possibility of a therapist’s shortcomings is even acknowledged. As the psychiatrist Robert Michels observed in his aptly titled essay “Psychoanalysis and Its Discontents,” for patients, “it often seems as if psychoanalysis isn’t even designed to help them. Patients want answers, whereas psychoanalysts ask questions. Patients want advice, but psychoanalysts are trained not to give advice. Patients want support and love. Psychoanalysts offer interpretations and insight. Patients want to feel better; analysts talk about character change.”

My abiding faith in the possibility of self-transformation propelled me from one therapist to the next, ever on the lookout for something that seemed tormentingly out of reach, some scenario that would allow me to live more comfortably in my own skin. For all my doubts about specific tenets and individual psychoanalysts, I believed in the surpassing value of insight and the curative potential of treatment — and that may have been the problem to begin with. I failed to grasp that there was no magic to be had, that a therapist’s insights weren’t worth anything unless you made them your own and that nothing that had happened to me already could be undone, no matter how many times I went over it.

And yet it seems to me that the process itself, in its very commitment to interiority — its attempt to ferret out prime causes and pivotal events from the psychic rubble of the past and the unwieldy conflicts of the present — can be intriguing enough to stand in as its own reward. In the course of growing up, we all learn to repress our unruly fantasies and to keep our more anarchic thoughts mostly to ourselves. As for our dreams and what they might signify — their “latent content,” that is, as opposed to their “manifest content” — who can be expected to be interested in them except a close friend or tolerant spouse, both of whom are assuredly only half listening? Therapy offers us a particular kind of chaste intimacy, one that in its ideal (if not always actual) form is free of the burden of desirous expectations. Or as Adam Phillips, the writer and psychoanalyst, puts it with characteristic brio: “Psychoanalysis is about what two people can say to each other if they agree not to have sex.” It is a place to say out loud all that we have grown accustomed to keeping silent, in the hope that we might better understand ourselves and our missteps, come to terms with disowned desires and perhaps even find a more direct route to an effectively examined life. It provides an opportunity unlike any other to sort through the contents of your own mind — an often painfully circuitous operation — in the presence of someone who is trained to make order out of mental chaos. Although it is possible to view the whole exercise as an expensive self-indulgence — or, as its many detractors insist in one way or another, as the disease for which it purports to be the cure — psychoanalysis is the only game in town in which you are free to look and sound your worst the better to live up to your full potential.

FREUDIAN PSYCHOANALYSIS reached its high-water mark in the 1950s, having become something of a secular religion; it offered, as Dan Wakefield observes in his book “New York in the Fifties,” a “dream of wholeness” — and, no less important, “the cure for what ailed you sexually.” All of the so-called New York Intellectuals, like Delmore Schwartz and Mary McCarthy, dipped into analytic treatment at one time or another; and James Baldwin, in a 1959 essay, noted of “the citizens of this time and place” that “when they talk, they talk to the psychiatrist; on the theory, presumably, that the truth about them is ultimately unspeakable.” In the mid-60s, psychoanalysis was still very much in vogue, having not yet become the reviled and increasingly discredited discipline it came to be in the 1980s and 1990s, when anti-Freudians like Frederick Crews and Peter Swales did their dismantling work and the psychopharmaceutical industry flourished. (My favorite line from Donald Barthelme’s 1972 short story “The Sandman” is, to my mind, more predictive than descriptive: “The prestige of analysis,” the protagonist writes to his girlfriend’s shrink, in defense of her decision to give up analysis and use the money saved to buy a grand piano, “is now at a nadir.”) Popular magazines like Redbook and McCall’s familiarized Middle America with basic Freudian concepts, the better to understand phenomena like marital discord and sibling rivalry, and references to therapy abounded in theater and film. In their book “Psychiatry and the Cinema,” Glen O. Gabbard and Krin Gabbard refer to the period from the late 1950s to the early 1960s as the “Golden Age” of psychiatry in the movies: “For a half-dozen years . . . films reflected — however imperfectly — a growing conviction in American culture that psychiatrists were authoritative voices of reason, adjustment and well-being.” In 1969, Alexander Portnoy unburdened the content of his carnal character on the silent Dr. Spielvogel and made his creator, Philip Roth, a household name.

Still, while seeing a shrink was often considered something to be proud of back in the 1960s, lending you an aura of intellectual gravitas, it was at that time largely an adults-only activity. I began seeing my first therapist, Dr. Welsh, at the age of 10, but I didn’t know of any other kids my age who availed themselves of a psychiatrist, and the entire venture filled me with shame. I’m not sure how much I told her — children at that age tend to be loyal to their backgrounds, however dysfunctional — but I do recall busily beating up dolls in her office. Welsh was kindly and gray-haired, a renowned child psychiatrist, but to me she was little more than a stranger whose courteous style confused me. For one thing, she wasn’t Jewish, which, given my Orthodox upbringing, immediately opened up a chasm of nonfamiliarity between us. For another, I kept wondering why she was so nice to me; I wasn’t used to such treatment, which was surely one reason I needed to see a psychiatrist in the first place. I couldn’t figure out a connection between the world inside her office and the world outside it; they seemed like separate universes with different rules of conduct. In one, I was listened to, when I did choose to speak, with a great deal of attentiveness; in the other I was more often than not pushed aside, my anxieties discounted or ridiculed.

At some point I stopped seeing Dr. Welsh, and in junior high I started seeing a female psychiatric social worker with a mop of gray curls whose eyes crinkled up when she smiled and whose lack of an advanced degree wasn’t lost on me. It was my first inkling that shrinks were just other people in disguise, that they didn’t belong to some special class of being. I liked this therapist, who was a warmhearted Jewish woman of a type that I associated with the Eastern European mothers of many of my classmates, but she was no match for my mother’s Germanic coldness or her unbreakable grip on me. In any event, I’m not sure how versed she was in the nether reaches of pathological enmeshment, which she would have had to be for us to get anywhere. Short of that, what I wanted was for her to be my mother, just as early on I longed for my male therapists to be my father. Substitute parenting, or “reparenting,” as it is referred to, may have been what was on offer in the therapeutic realm, but what I wanted in my overly concrete way was the real thing. I wanted, that is, to be adopted — actually adopted — just as I would later wish for one or the other of my therapists to leave his wife for me. (My model was Elaine May, who married her shrink.)

In my late teens I started seeing Dr. S., a white-haired but vigorous psychiatrist. Like most of the shrinks I would see, he was a deracinated Jew who kept regular hours on Yom Kippur, as if to prove a point. He was an old-school analyst in the American mode, meaning that he hewed to the Freudian party line but in a casual, “we’re all only human” sort of fashion, and his office was all the way over on Riverside Drive in the 80s. I remember the address well because in winter, when the wind howled along the side streets in the evenings and I had to make my way to the bus stop at the end of a session, I felt like a character out of “Dr. Zhivago.” Although he regularly doodled with his fountain pen on a prescription pad, Dr. S. never took notes, claiming that it was a matter of principle. He sometimes closed his eyes during the session, either to allow himself to relate more profoundly to what I was talking about or to take a quick nap, I was never sure which.

Dr. S.’s office, which was reachable by a spiral staircase from his apartment, was one of the most beautiful — most dignified — I would ever find myself in. It had an air of serene comprehension, of truths having been sought after, suffered through and finally arrived at right within its confines. Spacious and thickly carpeted, with whitewashed walls on which hung a series of sepia-toned prints, it was also filled with the anthropological artifacts — somber clay heads and stone figures lined his bookshelves — that many psychoanalysts feel obliged to possess in homage to Freud’s own cherished collection of tchotchkes. There was a mechanical clock that made a faint whirring sound as it flipped over from one minute to the next, making me acutely aware of stray silences and unspoken thoughts. What made Dr. S.’s setting unique, however, was the constant presence of his dog, a golden-colored beagle with soft, flappy ears. The dog would pad over to my chair when I came in and look at me with her moist sympathetic eyes, waiting to be scratched behind the ears. There were times I refused to oblige her, ignoring her mute appeal until she tucked her tail between her legs and slunk over to Dr. S.’s chair to be petted. I felt, or maybe I was only imagining, the doctor looking at me intently at those moments, taking in my unresponsiveness and making a mental note: patient inhibited and cold; resists giving affection to loving animals.

It was with Dr. S. that I began developing a style custom-made to the therapeutic encounter, especially as it played out with male shrinks. Suspicious as I was of men to begin with, based on my experience of a remote father and a passel of brothers who remained alien, sports-obsessed creatures to me, it was hard for me to believe in their interest, much less their kindly intentions. As a result, I would use up a lot of the hour making apposite, witty remarks in an effort to entertain Dr. S. (a shrink I saw a few years back found me so knee-slappingly funny that he asked whether I had ever considered becoming a stand-up comedian) and spent the rest of the time pelting him with accusatory questions as to the quality of his attention and his reasons for seeing me: Was he really listening to me? Or was he preoccupied with his dog, especially after she had been injured in a car accident on Riverside Drive? Did he ever think about me when I wasn’t sitting in front of him? Would he see me if I couldn’t afford to pay his fee? If he was only doing this for the money, I blithely continued, why hadn’t he gone into a more remunerative profession, like law or business? Why the veneer of caring?

Despite my repeated threats to leave, I continued to see Dr. S. through my college years. When I look back on it, it seems to me that he was a gem of a man, really, to put up with my hot-and-cold attitude toward the work he was trying to do with me, but it did me little good at the time. Part of the problem was that Dr. S. tended to speak in broad generalities, which I found anything but reassuring. When I would say, for instance, as I often did, that nobody cared about me (by which I mainly meant my parents), he would answer, “First you must care about yourself.” This struck me as a sleight-of-hand solution, one that only fueled my anxiety that this feeling of universal indifference was not a neurotic misperception but the truth, the horrible truth lying in wait for me to come upon it. Could it be that the essence of my treatment consisted of Dr. S.’s gently leading me up to the dismal reality that was my life — that I would be “cured” only when I faced up to my darkest fears and accepted them as legitimate rather than exaggerated?

The goal of successful psychoanalysis, I knew, especially when it came to more severe problems, was not just to modify neurotic suffering so it took on the aspect of “ordinary unhappiness,” but to effect a real difference in the patient’s way of functioning. My character, sadly enough, seemed the same as it had always been, given to angry outbursts that alienated the very people I wanted around me, followed by regretful nostalgia for that which might have been. I had succeeded in driving away my first serious boyfriend, a bearded medical student, with precisely such maneuverings. And all Dr. S. managed to come up with in response to my acute grief over Mark was the suggestion that I think of him as dead. When I came in with what I saw as a telltale dream, in which I walked up and down the hallway outside Mark’s apartment until he opened the door and saw me, thereby vindicating me in my wistful belief that he hadn’t forgotten me, that the force between us was so strong that it could lead him to intuit that I was outside his apartment door, Dr. S. insisted on pointing out the flaws in my reasoning: “You realize, don’t you, that in your dream Mark only opened the door because he heard a disturbance outside. You were walking back and forth and making noise, so naturally he wanted to find out what was going on. It had nothing to do with the feelings he once had for you. You were intruding. You could have been anyone.” I scornfully replied that the hallway was carpeted, that I wasn’t making any noise and that Mark just felt me out there. The doctor puffed sagely on his pipe and disagreed with me once again, taking the sort of gentle tone you would use with a hopelessly crazy person.

Dr. S. was fond of telling me that the past didn’t interest him “except in terms of the present,” which was all fine and well except for the fact that it left me marooned, by myself, in ghostly rooms. I felt more rather than less alone in therapy, stuck with myself and my self-destructive patterns, which I saw as direct products of the very past that Dr. S. didn’t care to explore. Weren’t analysts supposed to be expert guides through the minefields of the past? Wasn’t going back into the interior where early humiliations festered their proclaimed specialty? If not, then whence was this ever-elusive “character change” supposed to emerge? Even to this day, I’m not sure I know anyone whose character has been genuinely transformed because of therapy. If anything, most people seem to emerge as more backed-up versions of themselves.

HERE ARE SOME of the things that never happened in therapy: No one ever stopped me from doing something I was intent on doing, even if it was clear that the issue was a symbolically loaded one and worthy of further exploration before I took any action. No one ever offered to adopt me or to take me home for so much as a single night, like the British child psychiatrist D. W. Winnicott (my ideal shrink) did with one of his patients. No one ever suggested that I move away from home or stand up to my parents. No one ever offered to see me free of charge. No one ever took me on his or her lap, as the Hungarian analyst Sandor Ferenczi was known to have done, in keeping with his belief that the clinical interaction should be a reciprocally empathic, mutual encounter. (Ferenczi and Freud would eventually break over their different therapeutic stances but not before Ferenczi noted in his clinical diary that Freud shared with him the harsh sentiment “that neurotics are a rabble, good only to support us financially and to allow us to learn from their cases: psychoanalysis as a therapy may be worthless.”) And for all the emphasis on therapy's being a place of intimate disclosure — for all the times, in between shows of hostility, that I haltingly stated my feelings of great affection or even love for my therapists — none of them ever opened up about their feelings for me other than to convey a vague liking or appreciation for some facet of my personality.

Here are some of the things that did happen in therapy: My mother once came to a session with a hipster shrink and his trippy wife-partner whom I saw briefly in my late 20s, and their opening move was to ask about the state of her sex life. She inquired stonily what this had to do with helping me, while I squirmed in my seat, wondering what I was doing with this harebrained pair. Some years later, when I was in my mid-30s, my mother came to another session, this time with Dr. E., a young and pretty psychiatrist whose last name indicated a hefty Wasp lineage. During this session the three of us decided that I would marry the man I had been dating on and off for the past six years, despite the fact that I had broken off my engagement to him months earlier. We discussed the matter of my newly pending marriage as if it were simply the practical solution to a neurotic issue, having little to do with the man in question and much to do with my abiding inability to make a decision. I remember distinctly that Dr. E. and my mother agreed that I was a very loyal person and that the chances of my getting divorced, no matter how tentative I might have felt about going ahead, were minuscule. (As it turned out, I divorced less than five years later.) They also agreed that I should proceed rapidly so that I would have less time to mull things over: my nuptials were accordingly scheduled for three weeks from the day of our meeting. Needless to say, everything was hastily arranged, from the invitations to the flowers, and the whole affair had the quality of a shotgun wedding, albeit one whose urgency came not from an incipient baby but from the fear of my thoroughgoing ambivalence that was shared by my mother and my shrink. Dr. E. came to the ceremony, looking lovely and blond in a black velvet dress, but she left before the dinner, as if to draw a line between being a witness to the event and being a friend. She has gone on to achieve spectacular success in her career, and to this day I wonder whether she thinks of her intervention as courageous or a mistake of her youth.

Two of my therapists died on me, one quite suddenly only months after I started seeing her and the other after suffering a recurrence of leukemia during my treatment with him. A third committed suicide, jumping off the roof of the very building where I had gone to see him. He had struck me as slightly forlorn, verging on seedy, when I was his patient, like a character out of a W. Somerset Maugham novel. He moved his jaw a lot when he spoke, and I thought I heard his teeth click, suggesting ill-fitting dentures. After I arrived uncharacteristically early one day and overheard him asking out a woman on a date on his living-room phone, I decided I could not live with his desolation — my own was hard enough — and brought my visits to an end. He killed himself about a year later, and although I was not egotistical enough to imagine that his dire act had anything to do with me, I felt guilty all the same for having rejected him.

The analyst who died shortly after I began going to her was an energetic European of about 70. She went into the hospital for what was supposed to have been a routine matter and never came out. Even though this happened nearly three decades ago, to this day I think of Dr. Edrita Fried as the one who got away — the one who might have worked miracles, because she reminded me of a more benign version of my mother and thus would have been uniquely capable of understanding the kind of damage that had been done. She was also, up to that point, the only therapist I chose on my own, without benefit of one of the two consultants my parents turned to for referrals. I discovered Dr. Fried by chance, stumbling across a book she wrote on the shelf of a local bookstore. It was called “The Courage to Change,” and I read it cover to cover almost on the spot; when I finished, I called her blind and, much to my surprise, she readily agreed to see me. (I thought of analysts as existing in a closed circle to which you could gain access only by mentioning the name of a colleague they approved of.) Although I must have seen Dr. Fried for just a month or two before she was hospitalized, I had already formed a strong attachment to her when I received the call that she had died. I remember sitting on my bed in my dark little apartment on 79th Street, holding the phone in my hand even after the person on the other end hung up, feeling doomed.

Nothing, however, compared with the overwhelming loss I felt at the death of Dr. A., whom I saw in my mid-20s and whose re-emerged cancer I failed to pick up on even after he started to display the telltale signs of radiation treatment: skin that was reddened and raw and a toupee that covered up his thinning hair, which I didn’t at the time recognize as a toupee at all but thought of as a strange new Prince Valiant-like hairstyle he just happened to be trying out. I even questioned him about it. How did you suddenly sprout bangs? You’ve always parted your hair to the side before. That made me cringe later on when I realized the truth of the matter. Even if I could forgive myself for misreading the physical clues, I should have known something was up when he suddenly announced that he was taking an impromptu vacation during the following spring instead of waiting for the proverbial shrinkless August. (The majority of therapists take August off, as if it were a religious obligation, leaving their patients to stew in their own juices.) He was suspiciously specific about the details, almost as if he were trying to avoid any probing questions by giving me more information than I could ever need up front. First, he told me, he was planning to visit a sick uncle, then to join his family on one of their athletic trips — the kind that featured canoes and tents instead of hotel rooms — across some carefully selected part of the American wilderness. I should have smelled a rat right there and then, but the truth was that I had never been informed of Dr. A.’s illness in the first place and probably would have disavowed the evidence of its return even if I had, so focused was I on my life with him inside his office.

The trouble, you see, was that I loved Dr. A., even though this often took the form of my fighting with him. Because he happened to have a small red rug under his chair, for instance, I saw fit to tell him that red was my least favorite color. In the same vein, after I began to suspect that the carefully framed photos of mountains and forest scenes that hung on his walls were in fact taken by him — I could just imagine him trudging up a perilously narrow footpath with an up-to-the-minute camera slung across his chest — I made sure to tell him that I found them numbingly bland. In some way, I’m sure, I was trying to catch him out and prove him unworthy of my attachment, but for the most part these fights were just a ruse, a way of throwing him off the scent. Dr. A., whom I took to be in his 40s, was the only person in my life who paid close attention to my innermost being: I felt fully recognized by him, felt that he saw me as I was and that I could thus trust him with the bad as well as the good about myself. Who else besides a therapist, when it comes down to it, can you trust to accept all parts of you? Your parents, if you are lucky. So I loved Dr. A. and relied on him and fought with him, fought about the money I had to pay him, fought about the rug and the photos and his skin and his hairstyle until it was too late to straighten anything out.

Talk about a lack of proper “termination” of the psychoanalytic experience. That last week before he left, we scheduled an extra session. I felt worried about his departure, despite his elaborate explanations, and wondered out loud if Dr. A. was trying to punish me in some way for being so contentious a patient. “Of course not,” he said, laughing. “I like our fights, at least most of the time. No, I’m afraid you’re stuck with me.”

But, as it turned out, I hadn’t been stuck with him, nor he with me. A day before he was due back I received a phone call from a woman with a curt voice who introduced herself as a colleague of Dr. A.’s and told me that he wouldn’t be returning from his “vacation,” a vacation he had obviously invented to cover a hideous final absence. No niceties, no anything. When I asked what was wrong, the woman became evasive and suggested I come in to talk. I made an appointment for the next day, which wasn’t soon enough, so filled was I with panic. The woman proved to be a psychiatrist herself, with the diplomas and seating arrangement to prove it, and she seemed intent on keeping Dr. A.’s whereabouts a secret. It was the most hideous of possible scenarios, Kafkaesque really: I found myself sitting in a strange doctor’s office asking the same questions over and over again, as though persistence would yield up answers. No, he wasn’t dead, it seemed, but he was sick, too sick to plan on seeing me again even if he did get well. What about his other patients, I wanted to know. And what was wrong with him exactly? Nothing, it appeared, could be divulged. All I was entitled to know was that Dr. A. wouldn’t be coming back and that it was important I find myself another doctor. “I can give you names,” she told me. “There are other good people who can help you.” Names! I didn’t want names. “I want Dr. A. back,” I said and started to cry.

I went home and wrote Dr. A. a long letter as he lay dying, for that clearly was what he was doing. In it I expressed all the gratitude and love I had failed — not wanted — to tell him about while he was irritatingly alive. I wrote him: “I am so sad my tears could fill your swimming pool.” I was alluding to a longstanding joke between us, about his needing the money I so reluctantly paid him so he could regrout the bottom of his pool. I started paying attention to the death notices, and I came upon what I was dreading one morning in early May. It was a couple of lines, rather anonymous sounding if you weren’t familiar with the subject. Dr. A. was dead, his last bill to me still unpaid.

This past April, while I was trying to decide whether to stay with Dr. L. in spite of her failings, embark on a new treatment or take a break from therapy altogether, I went for yet another consultation. Dr. F. was famous for his tough way with patients and his theoretical contributions to the field. Although he was short and slight, there was a palpable aura of power around him, a sense that here was a man who was used to whipping patients into shape. He spent three sessions on an intake of my history, jotting down notes. I listened to my self-accounting with a tired and critical ear, wondering why I was still so out to sea, still so mired in conflict. I had decided to spare Dr. F. none of my myriad doubts, fears, fantasies and unfulfilled wishes, even though articulating some of them made me inwardly wince. I wondered whether I had become too used to seeing myself through a pathological prism, one that didn't leave room for small pleasures — for the fleeting nature of satisfaction. If there were many things I wished I had done, there were also things I was proud of, but there seemed to be no room for them here, in this cloistered space devoted to unearthing the clouds behind the silver linings. Happiness, as we all know, can't be pursued directly, but what was the gain in tracking down every nuance of unhappiness, meticulously uncovering origins that left a lot to be desired but that could never be changed, no matter how skillfully you tried to reconstruct them?

Dr. F. and I made a fourth appointment for him to give me his impressions as well as his suggestions on what I should do next. Knowing his reputation for being confrontational with his patients, I braced myself for the worst. Even so, I wasn't prepared for his ruthlessly pragmatic line of thinking, which had less to do with any inner torment I alluded to and more to do with the face I presented to the world, as if I were applying for a position as a flight attendant or a sales rep. He wondered, for instance, whether I had thought of losing weight. Dumbstruck, I momentarily lost my footing, and then I answered that I had. He nodded and then coldly observed, “But you lack the motivation.” No, I said, I didn't lack the motivation forever, I just lacked it for right now. Dr. F. looked entirely unconvinced and went on to ask me if I didn't long to be part of a couple, to have someone to visit art galleries with. I said I did but that it hadn't worked out thus far. “You are alone,” he repeated, as if I were in a state of denial. “I know,” I answered. “Many women are alone.” He then noted that I hadn't written as much as I might have, that I procrastinated and was often late in coming through with assignments. His tone was smug and self-congratulatory, as if he had adduced these aspects of my character on his own when in reality he was simply throwing back at me the bits of incriminating information that I had willingly offered up. I found myself growing ever more defensive, ready to rise up and fight for the rights of unsvelte, unattached and underachieving women everywhere. Who was he to cast me in his patriarchal, bourgeois mold? Sure, I could lose some weight, but how had this come to be the main diagnostic issue? And I wasn't completely alone: I had a daughter, I had friends, I had had my share of passion, ex-boyfriends and an ex-husband; there were more things in heaven and earth than were dreamed of in Dr. F.'s philosophy.

Dr. F. concluded with the recommendation that I see him or someone like him, who was trained in his methodology, which involved focusing on the transference between patient and analyst. I thanked him for his time and, a bit dazed from the encounter, went off to get a cup of coffee and think things over. At one point in my life I would have been thrilled to be offered the chance to see Dr. F. in all his brutal confidence, hoping that he could rearrange the shape of my character where no one had succeeded before. Now, however, in my 50s, I only felt persuaded that the last thing I wanted was to put myself into Dr. F.'s hands. I realized that I had been carrying a “Wizard of Oz”-like fantasy with me all these years, hoping to find someone who would not turn out to be just another little man behind a velvet curtain. It was not that I found all my shrinks to be impostors, exactly, but it dawned on me that I no longer had the requisite belief in the process — perhaps had never had it in sufficient quantity. After 40-odd years of trying to find my perfect therapist, I didn't want to explore my transferential relationship with Dr. F. or anyone. I didn't want to pay high fees for 45 minutes of conversation with someone sitting opposite me whom I knew little about but who knew shameful facts about me. I didn't want another one-way attachment, which would come to an end when I stopped paying for it. My skeptical 20-year-old daughter once referred to therapy as “emotional prostitution,” and although I thought the term a bit reductive, there was a piece of unpleasant truth to it.

I WENT BACK a week later to Dr. L., the woman I had been seeing for the past year and a half, and told her I wanted to stop therapy — for a while or for good, I wasn’t sure. To her credit, she didn’t try to persuade me that I was making a terrible mistake or suggest that we needed to discuss my wish to leave for the next 20 sessions. She simply let me go, with warm assurances that I could return whenever I felt the urge to. I left her office feeling liberated and scared at the same time. I started walking down the block, placing my feet deliberately one after the other, as if to confirm the reality of my un-propped-up existence. The world, it was good to see, was still standing, even as I detached myself from the ur-figure of the therapist.

All those years, I thought, all that money, all that unrequited love. Where had the experience taken me and was it worth the long, expensive ride? I couldn’t help wondering whether it kept me too cocooned in the past to the detriment of the present, too fixated on an unhappy childhood to make use of the opportunities of adulthood. Still, I recognized that therapy served me well in some ways, providing me with a habit of mind that enabled me to look at myself with a third eye and take some distance on my own repetitive patterns and compulsions. In the offices of countless therapists — some gifted, some less so — I sharpened my perceptions about myself and came to a deeper understanding of the persistent claim of early, unmet desires in all of us.

Therapy, you might say, became a kind of release valve for my life; it gave me a place to say the things I could say nowhere else, express the feelings that would be laughed at or frowned upon in the outside world — and in so doing helped to alleviate the insistent pressure of my darker thoughts. It buffered me as well as prodded me forward; above all, it provided a space for interior examination, an education in disillusioned realism that existed nowhere else on this cacophonous, frantic planet. If, after many years of an almost-addictive attachment, I decided it was time to come up for air, I also knew it was in the nature of addicts never to be cured, but always to be in recovery. Good as it felt to strike out on my own, I was sure that one day in the not too distant future I would be making my way to a new therapist's office, ready to pick up the story where I left off.

Daphne Merkin is a contributing writer. She is working on a book based on an article she wrote for the magazine about her struggle with chronic depression.

__________

Full article and photos: http://www.nytimes.com/2010/08/08/magazine/08Psychoanalysis-t.html

A Journey Through Darkness

IT IS A SPARKLING DAY IN MID-JUNE, the sun out in full force, the sky a limpid blue. I am lying on my back on the grass, listening to the intermittent chirping of nearby birds; my eyes are closed, the better to savor the warmth on my face. As I soak up the rays I think about summers past, the squawking of seagulls on the beach and walking along the water with my daughter, picking out enticing seashells, arguing over their various merits. My mind floats away into a space where chronology doesn’t count: I am back on the beach of my adolescence, lost in a book, or talking to my old college chum Bethanie as we brave the bay water in front of her parents’ house in Connecticut, where she comes to visit every summer.

In the 20 or so minutes of “fresh air” allotted after lunch (one of four such breaks on the daily schedule), I try to forget where I am, imagining myself elsewhere than in this fenced-off concrete garden bordered by the West Side Highway on one side and Riverside Drive on the other, planted with patches of green and a few lonely flowers, my movements watched over by a more or less friendly psychiatric aide. Soggy as my brain is from being wrenched off a slew of antidepressants and anti-anxiety medications in the last 10 days, I reach for a Coleridgian suspension of disbelief, ignoring the roar of traffic and summoning up the sound of breaking waves.

I have only to open my eyes for the surreal scene to come back into my immediate line of vision, like a picnic area without picnickers: two barbecue grills, bags of mulch that seem never to be opened, empty planters, clusters of tables and chairs, the entire area cordoned off behind a high mesh fence. Looking out onto the highway overpass there is a green-and-white sign indicating “Exit — West 178th Street”; nearer to the entrance another sign explains: “The Patients’ Park & Garden is for the use of patients and their families only, and for staff escorting patients. It is NOT for staff use.”

I can see R., the most recent addition to our dysfunctional gang of 12 on 4 Center, sitting on a bench in his unseasonal cashmere polo, smoking a cigarette and tapping his foot with equal intensity. On either side of him are ragtag groups of people culled from several units of the hospital, including the one I am on, which is devoted primarily to the treatment of patients with depression or eating disorders. (The anorexic girls, whom R. refers to as “the storks,” are in various phases of imperceptible recovery and tend to stick together.) The garden is also home to patients from 4 South, which caters to patients from within the surrounding Washington Heights community, and 5 South, which treats patients with psychotic and substance-abuse disorders.

The people on 4 Center, hidden away as it is in a small building, have next to no contact with the other units; we might as well be on different planets. Then again, as those who suffer from it know, intractable depression creates a planet all its own, largely impermeable to influence from others except as shadow presences, urging you to come out and rejoin the world, take in a movie, go out for a bite, cheer up. By the time I admitted myself to the hospital last June after a downhill period of six months, I felt isolated in my own pitch-darkness, even when I was in a room full of conversation and light.

DEPRESSION — THE THICK BLACK paste of it, the muck of bleakness — was nothing new to me. I had done battle with it in some way or other since childhood. It is an affliction that often starts young and goes unheeded — younger than would seem possible, as if in exiting the womb I was enveloped in a gray and itchy wool blanket instead of a soft, pastel-colored bunting. Perhaps I am overstating the case; I don’t think I actually began as a melancholy baby, if I am to go by photos of me, in which I seem impish, with sparkly eyes and a full smile. All the same, who knows but that I was already adopting the mask of all-rightness that every depressed person learns to wear in order to navigate the world?

I do know that by the age of 5 or 6, in my corduroy overalls, racing around in Keds, I had begun to be apprehensive about what lay in wait for me. I felt that events had not conspired in my favor, for many reasons, including the fact that in my family there were too many children and too little attention to go around. What attention there was came mostly from an abusive nanny who scared me into total compliance and a mercurial mother whose interest was often unkindly. By age 8 I was wholly unwilling to attend school, out of some combination of fear and separation anxiety. (It seems to me now, many years later, that I was expressing early on a chronic depressive’s wish to stay home, on the inside, instead of taking on the outside, loomingly hostile world in the form of classmates and teachers.) By 10 I had been hospitalized because I cried all the time, although I don’t know if the word “depression” was ever actually used.

As an adult, I wondered incessantly: What would it be like to be someone with a brighter take on things? Someone possessed of the necessary illusions without which life is unbearable? Someone who could get up in the morning without being held captive by morose thoughts doing their wild and wily gymnastics of despair as she measures out tablespoons of coffee from their snappy little aluminum bag: You shouldn’t. You should have. Why are you? Why aren’t you? There’s no hope, it’s too late, it has always been too late. Give up, go back to bed, there’s no hope. There’s so much to do. There’s not enough to do. There is no hope.

Surely this is the worst part of being at the mercy of your own mind, especially when that mind lists toward the despondent at the first sign of gray: the fact that there is no way out of the reality of being you, a person who is forever noticing the grime on the bricks, the flaws in the friends — the sadness that runs under the skin of things, like blood, beginning as a trickle and ending up as a hemorrhage, staining everything. It is a sadness that no one seems to want to talk about in public, at cocktail-party sorts of places, not even in this Age of Indiscretion. Nor is the private realm particularly conducive to airing this kind of implacably despondent feeling, no matter how willing your friends are to listen. Depression, truth be told, is both boring and threatening as a subject of conversation. In the end there is no one to intervene on your behalf when you disappear again into what feels like a psychological dungeon — a place that has a familiar musky smell, a familiar lack of light and excess of enclosure — except the people you’ve paid large sums of money to talk to over the years. I have sat in shrinks’ offices going on four decades now and talked about my wish to die the way other people might talk about their wish to find a lover.

Then there is this: In some way, the quiet terror of severe depression never entirely passes once you’ve experienced it. It hovers behind the scenes, placated temporarily by medication and renewed energy, waiting to slither back in, unnoticed by others. It sits in the space behind your eyes, making its presence felt even in those moments when other, lighter matters are at the forefront of your mind. It tugs at you, keeping you from ever being fully at ease. Worst of all, it honors no season and respects no calendar; it arrives precisely when it feels like it.

MY MOST RECENT BOUT, the one that landed me on 4 Center, an under-the-radar research unit at the New York State Psychiatric Institute, asserted itself on New Year’s Eve, the last day of 2007. The precipitating factors included everything and nothing, as is just about always the case — some combination of vulnerable genetics and several less-than-optimal pieces of fate.

Despite my grim mood, I had somehow or other managed to put on makeup, pull on clothes, affix pearl earrings and go to a civilized old-New York type of dinner, where we talked of ongoing things — children, schools, plays to see, reasons to live as opposed to reasons to die. But even as I talked and laughed with the other guests, my thoughts were dark, scrambling ones, ruthless in their sniping insistence. You’re a failure. A burden. Useless. Worse than useless: worthless. Shortly past midnight, I watched the fireworks over Central Park and stared into the exploding bursts of color — red, white and blue, squiggles of green, streaks of purple, balls of silver, sparks of champagne. My 17-year-old daughter, Zoë, was standing nearby, and as I looked into the fireworks I sent entreaties into the sky. Make me better. Make me remember this moment of absorption in fireworks, the energy of the thing. Make me go forward. Stop listening for drum rolls. Pay attention to the ordinary calls to engage, messages on your answering machine telling you to buck up, it’s not so bad, from the ex, siblings, people who care.

For the next six months I countered the depression with everything I had, escaping into the narcotic of reading, taking on a few writing assignments (all of which I delivered weeks, if not months, late), meeting friends for dinner, teaching a writing class and even taking a trip to St. Tropez with a close friend. I gobbled down my usual medley of pills — Lamictal, Risperdal, Wellbutrin and Lexapro — and wore an Emsam patch. (I have not been free of psychotropic medication for any substantial period since my early 20s.) But this was not a passing episode that a schedule full of distractions and medication could assuage. Although many depressions resolve themselves within a year, with or without treatment, sometimes they take hold and won’t let go, becoming incrementally worse with each passing day, until suicide seems like the only exit. This was one of those depressions.

In the weeks leading up to my checking into 4 Center, I had gone from being able to put on a faltering imitation of mental health to giving up all pretense of a manageable disguise. Since I found it painful to be conscious, I had stopped doing much of anything except sleeping. Mornings were the worst: I got up later and later, first 11, then noon, and now it was more like 2 in the afternoon, the day three-quarters gone. “I wake and feel the fell of dark, not day,” observed the poet Gerard Manley Hopkins, a depressive 19th-century Jesuit priest. I don’t think I’ve ever met a depressed person who wanted to get out of bed in the morning — who didn’t experience the appearance of day as a call to burrow further under the covers, the better to embrace the vanished night.

When I was awake (the few hours that I was), I felt a kind of lethal fatigue, as if I were swimming through tar. Phone messages went unanswered, e-mail unread. In my inert but agitated state I could no longer concentrate long enough to read — not so much as a newspaper headline — and the idea of writing was as foreign to me as downhill racing. (James Baldwin: “No one works better out of anguish at all; that’s an incredible literary conceit.”) I barely ate — there is no more effective diet than clinical depression — and had dropped 30 pounds. I had essentially withdrawn from communication. When I did speak, it was mostly about my wish to commit suicide, a wish that was never all that far from my mind but at times like these became insistent.

Although some tiny part of me retained a dim sense of the more functioning person I once was — like a room with a closed door that was never entered anymore — it became increasingly difficult to envision myself ever inhabiting that version of myself again. There had been too many recurrent episodes, too many years of trying to fight off this debilitating demon of a thing. It has been called by different names at different times in history — melancholia, malaise, cafard, brown study, the blues, the black dog, acedia — and has been treated as a spiritual malady, a failure of will, a biochemical malfunctioning, a psychic conundrum, sometimes all at once. Whatever it was, it had come to define me, filling out all the available space, leaving no possibility of a “before” or an “after.” Instead I harbored the hallucinatory conviction that I had stayed around the scene of my own life too long — that I was, in some unyielding sense, ex post facto.

I had also quite literally ground to a halt, like a machine that had hit a glitch and frozen on the spot. I moved at a glacial pace and talked haltingly, in a voice that was lower and flatter than my usual one. As I discovered from my therapist and psychopharmacologist — both of whom argued that I belonged in a hospital now that my depression had taken on “a life of its own,” beyond the exertions of my will — there was a clinical name for my state: “psychomotor retardation.” My biology, that is, had caught up and joined hands with the immediate psychodynamic stressors that precipitated my nosedive — the lingering aftermath of the death two years earlier of my mother, with whom I had a complicated relationship; the imminent separation from my college-age daughter, who was my boon companion; therapy that took a wrong turn; a romance that went awry. (Much as we would like to explain clinical depression by making it either genetics or environment, bad wiring or bad nurturing, it is usually a combination of the two that sets the illness off.)

And yet I resisted my doctors’ suggestion that I check myself into a hospital. It seemed safer to stay where I was, no matter how out on a ledge I felt, than to lock myself away with other desperadoes in the hope that it would prove effective. Whatever fantasies I once harbored about the haven-like possibilities of a psychiatric facility or the promise of a definitive, once-and-for-all cure were shattered by my last stay 15 years earlier. I had written about the experience, musing on the gap between the alternately idealized and diabolical image of mental hospitals versus the more banal bureaucratic reality. I discussed the continued stigma attached to going public with the experience of depression, but all this had been expressed by the writer in me rather than the patient, and it seemed to me that part of the appeal of the article was the impression it gave that my hospital days were behind me. It would be a betrayal of my literary persona, if nothing else, to go back into a psychiatric unit.

What’s more, after a lifetime of talk therapy and medication that never seemed to do more than patch over the holes in my self, I wasn’t sure that I still believed in the concept of professional intervention. Indeed, I probably knew more about antidepressants than most analysts, having tried all three categories of psychotropics separately or in combination as they became available — the classic tricyclics, the now-unfashionable MAO inhibitors (which come with a major drawback in the form of dietary restrictions), as well as the newer S.S.R.I.’s and S.N.R.I.’s. I was originally reluctant to try pills for something that seemed so intrinsic to who I was — the state of mind in which I lived, so to speak — until one of my first psychiatrists compared my emotional state to an ulcer. “You can’t speak to an ulcer,” he said. “You can’t reason with it. First you cure the ulcer, then you go on to talk about the way you feel.” My current regimen of pills incorporated the latest approach, which called for the augmentation of a classic antidepressant (Effexor) with a small dose of a second-generation antipsychotic (Risperdal). From the time I was prescribed Prozac in my early 20s, before it was approved by the Food and Drug Administration, you could say that the history of depression medication and my personal history came of age together, with me in the starring role of a lab rat.

Of course, none of the drugs work conclusively, and for now we are stuck with what comes down to a refined form of guesswork — 30-odd pills that operate in not completely understood ways on neural pathways, on serotonin, norepinephrine, dopamine and what have you. No one, not even the psychopharmacologists who dispense them after considering the odds, totally comprehends why they work when they work or why they don’t when they don’t. All the while the repercussions and the possible side effects (which range from mild trembling at one end to tardive dyskinesia, a rare condition that causes uncontrollable grimacing, at the other) are shunted to the side until such time as they can no longer be ignored.

THE ONE THING PSYCHIATRIC hospitals are supposed to be good for is to keep you safe. But I was conflicted even about so primary an issue as survival. I wasn’t sure I wanted to ambush my own downward spiral, where the light at the end of the tunnel, as the mood-disordered Robert Lowell once said, was just the light of the oncoming train. I saw myself go splat on the pavement with a kind of equanimity, with a sense of a foretold conclusion. Self-inflicted death had always held out a stark allure for me: I was fascinated by people who had the temerity to bring down the curtain on their own suffering — who didn’t hang around, moping, in hopes of a brighter day. I knew all the arguments about the cowardice and selfishness (not to mention anger) involved in committing suicide, but nothing could persuade me that the act didn’t require a perverse sort of courage, some steely embrace of self-extinction. At one and the same time, I have also always believed that suicide victims don’t realize they won’t be coming this way again. If you are depressed enough, it seems to me, you begin to conceive of death as a cradle, rocking you gently back to a fresh life, glistening with newness, unsullied by you.

Still, one flesh-and-blood reality stood in my way: I had a daughter I loved deeply, and I understood the irreparable harm it would cause her if I took my own life, despite feeling that if I truly cared about her I would free her from the presence of a mother who was more shade than sun. (What had Sylvia Plath and Anne Sexton done with their guilt feelings? I wondered. Were they more narcissistic than I or just more strong-willed?) It was because of my daughter, after all, that I had given voice to my “suicidal ideation,” as it’s called, in the first place, worrying how she would get along without me. At the same time, I recognized that, for a person who was really set on ending it all, speaking your intention aloud was an act of self-betrayal. After all, in the process of articulating your death wish you were alerting other people, ensuring that they would try to stop you.

The real question was why no one ever seemed to figure this grim scenario out on her own, just by looking at you. This was enraging in and of itself — the fact that severe depression, much as it might be treated as an illness, didn’t send out clear signals for others to pick up on; it did its deadly dismantling work under cover of normalcy. The psychological pain was agonizing, but there was no way of proving it, no bleeding wounds to point to. How much simpler it would be all around if you could put your mind in a cast, like a broken ankle, and elicit murmurings of sympathy from other people instead of skepticism (“You can’t really be feeling as bad as all that”) and in some cases outright hostility (“Maybe if you stopped thinking about yourself so much . . . ”).

One more factor worked to keep me where I was, exiled in my own apartment, a prisoner of my affliction: the specter of ECT (electro-convulsive therapy). My therapist, a modern Freudian analyst whom I had been seeing for years and who had always struck me as only vaguely persuaded of the efficacy of medication for what ailed me — when I once experienced some bad side effects, he proposed that I consider going off all my pills just to see how I would fare, and after doing so I plummeted — had suddenly, in the last 10 days before I went into the hospital, become a cheerleader for undergoing ECT. I don’t know why he grabbed on to this idea, why the sudden flip from chatting to zapping, other than for the fact that I had once wildly thrown it out — for the drowning, any life raft will do. Then, too, ECT, which causes the brain to go into seizure, was back in fashion for treatment-resistant depression after going off the radar in the ’60s and ’70s in the wake of “One Flew Over the Cuckoo’s Nest.” Perhaps I had frightened him with my insistent talk of wanting to cut out for good; perhaps he didn’t want to be held responsible for the death of a patient who compulsively wrote about herself and would undoubtedly leave evidence that would tie him to her. But his shift from a psychoanalytic stance that focused on the subjective mind to a neurobiological stance that focused on the hypothesized workings of the physical brain left me scared and distrustful.

What if ECT would just leave me a stranger to myself, with chopped-up memories of my life before and immediately after? I may have hated my life, but I valued my memories — even the unhappy ones, paradoxical as that may seem. I lived for the details, and the writer I once was made vivid use of them. The cartoonish image of my head being fried, tiny shocks and whiffs of smoke coming off it as the electric current went through, haunted me even though I knew that ECT no longer was administered with convulsive force, jolting patients in their straps.

But in the end, no matter how much I wanted to stay put, I ran out of resistance. I spent the weekend before going into the hospital in my oldest sister’s apartment, lost in the Gothic kingdom of depression: I was unable to move from the bed, trapped in interior debates about jumping off a roof versus throwing myself in front of a car. Yet somewhere in the background were other voices — my sister’s, my doctors’ — arguing on behalf of my sticking around; I could half-hear them. I wanted to die, but at the same time I didn’t want to, not completely. Suicide could wait, my sister said. Why didn’t I give the hospital a chance? She relayed messages from each of my doctors that they would look out for me on the unit. No one would force me to do anything, including ECT. I felt too tired to argue.

THAT MONDAY MORNING, I returned home and packed up two small bags. I threw in a disproportionate number of books (especially given the fact that I couldn’t read), a couple of pairs of linen pants and cotton T-shirts, my favorite night cream (although I hadn’t touched it in weeks) and a photo of my daughter, the last with the thought of anchoring myself. In return for agreeing to undergo one of several available protocols — either switching my medication or availing myself of ECT — I would get to stay at 4 Center as long as I needed at no cost. My sister picked me up in a cab, and as I recall, I cried the whole ride up there, watching the passing view with an elegiac sense of leave-taking.

As soon as my sister gave my name to the nurse whose head appeared in the window of the locked door to the unit and we were both let in, I knew immediately that this wasn’t where I wanted to be. Everything seemed empty and silent under the fluorescent lighting except for one 40-ish man pacing up and down the hallway in a T-shirt and sweat pants, seemingly oblivious to what was going on around him. Underneath the kind of baldfaced clock you see in train stations were two run-down pay phones; there was something sad about the glaring outdatedness of them, especially since I associated them almost exclusively with hospitals and certain barren corners of Third Avenue. And then, in what seemed like an instant, my sister was saying goodbye, promising that all would turn out for the better, and I was left to fend for myself.

My bags were taken behind the glassed-in nurses’ station and checked for potential weapons of self-destruction, referred to as “sharps” — razors, scissors, mirrors — which were taken away until your departure. Cellphones were also forbidden for reasons that seemed unclear even to the staff but had something to do with their photo-taking ability. In my intake interview, I alternated between breaking down in tears and repeating that I wanted to go home, like a woeful 7-year-old left behind at sleep-away camp. The admitting nurse, who was pleasant enough in a down-to-earth way, was hardly swept away by gusts of empathy with my bereft state. And yet I wanted to stay in the room and keep talking to her forever, if only to avoid going back out onto the unit, with its pathetically slim collection of out-of-date magazines, ugly groupings of wooden furniture cushioned with teal and plum vinyl and airless TV rooms — one overrun, the other desolate. Anything to avoid being me, feeling numb and desperate, thrust into a place that felt like the worst combination of exposure and anonymity.

I emerged in time for dinner, which was served at the premature hour of 5:30, as if the night ahead were so chockablock with activities that we had to get this necessary ritual out of the way. Since in reality dinner led to nothing more strenuous than another bout of “fresh air” and lots of free time until the lights went out at 11, I would have thought that it would be a good occasion to dally. But as it turned out, the other patients were finished eating within 10 or 15 minutes, and I found myself alone at the table, not yet having realized that the point was to get in and out as quickly as possible.

It didn’t help that the room we ate in was beyond dismal, featuring an out-of-tune piano and a Ping-Pong table that was never used. Or that, despite its being summer, there was barely any fresh fruit in sight except for autumnal apples and the occasional banana. There would be culinary bright moments — cream puffs were served on Father’s Day, and one Tuesday the staff set up a barbecue lunch in the patients’ park, where I munched on hot dogs and joined in a charadeslike game called Guesstures — but the general standard was determinedly low. After a while, I began requesting bottles of Ensure Plus, the liquid nutrition supplement that came in chocolate and vanilla and was a staple of the anorexics’ meal plans; if you closed your eyes it could pass for a milkshake.

It wasn’t only the anorexics’ Ensure that I coveted. From the very first night, when sounds of conversation and laughter floated over from their group to the gloomy, near-silent table of depressives I had joined, I yearned to be one of them. Unlike our group, they were required to remain at lunch and dinner for a full half-hour, which of necessity created a more congenial atmosphere. No matter that one or two had been brought on to the floor on stretchers, as I was later informed, or that they were victims of a cruel, hard-to-treat disease with sometimes fatal implications; they still struck me as enviable. However heartbreakingly scrawny, they were all young (in their mid-20s or early 30s) and expectant; they talked about boyfriends and concerned parents, worked tirelessly on their “journaling” or on art projects when they weren’t participating in activities designed exclusively for them, including “self-esteem” and “body image.” They were clearly and poignantly victims of a culture that said you were too fat if you weren’t too thin and had taken this message to heart. No one could blame them for their condition or view it as a moral failure, which was what I suspected even the nurses of doing about us depressed patients. In the eyes of the world, they were suffering from a disease, and we were suffering from being intractably and disconsolately — and some might say self-indulgently — ourselves.

I SHARED A SMALL ROOM right across from the nurses’ station with a pretty, middle-aged woman who introduced herself before dinner — the only one to do so — with a remarkable amount of good cheer, as if we were meeting at a cocktail party. For a minute I felt that things couldn’t be so terrible, that the unit couldn’t be as abject a destination as I conceived it to be if this woman had deigned to throw her lot in with the rest of us. She wore “Frownies” — little patented patches that were supposed to minimize wrinkles — to bed, which only furthered the impression she conveyed of an ordinary adjustment to what I saw as extraordinary circumstances. Clearly, she had a future in mind, even if I didn’t — one that required her to retain a fetching youthfulness. I hadn’t so much as washed my face for the past few months, but here was someone who understood the importance of keeping up appearances, even on a psychiatric unit.

The room itself, on the other hand, couldn’t have been less welcoming. Like the rest of the unit, it was lighted by overhead fluorescent bulbs that didn’t so much illuminate as bring things glaringly into view. There were two beds, two night tables and two chests of drawers. In keeping with the Noah’s-ark design ethos, the room was also furnished with a pair of enormous plastic trash cans; one stood near the door, casting a bleak plastic pall over things, and the other took up too much space in the tiny shared bathroom. The shower water came out of a flat fixture on the wall — the presence of a conventional shower head, I soon learned, was seen as a potential inducement to hanging yourself — and the weak flow was tepid at best.

I got into bed that first night, under the ratty white blanket, and tried to calm myself. The lack of a reading lamp added to my panic; even if my depression prevented me from losing myself in a book, the absence of a light source by which to read after dark represented the end of civilization as I had known it. (It turned out that you could bring in a battery-powered reading lamp of your own, albeit with the Kafkaesque restriction that it didn’t make use of glass light bulbs.) My mind went round and round the same barrage of questions, like a persistent police inspector. How did I get here? How did I allow myself to get here? Why didn’t I have the resolve to stay out? Why hadn’t anything changed with the passage of years? It was one thing to be depressed in your 20s or 30s, when the aspect of youth gave it an undeniable poignancy, a certain tattered charm; it was another thing entirely to be depressed in middle age, when you were supposed to have come to terms with life’s failings, as well as your own. Now that my mother was gone — I always thought she’d outlive me, but her lung cancer took precedence over my suicidal impulses — there was no one to blame for my depressions, no one to whom I could turn for some magical, longed-for compensation. But the truly intolerable part was that I had acquiesced in this godforsaken plan; there was ultimately no one to blame for my banishment to this remote-seeming outpost but myself.

I plumped the barracks-thin pillow, pulled up the sheet and blanket around me — the entire hospital was air-conditioned to a fine chill — and curled up, inviting sleep. There was nothing to feel so desperate about, I tried soothing myself. You’re not a prisoner. You can ask to leave tomorrow. I listened to my roommate’s calm, even breathing and wished I were her, wished I were anyone but myself. Mostly, I wished I were a person who wasn’t consumingly depressed. All over the city, less depressed or entirely undepressed people were leading their ordinary lives, watching TV or blogging or having a late dinner. Why wasn’t I among them? After staring into the darkness for what seemed like hours, I finally got up and put on my robe, having decided that I’d overcome my sense of being a specimen on display — here comes Mental Patient No. 12 — and approach the nurses’ station about getting more sleeping meds.

Outside the room the light was blinding. Two of the aides were at the desk, playing some sort of word game on the computer screen. They looked up at me impassively and waited for me to state my case. I explained that I couldn’t sleep, my voice sounding furry with anxiety. My hands were clammy and my mouth was dry. One of them got up and went into the back to check whether the resident in psychiatry who was assigned to me had approved the request. She handed me a pill in a little cup, and I mumbled something about how nervous I was feeling. “You’ll feel better after you get some sleep,” she said. I nodded and said, “Good night,” feeling dismissed. “Night,” she said, casual as could be. I was no one to her, no one to myself.

I SUPPOSE IT WOULD MAKE for some kind of symmetry — a glimpse of an upward trajectory, at least — if I said that the first night was the hardest, but the truth is that it never got any easier. My frantic sense of dislocation and abandonment persisted for the entire three weeks I spent on 4 Center, yielding only at rare moments to a slightly less anxious state of hibernation. I would eventually discover several friendlier nurses or nurses’ aides with whom it was possible to talk about the bizarre reality of being on a psychiatric unit with a locked door and fiercely regulated visiting hours (5:30 to 8 on weekday evenings and 2 to 8 on weekends) without feeling like an official mental patient. By the end of the second week, when I was no longer chained to the unit, one of the male nurses would invite me for coffee breaks to the little eatery on the sixth floor where the hospital staff repaired for their meals.

These outings were always kept short — we never lingered for more than 15 minutes — and they always brought home to me how artificial the dividing line between 4 Center and the outside world really was. It could cause vertigo if you weren’t careful. One minute you were in the shuttered-down universe of the verifiably unwell, of people who talked about their precarious inner states as if that were all that mattered, and the next you were admitted back into ordinary reality, where people were free to roam as they pleased and seemed filled with a sense of larger purpose. As I cradled my coffee, I looked on at the medical students who flitted in and out, holding their clipboards and notebooks, with a feeling verging on awe. How had they figured out a way to live without getting bogged down in the shadows? From what source did they draw all their energy? I couldn’t imagine ever joining this world again, given how my time had become so aimlessly filled, waiting for calls to come in on the pay phone or sitting in “community meetings,” in which people made forlorn and implausible requests for light-dimmers and hole-punchers and exiting patients tearfully thanked everyone on the unit for their help.

It wasn’t as if there weren’t attempts made to organize the days as they went sluggishly by. A weekly schedule was posted that gave the impression that we patients were quite the busy bees, what with therapy sessions, yoga, walks and creative-writing groups. Friday mornings featured my favorite group, “Coffee Klatch.” This was run by the same amiable gym-coach-like woman who oversaw exercise, and it was devoted to board games of the Trivial Pursuit variety. The real draw was the promise of baked goods and freshly brewed coffee.

But in truth there was more uncharted time than not, especially for the depressives — great swaths of white space that wrapped themselves around the day, creating an undertow of lassitude. Forging friendships on the unit, which would have passed the time, was touch-and-go because patients came and went and the only real link was one of duress. The other restriction came with the territory: people were either comfortably settled into being on the unit, which was off-putting in one kind of way, or raring to get out, which was off-putting in another. I had become attached to my roommate, who was funny and somehow seemed above the fray, and I felt inordinately sad when she left, in possession of a new diagnosis and new medication, halfway into my stay.

Still, the consuming issue as far as I was concerned — the question that colored my entire stay — was whether I would undergo ECT. It was on my mind from the very beginning, if only because the first patient I encountered when I entered the unit, pacing up and down the halls, was in the midst of getting a series of ECT treatments and insisted loudly to anyone who would listen that they were destroying his brain. And indeed, the patients I saw returning from ECT acted dazed, as if an essential piece of themselves had been misplaced.

During the first week or so the subject lay mostly in abeyance as I was weaned off the medications I came in on and tried to acclimatize to life on 4 Center. I met daily with Dr. R., the young resident I saw the first evening, mostly to discuss why I shouldn’t leave right away and what other avenues might be explored medicationwise. She sported a diamond engagement ring and a diamond wedding band that my eye always went to first thing; I took them as painful reminders that not everyone was as full of holes as I was, that she had made sparkling choices and might indeed turn out to be one of those put-together young women who had it all — the career, the husband, the children. During our half-hour sessions I tried to borrow from Dr. R.’s outlook, to see myself through her charitable eyes. I reminded myself that people found me interesting even if I had ceased to interest myself, and that the way I felt wasn’t all my fault. But the reprieve was always short-lived, and within an hour of her departure I was back to staving off despair, doing battle with the usual furies.

One day early into my second week, I was called out of a therapy session to meet with a psychiatrist from the ECT unit. I still wonder whether this brief encounter was the defining one, scaring me off forever. She might as well have been a prison warden for all her interpersonal skills; we had barely said two words before she announced I was showing clear signs of being in a “neurovegetative” condition. She pointed out that I spoke slurringly and that my mind seemed to be crawling along as well, adding grimly that I would never be able to write again if I remained in this state. Her scrutiny seemed merciless: I felt attacked, as if there were nothing left of me but my illness. Obviously ECT was in order, she briskly concluded. I nodded, afraid to say much lest I sound imbecilic, but in my head the alarms were going off. No, it wasn’t, I thought. Not yet. I’m not quite the pushover you take me to be. It was the first stirring of positive will on my own behalf, a delicate green bud that could easily be crushed, but I felt its force.

The strongest and most benign advocate for ECT was a psychiatrist at the institute who saw me three decades earlier and was instrumental in convincing me to come into 4 Center. In his formal but well-meaning way he pointed out that I lived with a level of depression that was unnecessary to live with and that my best shot for real relief was ECT. He came in to make his case once again as I was sitting at dinner on a Friday evening, pretending to nibble at a rubbery piece of chicken. The other patients had gone and my sister was visiting. I turned to her as he waxed almost passionate on my account, going on about the horror of my kind of treatment-resistant depression and the glorious benefits of ECT that would surely outweigh any downside. I didn’t trust him, much as I wished to. Help me, I implored my sister without saying a word. I don’t want this. Tears trickled down my cheeks as if I were a mute, wordless but still able to feel anguish. My sister spoke for me as if she were an interpreter of silence. It looked like I didn’t want it, she said to the doctor, and my wishes had to be respected.

I COULD SEE MYSELF LINGERING on in the hospital, not because I had grown any more fond of the atmosphere but because after a certain amount of time it became easier to stay than to leave. The picayune details of my life — bills, appointments, deadlines — had been suspended during my last few months at home, then left outside the hospital confines altogether, and it began to seem inconceivable that I’d ever have the wherewithal to take them on again. Instead of growing stronger on the unit, I felt a kind of further weakening of my psychological muscle. The new medication I was on left me exhausted, and I took to going back to sleep after breakfast. I was tired even of being visited, of sitting in the hideous little lounge and making conversation, of expressing gratitude for the chocolates, smoked salmon and change for the pay phones that people brought. I felt as if I were being wished bon voyage over and over again, perennially about to leave on a trip that never happened.

I went out on several day passes in the week leading to my departure, as a kind of preparation for re-entry, but none of them were particularly successful. On one, I went out on a broiling Saturday afternoon with my daughter for a walk to the nearby Starbucks on 168th and Broadway. I felt thick-headed with the new sedating medication I was on and far away from her. When she left me for a minute to make a phone call on her cell, I started crying, as if something tragic had happened. I wondered uneasily what effect seeing me in this state was having on my daughter, what she made of my being in the hospital — did she view me as a burden that she would need to shoulder for the rest of her life? Would my depression rub off on her? — but in between we laughed at small, odd things as we always did, and it occurred to me that I wasn’t as much a stranger to her as I was to myself.

With the staff’s tentative agreement — they didn’t think I was ready to go home but had no real reason to prevent me from doing so — I left 4 Center three weeks to the day after I arrived, my belongings piled up on a trolley for greater mobility through the annex to the exit. It was a hot June day similar to the one I checked in on, the heat pouring off the windows of parked cars. Everything felt noisy and magnified. It was shocking to be outside, knowing I was on a permanent pass this time, that I wouldn’t be returning to the unit.

I was sent home on Klonopin, an anti-anxiety drug I’d been on forever, as well as a duet of pills — Remeron and Effexor — a combination referred to as California rocket fuel for its presumed igniting effect. As it turned out, the combo wasn’t destined to work on me. At home, I was gripped again by thoughts of suicide and clung to my bed, afraid to go out even on a walk around the block with my daughter. When I wasn’t asleep, I stared into space, lost in the terrors of the far-off past, which had become the terrors of the present. It was decided that I shouldn’t be left alone, so my sister and my good friend took turns staying with me. But it was clear this arrangement was short term, and by the end of the weekend, after phone calls to various doctors, it was agreed that I would go back into the hospital to try ECT.

And then, the Sunday afternoon before I planned to return to 4 Center, something shifted ever so slightly in my mind. I had gone off the Remeron and started a new drug, Abilify. I was feeling a bit calmer, and my bedroom didn’t seem like such an alien place anymore. Maybe it was the fear of ECT, or perhaps the tweaked medication had kicked in, or maybe the depression had finally taken its course and was beginning to lift. I had — and still have — no real idea what did it. For a brief interval, no one was home, and I decided to get up and go outside. I stopped at Food Emporium and studied the cereal section, as amazed at the array as if I had just emerged from the gulag. I bought some paper towels and strawberries, and then I walked home and got back into bed. It wasn’t a trip to the Yucatan, but it was a start. I didn’t check into the hospital the next day and instead passed the rest of the summer slowly reinhabiting my life, coaxing myself along. I spent time with people I trusted, with whom I didn’t have to pretend.

Toward the end of August I went out for a few days to the rented Southampton house of my friend Elizabeth. It was just her, me and her three annoying dogs. I had brought a novel along, “The Gathering,” by Anne Enright, the sort of book about incomplete people and unhappy families that has always spoken to me. It was the first book to absorb me — the first I could read at all — since before I went into the hospital. I came to the last page on the third afternoon of my visit. It was about 4:30, the time of day that, by mid-August, brings with it a whiff of summer’s end. I looked up into the startlingly blue sky; one of the dogs was sitting at my side, her warm body against my leg, drying me off after the swim I had recently taken. I could begin to see the curve of fall up ahead. There would be new books to read, new films to see and new restaurants to try. I envisioned myself writing again, and it didn’t seem like a totally preposterous idea. I had things I wanted to say.

Everything felt fragile and freshly come upon, but for now, at least, my depression had stepped back, giving me room to move forward. I had forgotten what it was like to be without it, and for a moment I floundered, wondering how I would recognize myself. I knew for certain it would return, sneaking up on me when I wasn’t looking, but meanwhile there were bound to be glimpses of light if only I stayed around and held fast to the long perspective. It was a chance that seemed worth taking.

Daphne Merkin is a contributing writer for the magazine. Her last article was about the Kabbalah Center.

__________

Full article and photos: http://www.nytimes.com/2009/05/10/magazine/10Depression-t.html

Phys Ed: Your Brain on Exercise

What goes on inside your brain when you exercise? That question has preoccupied a growing number of scientists in recent years, as well as many of us who exercise. In the late 1990s, Dr. Fred Gage and his colleagues at the Laboratory of Genetics at the Salk Institute in San Diego elegantly proved that human and animal brains produce new brain cells (a process called neurogenesis) and that exercise increases neurogenesis. The brains of mice and rats that were allowed to run on wheels pulsed with vigorous, newly born neurons, and those animals then breezed through mazes and other tests of rodent I.Q., suggesting that neurogenesis improves thinking.

But exactly how exercise affects the staggeringly intricate workings of the brain at a cellular level has remained largely mysterious. A number of new studies, though, including work published this month by Dr. Gage and his colleagues, have begun to tease out the specific mechanisms and, in the process, raised new questions about just how exercise remolds the brain.

Some of the most reverberant recent studies were performed at Northwestern University’s Feinberg School of Medicine in Chicago. There, scientists have been manipulating the levels of bone morphogenetic protein, or BMP, in the brains of laboratory mice. BMP, which is found in tissues throughout the body, affects cellular development in various ways, some of them deleterious. In the brain, BMP has been found to contribute to the control of stem cell divisions. Your brain, you will be pleased to learn, is packed with adult stem cells, which, given the right impetus, divide and differentiate into either additional stem cells or baby neurons. As we age, these stem cells tend to become less responsive. They don’t divide as readily and can slump into a kind of cellular sleep. It’s BMP that acts as the soporific, says Dr. Jack Kessler, the chairman of neurology at Northwestern and senior author of many of the recent studies. The more active BMP and its various signals are in your brain, the more inactive your stem cells become and the less neurogenesis you undergo. Your brain grows slower, less nimble, older.

But exercise countermands some of the numbing effects of BMP, Dr. Kessler says. In work at his lab, mice given access to running wheels had about 50 percent less BMP-related brain activity within a week. They also showed a notable increase in Noggin, a beautifully named brain protein that acts as a BMP antagonist. The more Noggin in your brain, the less BMP activity exists and the more stem cell divisions and neurogenesis you experience. Mice at Northwestern whose brains were infused directly with large doses of Noggin became, Dr. Kessler says, “little mouse geniuses, if there is such a thing.” They aced the mazes and other tests.

Whether exercise directly reduces BMP activity or increases production of Noggin isn’t yet known and may not matter. The results speak for themselves. “If ever exercise enthusiasts wanted a rationale for what they’re doing, this should be it,” Dr. Kessler says. Exercise, he says, through a complex interplay with Noggin and BMP, helps to ensure that neuronal stem cells stay lively and new brain cells are born.

But there are caveats and questions remaining, as the newest experiment from Dr. Gage’s lab makes clear. In that study, published in the most recent issue of Cell Stem Cell, BMP signaling was found to be playing a surprising, protective role for the brain’s stem cells. For the experiment, stem cells from mouse brains were transferred to petri dishes and infused with large doses of Noggin, hindering BMP activity. Without BMP signals to inhibit them, the stem cells began dividing rapidly, producing hordes of new neurons. But over time, they seemed unable to stop, dividing and dividing again until they effectively wore themselves out. The same reaction occurred within the brains of living (unexercised) mice given large doses of Noggin. Neurogenesis ramped way up, then, after several weeks, sputtered and slowed. The “pool of active stem cells was depleted,” a news release accompanying the study reported. An overabundance of Noggin seemed to cause stem cells to wear themselves out, threatening their ability to make additional neurons in the future.

This finding raises the obvious and disturbing question: can you overdose on Noggin by, for instance, running for hours, amping up your production of the protein throughout? The answer, Dr. Gage says, is, almost certainly, no. “Many people have been looking into” that issue, he says. But so far, “there has not been any instance of a negative effect from voluntary running” on the brain health of mice. Instead, he says, it seems that the effects of exercise are constrained and soon plateau, causing enough change in the activity of Noggin and BMP to shake slumbering adult stem cells awake, but not enough to goose them into exhausting themselves.
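
The dynamic Dr. Gage describes, a nudge strong enough to wake stem cells but not strong enough to exhaust a finite pool, can be made concrete with a toy calculation. The Python sketch below is purely illustrative: the rates, the self-renewal fraction and the linear model are invented for the purpose, not taken from the study. It shows only how aggressive activation of a finite, imperfectly self-renewing pool produces an early boom followed by collapse, while mild activation yields modest but sustained output.

    # Toy model, not from the study: a pool of quiescent stem cells is
    # activated each week at a rate standing in for the Noggin/BMP balance.
    # Activated cells produce neurons but self-renew imperfectly, so the
    # pool shrinks faster the harder it is driven.
    def simulate(activation_rate, weeks=52, pool=1000.0, self_renewal=0.8):
        neurons_per_week = []
        for _ in range(weeks):
            activated = activation_rate * pool
            pool -= (1.0 - self_renewal) * activated  # net loss from the pool
            neurons_per_week.append(activated)
        return neurons_per_week

    moderate = simulate(activation_rate=0.05)   # an exercise-like nudge
    excessive = simulate(activation_rate=0.50)  # flooding with Noggin

    for week in (4, 52):
        print(f"week {week}: moderate {moderate[week - 1]:.0f}, "
              f"excessive {excessive[week - 1]:.0f}")

Run with these made-up numbers, the heavily driven pool produces many times more new neurons in the first weeks, then falls below the moderately driven one before the year is out, which is the qualitative pattern the Cell Stem Cell experiment reported.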

Still, if there’s not yet any discernible ceiling on brain-healthy exercise, there is a floor. You have to do something. Walk, jog, swim, pedal — the exact amount or intensity of the exercise required has not been determined, although it appears that the minimum is blessedly low. In mice, Dr. Gage says, “even a fairly short period” of exercise “and a short distance seems to produce results.”

Gretchen Reynolds, New York Times

__________

Full article: http://well.blogs.nytimes.com/2010/07/07/your-brain-on-exercise/

No end to dementia

Ten years ago people talked confidently of stopping Alzheimer’s disease in its tracks. Now, they realise they have no idea how to do that

DRUG companies are notoriously secretive. The clock starts running on a patent when it is filed, so the longer something can be kept under wraps before that happens, the better for the bottom line. You know something is up, then, when a group of these firms announce they are banding together to share the results of abandoned drug trials. And on June 11th several big companies did just that. They publicised the profiles of 4,000 patients from 11 trials so that they could learn from each other’s failures. An act of selflessness, perhaps, but also one of desperation.

Alzheimer’s disease is one of those things that policymakers would rather hide from. It is, perhaps, the classic illness of old age. Physical frailty is expected, and can be coped with. Mental frailty is much scarier for the sufferer and more demanding for those who have to look after him. It is expensive, too. Alzheimer’s is estimated to cost America alone some $170 billion a year. And it is getting commoner as average lifespans increase. The number of people suffering from the disease is expected to triple by 2050. Effective treatments would thus be embraced with enthusiasm by sufferers and society alike. The right Alzheimer’s drug could earn a drugmaker a lot of money. The incentives are there. But the science has still failed to deliver.

At the turn of the century, Alzheimer’s research seemed promising. A flurry of drugs that treated symptoms of the disorder had just hit the market, and researchers were setting out confidently on a deeper investigation of its causes. Understanding those, they felt sure, would result in a cure. It still might, but the truth is that the hoped-for understanding has not come. As a consequence, a long list of would-be cures has failed in late-stage clinical trials, at enormous cost to the companies producing them. The latest of these, Dimebon, made by Pfizer, was abandoned as recently as March, after $725m had been spent on research and development.

Beta testing

The problem of what causes Alzheimer’s is profound. The physical manifestations of the disease that Alois Alzheimer noticed in 1906 are sticky plaques of one type of protein, now known as beta-amyloid, and nerve-cell-engulfing tangles of a second type, called tau protein. Since 1991 the smart money has been on the hypothesis that the disease is caused by the plaques, and that the tangles are a mere consequence. For the past two decades, therefore, most attention has been given to developing drugs that will remove amyloid plaques from an affected brain. Five drugs are on the market, but they do no more than slow the worsening of symptoms. Once their effectiveness has run its course, memory loss and cognitive decline progress unimpeded, and sometimes even accelerate.

Partly as a consequence of this, the plaque theory is waning. Most researchers still believe beta-amyloid is the culprit, but the idea that free-floating protein molecules, rather than the proteins in the plaques, are to blame is gaining ground. This idea is supported by a study published in April in the Annals of Neurology, which showed that mice without plaques, but with floating beta-amyloid, were just as weakened by the disease as mice with both. If that is true in people, too, many more drugs now in clinical trials may prove to be ineffective.

Another fundamental problem is that, whatever is causing the damage, treatment is starting too late. By the time someone presents behavioural symptoms, such as forgetfulness, his brain is already in a significant state of disrepair. Even a “cure” is unlikely to restore lost function. A biochemical marker that indicates the progress of the disease would thus help identify those for whom early action would be advisable, and might help to distinguish people with Alzheimer’s from those with the less hostile forms of forgetfulness that tend to come with old age. Such a marker would also benefit the organisers of clinical trials. They would be able to see more easily whether a drug was working.

To this end, the Alzheimer’s Disease Neuroimaging Initiative (ADNI), established by America’s National Institutes of Health (NIH) in 2004, is measuring the levels of certain proteins in the cerebrospinal fluid of people who may have Alzheimer’s or may go on to develop it. Though the project still has a long way to go, it has already helped develop a test to diagnose the early stages of the disease.

ADNI’s anagram DIAN, the Dominantly Inherited Alzheimer Network, based at Washington University in St Louis, is taking another approach to the biomarker question. Its researchers are studying families with a genetic mutation that triggers the early onset of Alzheimer’s. That terrible knowledge means it is possible to predict which members of a family are destined to get the disease, and compare their biochemistry with that of relatives who do not have the mutation.

It is hard pounding, however, and—as the drug companies’ confession suggests—it is the “R” rather than the “D” of research and development that needs to be emphasised at the moment. A bad time, then, to be cutting back on “R”. That tripling of future sufferers is going to be expensive. Yet Alzheimer’s research, on which the NIH spent $643m in 2006, is to receive only $480m in 2011. It has not been singled out for these cuts. They are part of a general belt-tightening at the agency. But in this as in everything, you get what you pay for. And that might, in the future, be an awful lot of witless, wandering elderly.

__________

Full article and photo: http://www.economist.com/node/16374470?story_id=16374470&source=hptextfeature

For the birds

What regulates the lengths of human fingers?

I’ll show you mine if you show me yours

FROM financial traders’ propensity to make risky decisions to badly behaved schoolboys’ claims to be suffering from attention deficit hyperactivity disorder, testosterone makes a perfect scapegoat. In both of these cases, and others, many researchers reckon that the underlying cause is exposure to too much of that male hormone in the womb. Positive effects are claimed, too. Top-flight female football players and successful male musicians may also have fetal testosterone exposure to thank for their lot in life.

Yet the evidence that it is exposure in utero to testosterone that causes all these things relies on a shaky chain of causation. What these people actually share is a tendency for their ring fingers to be longer than their index fingers. This peculiarity of anatomy is often ascribed to fetal testosterone exposure because it is common in men and much rarer in women, and because there seems to be a correlation between the point in gestation when it appears and surges of testosterone in the womb. But the link has never been proved decisively. It has, rather, just become accepted wisdom.

Research carried out on birds now suggests that the accepted wisdom could be wrong. In their study of the feet of zebra finches published this week in the Proceedings of the Royal Society, Wolfgang Forstmeier and his colleagues at the Max Planck Institute for Ornithology in Seewiesen, Germany, conclude that oestrogen—the hormone of femininity—rather than testosterone, may be to blame.

Although it is well over 300m years since people and finches had a common ancestor, the basic vertebrate body plan is the same in both. So, a few years ago Dr Forstmeier, an expert on finch behaviour, wondered if the link between digit ratio and behaviour might show up in his animals, too.

It did. The ratio between a zebra finch’s second and fourth digits (which are not fingers but toes in birds) is associated with more courtship songs by males and fewer flirtatious hops by females—in other words with more masculine behaviour, regardless of the sex of the individual.

Dr Forstmeier probed the matter further. He has been investigating the birds’ oestrogen and androgen receptors—molecules that respond to female and male hormones, respectively.

Zebra crossing

The receptors in question orchestrate both behavioural and physical development, including some types of bone growth, in many vertebrate species. Different versions of a receptor (encoded by genes that have slightly different DNA sequences) can be more or less sensitive to the appropriate hormone. That led Dr Forstmeier to ask whether the type of hormone receptor a bird has influences its digit ratio, its sexual behaviour or both.

To find out, he looked for correlations between genes, ratios and behaviour in more than 1,100 zebra finches. Surprisingly, in view of the working assumption about humans, the type of testosterone receptor that a bird had proved to be irrelevant. Its oestrogen-receptor variant, however, had a significant impact on both digit ratio and courtship behaviour. This suggests that the sorts of predispositions that in people are blamed on fetal testosterone are caused in birds by fetal oestrogen (or, rather, the response to it).
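
For a concrete sense of what “looking for correlations” means here, the sketch below shows one way such a test could be run in Python. It is not Dr Forstmeier’s actual analysis; the file name, the column names and the choice of a Kruskal-Wallis test are all assumptions made for illustration.

    import pandas as pd
    from scipy import stats

    # Hypothetical table: one row per bird, with its oestrogen-receptor
    # genotype, its second-to-fourth digit ratio and a courtship score.
    finches = pd.read_csv("finches.csv")

    for trait in ("digit_ratio", "courtship_score"):
        # Compare the trait across receptor-genotype groups; a small p-value
        # would indicate the genotype is associated with the trait.
        groups = [g[trait].values for _, g in finches.groupby("esr_genotype")]
        h, p = stats.kruskal(*groups)
        print(f"{trait} by oestrogen-receptor genotype: H = {h:.2f}, p = {p:.4f}")

A significant result for both traits, together with a null result for the equivalent test on the androgen-receptor genotype, would correspond to the pattern the paper reports.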

That does not, of course, mean the same thing is true in people: 300m years is quite a long time for differences to emerge. It is also true that the digit ratio that predicts male-like behaviour in birds is the opposite of the one found in humans (ie, the second digit, rather than the fourth, is the longer of the two). But it does suggest that it would be worth double-checking. Though science likes to think of itself as rational, it is just as prone to fads and assumptions as any other human activity. That, plus the fact that most scientists are men, may have led to some lazy thinking about which hormone is more likely to control gender-related behaviour. Just possibly, the trader’s finger should be pointing at oestrogen, not testosterone.

__________

Full article and photo: http://www.economist.com/node/16316949?story_id=16316949&source=hptextfeature

Genetically Engineered Distortions

A REPORT by the National Research Council last month gave ammunition to both sides in the debate over the cultivation of genetically engineered crops. More than 80 percent of the corn, soybeans and cotton grown in the United States is genetically engineered, and the report details the “long and impressive list of benefits” that has come from these crops, including improved soil quality, reduced erosion and reduced insecticide use.

It also confirmed predictions that widespread cultivation of these crops would lead to the emergence of weeds resistant to a commonly used herbicide, glyphosate (marketed by Monsanto as Roundup). Predictably, both sides have done what they do best when it comes to genetically engineered crops: they’ve argued over the findings.

Lost in the din is the potential role this technology could play in the poorest regions of the world — areas that will bear the brunt of climate change and the difficult growing conditions it will bring. Indeed, buried deep in the council’s report is an appeal to apply genetic engineering to a greater number of crops, and for a greater diversity of purposes.

Appreciating this potential means recognizing that genetic engineering can be used not just to modify major commodity crops in the West, but also to improve a much wider range of crops that can be grown in difficult conditions throughout the world.

Doing that also requires opponents to realize that by demonizing the technology, they’ve hindered applications of genetic engineering that could save lives and protect the environment.

Scientists at nonprofit institutions have been working for more than two decades to genetically engineer seeds that could benefit farmers struggling with ever-pervasive dry spells and old and novel pests. Drought-tolerant cassava, insect-resistant cowpeas, fungus-resistant bananas, virus-resistant sweet potatoes and high-yielding pearl millet are just a few examples of genetically engineered foods that could improve the lives of the poor around the globe.

For example, researchers in the public domain have been working to engineer sorghum crops that are resistant to both drought and an aggressively parasitic African weed, Striga.

In a 1994 pilot project by the United States Agency for International Development, an experimental variety of engineered sorghum had a yield four times that of local varieties under adverse conditions. Sorghum, a native of the continent, is a staple throughout Africa, and improved sorghum seeds would be widely beneficial.

As well as enhancing yields, engineered seeds can make crops more nutritious. A new variety of rice modified to produce high amounts of provitamin A, named Golden Rice, will soon be available in the Philippines and, if marketed, would almost assuredly save the lives of thousands of children suffering from vitamin A deficiency.

There’s also a sorghum breed that’s been genetically engineered to produce micronutrients like zinc, and a potato designed to contain greater amounts of protein.

To appreciate the value of genetic engineering, one need only examine the story of papaya. In the early 1990s, Hawaii’s papaya industry was facing disaster because of the deadly papaya ringspot virus. Its single-handed savior was a breed engineered to be resistant to the virus. Without it, the state’s papaya industry would have collapsed. Today, 80 percent of Hawaiian papaya is genetically engineered, and there is still no conventional or organic method to control ringspot virus.

The real significance of the papaya recovery is not that genetic engineering was the most appropriate technology delivered at the right time, but rather that the resistant papaya was introduced before the backlash against engineered crops intensified.

Opponents of genetically engineered crops have spent much of the last decade stoking consumer distrust of this precise and safe technology, even though, as the research council’s previous reports noted, engineered crops have harmed neither human health nor the environment.

In doing so, they have pushed up regulatory costs to the point where the technology is beyond the economic reach of small companies or foundations that might otherwise develop a wider range of healthier crops for the neediest farmers. European restrictions, for instance, make it virtually impossible for scientists at small laboratories there to carry out field tests of engineered seeds.

As it now stands, opposition to genetic engineering has driven the technology further into the hands of a few seed companies that can afford it, further encouraging their monopolistic tendencies while leaving it out of reach for those that want to use it for crops with low (or no) profit margins.

The stakes are too high for us not to make the best use of genetic engineering. If we fail to invest responsibly in agricultural research, if we continue to allow propaganda to trump science, then the potential for global agriculture to be productive, diverse and sustainable will go unfulfilled. And it’s not those of us here in the developed world who will suffer the direct consequences, but rather the poorest and most vulnerable.

Pamela C. Ronald, a professor of plant pathology at the University of California, Davis, is the co-author of “Tomorrow’s Table: Organic Farming, Genetics and the Future of Food.” James E. McWilliams, a history professor at Texas State University at San Marcos, is the author of “Just Food.”

___________

Full article: http://www.nytimes.com/2010/05/15/opinion/15ronald.html

Doubt Is Cast on Many Reports of Food Allergies

Many who think they have food allergies actually do not.

A new report, commissioned by the federal government, finds the field is rife with poorly done studies, misdiagnoses and tests that can give misleading results.

While there is no doubt that people can be allergic to certain foods, with reproducible responses ranging from a rash to a severe life-threatening reaction, the true incidence of food allergies is only about 8 percent for children and less than 5 percent for adults, said Dr. Marc Riedl, an author of the new paper and an allergist and immunologist at the University of California, Los Angeles.

Yet about 30 percent of the population believe they have food allergies. And, Dr. Riedl said, about half the patients coming to his clinic because they had been told they had a food allergy did not really have one.

Dr. Riedl does not dismiss the seriousness of some people’s responses to foods. But, he says, “That accounts for a small percentage of what people term ‘food allergies.’ ”

Even people who had food allergies as children may not have them as adults. People often shed allergies, though no one knows why. And sometimes people develop food allergies as adults, again for unknown reasons.

For their report, Dr. Riedl and his colleagues reviewed all the papers they could find on food allergies published between January 1988 and September 2009 — more than 12,000 articles. In the end, only 72 met their criteria, which included having sufficient data for analysis and using more rigorous tests for allergic responses.

“Everyone has a different definition” of a food allergy, said Dr. Jennifer J. Schneider Chafen of the Department of Veterans Affairs’ Palo Alto Health Care System in California and Stanford’s Center for Primary Care and Outcomes Research, who was the lead author of the new report. People who receive a diagnosis after one of the two tests most often used — a skin-prick test, in which a tiny amount of the suspect food is introduced into the skin, or a blood test looking for IgE antibodies, the type associated with allergies — have less than a 50 percent chance of actually having a food allergy, the investigators found.
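
The arithmetic behind that figure is worth seeing. When a condition is rare, even a fairly accurate test produces more false positives than true ones. The sketch below works through Bayes’ rule with assumed sensitivity and specificity values; they are illustrative, not numbers from the report.

```python
# Illustrative Bayes arithmetic: why a positive allergy test can be wrong
# more often than right. All inputs are assumed values, not report figures.
prevalence = 0.05   # assumed true prevalence of food allergy in adults
sensitivity = 0.90  # assumed P(test positive | truly allergic)
specificity = 0.90  # assumed P(test negative | not allergic)

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

# Positive predictive value: the chance a positive result is a real allergy.
ppv = true_positives / (true_positives + false_positives)
print(f"P(allergic | positive test) = {ppv:.0%}")  # about 32% under these assumptions
```

Under these assumptions, roughly two out of three positive results are false alarms, which is the kind of arithmetic that puts the tests’ real-world accuracy below 50 percent.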

One way to see such a reaction is with what is called a food challenge, giving people a suspect food disguised so they do not know if they are eating it or a placebo food. If the disguised food causes a reaction, the person has an allergy.

But in practice, most doctors are reluctant to use food challenges, Dr. Riedl said. They believe the test to be time consuming, and worry about asking people to consume a food, like peanuts, that can elicit a frightening response.

The paper, to be published Wednesday in The Journal of the American Medical Association, is part of a large project organized by the National Institute of Allergy and Infectious Diseases to try to impose order on the chaos of food allergy testing. An expert panel will provide guidelines defining food allergies and giving criteria to diagnose and manage patients. They hope to have a final draft by the end of June.

“We were approached as in a sense the honest broker who could get parties together to look at this question,” said Dr. Matthew J. Fenton, who oversees the guidelines project for the allergy institute.

Authors of the new report — and experts on the guidelines panel — say even accepted dogma, like the idea that breast-fed babies have fewer allergies or that babies should not eat certain foods like eggs for the first year of life, has little evidence behind it.

Part of the confusion is over what is a food allergy and what is a food intolerance, Dr. Fenton said. Allergies involve the immune system, while intolerances generally do not. For example, a headache from sulfites in wine is not a food allergy. It is an intolerance. The same is true for lactose intolerance, caused by the lack of an enzyme needed to digest lactose, the sugar in milk.

And other medical conditions can make people think they have food allergies, Dr. Fenton said. For example, people sometimes interpret acid reflux symptoms after eating a particular food as an allergy.

The chairman of the guidelines project, Dr. Joshua Boyce, an associate professor of medicine at Harvard and an allergist and pediatric pulmonologist, said one of the biggest misconceptions some doctors and patients have is that a positive test for IgE antibodies to a food means a person is allergic to that food. It is not necessarily so, he said.

During development, he said, the immune system tends to react to certain food proteins, producing IgE antibodies. But, Dr. Boyce said, “these antibodies can be transient and even inconsequential.”

“There are plenty of individuals with IgE antibodies to various foods who don’t react to those foods at all,” Dr. Boyce said.

The higher the levels of IgE antibodies to a particular food, the greater the likelihood the person will react in an allergic way. But even then, the antibodies do not necessarily portend a severe reaction, Dr. Boyce said. Antibodies to some foods, like peanuts, are much more likely to produce a reaction than ones to other foods, like wheat or corn or rice. No one understands why.

The guidelines panel hopes its report will lead to new research as well as clarify the definition and testing for food allergies.

But for now, Dr. Fenton said, doctors should not use either the skin-prick test or the antibody test as the sole reason for thinking their patients have a food allergy.

“By themselves they are not sufficient,” Dr. Fenton said.

Gina Kolata, New York Times

__________

Full article: http://www.nytimes.com/2010/05/12/health/research/12allergies.html

More, Please

Weight-loss schemes were popular long before Americans en masse became so massive.

A recent article in The Wall Street Journal quoted a doctor saying: “If there were a drug with the same benefits as exercise, it would instantly be the standard of care.” In other words, if Americans could take a pill to secure all the immunity-boosting, morale-lifting, ab-tightening and weight-reducing side effects of physical activity, minus the activity, we would gobble it up.

Alas, that pill doesn’t exist. Despite years of eat-right, get-fit propaganda, we remain a tubby, sedentary people much more inclined to seek a magic bullet—a supplement or a miracle diet—than to subject ourselves to the tiresome practices of daily exertion and self-denial that would reduce our bulk.

That Americans are over-large is uncontroversial, but the scale of our amplitude has become stunning to contemplate. In 1980, 15% of Americans were obese, with bodies weighing 20% more than is ideal. In the intervening years, the rate has more than doubled. Today a whopping 34.3% of Americans are obese, some morbidly so; another third are overweight. All told, then, two-thirds of the American populace weighs far more than is healthy—or conducive to happiness: Mounting data suggest that poor diet and a surplus of fat burden the emotions, producing ennui and even depression. No wonder Americans have a propensity for weight-loss schemes, even as evidence accrues that fad diets may actually foster long-term weight gain.

Yet worries over overeating, as Susan Yager interestingly reminds us in “The Hundred Year Diet,” preoccupied the public long before Americans en masse became so massive. These days we may track the content of trans-fats and high fructose corn syrup—the staples of processing that make much food so cheap and unhealthful—but in the 1970s we were already measuring out our lives with tablespoons, trying to follow the Atkins or Pritikin or Beverly Hills diets. Indeed, back in 1960 John F. Kennedy was worrying in the pages of Sports Illustrated that the nation’s youth had become flabby and dangerously “soft.”

As Ms. Yager explains, America’s obsession with diet began long ago in New England, in the person of the Rev. Sylvester Graham. In the early 19th century, Graham was on a mission “to save souls from what he deemed the most serious problem of all: the evil torment of gluttony,” a condition that, in his view, led to “sexual excess” and “violence.”

Graham, nicknamed “the Peristaltic Persuader” in the press, felt that purity could be achieved by adhering to a diet free of meat, alcohol, coffee, tea, salt, pepper or spices. He was, we discover, “uncomfortable with the concept of yeast,” which seemed to him alive. So to supplant leavened bread he invented the whole-wheat cracker that still bears his name, though there is not much resemblance between the Graham cracker in its current sugary form and the Spartan original.

In Graham’s wake came John Harvey Kellogg, a Seventh Day Adventist, the director of Michigan’s Battle Creek Sanitarium and the progenitor of the cornflake. Kellogg shared Graham’s horror of sexual desire (and of caffeinated beverages) and, as Ms. Yager slyly notes, “was the first in a long line of physicians to write about nutrition with passion for, and limited knowledge of, the subject.” Late in the 19th century, with Kellogg at the height of his influence, a diet treatise published two decades earlier in Britain suddenly caught on in the U.S. William Banting’s “Letter on Corpulence Addressed to the Public” told how its author had lost weight by following what Ms. Yager explains was a sensible “high-protein, low-calorie, low-fat, modified carbohydrate plan.” Americans seized on the Englishman’s scheme with such zeal that, for decades afterward, dieting was known as “banting.”

Americans also “fletcherized” at this time in history, meaning that they mashed each mouthful into a ghastly puree as dictated by “The Great Masticator,” Horace Fletcher. Like Banting, Fletcher had suffered personally from overweight and felt it vital to share with others the benefits he had achieved by chewing his food 100 times before swallowing. He was taken very seriously and counted among his devotees such prominent men as Henry James, Henry Ford and Thomas Edison. “Masticating had no social boundaries,” we read. “Sing-Sing inmates, schoolchildren, the highest of high society and the most middle of the middle class all chewed and chewed.”

Ms. Yager’s bite-sized chapters are easy and pleasant to digest as she takes us through America’s fat-fighting history, from its now comical-seeming beginnings through the wild pendulum swings of the late 20th century (when carbohydrates and fats alternated as public enemy No. 1) to the promise of the fat-substitute Olestra (with its regrettable intestinal consequences) and today’s gastric bypass surgery for the severely obese.

After all this, the reader can’t help asking: Why us? “Globesity” may be the coming thing, but why, in the first place, have Americans been so susceptible to the chimerical appeal of fast food and so faddish in their dieting? Ms. Yager notes that nationalities with “revered culinary traditions,” such as the Italians and the Japanese, have resisted American eating patterns and are thinner because of it. But she doesn’t otherwise explore our singularity in this regard, which is a shame.

Researchers, meanwhile, are hotly pursuing a pharmaceutical remedy, fiddling with hormones and neurotransmitters in an effort “to override what humans seem hardwired to do—overeat in a world where cheap, tasty food is ubiquitous.”

Mrs. Gurdon is a regular contributor to the book pages of The Wall Street Journal.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703674704575234470932092904.html

The magic cure

Startled by the power of placebos, doctors consider how to use them as real treatment

You’re not likely to hear about this from your doctor, but fake medical treatment can work amazingly well. For a range of ailments, from pain and nausea to depression and Parkinson’s disease, placebos–whether sugar pills, saline injections, or sham surgery–have often produced results that rival those of standard therapies.

In a health care industry fueled by ever newer and more dazzling cures, this phenomenon is usually seen as background noise, or even as something of an annoyance. For drug companies, the placebo effect can pose an obstacle to profits–if their medications fail to outperform placebos in clinical trials, they won’t get approved by the FDA. Patients who benefit from placebos might understandably wonder if the healing isn’t somehow false, too.

But as evidence of the effect’s power mounts, members of the medical community are increasingly asking an intriguing question: if the placebo effect can help patients, shouldn’t we start putting it to work? In certain ways, placebos are ideal drugs: they typically have no side effects and are essentially free. And in recent years, research has confirmed that they can bring about genuine improvements in a number of conditions. An active conversation is now under way in leading medical journals, as bioethicists and researchers explore how to give people the real benefits of pretend treatment.

In February, an important paper was published in the British medical journal the Lancet, reviewing the discoveries about the placebo effect and cautiously probing its potential for use by doctors. In December, the Michael J. Fox Foundation announced plans for two projects to study the promise of placebo in treating Parkinson’s. Even the federal government has taken an interest, funding relevant research in recent years.

But any attempt to harness the placebo effect immediately runs into thorny ethical and practical dilemmas. To present a dummy pill as real medicine would be, by most standards, to lie. To prescribe one openly, however, would risk undermining the effect. And even if these issues were resolved, the whole idea still might sound a little shady–offering bogus pills or procedures could seem, from the patient’s perspective, hard to distinguish from skimping on care.

“In the last 10 years we’ve made tremendous strides in demonstrating the biological veracity of the placebo effect,” says Ted Kaptchuk, an associate professor at Harvard Medical School and one of the coauthors of the Lancet article. “The frontier is, how do we utilize what is clearly an important phenomenon in a way that’s consistent with patient-practitioner trust, and informed consent?”

There are limits to even the strongest placebo effect. No simulation could set a broken arm, of course, or clear a blocked artery. As a rule, placebos appear to affect symptoms rather than underlying diseases–although sometimes, as in the case of depression or irritable bowel syndrome, there’s no meaningful distinction between the two. Moreover, placebos have often received undue credit for recovery that might have occurred anyway. Indeed, the effect is famously difficult to identify, measure, and even coherently define. There is debate about the magnitude of the response, with some calling it modest at best, and opposing the idea of using placebos clinically.

But according to advocates, there’s enough data for doctors to start thinking of the placebo effect not as the opposite of medicine, but as a tool they can use in an evidence-based, conscientious manner. Broadly speaking, it seems sensible to make every effort to enlist the body’s own ability to heal itself–which is what, at bottom, placebos seem to do. And as researchers examine it more closely, the placebo is having another effect as well: it is revealing a great deal about the subtle and unexpected influences that medical care, as opposed to the medicine itself, has on patients.

Phony treatment is hardly a novel concept in medicine. The word “placebo”–Latin for “I shall please”–has been used in a medical context since at least the late 1700s, referring to inert treatments given to placate patients. Arguably, until the scientific breakthroughs of the 20th century, medical history was little more than one long series of placebos.

But in the postwar era, the profession changed in a way that relegated placebos to the shadows. New medicines began to emerge that actually cured diseases. At the same time, the longstanding paternalism of doctors was yielding to a new ethos that respected the patient’s right to understand and consent to treatment. Gradually, fake pills began to seem less like a benign last resort, and more like a breach of trust. To be sure, some doctors continued to use placebos–typically, “impure” placebos such as vitamins that had no specific effect on the malady in question. But they did so quietly, knowing the practice was frowned upon.

As sugar pills were losing their place in the physician’s arsenal, they assumed a different role: as a neutral placeholder in drug testing. This development is usually traced back to a 1955 paper by Henry Beecher, a Harvard anesthesiologist who argued that the placebo effect was so potent that researchers needed to account for it when testing new drugs. Today, the “gold standard” of medical testing is the randomized clinical trial, in which the new drug must beat a placebo to prove its worth.

In the last decade-plus, however, the accumulating data have sparked a renewed interest in the placebo as a treatment in its own right. Numerous studies have shown that it can trigger verifiable changes in the body. Brain scans have shown that placebo pain relief is not only subjectively experienced, but that in many cases the brain releases its own internal painkillers, known as endogenous opioids. (This placebo effect can even be reversed by the opioid-blocker naloxone.) Another study, published in Science in 2009, found that patients given a topical cream for arm pain showed much less pain-related activity in the spinal cord when told it was a powerful painkiller. A 2009 study found that patients benefited as much from a fake version of a popular spinal surgery as they did from the real one; asthma patients have shown strong responses to a mock inhaler.

Impressed by such findings, some researchers and clinicians hope to import them somehow from the laboratory into the doctor’s office–adding placebo, in a systematic way, to the doctor’s repertoire.

The first conundrum doctors face is how to honor the principle of informed consent, their ethical and legal obligation to fully explain a treatment. Clearly, a doctor would violate this rule by passing a sugar pill off as a real prescription drug, and thinkers have begun to wrestle with this challenge.

One audacious tack would be to tell the truth: to notify patients that they are about to be given a fake pill. The idea sounds absurd, and doctors have long assumed that would ruin the effect. But there’s almost no research on the question, and it may not be as unthinkable as it seems. One reason it could work involves “classical conditioning”–the notion that we can learn on a subconscious level, like Pavlov’s dogs, to biologically respond to certain stimuli. This concept suggests that the brain could automatically react to the placebo in a way that doesn’t require conscious faith in the drug. (The placebo effect has been observed in rodents, which bolsters this theory.) Another reason is that, according to many researchers, the trappings of medical care contribute to the response. So in certain circumstances, doctors could conceivably give a placebo with total transparency, conspiring with patients to trick their own brains– though the lack of research means there is little evidence to support this hypothesis now.

A second approach may be to integrate placebos with real treatments, and to reconsider whether this should still be viewed as fakery. A groundbreaking study published in February in the journal Psychosomatic Medicine found that psoriasis patients who received a topical cream treatment alternated with placebo “reinforcements” did as well as patients who got up to four times more of the active drug. The authors hypothesized that the effect was due primarily to conditioning–the brain learned to associate the cream with healing and sent the same signals even when the cream was inert.

This is just one study, but the implications could be profound. It suggests that in some cases doctors could essentially dilute medications, perhaps dramatically, and get the same results. Robert Ader, the study’s lead author and a psychiatry professor at the University of Rochester, says this approach has the potential to address maladies that operate through the nervous system, such as pain, some auto-immune diseases, and hypertension. Ader envisions a future in which a physician writes a prescription consisting of the drug, the dosage, and the “reinforcement schedule.” Under a reinforcement schedule of 80 percent, the patient would get a bottle of 100 pills, 80 of which were dummies.
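
Here is a minimal sketch of what Ader’s imagined prescription might look like in code, using the 80 percent schedule described above; the function and its parameters are hypothetical, not part of any real pharmacy system.

```python
# Hypothetical illustration of an 80 percent "reinforcement schedule":
# 80 of 100 pills are inert, shuffled so the patient cannot tell them apart.
import random

def fill_bottle(n_pills=100, placebo_fraction=0.8, seed=42):
    rng = random.Random(seed)
    n_placebo = round(n_pills * placebo_fraction)
    pills = ["placebo"] * n_placebo + ["active"] * (n_pills - n_placebo)
    rng.shuffle(pills)  # blinding: the order of pills carries no information
    return pills

bottle = fill_bottle()
print(bottle.count("active"), "active pills out of", len(bottle))  # 20 out of 100
```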

“You’re talking about many, many, many millions of dollars a year in drug treatment costs,” says Ader. He adds, “If [doctors] can produce approximately the same therapeutic effect with less drug, then it’s obviously safer for the patient, and I can’t believe they wouldn’t want to look into doing this.”

In either scenario–prescribing ersatz medicine alone or cutting active treatment with it–it’s easy to predict the concerns and controversies that would ensue. Might cost-conscious health care providers and insurers be tempted to push placebos for financial reasons? Would patients feel cheated and confused? Whether placebos can be successfully reframed as novel medicines and helpful “reinforcements” remains to be seen.

For other researchers, the data have led to very different territory: They’re looking for ways to elicit the placebo effect while jettisoning the placebo altogether.

Some researchers argue that the real source of a placebo’s effect is the medical care that goes along with it–that the practice of medicine exerts tangible healing influences. This notion has received support from experiments known as “open-hidden” studies. Fabrizio Benedetti, a professor at the University of Turin Medical School, has conducted a number of these, in which patients receive painkiller either unknowingly (they are connected to a machine that delivers it covertly) or in an open fashion (the doctor is present, and announces that relief is imminent). Patients in the “open” group need significantly less of the drug to attain the same outcome. In other words, a big part of the effect comes from the interactions and expectation surrounding the drug. Some call the disparity between the two scenarios the placebo effect. (Others, however, say the word “placebo” should be reserved for inert treatments, and press for different terms, such as “meaning response” or “context effect.”)

“Medicine is intensely meaningful,” says Daniel Moerman, a professor emeritus of anthropology at the University of Michigan at Dearborn who coined the phrase “meaning response.” “It’s this highly stylized, highly ritualized thing.” He urges us to “forget about the stupid placebo and start looking at the system of meaning involved.”

A recent study by Harvard’s Kaptchuk suggests the importance of ritual and the doctor-patient relationship. A 2008 paper published in the British Medical Journal described experiments conducted on patients with irritable bowel syndrome. Two groups underwent sham acupuncture, while a third remained on a waiting list. The patients receiving the sham treatment were divided into two subgroups, one of which was treated in a friendly, empathetic way and another with whom the doctors were businesslike. None of the three groups had received “real” treatment, yet they reported sharply different results. After three weeks, 28 percent of patients on the waiting list reported “adequate relief,” compared with 44 percent in the group treated impersonally, and fully 62 percent in the group with caring doctors. This last figure is comparable to rates of improvement from a drug now commonly taken for the illness, without the drug’s potentially severe side effects.

“It’s amazing,” says Kaptchuk. “Connecting with the patient, rapport, empathy . . . that few extra minutes is not just icing on the cake. It has biology.”

It may be, then, that the simplest and least ethically hazardous way to capitalize on the placebo effect is to acknowledge that medicine isn’t just a set of approved treatments–it’s also a ritual, with symbolism and meaning that are key to its efficacy. At its best, that ritual spurs positive expectations, sparks associations with past healing experiences, and eases distress in ways that can alleviate suffering. These meanings, researchers say, are what the placebo effect is really about.

If this is true, then the takeaway is not necessarily that we should be dispensing more fake pills–it’s that we should think less about any pill and more about the context in which it’s given. Whether we call it the placebo effect or use new terms, the research in this field could start to put a measurable healing value on doctors’ time and even demeanor, rather than just on procedures and pills. And that could change medicine in a way that few blockbuster drugs ever could.

Rebecca Tuhus-Dubrow is a contributing writer for Ideas.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/05/09/the_magic_cure/

Exercise can counter effects of age on brain

Physical exercise may help to rebuild parts of the brain that are lost with age, a study suggests.

Epileptic seizures might also trigger brain cell regeneration, according to research in animals.

Scientists believe the discovery may lead to new ways of tackling age-related memory loss or the effects of brain injuries and Alzheimer’s disease.

It used to be thought that from birth onwards, brain cells died off but were not replaced. Now it is known that at least some nerve cells can be replenished in the hippocampus, the brain region that plays a key role in learning and memory.

However, a large proportion of the stem cells that give rise to new neurons remain dormant in adults.

The new research in mice shows how these cells can be “kick-started” into action by physical activity or epileptic seizures.

Scientists in Germany found that physically active mice developed more newborn hippocampal neurons than inactive animals.

“Running promotes the formation of new neurons,” said the study’s leader, Dr Verdon Taylor, from the Max Planck Institute of Immunobiology in Freiburg.

Abnormal brain activity, as occurs during epileptic seizures, also appeared to trigger neuron generation.

Excessive formation of new nerve cells is thought to play a role in epilepsy, said Dr Taylor, whose research appears in the journal Cell Stem Cell.

In physically active mice, some previously dormant stem cells were seen to come back to life and start to divide.

Other sporadically dormant stem cells were unaffected by physical activity but awakened by epileptic seizures.

A similar pattern of active and inactive stem cells probably occurs in the human brain, said the scientists. It was likely that dormant stem cells could be reactivated in humans in the same way they were in mice.

John von Radowitz, The Independent

__________

Full article: http://www.independent.co.uk/news/science/exercise-can-counter-effects-of-age-on-brain-1965613.html

The Estrogen Dilemma

Here we are, two fast-talking women on estrogen, staring at a wall of live mitochondria from the brain of a rat. Mitochondria are cellular energy generators of unfathomably tiny size, but these are vivid and big because they were hit with dye in a petri dish and enlarged for projection purposes. They’re winking and zooming, like shooting stars. “Oh, my God,” Roberta Diaz Brinton said. “Look at that one. I love these. I love shooting mitochondria.”

Brinton is a brain scientist. Estrogen, particularly in its relationship to the health of the brain, is her obsession. At present it is mine too, but for more selfish reasons. We’re inside a darkened lab room in a research facility at the University of Southern California, where Brinton works. We are both in our 50s. I use estrogen, by means of a small oval patch that adheres to my skin, because of something that began happening to me nine years ago — to my brain, as a matter of fact. Brinton uses estrogen and spends her work hours experimenting with it because of her own brain and also that of a woman whose name, Brinton will say, was Dr. A. She’s dead now, this Dr. A. But during the closing years of her life she had Alzheimer’s, and Brinton would visit her in the hospital. Dr. A. was a distinguished psychotherapist and had vivid stories she could still call to mind about her years in Vienna amid the great European psychologists. “We’d spend hours, me listening to her stories, and I’d walk out of the room,” Brinton told me. “Thirty seconds later, I’d walk back in. I’d say, ‘Dr. A., do you remember me?’ And she was so lovely. She’d say: ‘I’m so sorry. Should I?’ ”

The problem with the estrogen question in the year 2010 is that you set out one day to ask it in what sounds like a straightforward way — Yes or no? Do I or do I not go on sticking these patches on my back? Is hormone replacement as dangerous in the long term as people say it is? — and before long, warring medical articles are piling up, researchers are raising their voices and gesticulating excitedly and eventually you’re in Los Angeles staring at a fluorescent rodent brain in the dark. “You want a statistic?” Brinton asked softly. Something about the shooting mitochondria has made us reverent. “Sixty-eight percent of all victims of Alzheimer’s are women. Is it just because they live longer? Let’s say it is, for purposes of discussion. Let’s say it’s just because these ladies get old. Do we just say, ‘Who cares?’ and move them into a nursing home? Or alternatively, maybe they are telling us something.”

With their brains, she means. Their sputtering, fading Alzheimer’s brains, which a few decades earlier were maybe healthy brains that might have been protected from eventual damage if those women had taken estrogen, and taken it before they were long past their menopause, while their own neural matter still looked as vigorous as those rat cells on the wall. This proposition, that estrogen’s effects on our minds and our bodies may depend heavily upon when we first start taking it, is a controversial and very big idea. It has a working nickname: “the timing hypothesis.” Alzheimer’s is only one part of it. Because the timing hypothesis adds another layer of complication to the current conventional wisdom on hormone replacement, it has implications for heart disease, bone disease and the way all of us women now under 60 or so — the whole junior half of the baby boomers, that is, and all our younger sisters — could end up re-examining, again, everything the last decade was supposed to have taught us about the wisdom of taking hormones.

I first met Brinton at a scientific symposium at Stanford University in January that was entirely devoted to the timing hypothesis. The meeting was called Window of Opportunity of Estrogen Therapy for Neuroprotection, and it drew research scientists and physicians from all over the country. When I asked to listen in, the organizers hesitated; these are colleagues around a conference table, they pointed out. They’re probing, interrogating, poking holes in one another’s work in progress.

But I was finally permitted to take a chair in a corner, and as the day went on, I became aware of my patch, in a distracted, hallucinatory sort of way, as if I had started fixating on a smallish scar. One after another, their notes and empty coffee cups piling up around them, heart experts and brain experts and mood experts got up to talk about estrogen — experiments, clashing data, suppositions, mysteries. There are new hormone trials under way that are aimed at the 40-year-old to 60-year-old cohort, with first results due in 2012 and 2013. There are depression studies involving estrogen. There are dementia studies involving estrogen. There are menopausal lab monkeys taking estrogen, ovariectomized lab mice taking estrogen and young volunteers undergoing pharmaceutically induced menopause so researchers at the National Institutes of Health can study exactly what happens when the women’s estrogen and progesterone are then cranked back up. I typed notes into my laptop for hours, imagining the patch easing its molecules into the skin of my back, and the whole time I was typing, working hard to follow the large estrogen-replacement thoughts of the scientists around the table, I had one small but persistent estrogen-replacement thought of my own: If I make the wrong decision about this, I am so screwed.

I started taking estrogen because I was under the impression that I was going crazy, which turns out to be not as unusual a reaction to midlife hormonal upheaval as I thought. This was in 2001. The year is significant, because the prevailing belief about hormone replacement in 2001 was still, as it had been for a quarter century, the distillation of extensive medical and pharmaceutical-company instruction: that once women start losing estrogen, taking replacement hormones protects against heart disease, cures hot flashes, keeps the bones strong, has happy effects on the skin and sex life and carries a breast-cancer risk that’s worth considering but not worrying about too much, absent some personal history of breast cancer or a history of breast cancer in the immediate family.

At first, as I was trying to locate a psychiatrist who would take me on, I wasn’t aware I had reason to pay attention to advice about hormones at all. That year I turned 47, a normal age for beginning the drawn-out hormonal-confusion period called perimenopause, but I had none of the familiar signs. Menopausal holdouts run in the family; one of my grandmothers was nearly 60 by the time hers finally kicked in. My only problem was a new tendency to wake up some mornings with a great dark weight shoving my shoulders toward the floor and causing me to weep inside my car and basically haul myself around as if it were the world’s biggest effort to stand up straight and carry on a conversation. Except for its having shown up so arbitrarily and then coming and going in waves, there was nothing interesting about my version of what my husband and I came to think of as the Pit; anybody who has been through a depression knows what a stretch of semidisabling despair feels like, and for my part I had a very nice life, a terrific family and a personal interior chorus of quarreling voices demanding to know why I didn’t pull up my socks and carry on, which in fact was the first question I planned to ask a psychiatrist.

But I went to my gynecologist first, so she could check my blood pressure or whatever seemed the prepsychiatrist thing to do. How often would you say you feel this way, she asked; and I said I didn’t know, maybe every few weeks; and she told me to start keeping records. Note each day, she said. Check for patterns.

She was right. There was a pattern. I was falling into the Pit on schedule, around 11 days before each menstrual period, or M.P., which is one of many abbreviations I was to learn in my efforts to keep track of the ferocious hormones debate that started up in North America in 2002, one year after I stuck on the first estrogen patch that my gynecologist prescribed. The study at the center of the ruckus was called the Women’s Health Initiative, or W.H.I. It was a federally financed examination of adult women’s health, extraordinary in scale and ambition, that started up in the early 1990s; one of its drug trials enrolled more than 16,000 women for a multiyear comparison of hormone pills versus placebos. On July 9, 2002, W.H.I. investigators announced that they had ended the trial three years early, because they were persuaded that it was dangerous to the hormone-taking participants to let them continue.

The women on hormones were having more heart trouble than their placebo-taking counterparts, the investigators said, not less. Their risk for stroke went up. Their risk for blood clots went up. Their risk for breast cancer increased by 24 percent. The W.H.I. bulletins dominated medical news all summer and long into the fall, and so alarming were their broad-scale warnings that millions of women, myself included, gave up hormone replacement and resolved to forge ahead without it.

The patches my gynecologist prescribed worked, by the way. I didn’t understand how, beyond the evident quieting of some vicious recurring hormonal hiccup, and neither did the gynecologist. But she had other women who came in sounding like me and then felt better on estrogen, and I would guess many of them, too, decided after the W.H.I. news that they could surely find other ways to manage their “mood swings,” to use the wondrously bland phrasing of the medical texts. (I’m sorry, but only someone who has never experienced one could describe a day of “I would stab everyone I know with a fork if only I could stop weeping long enough to get out of this car” as a “mood swing.”) We muddled along patchless, my mood swings and my patient family and I, until there came a time in 2006 when in the midst of some work stress, intense but not unfamiliar, I found myself in a particularly bad Pit episode and this time unable to pull out.

It was profoundly scary. In retrospect, I managed a surprising level of public discretion about what was going on; competence at the cover act is a skill commonly acquired by midlife women, I think, especially those with children and work lives. If the years have taught us nothing else, they have taught us how to do a half dozen things at once, at least a couple of them decently well. Like other women I have met recently with stories like this one, I relied for a few months on locked office doors, emergency midday face-washings and frequent visits to an increasingly concerned talk therapist. But one afternoon I got off my bicycle in the middle of a ride with my husband, because I had been crying so hard that I couldn’t see the lane lines, and I sat down on the sidewalk and told him how much I had come to hate knowing that family obligations meant I wasn’t allowed to end my life. The urgent-care people at my health clinic arranged a psychiatric consult fast, and after listening and nodding and grabbing scratch paper to draw me an explanatory graph with overlapping lines that peaked and plunged, the psychiatrist wrote me two prescriptions. One was for an antidepressant.

The other — I recognized the name as soon as she wrote it down — was for Climara, my old estrogen patch.

By this time we were four years past the 2002 W.H.I. hormone news. So I knew a few more things. I knew there had been a surge of industrious scrambling among former hormone-taking women, some of whom had tried multiple alternatives or going cold turkey and then changed their minds and re-upped on estrogen, deciding that life without it was so unpleasant that they no longer cared what the statistical prognoses said. I knew the prevailing medical sentiment had shifted slightly since the bombshell of 2002; certain articles and books still urged women to shun hormone replacement at all costs, but the more typical revised counsel was, essentially, proceed with great caution. If some menopausal malady is genuinely making you miserable, the new conventional wisdom advised, and no alternative remedy is working for you, then go ahead and take hormones — but keep the dose low and stop them as soon as possible.

I would like to be able to tell you that I weighed these matters thoughtfully, comparing my risks and benefits and bearing in mind the daunting influence of a drug industry that stands to profit handsomely from the medicalizing of normal female aging. But that would be nonsense, of course. I was too crazy. I went straight to the pharmacy and took everything they gave me.

You don’t read the fine print on package labels when you’re being ushered through a psychiatric crisis, but after a while, I did. By last winter I was nearing the cumulative five-year mark as an estrogen user, and although “low dose, stop soon” is often an advisory without specifics attached, five years seemed to turn up here and there as an informal outer-limit guideline. And because it had worked again, because the estrogen so clearly helped repair something that was breaking (there’s no way for me to separate the effects of estrogen from the effects of the antidepressant, except that on the few occasions when I’ve been haphazard about replacing the estrogen patches on time, I’ve experienced prompt and unmistakable intimations of oncoming Pit), I now had some rational faculties with which to go looking for explanations that might help me decide what to do. This was when I first began learning that in the controversy over hormone replacement, the fine print matters a very great deal.

First of all, the kind of estrogen in my patches — there are different forms of estrogenic molecules — is called estradiol. It’s not the estrogen used in the W.H.I. study. Pharmaceutical estradiol like mine comes from plants whose molecules have been tweaked in labs until they are atom for atom identical to human estradiol, the most prominent of the estrogens premenopausal women produce naturally on their own. The W.H.I. estrogen, by contrast, was a concentrated soup of a pill that is manufactured from the urine of pregnant mares. The drug company Wyeth (now owned by Pfizer) sells it in two patented products, the pills Premarin and Prempro, and it’s commonly referred to as “conjugated equine estrogens.”

There was more in the fine print. Two years ago, after warning me that women who haven’t had a hysterectomy run a higher risk of uterine cancer when they take only estrogen as hormone replacement, a new doctor added in progesterone, which has been shown to protect the uterus. The progesterone he prescribed for me, like the estradiol, is a molecular replica of the progesterone women make naturally. It’s different from the progesteronelike synthetic hormone that was used for the W.H.I. study that ended in 2002. That medication was a formulation whose multisyllabic chemical name shortens to MPA and which has a problematic back story of its own: MPA takes care of the uterine-cancer risk, but there’s reason to suspect it may be a factor in promoting breast cancer. And it’s ingested as a pill, which means that like equine estrogens (and unlike, for example, my patch), MPA metabolizes through the liver, possibly creating additional complications en route, before going about its business.

The biggest difference between me and the W.H.I. women, though, has to do with age and timing. I started on the patches while my own estrogen, pernicious though its spikes and plummets may have been, was still floating around at more or less full strength. The average age of the W.H.I. women was just over 63, though the study accepted women as young as 50. More significant, though, most of them were many years past their final menstrual period, which is the technical definition of menopause, when they began their trial hormones. The bulk of the group was at least 10 years past; factoring in the oldest women, the average number of years between the volunteers’ menopause and their start on the trial medications was 13.4.

Because women generally make decisions about hormones while they are in the throes of perimenopause — that term is now used to extend through the year following the final M.P. — you may find this as perplexing as I did. Why would the largest drug trial in the history of women’s health select, for most of its participants, women already long past the critical phase? I heard one undiplomatic critic sum up the W.H.I. as “the wrong drugs, tested on the wrong population,” and those two factors, the drugs and the population, are actually directly linked. Equine estrogens and MPA were the only forms of hormones used in the W.H.I. trials. Among other reasons, that’s because drug trials are expensive; this one was huge, and Wyeth was going to provide without cost an average of eight years’ worth of its equine estrogens and MPA to 40 clinical centers.

And millions of women were using those very hormones already, partly because aggressive Wyeth marketing had for three decades insisted that hormone replacement was the ticket to a vigorous and sexually satisfactory postmenopausal life. To a certain extent, evidence backed up that claim; wide-scale though less rigorous earlier studies appeared to demonstrate hormone replacement’s benefits so clearly that many physicians were suggesting it almost automatically to midlife women, whether or not they had perimenopausal complaints. Hormones raised the breast-cancer risk in those earlier studies, but nearly every other health factor showed improvement when women who took hormones were compared with those who didn’t. Hot flashes disappeared, osteoporosis was milder, women reported feeling better and women who took hormones showed a markedly lower rate of heart disease than women who did not.

Because heart disease ultimately kills many more women than all cancers combined, some doctors had also taken to urging older women, even those past menopause, to start hormones for cardiac-health purposes. The W.H.I. trials were supposed to provide conclusive evidence, finally, as to whether all this wide-scale prescribing was truly a sound idea. But cardiovascular disease tends to make its bids for attention — its “events,” as clinicians say, like heart attack and death — when we’re quite a bit past 51, the average age at which American women hit menopause. The only way the W.H.I. was going to tally up a scientifically useful number of cardiac events was to enroll plenty of women already old enough to reach that danger stage before the study’s time ran out. So that’s what they did, and once the final data was reparsed many times, it was clear that the trial had shown physicians something highly important about the perils of starting older postmenopausal women (that’s qualifier No. 1) on pills (No. 2) containing equine estrogens (No. 3) plus MPA (No. 4).
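
The statistical pressure described here is easy to quantify: expected event counts scale with enrolment, follow-up years and the age-specific event rate, so an older cohort delivers far more analyzable events for the same money. The rates in the sketch below are invented for illustration; they are not W.H.I. data.

```python
# Back-of-the-envelope trial arithmetic with invented event rates.
def expected_events(n_women, years, annual_event_rate):
    return n_women * years * annual_event_rate

n_women, years = 16000, 8
for label, rate in [("cohort around age 50 (assumed 0.1%/year)", 0.001),
                    ("cohort around age 65 (assumed 0.5%/year)", 0.005)]:
    print(label, "->", round(expected_events(n_women, years, rate)), "expected events")
```

Under those assumed rates, the older cohort yields roughly five times as many events from the same enrolment, which is why a trial built to detect cardiac effects skewed old.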

Those four qualifiers make the chief message of the W.H.I. — that taking hormones, in the long run, is more likely to hurt you than help — far more specific than the one most women heard. For those of us not yet on the far side of menopause, or who don’t match the other qualifiers (as I write this, for example, I’m zero for four), a daunting proportion of what we thought we learned about hormone replacement over the last eight years remains unsettled, more confusing than ever and conceivably — we don’t know yet — wrong. “I mean, if you’re a 70-year-old,” says S. Mitchell Harman, a Phoenix-based endocrinologist and coordinator of one of the national trials currently examining hormones’ effects on younger women, “and your question is, Should I start taking estrogen? the W.H.I. answered that for you beautifully. No. Unfortunately, it wasn’t designed to answer that question for a 50-year-old. So now we’re trying to fill in the blanks.”

One afternoon last month, I reported to the Northern California site for an N.I.H.-financed cognitive trial that is part of the Kronos Early Estrogen Prevention Study that Harman is leading. Keeps, as it’s called, has enrolled women at nine such sites around the country; this one was inside a medical building at the University of California, San Francisco, and the cognition test I asked to try proved to be a low-tech experience: a table with chairs, pens and pencils and a gentle-voiced psychologist asking me to do things with my brain. Number sequences repeated backward, lists of random objects to recall, designs to remember and copy — I promised not to describe specifics, because making details public could compromise the trial results. But imagine a stranger holding up a stopwatch and giving you 30 seconds to name every dessert item you can think of. The brain charges off into a comical panic grope, and it’s like a cross between a back-seat car game and the SATs.

The only grading marker, though, is self compared to self. If I were a Keeps participant, I would be on a four-year regimen of some mystery medication — either estrogen, in one of two forms (estradiol patches or equine-estrogen pills, to see whether differences emerge between the two), or placebo patches or placebo pills. Then in another year, I would retake the cognition test, which lasted about an hour and a half, so researchers could track any change. Brain function is a major element of the Keeps agenda; the other is heart health, so the test administrators would conduct annual ultrasounds of my carotid artery, to check for the thickening that signals heart disease. That’s how they are trying to circumvent the doesn’t-manifest-until-you’re-older problem, by measuring for known warning markers rather than waiting for the actual big events. They would check my blood and cholesterol for signs of other cardiovascular trouble.

With about 730 participants, Keeps is relatively small; hormone research has been tough to finance in the post-W.H.I. years, and every scientist and physician I’ve spoken to said there will never again be another hormone trial as costly and ambitious as the W.H.I. A second study, based in Los Angeles, called the Early Versus Late Intervention Trial With Estradiol, is following more than 600 women — comparing a group that has been post-menopausal for an average of 15 years and that is on estradiol or on a placebo with a second, younger group that is an average of three years post-menopausal. “This is the age when we should really study estrogen,” says Sanjay Asthana, a University of Wisconsin medical professor who is a designer of the cognition component of Keeps. “People like me are really waiting to see what this data looks like. Either way. We need to know.”

Asthana is a geriatrician, with a specialty in Alzheimer’s and other forms of age-related memory loss. That makes him a member of what I came to think of, in my travels among estrogen researchers this winter, as the brain contingent. Their working material includes neuroimaging; magnified slices of rodent brains; and live cells that carry on in petri dishes, shooting mitochondria around or struggling under the burden of disease. All these things allow the brain contingent to see, sometimes literally, estrogen in action. It’s an amazing process. When cells are healthy, estrogenic molecules slide right in, searching for special receptors that are shaped precisely for the estrogens: the receptors are tiny locks, waiting for the right molecular keys to turn them on. Then, once they are activated by the key-turning process, the work estrogen receptors do is richly complex, if only partly understood. They prod genes into action; they raise good cholesterol; they affect the neurotransmitter chemicals associated with mood and stress, like serotonin and dopamine.

And the brain, scientists have learned in recent decades, is loaded with these receptors. Knowing this makes it easier to understand how perimenopause could start inside aging ovaries and set off such a wild cascade of effects. If you’re a typical woman moving through your 40s or 50s, your lifetime egg supply is running out; as that happens, the intricate, multihormone reproductive-signaling loop grows confounded, its triggers altered by the biology of change. The brain and ovaries, the primary stops along this loop, start misreading each other’s demands for action. This can make estrogen production crank up frantically, crash and then crank up again. Something also goes awry with most women’s thermoregulatory systems, producing hot flashes in around three-quarters of us — nobody yet knows why, exactly, nor why certain women go on flashing for many years while some escape the whole must-remove-outer-garments-now phenomenon entirely. There’s an admirably clear explanation of the complete process in a recent book called “Hot Flashes, Hormones and Your Health,” by JoAnn Manson, a Harvard medical professor who worked with both W.H.I. and Keeps. My favorite illustration in Manson’s book shows an actual woman’s hormone fluctuations as measured before, during and after perimenopause; the “before” graph is a row of calm, evenly spaced ups and downs, various hormones rising and falling in counterpoint and on cue. The lines in the “after” graph are virtually flat. The “during” graph looks as if somebody dynamited a mountain range.

Not all women, Manson notes, experience disruptions as robust as this unidentified patient’s. But consider the mess of internal rearrangement we’re looking at: the body’s overall estrogen production is waning as the ovaries start atrophying into full retirement; and here simultaneously, at least for some of us, is this great Upheaval of During. The combination of the two can be — how could it not, I thought, the first time I studied the three graphs — a hellacious strain on the brain. Tracing the exact mechanics is still a work in progress, but they surely include some disruption of signaling to the neurotransmitters that make us remember things, experience emotions and generally choreograph the whole thinking operation of the human self.

“There are all these fundamental cognitive functions that many perimenopausal women complain about, and one of those fundamentals is attention,” Roberta Brinton, the U.S.C. scientist, told me. “When you can’t hold your attention to a thought. Where you’re in constant start mode, and you never reach the finish mode. That is devastating.”

This was Brinton, as it happens, describing herself. It’s why she first went on estrogen (estradiol, accompanied by natural progesterone) when her own perimenopause kicked in a few years ago. We were sitting in a campus garage in her Prius one day, and I asked her what made her so sure her own midlife difficulties — she had the hot flashes, which were obvious, but also the sleep disruption and the infuriating distractibility — were the product of hormonal events, not some womanly existential crisis. We get a lot of that, societally. It’s meant to be empathetic. Your role in life is changing, Mrs. Brain Seized by Aliens! Your children are growing up, you’re buying expensive wrinkle cream, ice cream makes you gain weight now, of course you’re distraught! “Because with estrogen — ” Brinton looked at me sharply, and then smiled — “I don’t have attention-deficit disorder.”

We walked back up to her laboratories, which are spread along a many-roomed warren full of cell incubators, centrifuges and computers. Brinton has thick black hair and a demeanor of lively, good-humored authority; it’s easy to envision her as the passionate science professor in crowded lecture halls. But in her labs the work is all rats and mice, many of them surgically or genetically altered to serve as surrogates for adult humans in various stages of maturation or disease. Removing the ovaries from female rats, for example, sends them into low-estrogen mode. Mice can be ordered bred with Alzheimer’s. The plaque that clogs the brains of Alzheimer’s sufferers, a noxious memory-disrupting substance called beta amyloid, is available as a chemical distillate, which means Brinton’s team can experiment with that too — beta amyloid dropped into the brain cells of healthy low-estrogen rodents; or estrogen dropped into cells already damaged by beta amyloid.

That’s why Brinton says that the timing hypothesis — the proposition that estrogen could bring great benefit to a woman who starts it in her 50s while having the reverse effect on a woman 10 years older — makes sense even though it is still experimental. She and other scientists know there are ways estrogen improves and protects the brain when it is added to healthy tissue. It makes new cells grow. It increases what’s called “plasticity,” the brain’s ability to change and respond to stimulation. It builds up the density and number of dendritic spines, the barbs that stick out along the long tails of brain cells, like thorns on a blackberry stem, and hook up with other neurons to transmit information back and forth. (The thinning of those spines is a classic sign of Alzheimer’s.)

But when estrogen hits cells that are already sick — because they’re dying off as part of the natural aging process or because they’ve been damaged by beta amyloid — something else seems to happen. Dropped in as a new agent, like the wrong kind of chemical solvent sloshed onto rusting metal, estrogen doesn’t strengthen or repair. It appears useless. Sometimes it sets off discernible harm. You may recall additional W.H.I. news a few years ago about hormones increasing the risk for aging-related dementia; those stories emerged from a subgroup of W.H.I. participants who were all at least 65 when they started the hormones. There are arguments about that data, like nearly everything else connected to the W.H.I., but the age factor alone reinforces what Brinton and other timing-hypothesis researchers observe in the labs when they give estrogen to ailing cells. “It’s like the estrogen is egging on the negative now, rather than the positive,” she said. “We know that if you give neurons estrogen, and then expose them to beta amyloid, many more will survive. But when we expose them to amyloid and then give them estrogen — now you don’t have survival of the neurons. In some instances, you can actually exacerbate their death.”

The heart contingent exploring the timing hypothesis is reasoning the same way. Monkeys get both cardiovascular disease and their own version of menopause; there is a primate team at Wake Forest University in North Carolina that has found estrogen to be a strong protectant for females against future heart disease — but only when it’s given at monkey perimenopause. Give estrogen the equivalent of six human years later, says Tom Clarkson, the pathology professor who has been leading this work for decades, and there is no protective effect at all.

Clarkson, who is 78, told me that if he were 30 years younger and a woman, with hot flashes or sleep trouble or sudden crashes of mood, he would have no hesitation about taking hormones. “I absolutely believe in the timing hypothesis,” he said. Then, being a scientist, he corrected himself. “I would have to say my level of certainty is 95 percent or greater,” he said. “I live a life of believing in the experimental evidence.”

So noted, I replied. And what if the symptoms were annoying but bearable or there were no symptoms at all? I’ve asked the same questions of every researcher I talked to this spring, and nearly all of them replied the same way: if they were deciding for themselves personally, they would tip the risk-benefit scale strongly in favor of hormones as a remedy for immediate ailments of perimenopause. But estrogen solely as a protectant for the heart and brain, to be taken for many years, absent any immediate serious complaints? There was a pause, and I heard Clarkson sigh. “We just don’t know about that yet,” he said.

The personal calculus of risk is an exhausting exercise in the modern era, what with litigation-jumpy physicians, the researchers’ candid “We just don’t know” and the bottomless learn-it-yourself maw of the Internet. Of all the conversations I had this winter, as I weighed and reweighed the stopping of the patch, the one that most resonates took place on a snowy morning in Washington, in the office of a nursery-school director named Julia Berry. Berry lives not far from the headquarters of the National Institutes of Health in Bethesda, Md., which is why last September she pulled from her mailbox a card the N.I.H. has been mailing to local women within a certain age range. “If you struggle with irritability, anxiety, sadness or loss of enjoyment at the time of the menopausal transition,” the card reads, “please call us and help yourself while helping others.”

The N.I.H., it turns out, has been quietly conducting mood and hormone studies for more than two decades under the direction of a psychiatrist named Peter Schmidt and his predecessor, David Rubinow, who is now chairman of the psychiatry department at the University of North Carolina. The research was first set into motion by Rubinow’s postgraduate interest in premenstrual syndrome; the idea of giving younger women drugs to temporarily lower and flatten their estrogen and progesterone levels, essentially inducing menopause, was initially conceived to determine the role of hormones in PMS — to see whether these young women got relief when their hormones stopped the cresting and dropping of the normal menstrual cycle. (It often worked as a short-term treatment, and yes, the young women often got hot flashes.) In recent years, the induced-menopause experiments have continued, among many other studies, as part of an effort to understand the chemistry of women like Julia Berry and me — women for whom perimenopause turns into what Berry described to me as “psychological misery, not myself and absent from the world.”

Berry is 55, ponytailed and roundish and pretty. She was divorced a long time ago, raised three good kids mostly on her own and has a firm handshake and a job she loves. Her troubles started in her late 40s, in the standard way, with hot flashes and jerking awake at 3 a.m. and then escalated into something much fiercer. Like me, at the worst of it, she occasionally found herself in traffic, wishing silently for an oncoming truck that might exit her swiftly from this life without qualifying as a suicide. A physician prescribed antidepressants. They helped, with both the anguish and the flashes, but not enough. “I am one of the most steady, even-keeled, hard to ruffle, really unflappable . . . truly,” Berry told me. “I am. I, generally speaking, can be completely relied upon to do the sensible right thing almost all the time. Which is one of the reasons this period in my life has been so weird.”

She called the N.I.H. number at once. She was quickly evaluated, enrolled in a double-blind study of the effects of estrogen on perimenopausal depression and sent home with a paper bag containing a mystery patch. When I asked Berry to describe the sensation of the next few weeks, she looked up at the ceiling for a second to think. “Kind of like having been in a smoky room, waving your arms and now seeing that the exhaust fan is taking a little at a time,” she said. “My mood lifted. First time in three years I wasn’t waking up at 3 in the morning. That’s when I knew I wasn’t on the placebo. It was very clear to me that there was something fundamentally wrong with my chemical systems, and that whatever was in this patch was setting things right, so that I could function like a regular human being — the human being I was familiar with.”

What medicine doesn’t know about the chemistry of mood, including clinical depression, dwarfs what medicine doesn’t know about hormones. It would be handy for science if Berry and I could have made our heads available for dissection at certain points in recent years; as it is, we’re able to answer as many elaborations on “I feel bad” or “I feel good” as researchers might wish to throw at us, but they still have no way of pinning down where we belong on the scale of menopausal distress, or what exactly we’re doing there. We could be extra-high-volume versions of the women who are having an ordinary rough time of it, like Roberta Brinton — the women who hot-flash and can’t sleep and cast about for vocabulary with which to describe feeling, as Brinton puts it, “just off.” Or we could belong to some subcategory of anomalies, women with a wired-in susceptibility to depression — gene pools, childhoods, whatever — that was fired up by abrupt hormonal change.

Some psychological surveys will tell you there’s no evidence for a surge of clinical depression at menopause. I believe that, given how many other phases of life can unhinge us, but I also believe — no, actually, I know — that there is a difficult thing that happens to some women in the perimenopausally affected brain. Hostile as I am to generalizations involving women rendered fragile by biology, here I am, and here, too, is Berry, both of us pulled out of something terrible by a pharmaceutical infusion of estrogen. Two physicians who specialize in hormones and mood, Louann Brizendine, a neuropsychiatrist at the University of California, San Francisco, and Claudio Soares, a Canadian research psychiatrist who works at McMaster University in Ontario, told me that women who seek them out tell variations of the same story Berry and I took to our doctors: I know that something is wrong with me because I also know, somewhere in the noncrazy part of myself, that there is such pleasure to be offered by the circumstances of my grown-up life.

“These women thought they were losing their minds,” Brizendine told me, describing the 40-to-60-year-old patients she began seeing when she opened the Women’s Mood and Hormone Clinic at the university in 1994. “In 1994 we didn’t have words for it,” she said. “Now we do. It’s called perimenopausal depression.”

Brizendine and Soares, like Schmidt and Rubinow, have found that various combinations work with varying degrees of effectiveness for many of us — hormones with an antidepressant, hormones without an antidepressant, sometimes antidepressants on their own. The alternatives-to-hormones recommendations are mostly fine things in their own right, varying from certainly useful to harmless: exercise regularly, keep the weight down, easy on the caffeine, calm yourself with deep breathing or yoga, try black cohosh. (You could start a bar brawl over the efficacy of black cohosh, but the general consensus seems to be: if it works for you, go for it.) But the troubles set off by ricocheting hormones are reliably fixed by making the hormones stop ricocheting. And the laborious weighing of hormones’ benefits versus hormones’ harms — maybe not at the crisis moment, for those of us at our most distraught, but later, one or two or five years down the road — is something still undertaken by millions of women along the full breadth of the perimenopausal spectrum.

How in the world to do it wisely enough so the calculation is as right for each of us as it can possibly be? JoAnn Manson’s book contains the most careful checklist I’ve seen yet; by the time you answer all the personal-history questions the book asks you to consider, you’ve read 82 pages. Breast cancer is a factor, to be sure, but so are colorectal cancer, ovarian cancer, stroke, hip fracture and diabetes. If the timing hypothesis proves right and estrogen really does protect our brains and our hearts as long as we start it early enough, the calculation only grows that much more important and complex. There are moving pieces involved in working out every one of these risks in relation to everything else, and anyone who thinks there’s a bumper-sticker answer to the hormones question — don’t take them, you’re sure to be better off — is, like me that day in the psych unit, neither listening to scientific argument nor reading the fine print.

Here’s one example from the many to which researchers have pointed me this winter. Remember MPA? The synthetic progesteronelike substance used along with equine estrogens in the W.H.I.? There was a second W.H.I. hormones-versus-placebo trial, of nearly 11,000 women, that was also started in the early 1990s, just like the one that was halted in 2002. All the women enrolled in this second study had undergone hysterectomies, which meant they had zero risk for uterine cancer. So the women on medications in this trial were taking only equine estrogens — no MPA, which you’ll recall is given to protect the uterus. Their study was stopped in 2004, also before its planned end date, because the estrogen-taking women were showing a higher risk of stroke than the women on the placebo. But their breast-cancer rate was lower. The hormone-taking women with hysterectomies in that second study, who used estrogen without MPA, showed a 23 percent lower risk of invasive breast cancer than their counterparts who were taking no hormones at all.

Nobody’s persuaded that this means MPA promotes breast cancer while estrogen does not. It’s clear that estrogen acts aggressively on certain breast malignancies and that any woman who has had breast cancer or has a history of it in her immediate family should stay off estrogen. This is one of the principal reasons such intense work is under way right now, in labs like Roberta Brinton’s, to develop estrogenic variants — molecular substances designed to latch only to certain receptors (in the brain, say, where the activated receptors can do their good works) while ignoring receptors in the breast and uterus. And there are plenty of confounding factors, as scientists say, with regard to the women in the no-MPA trial. They all had undergone hysterectomies, for one thing; maybe whatever caused them to require uterine removal in the first place affected their reactions to the estrogen.

Or it could have been a fluke. But the MPA wrinkle adds suspicion and urgency to the timing-hypothesis questions about what really goes on when women of our demographic use hormones, and Julia Berry and I spent a long time talking about this, the adding and subtracting, the guessing and weighing, the balancing of what we think we know about ourselves against what we cannot possibly foresee. We will both, for the present, continue wearing estrogen patches. Berry turned out to be right, of course; she wasn’t on the placebo, which the N.I.H. doctors told her when she finished the study. And as she hurried to fill her own patch prescription, she found her gratitude mixed with more than a little frustration. “Why did my primary-care physician give me an antidepressant when I could have had something simple, like estrogen?” she asked. “Why don’t they know?”

We talked about breast cancer, because that is the nightmare illness in nearly all our calculations, for most of us the visual closest to hand. Three of my best friends have endured the full breast-cancer horror show and by now have retired their wigs. All have survived. None had been on hormone replacement. This is information that batters me steadily but not helpfully, like my ex-smoker paternal aunt’s fatal lung cancer and the fact that I’m a lifetime nonsmoker and regular exerciser with extremely good cholesterol levels. How do my lowered risks from one column balance against my question marks over in another column? What to do?

“I’d rather monitor something I know can go wrong than go on living in the state I was in,” Berry said. “I could have my breasts removed. I like them. But they’re not my life.”

We’ve spent a fair bit of time by now, Julia Berry and I, shaking these uncertainties out and squinting at them. Do we wear these patches forever? We don’t know. What happens when we do take them off, if we do? We don’t know. Have we done nothing except delay a biological process, complete with hot flashes and another round of truck-crash fantasies, that at some point we’ll have to bully our way through? We don’t know, nor does any researcher I talked to this spring.

And there’s this: Should luck and longevity cooperate, we are going to grow old. We’re already old, by the standards of our children and our ancestors, but the generation to which we belong expects to live a rich messy life full of extremely loud rock music for another 30 years after menopause. Every midlife woman I know keeps redrawing for herself the defensible lines of intervention in the “natural” sequence of human aging. Obsessive multiple plastic surgeries are silly and desperate. Muscles kept in good working order are not. Where on that spectrum is a hormones-saturated pharmaceutical patch? What if the timing hypothesis is even partly right? Suppose all we learn about replacement estrogen, in the end, is that if it’s started early enough it might protect the heart and the brain, and that its chemistry makes some of us feel more the way we did at 40 than the way our mothers did at 65? Not an elixir of youth. More like . . . reading glasses. Or calcium supplements, or painkillers that stop the knee from hurting but carry risk warnings of their own. It has occurred to me that the better analogy might be a 13-year-old trying to ward off puberty by binding her breasts, but most of the time I don’t think so, and if I do try stopping the patches, I know this to a certainty: I will keep a few extras in reserve, just in case.

Cynthia Gorney is a contributing writer to the magazine. She teaches at the Graduate School of Journalism at the University of California, Berkeley.

__________

Full article and photo: http://www.nytimes.com/2010/04/18/magazine/18estrogen-t.html

The Claim: Green Tea Can Help Lower Blood Pressure

THE FACTS

Few foods have a reputation for soothing stress quite like a hot cup of tea.

Green tea, in particular, has been linked to reduced stress and anxiety, and it contains compounds that are said to relax blood vessels. But when scientists have looked at whether it lowers blood pressure, even by a little, the evidence is fairly weak. Some small studies have found that a few cups a day can shave some points from blood pressure levels, but others have found that it provides no help at all, and may even be counterproductive.

Still, the news is not all bad for tea drinkers.

In a recent randomized study financed in part by the Department of Agriculture, scientists at Tufts University recruited 65 men and women with modestly high blood pressure who were not taking medication. Some were randomly assigned to drink a cup of hibiscus tea three times a day, while others received a tea-flavored placebo.

After six weeks, the tea group saw a respectable drop in systolic pressure — the top number in the reading — compared with the placebo group, suggesting the tea had a modest but genuine effect.

Of course, replication is the cornerstone of good science, and a single study is not enough to base conclusions on; experts say more research is needed.

THE BOTTOM LINE

Green tea doesn’t seem to have much effect on blood pressure; hibiscus tea may have potential.

Anahad O’Connor, New York Times

__________

Full article: http://www.nytimes.com/2010/05/04/health/04real.html

When the Ties That Bind Unravel

Therapists for years have listened to patients blame parents for their problems. Now there is growing interest in the other side of the story: What about the suffering of parents who are estranged from their adult children?

While there are no official tallies of parents whose adult children have cut them off, there is no shortage of headlines. The Olympic gold-medal skier Lindsey Vonn reportedly hasn’t spoken to her father in at least four years. The actor Jon Voight and his daughter, Angelina Jolie, were photographed together in February for the first time since they became estranged in 2002.

A number of Web sites and online chat rooms are devoted to the issue, with heartbreaking tales of children who refuse their parents’ phone calls and e-mail and won’t let them see grandchildren. Some parents seek grief counseling, while others fall into depression and even contemplate suicide.

Joshua Coleman, a San Francisco psychologist who is an expert on parental estrangement, says it appears to be growing more and more common, even in families who haven’t experienced obvious cruelty or traumas like abuse and addiction. Instead, parents often report that a once-close relationship has deteriorated after a conflict over money, a boyfriend or built-up resentments about a parent’s divorce or remarriage.

“We live in a culture that assumes if there is an estrangement, the parents must have done something really terrible,” said Dr. Coleman, whose book “When Parents Hurt” (William Morrow, 2007) focuses on estrangement. “But this is not a story of adult children cutting off parents who made egregious mistakes. It’s about parents who were good parents, who made mistakes that were certainly within normal limits.”

Dr. Coleman himself experienced several years of estrangement from his adult daughter, with whom he has since reconciled. Mending the relationship took time and a persistent effort by Dr. Coleman to stay in contact. It also meant listening to his daughter’s complaints and accepting responsibility for his mistakes. “I tried to really get what her feelings were and tried to make amends and repair,” he said. “Over the course of several years, it came back slowly.”

Not every parent is so successful. Debby Kintner of Somerville, Tenn., sought grief counseling after her adult daughter, and only child, ended their relationship. “It hit me like a freight train,” she said. “I sit down and comb through my memories and try to figure out which day was it that it went wrong. I don’t know.”

Ms. Kintner talks of life as a single parent, raising an honor student who insisted her mother accompany her on a class trip to London, a college student who made frequent calls and visits home. Things changed after her daughter began an on-again, off-again relationship with a boyfriend and moved back home after becoming pregnant. Arguments about her daughter’s decision to move in with the man and Ms. Kintner’s refusal to give her daughter a car eventually led to estrangement. She now has no contact with her daughter or three grandchildren.

“I knew parents and children had fights, but there was enough love to come back together,” Ms. Kintner said. “This is your mother who gave you a nice life and loved you.”

Judith, a mother in Augusta, Ga., who asked that her last name not be used, tells of a loving, creative daughter who experienced a turbulent adolescence. At college graduation, the parents were shocked when their daughter unleashed an angry tirade about her childhood. Later, the daughter asked for financial help paying for an Ivy League graduate school. The parents agreed, but a visit to see her on the East Coast was marred by another round of harsh words and accusations. They withdrew their financial support and returned home.

“I’ve done a lot of crying,” said Judith, who has sought therapy to cope. “I’m very depressed. All the holidays are sad, and we don’t have any closure on this. She was so wanted. She was so loved. She still is loved. We want her in our life.”

Dr. Coleman says he believes parental estrangement is a “silent epidemic,” because many parents are ashamed to admit they’ve lost contact with their children.

Often, he said, parents in these situations give up too soon. He advises them to continue weekly letters, e-mail messages or phone calls even when they are rejected, and to be generous in taking responsibility for their mistakes — even if they did not seem like mistakes at the time.

After all, he went on, parents and children have very different perspectives. “It’s possible for a parent to feel like they were doing something out of love,” he said, “but it didn’t feel like love to that child.”

Friends, other family members and therapists can often help a parent cope with the loss of an estranged child. So can patience: reconciliation usually takes many conversations, not just one.

“When I was going through this, it was a gray cloud, a nightmare,” Dr. Coleman said. “Don’t just assume if your child is rejecting you that that’s the end of the conversation. Parents have to be on a campaign to let the child know that they’re in it for the long haul.”

Tara Parker-Pope, New York Times

__________

Full article and photo: http://well.blogs.nytimes.com/2010/05/03/when-the-ties-that-bind-unravel/