When Procreation Is a Matter of Real Estate

Sexual female rotifers, top, carrying resting eggs (the darker eggs); asexual females carrying lighter-shaded amictic eggs; and a single asexual female with an attached asexual egg.

The choice to have sex has everything to do with location, at least for tiny freshwater creatures called rotifers.

Rotifers can reproduce sexually or asexually, and the decision to go one way or another depends on the animals’ habitat, according to a new study in the journal Nature.

The researchers bred rotifers in three different environments: one in which the quality of available food was high, one in which it was low and one in which it was mixed. The rate of sexual reproduction remained the same where the food quality was consistently high or low, but it increased significantly in the mixed region over generations, the researchers found.

In the mixed environment, asexual females were more likely to produce sexually reproducing female offspring. In the two homogeneous regions, females tended to produce asexual females — carbon copies of themselves.

The researchers believe that a more diverse set of genes is a useful survival tool in a heterogeneous environment.

“That would be the explanation as to why sex is beneficial and why the rate of sex goes up,” said Lutz Becks, an evolutionary ecologist at the University of Toronto and the study’s first author. “You are mixing your genes.”

After 12 weeks, or about 80 generations of rotifers, the researchers found that about 80 percent of the population in the heterogeneous group was sexual, compared with only about 40 percent in the homogeneous groups.

“Nature is, of course, different from our simple laboratory environment, but this allows us to follow the rate of sex in real time,” Dr. Becks said.

Sindya N. Bhanoo, New York Times


Full article and photo: http://www.nytimes.com/2010/10/19/science/19obrotifer.html

Caterpillars That Thrive in Water and on Land

Plenty of animals can live equally well in air or water. They don’t call frogs amphibians for nothing, after all. But insects — whoever heard of an amphibious insect?

Daniel Rubinoff, a biologist at the University of Hawaii, has. In a paper in The Proceedings of the National Academy of Sciences, he and a postdoctoral researcher, Patrick Schmitz, report not one but a dozen species of very small caterpillars that can feed and breathe indefinitely both in and out of water. They are the first insects known to be truly amphibious.

The caterpillars are members of the moth genus Hyposmocoma, which is endemic to Hawaii and has about 400 known species, almost all of them strictly terrestrial. The amphibious ones live around rocks in Hawaii’s mountain streams. Dr. Rubinoff said he did not know why collectors had not noticed them before. “It’s almost like they closed their eyes when they crossed the streams,” he said.

The caterpillars, which build silk cases in different shapes according to the species, breathe in air like other insects, through small openings called spiracles. When under water, Dr. Rubinoff said, the caterpillars most likely obtain oxygen through diffusion across the skin.

The researchers performed genetic analyses of close to 90 Hyposmocoma species and discovered that the amphibious ones “pop up essentially unrelated to each other,” Dr. Rubinoff said. That suggests the amphibious trait evolved independently several times rather than once.

Dr. Rubinoff has an idea as to how that evolution occurred. Given Hawaii’s high rainfall, the water level in the streams these species inhabit fluctuates greatly. A caterpillar that spends its time on a river rock cannot move fast enough when the water level quickly rises and the rock becomes submerged. “If you’re a little caterpillar you’ve got to hunker down and hold on,” he said.

Henry Fountain, New York Times


Full article: http://www.nytimes.com/2010/04/06/science/06obbugs.html

Evolving Sexual Tensions



The female sage-grouse, left, and her decorative male counterpart.

Males and females are different.

This is so obvious that, at first, it hardly seems worth pointing out. But in fact, it is remarkable. It is also the cause of a profound sexual tension.

The problem is that the pressures on males and females are often not the same. In the fruit fly Drosophila melanogaster, for example, males must perform an elaborate song-and-dance routine to seduce each female; females, in contrast, must give off a certain smell to be attractive to a male. Females need to eat a high-protein diet to be able to produce eggs; males can skimp on the proteins.


A strutting male sage-grouse.

Among greater sage-grouse, Centrocercus urophasianus, females are smaller than males and have straw-colored feathers. Males have flamboyant feathers and strut and cavort and puff themselves up to seduce females. Needless to say, in this species females do all the childcare: they choose a nest site, sit on the eggs, then feed and protect the chicks.

In sum, the traits that make a “good” male are often different from those that make a “good” female. (Note: I’m only talking about “good” in evolutionary terms. That means a trait that improves your chance of having surviving offspring.) Since many of these traits have a genetic underpinning, male and female genes are thus being sculpted by different forces.

Working as a Team, Bacteria Spin Gears

One bacterium, working alone, can’t accomplish much. But put a bunch of them together, and they can move mountains.

Researchers harnessed the collective swimming behavior of bacteria to turn these tiny gears.

Well, maybe not mountains. But how about a tiny gear?

Researchers at Argonne National Laboratory, Northwestern and Princeton have shown that the collective swimming behavior of bacteria can be harnessed for work. While the process is not very efficient, it is a promising step toward the development of hybrid biological and micromechanical machines.

In some respects, bacterial swimming resembles Brownian motion, the random movement of particles or molecules in a medium. But Igor S. Aranson, an Argonne researcher who is the senior author of a paper describing the work in The Proceedings of the National Academy of Sciences, said that in equilibrium conditions, it was impossible to extract useful energy from Brownian motion — the laws of thermodynamics did not allow it.

“But bacteria, they don’t know about this law,” Dr. Aranson said.

The researchers used tiny polymer gears with asymmetric teeth floating in a thin film teeming with Bacillus subtilis, a bacterial species known for its swimming ability. Above a concentration of about 10 billion bacteria per cubic centimeter, the gear would rotate. Dr. Aranson said that unlike molecules in Brownian motion, which reflect off whatever they strike, when the bacteria hit a tooth, “they just keep pushing.” They slide along the edge of the tooth until they reach the “V” junction where the next tooth starts. Since one edge of each tooth is longer than the other, more bacteria slide along the long edges, transferring more momentum to them and rotating the gear in one direction.
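The asymmetry argument above can be checked with a toy Monte Carlo sketch. This is an illustration of the ratchet idea, not the authors’ model: the edge lengths are made up, and each bacterium is assumed to land at a random point on a tooth and slide to the “V” junction, transferring an impulse proportional to the distance slid.

```python
import random

def net_push(n_bacteria, long_edge=3.0, short_edge=1.0, seed=1):
    """Toy ratchet: slides along the long edge turn the gear one way (+),
    slides along the short edge the other way (-)."""
    rng = random.Random(seed)
    perimeter = long_edge + short_edge
    push = 0.0
    for _ in range(n_bacteria):
        s = rng.uniform(0.0, perimeter)   # landing point along the tooth
        if s < long_edge:
            push += long_edge - s         # remaining slide on the long edge
        else:
            push -= perimeter - s         # remaining slide on the short edge
    return push
```

Because the long edge collects more hits and longer slides, the sum comes out strongly positive; with equal edge lengths the two contributions cancel on average, which is why symmetric teeth would not produce net rotation.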

One of the limitations of the process, Dr. Aranson said, is that the bacteria eventually run out of nutrients. But they can stop pushing even before that. “Bacteria behave too much like people,” he said. “They start to do something else.”

Henry Fountain, New York Times


Full article and photo: http://www.nytimes.com/2009/12/22/science/22obgear.html

Cell Death Occurs In Same Way In Plants And Animals

In both plant and animal cells that undergo programmed cell death, the protein TUDOR-SN is broken down. In pollen from the model plant mouse-ear cress, a reduction in TUDOR-SN leads to fragmentation of DNA (red signal) and premature cell death.

Researchers have previously assumed that animals and plants developed different genetic programs for cell death. Now an international collaboration of research teams, including one at the Swedish University of Agricultural Sciences, has shown that parts of the genetic programs governing programmed cell death in plants and animals are in fact evolutionarily related and, moreover, function in a similar way.

The findings were published in Nature Cell Biology on October 11.

For plants and animals, and for humans as well, it is important that cells can both develop and die in a controlled manner. The process by which cells die in this way is called programmed cell death. Disruption of this process can lead to diseases such as cancer, when too few cells die, or to neurological disorders such as Parkinson’s, when too many cells die.

The findings are published jointly by research teams at SLU (the Swedish University of Agricultural Sciences), the Karolinska Institute, and the universities of Durham (UK), Tampere (Finland) and Malaga (Spain), under the direction of Peter Bozhkov, who works at SLU in Uppsala, Sweden. The scientists performed comparative studies of an evolutionarily conserved protein called TUDOR-SN in cell lines from mice and humans and in two plants, Norway spruce and mouse-ear cress. In both plant and animal cells that undergo programmed cell death, TUDOR-SN is degraded by specific proteins, so-called proteases.

The proteases in animal cells belong to a family of enzymes called caspases. Plants do not have caspases; instead, TUDOR-SN is broken down by so-called meta-caspases, which are assumed to be ancestral to the caspases found in animal cells. For the first time, these scientists have been able to demonstrate that a single protein, TUDOR-SN, is degraded by similar proteases in both plant and animal cells, and that the cleavage of TUDOR-SN abrogates its pro-survival function. The scientists have thereby discovered a further connection between the plant and animal kingdoms. The results will therefore play a major role in future studies of this important protein family.

Cells that lack TUDOR-SN often experience premature programmed cell death. Furthermore, functional studies at the organism level in the model plant mouse-ear cress show that TUDOR-SN is necessary for the development of embryos and pollen. The researchers interpret the results to mean that TUDOR-SN is important in preventing programmed cell death from being activated in cells that are to remain alive.

The research teams maintain that the findings indicate that programmed cell death was established early on in evolution, even before the line that led to the earth’s multicellular organisms divided into plants and animals. The work also shows the importance of comparative studies across different species to enhance our understanding of how fundamental mechanisms function at the cellular level in both the plant and animal kingdoms, and by extension in humans.


Full article and photo: http://www.sciencedaily.com/releases/2009/10/091013105335.htm

Seeing Blue: Fish Vision Discovery Makes Waves In Evolutionary Biology


The scabbardfish (Lepidopus fitchi) is now the only fish known to have switched from ultraviolet to violet vision, or the ability to see blue light.

Emory University researchers have identified the first fish known to have switched from ultraviolet vision to violet vision, or the ability to see blue light. The discovery is also the first example of an animal deleting a molecule to change its visual spectrum.

Their findings on scabbardfish, linking molecular evolution to functional changes and the possible environmental factors driving them, were published Oct. 13 in the Proceedings of the National Academy of Sciences.

“This multi-dimensional approach strengthens the case for the importance of adaptive evolution,” says evolutionary geneticist Shozo Yokoyama, who led the study. “Building on this framework will take studies of natural selection to the next level.”

The research team included Takashi Tada, a post-doctoral fellow in biology, and Ahmet Altun, a post-doctoral fellow in biology and computational chemistry.

Vision ‘like a painting’

For two decades, Yokoyama has done groundbreaking work on the adaptive evolution of vision in vertebrates. Vision serves as a good study model, since it is the simplest of the sensory systems. For example, only four genes are involved in human vision.

“It’s amazing, but you can mix together this small number of genes and detect a whole color spectrum,” Yokoyama says. “It’s just like a painting.”

The common vertebrate ancestor possessed UV vision. However, many species, including humans, have switched from UV to violet vision, or the ability to sense the blue color spectrum.

From the ocean depths

Fish provide clues for how environmental factors can lead to such vision changes, since the available light at various ocean depths is well quantified. All fish previously studied have retained UV vision, but the Emory researchers found that the scabbardfish has not. To tease out the molecular basis for this difference, they used genetic engineering, quantum chemistry and theoretical computation to compare vision proteins and pigments from scabbardfish and another species, lampfish. The results indicated that scabbardfish shifted from UV to violet vision by deleting the molecule at site 86 in the chain of amino acids in the opsin protein.

“Normally, amino acid changes cause small structure changes, but in this case, a critical amino acid was deleted,” Yokoyama says.
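The distinction can be made concrete: a substitution leaves the chain the same length, while a deletion shortens it and renumbers every downstream residue. A minimal sketch, using an invented sequence (only the site number 86 comes from the article, and the real opsin is far longer):

```python
def substitute(seq, site, aa):
    """Replace the amino acid at a 1-based site; chain length is unchanged."""
    i = site - 1
    return seq[:i] + aa + seq[i + 1:]

def delete(seq, site):
    """Remove the amino acid at a 1-based site; the chain shortens by one."""
    i = site - 1
    return seq[:i] + seq[i + 1:]

# A made-up 100-residue "opsin" -- purely illustrative.
toy_opsin = "ACDEFGHIKL" * 10

assert len(substitute(toy_opsin, 86, "W")) == 100   # same length
assert len(delete(toy_opsin, 86)) == 99             # one residue shorter
# After the deletion, every residue past site 86 shifts down one position.
assert delete(toy_opsin, 86)[85] == toy_opsin[86]
```

That downstream shift is why deleting a residue tends to perturb a protein’s structure far more than swapping one.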

More examples likely

“The finding implies that we can find more examples of a similar switch to violet vision in different fish lineages,” he adds. “Comparing violet and UV pigments in fish living in different habitats will open an unprecedented opportunity to clarify the molecular basis of phenotypic adaptations, along with the genetics of UV and violet vision.”

Scabbardfish spend much of their life at depths of 25 to 100 meters, where UV light is less intense than violet light, which could explain why they made the vision shift, Yokoyama theorizes. Lampfish also spend much of their time in deep water. But they may have retained UV vision because they feed near the surface at twilight on tiny, translucent crustaceans that are easier to see in UV light.

A framework for evolutionary biology

Last year, Yokoyama and collaborators completed a comprehensive project to track changes in the dim-light vision protein opsin in nine fish species, chameleons, dolphins and elephants, as the animals spread into new environments and diversified over time. The researchers found that adaptive changes occur by a small number of amino acid substitutions, but most substitutions do not lead to functional changes.

Their results provided a reference framework for further research, and helped bring to light the limitations of studies that rely on statistical analysis of gene sequences alone to identify adaptive mutations in proteins.

“Evolutionary biology is filled with arguments that are misleading, at best,” Yokoyama says. “To make a strong case for the mechanisms of natural selection, you have to connect changes in specific molecules with changes in phenotypes, and then you have to connect these changes to the living environment.”


Full article and photo: http://www.sciencedaily.com/releases/2009/10/091016121827.htm

The Wonderful World of the Teeny-Tiny

Microscopic Photography

There are millions of photo competitions. But very few of them deal with objects that are normally invisible to the naked eye. SPIEGEL ONLINE brings you the winners of this year’s microscopic photo competition.

It isn’t uncommon for scientists to spend countless hours staring into a microscope. Only rarely, however, do they take pictures of what they see. And even then the images tend to be gray and amorphous, depicting malignant tissue or the activity of a particular protein inside a cell.


For the uninitiated, such images are impenetrable. Yet the micro-world can also be a beautiful place, full of splendour that normally remains hidden to the naked eye. Capturing that beauty is the aspiration of micro-photographers, those who magnify the miniature and take pictures of the tiny. The images that result are often full of unfamiliar shapes and forms — and surprisingly colorful. Only rarely is it possible to identify the subject being photographed.

Since 1974, though, depictions of the diminutive have been the subject of an annual photo contest, called the Nikon Small World Competition. A jury of photographers, science journalists and researchers chooses the best of the best among microscopic photos.



First place in this year’s Nikon Small World Competition went to Heiti Paves of Estonia. The image shows the anther of a thale cress (Arabidopsis thaliana) magnified 20 times. The plants pollinate themselves and reproduce quickly, making them a favorite for genetics researchers.


Second place, Gerd Günther of Germany. The spiny sow thistle (Sonchus asper) can be found in Austria and Germany. This image is part of the plant’s flower stem magnified 150 times.


Third place, Pedro Barrios-Perez of Canada. The image shows a wrinkled photoresist, a light-sensitive material used in a number of industrial processes, such as micro-electronics. The image was magnified 200 times.


Fourth place, James Hayden of the US. This image is the result of viewing the ovary of an anglerfish through a special fluorescent microscope. Magnified four times.


Fifth place, Bruno Vellutini of Brazil, a researcher at the University of São Paulo. His picture shows a young sea star magnified 40 times.


Eleventh place, Dominik Paquet of Germany. Zebra fish are often used in the study of genetic Alzheimer’s. In this image, magnified 10 times, the nerve cells are stained green while the Alzheimer’s genes are colored blue and red.


One of those honored this year, Dominik Paquet of the Adolf Butenandt Institute in Munich, is a prime example of how many of the images in the contest come into existence. His image, which came in 11th place, sprang from his research into the cellular processes related to Alzheimer’s disease. Zebra fish are often used to make the death of nerve cells visible. Tiny fish larvae are injected with an Alzheimer’s-causing gene, which is then colored using an antibody to make it easily perceptible. His laser microscope does the rest.

A Simple Sow Thistle

Paquet entered one of the resulting images in the photo contest. “Compelling images are important for research,” Paquet, 29, says. “And they help communicate what we are doing to the broader public.”

Some 2,000 photographers sent in their work to the contest, and the subject matter varies widely. Some took pictures of magnified chemical compounds; others showed details from the world of microbiology. And not all those who submitted photographs come from the world of science. Anyone with a microscope can participate in the contest. Although standard instruments are enough, many of the images were taken with highly specialized microscopes that can cost hundreds of thousands of euros.

But even the simplest of microscopes can result in impressive photos. An image submitted by photographer Gerd Günther from Düsseldorf took second place in this year’s contest — and was created using a simple, store-bought device. His subject? A simple sow thistle.


Full article and photos: http://www.spiegel.de/international/zeitgeist/0,1518,654690,00.html

A Neuron’s Obsession Hints at Biology of Thought

Brain Cells Are Discovered That Only Respond to Certain Celebrities; One May Worship Homer Simpson but Ignore Madonna.

Researchers have discovered that in the vast neural network of the brain, some cells are, to use a technical term, celebrity groupies.

Probing deep into human brains, a team of scientists discovered a neuron roused only by Ronald Reagan, another cell smitten by the actress Halle Berry and a third devoted solely to Mother Teresa. Testing other single human neurons, they located a brain cell that would rather watch an episode of “The Simpsons” than Madonna.


In one sense, these findings are merely noise. They arise from rare recordings of electrical activity in brain cells, collected by neuroscientists at the University of California, Los Angeles, during a decade of experiments with patients awaiting brain surgery for severe epilepsy. These tingles of electricity, however, gave the researchers the opportunity to locate neurons that help link our perceptions, memories and self-awareness.

In their most recent work this year, the research team reported that a single human neuron could recognize a personality through pictures, text or the sound of a name — no matter how that person was presented. In tests, one brain cell reacted only to Oprah Winfrey; another just to Luke Skywalker; a third singled out Argentine soccer star Diego Maradona.

Each neuron appeared to join together pieces of sensory information into a single mental impression. The researchers believe these cells are evidence that it only takes a simple circuit of neurons to encode an idea, perception or memory.

“These neurons will fire to the person no matter how you present them,” says bioengineer Rodrigo Quian Quiroga at the U.K.’s University of Leicester who studied the neurons with colleagues at UCLA and the California Institute of Technology. “All that we do, all that we think, all that we see is encoded by neurons. How do the neurons in our brain create all our perceptions of the world, all our emotions, all our thinking?”

At its simplest, a neuron is a nerve cell, one of the myriad cells that make up our central nervous system. Each can send and receive the electro-chemical signals that charge our thoughts and emotions.

On average, there are more neurons in the human brain than there are galaxies in the known universe — about 100 billion in all, arranged on a scaffold of one trillion or so supporting, thread-like glial cells. Our inspirations race through thousands of miles of nerve fibers and axons so compacted that our entire neural network is no larger than a coconut. No two brains are alike, not even those of identical twins.

To these researchers, neurons are the Lego bricks of the brain — a construction kit that can self-assemble into a cathedral of thought. “The idea of justice is probably generated by a small set of neurons firing,” says Caltech biophysicist Christof Koch, who studies the biological basis of consciousness. “It must be true of all the things that we think about … the number pi … God.”

In some ways, each neuron does act as if it has a mind of its own. Some fire only when they perceive a straight line; others just when they detect a right angle. New neurons form every day. No one knows how the cells can encode a complex thought or how so many neurons can make a mind.


Recommended Reading

In the August edition of Current Biology, researchers at UCLA, Caltech and the University of Leicester reported on celebrity watching among human neurons in “Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain.”

Researchers at Weizmann Institute of Science reported that these neurons were crucial to our recollections of people, places and things in “Internally Generated Reactivation of Single Neurons in Human Hippocampus During Free Recall,” published in Science.

In the Proceedings of the National Academy of Sciences, the researchers showed how neurons reflect our conscious recognition of familiar people, places and things in “Human single-neuron responses at the threshold of conscious recognition.”

Reporting in Nature, the researchers identified neurons responding to celebrities and landmarks in “Invariant visual representation by single neurons in the human brain.”

In Psychological Review, researchers recently explored the role of single cells in memory in “On the Biological Plausibility of Grandmother Cells: Implications for Neural Network Theories in Psychology and Neuroscience.”

Caltech neuroscientist Christof Koch explores the research challenge posed by the biology of conscious thought in “The Quest for Consciousness: A Neurobiological Approach.”

Nobel laureate Gerald M. Edelman and neuroscientist Giulio Tononi ponder biology and awareness in “A Universe of Consciousness: How Matter Becomes Imagination.”


Most of what we have learned about their neurobiology comes through imaging studies, post-mortem analysis or animal experiments. Under normal circumstances, researchers can’t directly probe the cells of an awake, living human brain for ethical reasons.

In 1997, though, UCLA neurosurgeon Itzhak Fried and his colleagues started studying epilepsy patients who, as part of normal preparation for surgery, have electrodes implanted deep in their brain tissue. These electrodes are used to record neural activity that could identify the source of the patients’ intractable seizures. They also detect the activity of healthy cells around the electrodes, which gives the scientists an opportunity to study the biology of perception and memory. “This really offers us a glimpse into the human mind,” says Dr. Fried.

In five provocative experiments since 2005, the researchers used pictures of famous faces and places to screen neurons in brain areas that gather information from all our senses about a person or place we know and blend them into a long-term memory.

To start, Dr. Fried and his colleagues showed eight epilepsy patients 80 images of celebrities, animals, common objects and landmarks while recording the electrical activity of neurons wired to electrodes. They flashed each image for a second, shuffled the sequence into random order and then repeated it. They did that six times.

“You would present hundreds of stimuli — faces or celebrities or famous landmarks — and the neuron would respond to only one or two,” Dr. Fried says. “The incredible specificity was striking.”

In the magazine rack of the mind, some cover girls have a neuron all their own. Testing one patient, the researchers found a neuron that reacted instantly when shown almost any picture of Jennifer Aniston. This cell ignored other celebrities. It gave the cold shoulder to pictures of the actress with her former husband Brad Pitt. “The cells seemed to respond to the idea of Jennifer Aniston,” says Dr. Koch.

Testing a second patient, the researchers found a neuron that responded only to Halle Berry. The cell’s electrical activity jumped no matter how the actress was posed or how she was dressed. Again, this neuron showed no interest in other celebrities or in any other images of common objects or places.

Subsequent tests turned up single neurons in patients that fired selectively to pictures of former President Bill Clinton, The Beatles, or basketball player Michael Jordan. Each of these individual neurons behaved in a way that made the researchers believe that the cell was responding to a distillation of experience. “The neuron is responding to a concept, not a picture,” says Dr. Quian Quiroga. Moreover, each neuron acted as a trigger for recalling the concept they helped encode.

During a follow-up study at UCLA last year, the researchers showed 13 new volunteers wired to neural electrodes a set of 48 short video segments. In part, they wanted to see whether neurons responded differently to moving pictures and changing scenery.

In fact, some cells did respond strongly to one video clip but not to others. In one patient, the researchers found a neuron that acted up only to The Simpsons cartoon series. “The neuron would spring to life when you showed the video of The Simpsons,” says Dr. Fried.

To be sure, few of us likely have a special brain cell devoted to Jennifer Aniston or Homer Simpson. Our cells are sensitive to more than brand names. They can attune themselves quickly to new people or places, often within a day. While monitoring one new patient’s brain, Dr. Quian Quiroga was surprised to encounter a neuron that already had him in mind.

“Suddenly,” he says, “I find a neuron firing in response to me.”

Robert Lee Hotz, Wall Street Journal

Full article and photo: http://online.wsj.com/article/SB125503611739074321.html

Canadian researchers make breast cancer breakthrough


Researchers and co-authors Sam Aparicio, right, head of the B.C. Cancer Agency Dept. of Breast and Molecular Oncology, and Marco Marra, director of the Agency’s Genome Sciences Centre, share a laugh following a presentation in Vancouver on Oct. 7, 2009.

The possibility of using a patient’s genetic information to create personalized therapies to battle cancer is one step closer to reality after Canadian scientists decoded, for the first time, the entire genome of a patient’s metastatic breast cancer.

It’s a landmark achievement that is helping to rewrite old notions about the way cancer develops and provides new insights into which drugs could benefit patients the most.

“I’m excited by the possibilities,” said Samuel Aparicio, the head of the department of breast and molecular oncology at the B.C. Cancer Agency and one of the lead scientists involved with the discovery. “In fact, I never thought I would see in my professional lifetime that it would become possible to routinely sequence genomes in the way that we’re now doing.”

Genomes contain all of the biological information of a living organism, and that information is housed in DNA. There are about three billion “letters” or building blocks in the human genome. When cells divide, all three billion building blocks must be copied. But mistakes in the copying process can sometimes occur, and those mutations can, in some cases, cause cells to grow in an uncontrolled way – which is how cancer develops.

In decoding the metastatic breast-cancer genome, which contains all of the genetic information of a patient’s cancer, scientists were able to identify all of the mutations in the tumour, a feat that has never before been accomplished.

But the breakthrough didn’t stop there. Once all of the tumour mutations of the developed cancer were identified – a total of 32 were found – scientists had the information to look back and see which of those mutations were present in the patient’s original, primary tumour.

They discovered that only 11 of the 32 mutations were present in the original tumour, and only five of those were present in all of the original cancer cells, meaning that even in the early stages, cancer cells aren’t uniform. That’s significant because it proves that even from the outset, cancer cells contain different mutations, which change over time.
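The overlap can be pictured as nested sets. The mutation labels below are invented placeholders; only the counts (32, 11 and 5) come from the study:

```python
# Hypothetical labels standing in for the real, unnamed-here mutations.
metastasis = {f"m{i:02d}" for i in range(1, 33)}  # 32 mutations in the metastatic tumour
primary    = {f"m{i:02d}" for i in range(1, 12)}  # 11 already present in the primary tumour
clonal     = {f"m{i:02d}" for i in range(1, 6)}   # 5 found in every cell of the primary

assert clonal <= primary <= metastasis            # nested: clonal within primary within metastasis
later = metastasis - primary                      # 21 mutations arose or expanded after the primary
assert len(later) == 21
```

The 21 mutations in the difference set are the ones the cancer picked up (or that rose to detectability) on its way to metastasis, which is what makes the "cancer evolves over time" conclusion concrete.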

While scientists have theorized that cancer cells can differ, even in a single individual, until now it has never been possible to sequence the cancer genome and determine what mutations are present and how they evolve.

“I think we’re getting used to the idea that an individual patient’s cancer is itself multiple individual cancers that may behave differently,” said Dr. Aparicio, who holds the Canada Research Chair in molecular oncology and is the Nan and Lorraine Robertson chair of breast-cancer research at the University of British Columbia.

The findings, discovered by a research team led by Dr. Aparicio and Marco Marra, director of the Michael Smith Genome Sciences Centre at the B.C. Cancer Agency, are published Thursday in the journal Nature.

A major portion of the money used to fund the research came from the B.C. Cancer Foundation’s Weekend to End Breast Cancer walk, as well as donations raised across the province during the annual breast-cancer walk over the past six years. Funding also came from other groups, including the Canadian Breast Cancer Foundation.

The next major challenge will be interpreting the mutations to understand their significance and determine which mutations are vulnerable to which treatments.

Eventually, scientists hope to decode cancer genomes from a large number of patients to determine if there are any patterns in the genetic mutations or the overall significance of various mutations. Dr. Aparicio said their work could help usher in a new era in which scientists will be able to decode cancer genomes in all patients to help create therapies targeted to the mutations present in their tumours.

This also means patients may have to undergo numerous tests as their disease progresses to account for the fact that an individual’s cancer goes through multiple changes over time.

A growing number of researchers have, in recent years, put stock in the idea that genome sequencing holds the key to understanding cancer development and creating targeted therapies. Yet the exorbitant cost and complex process of genome sequencing have always put those lofty goals out of reach.

But now, a series of technological advancements has brought down the cost dramatically, which means the “possibility of obtaining genome sequence from every patient’s tumour is coming closer,” Dr. Aparicio said.

The scientists involved in this advance used next-generation DNA sequencing to decode the cancer genome, a technology that is lowering costs and helping to fuel genome-related research and discoveries around the world.

“We’ve been dreaming about the possibility to capture complete genome information from cancers in a routine way for decades,” Dr. Aparicio said. “The moment has arrived.”

Carly Weeks, Globe and Mail


Full article and photo: http://www.theglobeandmail.com/news/national/canadian-researchers-make-breast-cancer-breakthrough/article1315753/

Gene That Regulates Breast Cancer Metastasis Identified

Researchers at The Wistar Institute have identified a key gene (KLF17) involved in the spread of breast cancer throughout the body. They also demonstrated that expression of KLF17 together with another gene (Id1) known to regulate breast cancer metastasis accurately predicts whether the disease will spread to the lymph nodes. Previously, the function of KLF17 had been unknown.

Most breast-cancer deaths are the result of metastasis, a complex, multi-step, and poorly understood process. “Identifying the gene that suppresses the spread of tumor cells and the mechanisms by which this suppression occurs can lead to the discovery of new markers of metastasis and potential targets for cancer prevention and treatment,” says Qihong Huang, M.D., Ph.D., assistant professor at The Wistar Institute and senior author of the study.

In this study, which appears in the October online issue of Nature Cell Biology, Huang and colleagues introduced a genetic screen targeting 40,000 mouse genes into mammary tumor cells that do not usually spread, and then transplanted those cells to the mammary fat pads in mice, where they would be expected to remain. Through RNA interference (RNAi) technology, they then reduced the expression of a metastasis-suppressor gene in five mice, one of which developed lung metastases in seven weeks. RNA retrieved from the metastasized cells corresponded to KLF17.

To determine whether KLF17 played a similar role in human breast-cancer metastasis, the researchers knocked down KLF17 expression in a tagged human-breast-cancer cell line and then transplanted these cells—along with a control group still expressing KLF17—into mammary fat pads of mice. Within eight to 10 weeks, lung metastases developed in the KLF17-deficient cells, whereas the control cell set did not metastasize, demonstrating that knockdown of KLF17 expression also promotes the spread of human breast-cancer cells.

The researchers were also specifically interested in genes whose expression was increased in KLF17-knockdown cells but decreased in KLF17-overexpressing cells, or vice versa. In collaboration with Professor Louise C. Showe, Ph.D., at The Wistar Institute, they identified the genes that met these criteria. Among them, Id1 was found to be up-regulated in KLF17-knockdown cells and down-regulated in KLF17-overexpressing cells. Recent findings suggest that Id1 is deregulated in various types of cancers and is important in the development of embryonic stem cell–like phenotypes in cancer cells.

To further investigate the interactions of KLF17 and Id1, the Huang lab scanned a DNA segment of mouse Id1 and found two potential KLF17 binding sites. To examine the effect of Id1 upregulation in tumor metastasis in vivo, the team generated tagged mouse and human cell lines expressing mouse or human Id1, respectively. Following transplantation back into the mice, lung metastasis developed from Id1-upregulated cells but not in controls, demonstrating that Id1 expression promotes tumor metastasis in vivo.

Further characterization of KLF17 is an ongoing subject of study for Wistar researchers. “We are continuing to examine ways to activate KLF17 and the methods by which that process slows or prevents cancer metastasis,” Huang says.

Full article: http://www.sciencedaily.com/releases/2009/10/091005161322.htm

‘Closed Heart Surgery’: Scientists Jump-start The Heart By Gene Transfer


Human heart.

Scientists from the Universities of Michigan and Minnesota show, in a research report published online in the FASEB Journal, that gene therapy may be used to improve an ailing heart’s ability to contract properly. Beyond demonstrating gene therapy’s potential for reversing the course of heart failure, the study offers a tantalizing glimpse of a day when “closed heart surgery” via gene therapy is as commonly prescribed as today’s cocktail of drugs.

“We hope that our study will lead some day to the development of new genetic-based therapies for heart failure patients,” said Todd J. Herron, Ph.D., one of the researchers involved in the study and research assistant professor of molecular and integrative physiology at the University of Michigan. “The advent of molecular motor-based gene transfer for the failing heart will hopefully improve cardiac function and quality of life for heart failure patients.”

To make this advance, Herron and colleagues treated heart muscle cells from the failing hearts of rabbits and humans with a virus (an adenovirus) modified to carry either a gene that produces a protein enabling heart cells to contract normally (a fast molecular motor) or a gene that becomes active in failing hearts and is believed to be part of the body’s way of coping with its perilous situation (a slow molecular motor). Heart cells treated with the fast-molecular-motor gene contracted better, while those treated with the slow-molecular-motor gene were unaffected.

“Helping hearts heal themselves, rather than prescribing yet another drug to sustain a failing organ, would be a major advance for doctors and patients alike,” said Gerald Weissmann, M.D., Editor-in-Chief of the FASEB Journal. “Equally important, it shows that gene therapy remains one of the most promising approaches to treating the world’s most common and deadliest diseases.”

According to the U.S. Centers for Disease Control and Prevention, heart failure is a condition where the heart cannot pump enough blood and oxygen to meet the needs of other body organs. Approximately 5 million people in the United States have heart failure, about 550,000 new cases are diagnosed each year, and more than 287,000 people in the United States die each year of heart failure. The most common causes of heart failure are coronary artery disease, hypertension or high blood pressure, and diabetes. Current treatments usually involve three to four medicines: ACE inhibitors, diuretics, digoxin, and beta blockers.

Current clinical agents and treatments focus on the amount of calcium available for contraction; they can provide short-term cardiac benefits but are associated with increased mortality in the long term. Because fast-molecular-motor gene transfer improved the contractions of human heart muscle cells, the results suggest that calcium-independent treatments could have implications for heart diseases associated with depressed heart function.


Full article and photo: http://www.sciencedaily.com/releases/2009/10/091005102649.htm

Three Win Nobel for Ribosome Research

From left, Venkatraman Ramakrishnan of the MRC Laboratory of Molecular Biology in Cambridge, England; Thomas A. Steitz of Yale University; and Ada E. Yonath of the Weizmann Institute of Science in Rehovot, Israel, will share the 2009 Nobel Prize in Chemistry.

Three researchers whose work delves into how information encoded on strands of DNA is translated by the chemical complexes known as ribosomes into the thousands of proteins that make up living matter will share the 2009 Nobel Prize in Chemistry, the Swedish Academy of Sciences said Wednesday.

The trio are Venkatraman Ramakrishnan of the MRC Laboratory of Molecular Biology in Cambridge, England; Thomas A. Steitz of Yale University; and Ada E. Yonath of the Weizmann Institute of Science in Rehovot, Israel.

Each scientist will get a third of the prize, worth 10 million Swedish kronor in total, or $1.4 million, at a ceremony in Stockholm on Dec. 10.

If the sequence of letters in DNA forms the blueprint for life, ribosomes are the factory floor. In a news release, the Swedish academy said the three, who worked independently, were being honored “for having showed what the ribosome looks like and how it functions at the atomic level.”
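The ribosome’s job can be caricatured in a few lines of code. The snippet below is purely illustrative: it uses a four-entry excerpt of the real 64-codon genetic code table, and the function and table names are invented for the example.

```python
# Toy model of translation: read mRNA three letters (one codon) at a time
# and emit the corresponding amino acid. Tiny excerpt of the genetic code.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": None,   # stop codon
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino is None:          # stop codon: release the finished chain
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("AUGUUUGGCUAA"))   # → Met-Phe-Gly
```

The real machinery handles 64 codons, error correction and chemistry, but the mapping from nucleotide triplets to amino acids is exactly this kind of lookup.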

The ribosome research, the academy said, is being used to develop new antibiotics.

Dr. Ramakrishnan was born in Chidambaram, Tamil Nadu, India, in 1952, obtained his Ph.D. at Ohio University and holds American citizenship. Dr. Steitz was born in Milwaukee in 1940 and received his Ph.D. from Harvard in 1966. Dr. Yonath was born in Jerusalem in 1939 and received her Ph.D. at the Weizmann Institute in 1968.

Full article and photos: http://www.nytimes.com/2009/10/08/science/08nobel.html

New Mathematical Model Suggests How The Brain Might Stay In Balance

The human brain is made up of 100 billion neurons — live wires that must be kept in delicate balance to stabilize the world’s most magnificent computing organ. Too much excitement and the network will slip into an apoplectic, uncomprehending chaos. Too much inhibition and it will flatline. A new mathematical model describes how the trillions of interconnections among neurons could maintain a stable but dynamic relationship that leaves the brain sensitive enough to respond to stimulation without veering into a blind seizure.

Marcelo O. Magnasco, head of the Laboratory of Mathematical Physics at The Rockefeller University, and his colleagues developed the model to address how such a massively complex and responsive network such as the brain can balance the opposing forces of excitation and inhibition. His model’s key assumption: Neurons function together in localized groups to preserve stability. “The defining characteristic of our system is that the unit of behavior is not the individual neuron or a local neural circuit but rather groups of neurons that can oscillate in synchrony,” Magnasco says. “The result is that the system is much more tolerant to faults: Individual neurons may or may not fire, individual connections may or may not transmit information to the next neuron, but the system keeps going.”

Magnasco’s model differs from traditional models of neural networks, which assume that each time a neuron fires and stimulates an adjoining neuron, the strength of the connection between the two increases. This is called the Hebbian theory of synaptic plasticity and is the classical model for learning. “But our system is anti-Hebbian,” Magnasco says. “If the connections among any groups of neurons are strongly oscillating together, they are weakened because they threaten homeostasis. Instead of trying to learn, our neurons are trying to forget.” One advantage of this anti-Hebbian model is that it balances a network with a much larger number of degrees of freedom than classical models can accommodate, a flexibility that is likely required by a computer as complex as the brain.
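As an illustration only, and not the actual model from the paper, the anti-Hebbian idea can be sketched by flipping the sign of the classic Hebbian weight change, so that correlated activity weakens a connection instead of strengthening it:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                        # toy network of 4 units
W = rng.normal(0, 0.5, (n, n))
np.fill_diagonal(W, 0)       # no self-connections
eta = 0.1                    # learning rate

def anti_hebbian_step(W, x, eta):
    """Weaken connections in proportion to correlated activity.

    A Hebbian rule would use W + eta * outer(x, x); the anti-Hebbian
    rule flips the sign, damping any group of units that fires together.
    """
    W = W - eta * np.outer(x, x)
    np.fill_diagonal(W, 0)
    return W

# Units 0 and 1 repeatedly co-activate; their mutual weight steadily decays.
x = np.array([1.0, 1.0, 0.0, 0.0])
before = W[0, 1]
for _ in range(10):
    W = anti_hebbian_step(W, x, eta)
print(before, W[0, 1])       # the co-active pair's weight has dropped by 10 * eta
```

In the Hebbian version the same loop would drive the weight up without bound; the sign flip is what pushes the network back toward balance.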

In work published this summer in Physical Review Letters, Magnasco theorizes that the connections that balance excitation and inhibition are continually flirting with instability. He likens the behavior to an indefinitely large number of public address systems tweaked to that critical point at which a flick of the microphone brings on a screech of feedback that then fades to quiet with time.

This model of a balanced neural network is abstract — it does not try to recreate any specific neural function such as learning. But it requires only half of the network connections to establish the homeostatic balance of excitation and inhibition crucial to all other brain activity. The other half of the network could be used for other functions that may be compatible with more traditional models of neural networks, including Hebbian learning, Magnasco says.

Developing a systematic theory of how neurons communicate could provide a key to some of the basic questions that researchers are exploring through experiments, Magnasco hopes. “We’re trying to reverse-engineer the brain and clearly there are some concepts we’re missing,” he says. “This model could be one part of a better understanding. It has a large number of interesting properties that make it a suitable substrate for a large-scale computing device.”


Full article: http://www.sciencedaily.com/releases/2009/09/090927152049.htm

Self-Destructive Behavior in Cells May Hold Key to a Longer Life

Deep down, we are all cannibals. Our cells are perpetually devouring themselves, shredding their own complex molecules to pieces and recycling them for new parts. Many of the details of our endless self-destruction have come to light only in the past few years. And to the surprise of many scientists, links are now emerging between this inner cannibalism and diseases like Alzheimer’s disease and cancer.

“There’s been an explosion,” said Daniel Klionsky of the University of Michigan. “All of a sudden, researchers in different fields are seeing a connection.”

In fact, as Dr. Klionsky wrote in a paper published online in Trends in Cell Biology, this cannibalism may extend our lifespan. Increasing our body’s ability to self-destruct may, paradoxically, let us live longer.

Our cells build two kinds of recycling factories. One kind, known as the proteasome, is a tiny cluster of proteins. It slurps up individual proteins like a child sucking a piece of spaghetti. Once inside the proteasome, the protein is chopped up into its building blocks.

For bigger demolition jobs, our cells rely on a bigger factory: a giant bubble packed with toxic enzymes, known as a lysosome. Lysosomes can destroy big structures, like mitochondria, the sausage-shaped sacs in cells that generate fuel. To devour a mitochondrion, a cell first swaddles it in a shroudlike membrane, which is then transported to a lysosome. The shroud merges seamlessly into the lysosome, which then rips the mitochondrion apart. Its remains are spit back out through channels on the lysosome’s surface.

Lysosomes are versatile garbage disposals. In addition to taking in shrouded material, they can also pull in individual proteins through special portals on their surface. Lysosomes can even extend a mouthlike projection from their membrane and chew off pieces of a cell.

The shredded debris that streams out of the lysosomes is not useless waste. A cell uses the material to build new molecules, gradually recreating itself from old parts. “Every three days, you basically have a new heart,” said Dr. Ana Maria Cuervo, a molecular biologist at Albert Einstein College of Medicine.

This self-destruction may seem like a reckless waste of time and energy. Yet it is essential for our survival, in many different ways. Proteasomes destroy certain proteins quickly, allowing them to survive for only about half an hour. That speed allows cells to keep tight control over the concentrations of those proteins. By tweaking the rate of destruction, a cell can swiftly raise or lower the number of any kind of protein.
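The arithmetic behind that control is simple. In a standard first-order turnover model (an illustration, with a made-up synthesis rate, not a figure from the article), a protein’s steady-state level is the ratio of its synthesis rate to its degradation rate, so doubling the rate of destruction halves the amount of protein:

```python
import math

half_life_min = 30.0                   # the roughly half-hour lifetime cited above
k_deg = math.log(2) / half_life_min    # first-order degradation rate, per minute
k_syn = 100.0                          # hypothetical synthesis rate, molecules/min

# dP/dt = k_syn - k_deg * P  =>  steady state  P_ss = k_syn / k_deg
p_ss = k_syn / k_deg
p_ss_fast = k_syn / (2 * k_deg)        # after the cell doubles the destruction rate

print(round(p_ss), round(p_ss_fast))   # copy numbers before and after the tweak
```

Because the half-life is short, the protein level settles to the new steady state within a couple of hours, which is what makes degradation such a fast control knob.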

Lysosomes, which eat more slowly than proteasomes, serve different roles that are no less essential. They allow cells to continue to build new molecules even when they are not getting a steady supply of raw ingredients from the food we eat. Lysosomes also devour oily droplets and stores of starch, releasing energy that cells can use to power the construction of new molecules.

“If you don’t have a snack between lunch and dinner,” Dr. Cuervo said, “you’re going to have to activate your lysosomes to get nutrients.”

Lysosomes become even more active if dinner never comes, and a short-term hunger turns to long-term starvation. Cells respond to famine by making only a small number of crucial molecules and using lysosomes to destroy the rest. “When times are good, make everything,” Dr. Klionsky said. “When times are lean, focus on what you need. You can get rid of everything else.”

This strategy for survival, known as autophagy (“eating oneself”), evolved in our ancestors over two billion years ago. Today, all animals rely on it to endure famines, as do plants, fungi and single-cell protozoa.

Autophagy’s great antiquity has helped scientists discover the genes that make it possible in humans. Rather than study starving people, they introduced mutations into yeast and then observed which strains could no longer survive without food. In many cases, the scientists discovered, the mutations that made yeast vulnerable struck genes that are involved in autophagy. They were then able to find nearly identical versions of those genes in the human genome.

The protection humans get from lysosomes is essential not just during famines. It is also vital just after birth. When babies emerge from their mothers, they need huge amounts of energy so that they can start to run their bodies on their own. But this demand comes at precisely the moment that babies stop getting food through their umbilical cord. Japanese scientists have found that lysosomes in mice kick into high gear as soon as the pups are born. After a day or two, as the newborns start to nurse, the rate of autophagy drops back to normal.

When the scientists engineered mice so they could not use their lysosomes at birth, the newborn mice almost immediately died of starvation.

Even if you enjoy a steady supply of food your entire life, you still rely on autophagy for another reason: to keep the molecules in your cells in good working order. Cells make a lot of defective molecules. They misread genes, for example, and misfold proteins. Even a perfectly crafted molecule does not stay perfect for long. “Proteins go bad with time,” Dr. Klionsky said. “They age, and they wear out.”

When proteins and other molecules go bad, they can start to gum up the intricate chemical reactions on which a cell’s survival depends. The cell recognizes defective parts and tags them for destruction. Experiments on flies show the harm that can occur when cells cannot clear away the old and bring in the new. Flies that are genetically engineered with defective lysosomes start to accumulate abnormal clumps of proteins in their cells. The clumps build up especially in their neurons, which start to die as a result.

The Belgian biochemist Christian de Duve discovered lysosomes in 1955, for which he later won the Nobel Prize. In 1963, scientists discovered that a genetic defect in lysosomes was responsible for a disorder known as Pompe disease, which weakens the heart and muscles. Those who have the disease are missing a protein that lysosomes need to break down stores of energy. Today over 50 disorders are recognized as the result of one defect or another in lysosomes. Doctors can now treat some of these diseases by supplying people with the proteins they lack.

In recent years, scientists have also found evidence of autophagy in preventing a much wider range of diseases. Many disorders, like Alzheimer’s disease, are the result of certain kinds of proteins forming clumps. Lysosomes can devour these clumps before they cause damage, slowing the onset of diseases.

Lysosomes may also protect against cancer. As mitochondria get old, they cast off charged molecules that can wreak havoc in a cell and lead to potentially cancerous mutations. By gobbling up defective mitochondria, lysosomes may make cells less likely to damage their DNA. Many scientists suspect it is no coincidence that breast cancer cells are often missing autophagy-related genes. The genes may have been deleted by mistake as a breast cell divided. Unable to clear away defective mitochondria, the cell’s descendants become more vulnerable to mutations.

Unfortunately, as we get older, our cells lose their cannibalistic prowess. The decline of autophagy may be an important factor in the rise of cancer, Alzheimer’s disease and other disorders that become common in old age. Unable to clear away the cellular garbage, our bodies start to fail.

If this hypothesis turns out to be right, then it may be possible to slow the aging process by raising autophagy. It has long been known, for example, that animals that are put on a strict low-calorie diet can live much longer than animals that eat all they can. Recent research has shown that caloric restriction raises autophagy in animals and keeps it high. The animals seem to be responding to their low-calorie diet by feeding on their own cells, as they do during famines. In the process, their cells may also be clearing away more defective molecules, so that the animals age more slowly.

Some scientists are investigating how to manipulate autophagy directly. Dr. Cuervo and her colleagues, for example, have observed that in the livers of old mice, lysosomes produce fewer portals on their surface for taking in defective proteins. So they engineered mice to produce lysosomes with more portals. They found that the altered lysosomes of the old experimental mice could clear away more defective proteins. This change allowed the livers to work better.

“These mice were like 80-year-old people, but their livers were functioning as if they were 20,” Dr. Cuervo said. “We were very happy about that.”

Andrea Ballabio, the scientific director of Telethon Institute of Genetics and Medicine in Naples, Italy, and his colleagues have found another way to raise autophagy. By studying the activity of genes that build lysosomes, they discovered that at least 68 of the genes are switched on by a single master protein, known as TFEB.

When Dr. Ballabio and his colleagues engineered cells to make extra TFEB, the cells made more lysosomes. And each of those lysosomes became more efficient. The scientists injected the cells with huntingtin, a protein that clumps to cause the fatal brain disorder Huntington’s disease. The cells did a much better job of destroying the huntingtin than normal cells.

“This is a very good sign,” Dr. Ballabio said. “We’re very excited because this network of genes may apply to a number of diseases.”

Dr. Ballabio and other researchers are now investigating ways in which they can increase autophagy with drugs or diets — raising the number of portals on lysosomes, for example, or causing cells to make extra TFEB. But developing such treatments will require a sophisticated understanding of autophagy. After all, autophagy is a potent force for destruction, and if lysosomes are accidentally ripped open, their toxic enzymes can kill a cell.

As Dr. Klionsky, of the University of Michigan, said, “You can’t just turn this on and let it go.”

Carl Zimmer, New York Times

Full article and photo: http://www.nytimes.com/2009/10/06/science/06cell.html

DNA sequencing in a holey new way

Depiction of DNA in nanohole (IBM)

DNA molecules will be held in place by tiny voltages within the nanohole

IBM will announce on Tuesday how it intends to hold DNA molecules in tiny holes in silicon in an effort to decode their genetic secrets letter by letter.

Their microelectronic approach solves one of two long-standing problems in “nanopore” DNA sequencing: how to stop the DNA from flying through too quickly.

The aim is to speed up DNA sequencing in a push toward personalised medicine.

IBM’s chief executive, Sam Palmisano, will announce the plans at the Medical Innovation Summit in the US on Tuesday.

While sequencing the genomes of humans and animals has become relatively routine in a laboratory setting, the ability to quickly and cheaply sequence genomes of individuals remains out of reach.

That widely available genetic information will help bring about the era of “personalised medicine” – in which preventative or therapeutic approaches can be tailored to individuals based on their specific genetic makeup.


“There have been a number of attempts to sequence DNA much faster than it was sequenced when the first human genome was announced,” said Gustavo Stolovitzky, a computational biologist from IBM.

Chromosome depiction (SPL)

Individual genetic information will lead to more directed therapies

“All of them use some complicated sample preparation – chopping the DNA, amplifying, reverse transcribing – and some sophisticated and labour-intensive optics,” Dr Stolovitzky told BBC News.

“All this makes sequencing faster, but still slower and more expensive than it needs to be before it could be used for personalised medicine.”

Instead, Dr Stolovitzky and colleagues are pursuing a method involving silicon peppered with holes just three billionths of a metre across – 20,000 times thinner than a human hair and just wide enough for one strand of DNA to pass through.

Researchers have been looking into using such nanopores for a number of years – mimicking the proteins in cell membranes that perform the same trick – because using a semiconductor offers significant advantages over biochemical and optical techniques.

“DNA nanopore sequencing continues to be one of the great candidates to do fast and cheap DNA sequencing without sample preparation or sophisticated optics, using only electronics to fetch the signal out,” Dr Stolovitzky said.

Moreover, the approach could be done in a “massively parallel” way – that is, with hundreds or thousands of DNA strands passing through an array of holes on a single chip.

Trap stack

The idea is conceptually simple but devilishly difficult to carry out. Because DNA naturally carries a net electric charge, simply applying a voltage across the two sides of the chip drives the DNA strands through the holes.

However, the DNA tends to pass through too quickly to decode the identities of the individual nucleotides – letters of the genetic code – as they pass.

More than that, until they can study DNA strands moving at a more carefully controlled pace, researchers cannot develop the techniques to query the precise nucleotide they have trapped in place.

Blue Gene supercomputer (IBM)

The Blue Gene supercomputer simulated the nanopores’ every atom

The IBM team have now hit on the idea of a chip composed of a stack of layers, each of which can hold a precisely-controlled voltage in a thin layer inside the nanopore.

These smaller voltages trap the negatively charged chemical groups called phosphates that separate individual nucleotides.

By cycling this internal voltage, the DNA strand can be made to advance one nucleotide at a time.
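In caricature (a toy model, not IBM’s actual design), the ratchet works like this: each cycle of the internal voltage releases the trapped phosphate and captures the next one, so the strand advances exactly one base per cycle and whichever base sits in the pore can be queried at leisure:

```python
strand = "GATTACA"   # a hypothetical DNA fragment threading the pore

def cycle_voltage(position):
    """One release-and-recapture cycle of the trapping voltage:
    the strand ratchets forward by exactly one nucleotide."""
    return position + 1

position = 0
read = []
while position < len(strand):
    read.append(strand[position])   # query the base currently held in the pore
    position = cycle_voltage(position)

print("".join(read))   # the sequence, recovered one base per voltage cycle
```

The hard physics is hidden inside `cycle_voltage` and the per-base readout; the point of the layered electrode stack is precisely to make the advance this deterministic.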

The team has used IBM’s Blue Gene supercomputer to simulate the process and ensure it would work, and has built prototypes of the trapping nanopore. Tuesday’s announcement marks the beginning of the testing and refinement stages of the process.

What remains is to investigate the means to identify the individual nucleotides trapped inside the nanopores, which is likely to rest on measuring some electrical or electronic property of each as it passes.

Stas Polonsky, another IBM researcher working on the project, remains convinced that with the benefit of a trapping mechanism, this last problem is tractable.

“As a company we have a lot of expertise with electrical measurements,” he said.

“We have nanopores plus the whole arsenal of microelectronics – we can integrate all these ultrasensitive circuits right on a chip, which will boost the sensitivity for measurements tremendously.”


Full article and photos: http://news.bbc.co.uk/2/hi/science/nature/8291185.stm

The Wall Street Journal 2009 Technology Innovation Awards

This year’s winners include a tool to identify new disease strains, an artificial hand, a solar-powered base station for mobile phones and paper-thin flexible speakers.

Medical detective work may have just gotten a lot easier.

Just how difficult it is gets highlighted every time an infectious disease sweeps the globe, as the new strain of swine flu did earlier this year. Current methods of testing for disease-causing microbes are pretty effective at discovering whether an infected fluid or tissue sample contains a known virus or bacteria. But trying to detect previously unknown organisms is a whole different story.

To address this problem, David Ecker, co-founder of Ibis Biosciences Inc., and a team of researchers developed a sensor able to quickly detect and identify all the pathogens in a given sample.

The equipment promises not only to alert health officials to new disease strains, but also to guard against bioterrorism and enable hospitals to identify antibiotic-resistant bacteria.

Abbott Laboratories and its Ibis Biosciences unit, which developed the Ibis T5000 sensor, took the Gold in this year’s Wall Street Journal Technology Innovation Awards.

The Silver award went to Touch Bionics Inc. for its i-Limb artificial hand, which features bendable fingers and a rotating thumb. The hand uses sophisticated motors and computer controls to grip objects and move in ways that traditional prosthetic hands can’t.

Vihaan Networks Ltd., an Indian telecommunications company known as VNL, won the Bronze award for a solar-powered base station to bring cellphone access to remote rural villages. The inexpensive base station can be quickly assembled and set up by unskilled villagers, and can run entirely on the built-in solar panels and batteries.

For the ninth annual Innovation Awards, a Journal editor reviewed nearly 500 entries, sending more than 180 to a team of judges from research institutions, venture-capital firms and other companies. Judges considered whether innovations were truly groundbreaking and—new this year—looked at whether their application would be particularly useful in a time of economic hardship.


And the winners in each category are…

Computing Systems

Capturing real-life motion to use in computer animation can be complicated. Typically, actors are filmed wearing bodysuits covered with glowing dots or embedded with sensors that trace their movements, then high-powered computers use that data to help create characters that move realistically.

New York-based Organic Motion Inc. won in the computing-systems category for developing a motion-capture system that doesn’t require bodysuits or markers.

The core of the system is technology that uses sophisticated software to produce a digital clone of a person being filmed. Fourteen video cameras capture images simultaneously and send them to a standard computer with a high-end programmable graphics card, making the system far cheaper than the specialized equipment used in movie special-effects shops.

Organic Motion systems are being used in the creation of virtual environments for training coal-mine rescue personnel and for helping returning military veterans readjust to civilian life. Andrew Tschesnok, the company’s chief executive and founder, says future versions will work with next-generation game consoles for more-lifelike game experiences.

Consumer Electronics

Taiwan’s Industrial Technology Research Institute, or ITRI, won in the consumer-electronics category for its work developing a paper-thin, flexible speaker.

Researchers at ITRI, a nonprofit organization, devised a way to create arrays of tiny speakers that can be combined to produce high-fidelity speaker systems of almost any size.

Because the speaker, dubbed the fleXpeaker, is lightweight and consumes little power, it could be attractive for use in cellphones or in car sound systems. Other possible applications include giant banners that could deliver public-service announcements in train stations or advertising messages in shopping malls.

ITRI is seeking to license the technology or create a spinoff company to commercialize the product.


SFC Smart Fuel Cell AG, based just outside Munich, won in this category for developing small, lightweight fuel cells that can be used by soldiers instead of much bulkier, heavier batteries to power communications and navigation devices and other battlefield equipment.

One advantage of the SFC fuel cells is that they produce power from methanol. Many fuel cells produce electricity from hydrogen. But hydrogen is highly explosive, so it needs to be stored in special heavy-metal cartridges. Cartridges for the SFC fuel cells are less expensive, lighter and less bulky.


Serious Materials Inc. of Sunnyvale, Calif., was recognized in this category for its EcoRock drywall substitute, which is made with recycled material and, the company says, requires 80% less energy to make and produces 80% less carbon dioxide than standard gypsum-based drywall. EcoRock, which is also termite- and mold-resistant, will be priced to compete with premium drywall products. Serious Materials has been selling limited test quantities of the product to a few contractors since early this year and plans to expand production and distribution over the next two years.

Though some judges wondered if a relatively high price would limit how widely the product is used, it is a “novel solution to a basic problem that has enormous impact,” says Darlene Solomon, chief technology officer of Agilent Technologies and an Innovation Awards judge.

Health-Care IT

DataDyne.org, a Washington-based nonprofit, and its co-founder, Joel Selanikio, won in this category for EpiSurveyor, free software for mobile devices designed to help health officials in developing countries collect health information.

In developing countries, gathering and analyzing time-sensitive health-care information can be a challenge. Rural health clinics typically compile data only in paper records, making it difficult to spot and to respond quickly to emerging trends.

With EpiSurveyor, developed with support from the United Nations Foundation and the Vodafone Foundation, health officials can create health-survey forms that can be downloaded to commonly used mobile phones. Health workers carrying the phones can then collect information—about immunization rates, vaccine supplies or possible disease outbreaks—when they visit local clinics. The information can then be quickly analyzed to determine, say, whether medical supplies need to be restocked or to track the spread of a disease.

The software has been rolled out in more than 20 African countries.

Materials and Other Base Technologies

Light fixtures based on light-emitting diodes—semiconductors that glow brightly when charged—promise long-lasting, low-energy illumination. But there’s a problem: The light produced is harsh and bluish in color. Special filters can be added to produce warmer tones, but they can make the fixtures less efficient. Devising a way to make warmer-colored, high-efficiency LEDs is seen as essential to their widespread adoption.

QD Vision Inc. of Watertown, Mass., won in this category for inventing a way to dramatically improve the color quality of LED lights by using quantum dots—tiny semiconducting nanocrystals. QD Vision quantum dots can also be used to make energy-efficient flat-panel and other displays that can deliver purer, more intense colors.

The company recently joined with a small LED light-fixture maker, Nexxus Lighting Inc., to make a screw-in LED bulb. The bulbs, which promise to be six times more efficient than incandescent bulbs, are expected on the market later this year.

Medical Devices

The i-Limb from U.K.-based Touch Bionics, the overall Silver winner, received top honors in this category.

Prosthetic hands typically have been limited to simple pincer-like grips that imitate the motions of a thumb and forefinger. While they can perform most essential hand functions, they lack the utility and appearance of a real hand.

The trick in developing the i-Limb was coming up with materials that could match the shape and weight of a human hand yet be powerful enough to handle all the tasks of muscle and bone. The hand uses motors that fit in the space of a knuckle to control the fingers; the motors are controlled by a computer chip.

With the hand, wearers can grip and turn a key, for instance, or hold a business card using a thumb and index finger. They can also close all the fingers and the thumb around an object, like a drink can or a shopping-bag handle. It’s also possible to point with the index finger, which is useful in operating a phone or a cash machine, among other things. The thumb can also be rested next to the rest of the hand, so that it doesn’t snag when putting on clothing.

Adding to its life-like appearance, the i-Limb comes covered with a flexible silicone skin. But wearers don’t have to go with the natural look. Stuart Mead, Touch Bionics’ chief executive, says a lot of younger wearers prefer either a clear skin that shows off the device’s inner workings or a black metallic covering “that looks a little like Darth Vader.”

Medicine – Biotech

Abbott’s Ibis Biosciences unit, the overall Gold winner, was the top entry in this category. The technology takes a novel approach to detecting and identifying pathogens. When faced with unidentified organisms, clinical labs typically have to incubate infected fluid or tissue samples and test them for bacteria or viruses. Newer microarray technologies can run thousands of such tests simultaneously. But they are expensive and require lots of high-quality genetic material for their analyses, making them less than ideal for diagnostic purposes, says Mr. Ecker, a divisional vice president at Abbott. (Last year’s Silver award winner, the PhyloChip, is a microarray system for detecting bacteria in water and other environmental samples.)

Ibis uses a combination of technologies to identify organisms: mass spectrometry—a way of identifying the molecules that make up a sample by measuring their mass and charge—to determine the genetic markers of the organisms in a sample; a vast database of genetic signatures for different organisms; and a mathematical process to match the analysis with the signatures in the database. The test not only reveals all the known organisms present in a sample, it can also flag previously unknown ones. Since the first system was completed in 2005, Ibis sensors have been deployed at 20 sites around the U.S., including the Centers for Disease Control and Prevention. This spring, the device helped the Naval Health Research Center in San Diego identify the first two U.S. cases of H1N1 swine flu. Abbott, a health-care company based in Abbott Park, Ill., acquired Ibis earlier this year.

Security – Privacy

Ksplice Inc., based in Cambridge, Mass., won in this category for software that makes it possible to install security patches and other software updates without rebooting computer systems.

Software makers periodically send out updates, but a computer system typically must be shut down and restarted before they take effect. As a result, even critical security updates are often delayed until late at night or weekends, when shutdowns are less disruptive. Ksplice was developed so that companies can perform updates without interrupting their operations. The software was first deployed commercially last year, and the company has about a dozen customers. Though it is currently available only for Linux-based systems, the techniques can be applied to other operating systems, says Jeff Arnold, the company’s president and co-founder.


Qualcomm Inc., the San Diego-based wireless-technology company, won in this category for a mobile-device display it calls mirasol, a low-power, full-color alternative to traditional displays.

The mirasol display uses micro-electromechanical systems, or MEMS, and thin-film reflective material to produce color images that remain vivid even in direct sunlight. The displays are able to produce a full color spectrum, and images are refreshed quickly enough that full-motion video can be displayed as well as static images. By relying on ambient light, the displays require little power. The technology was originally developed by Iridigm Display Corp., which Qualcomm acquired in 2004 and renamed Qualcomm MEMS Technologies.

The first black-and-white displays using the technology became available in early 2008 and have been used in mobile navigating devices, Bluetooth headsets and MP3 players. The company introduced a color version in May, and has agreed to provide the displays for future cellphones from LG Electronics Inc.

Software


Cloud computing promises to replace the complex array of hardware and software that makes up a company’s information-technology infrastructure with simple IT services delivered over the Internet, much the way that utilities provide electricity. But not all businesses can take advantage of cloud computing’s benefits—because of security concerns, or because they already have significant investments in their own data centers and other IT systems.

The latest version of VMware Inc.’s virtualization software suite, called vSphere, is this year’s winner in the software category. It promises to make it easier to turn a company’s existing data centers into a private cloud—an array of IT services delivered throughout a company over its own computer network—that’s secure, reliable and easy to manage.

VMware has long been the market leader in virtualization software, which makes it possible to run different applications or operating systems on a single computer by dividing the computer into several “virtual” machines, each running programs independent of the others.

With vSphere, which VMware describes as a “cloud operating system,” IT managers can quickly turn all the servers in a data center into a network of virtual machines. A simple dashboard makes it possible to see all the applications that are running on each virtual device.

Judges noted that there is a lot of interest in private clouds, and said that VMware has taken a big step in helping companies build them.

“It’s a very important trend, and these guys are clearly the leader,” says Asheem Chandna, a partner at the venture-capital firm Greylock Partners who was one of the Innovation Awards judges.

Wireless


The Bronze winner, VNL’s solar-powered base station for cellphone networks, led the wireless category.

Mobile-phone service can deliver huge benefits to developing countries. But getting cellphone coverage to remote, rural parts of India and other countries is hindered by high installation and operating costs, as well as the specialized knowledge needed to set up and run a cellular station. As a result, few operators have gone into these communities.

VNL is looking to overcome this obstacle with a low-power cellular base station that requires little capital expense and has almost no operating costs. The base stations can be powered by a small solar panel in daylight; batteries provide backup power for up to 72 hours.

Another challenge was making the device so simple that it can be installed at low cost by villagers.

The solution was inspired by the Scandinavian retailer Ikea: The entire base station comes delivered in six boxes, small enough to all fit in an ox cart. Simple illustrated instructions show how to put the pieces together using color-coded cables.

Even tuning the station to the right microwave signal is easy—it emits a continuous beeping sound when the signal is strongest.

The technology may not be much of a technical breakthrough, but “it’s worthy because of what it might bring to developing countries,” says William Webb, head of research and development at Ofcom, the U.K. communications regulator, and one of the Innovation Awards judges.


Full article and photos: http://online.wsj.com/article/SB10001424052970203440104574399714096167656.html#mod=WSJ_hpp_MIDDLETopStories

The Nobel Prize Will Go To…

Who might the future winners be? Here are some candidates.

Next month, Nobel Prizes once again will be handed out to scientists who have conducted field-changing research.

Which scientists may be honored this year, and in the future? It’s impossible to know for sure, since the selection process is notoriously secretive and often surprising. In fact, the names of nominees considered by the various Nobel committees aren’t even revealed for 50 years. So in search of future Nobel laureates, we looked at winners of other prestigious prizes, and interviewed previous Nobel winners and other experts—and came up with the following list.


F. ULRICH HARTL, Director, Max Planck Institute of Biochemistry

ARTHUR HORWICH, Sterling Professor of Genetics and Pediatrics, Yale School of Medicine

Drs. Hartl and Horwich are recognized for their work demonstrating that many cellular proteins don’t fold themselves properly on their own, but instead require the help of so-called molecular chaperones. Without such chaperones, proteins can stick to each other, leading to progressive brain diseases like Parkinson’s, Alzheimer’s and Huntington’s. If drugs that activate the chaperone system can be developed, they could be used to slow or prevent such diseases. The biotechnology industry has already adopted the technique of boosting chaperone production as a way of increasing protein yield in the production of biologic therapies.

“They were significant contributors to the field,” says John Blanchard, a professor of biochemistry at the Albert Einstein College of Medicine and chair of the American Chemical Society’s division of biological chemistry. “These chaperones were really first and best characterized by” Drs. Hartl and Horwich.

RICHARD A. LERNER, Professor of Chemistry, President, Scripps Research Institute

GREG WINTER, Deputy Director, The Medical Research Council’s Laboratory of Molecular Biology

Dr. Lerner and Sir Greg are known for their work that allowed for the creation of a group of antibodies far more diverse than the human immune system can make on its own. They also developed new methods for combing through the various combinations of antibodies to look for blends that have the potential to treat disease. Their research has led to the development of Humira, a treatment for inflammatory diseases, made by Abbott Laboratories.

“The use of antibodies therapeutically has lagged behind the discoveries of how antibodies protect us from various infectious disease,” says Barton Haynes, director of the Duke Human Vaccine Institute. “From the therapy standpoint,” he adds, the work of Dr. Lerner and Sir Greg “helped us speed the development of antibodies as drugs and as agents.”


KRZYSZTOF MATYJASZEWSKI, Professor, Department of Chemistry, Carnegie Mellon University

Dr. Matyjaszewski developed atom transfer radical polymerization, or ATRP, a way of combining small molecules called monomers into blocks of polymers, which are plastics made up of long molecules. This method allows for the production of a combination of materials with multiple properties that can be controlled in ways that traditional plastics can’t. Dr. Matyjaszewski’s work has significant implications for industrial manufacturing: It could be used to produce more environmentally friendly materials, such as substances that retain their shape better and thus are more easily recycled, and that use fewer natural resources, he says.

The chemistry developed by Dr. Matyjaszewski “allows one to make a very large range of new materials,” says Devon Shipp, a polymer scientist and chemistry professor at Clarkson University.


VICTOR AMBROS, Professor in Molecular Medicine, University of Massachusetts Medical School

GARY RUVKUN, Professor of Genetics, Harvard Medical School

Drs. Ambros and Ruvkun discovered a class of small ribonucleic acids, or microRNAs, that regulate when genes are turned off and on. They showed that these tiny regulatory RNAs help control critical processes during embryonic development, and also play a role in conditions like cancer and heart disease. Regulating microRNA activity in the expression of genes could be developed into a therapy for such diseases.

These researchers made a “fundamental discovery” in understanding cell growth and organism development, says Curtis Harris, chief of the Laboratory of Human Carcinogenesis at the National Cancer Institute. Therapeutic research targeting microRNAs could start human testing within two to three years.

ELIZABETH BLACKBURN, Morris Herztein Professor of Biology and Physiology, Department of Biochemistry and Biophysics, University of California, San Francisco

CAROL GREIDER, Daniel Nathans Professor and Director of Molecular Biology and Genetics and Professor of Oncology, Johns Hopkins University School of Medicine

Drs. Blackburn and Greider discovered telomeres—caps found at the ends of chromosomes that protect their genetic information—and the telomerase enzyme, which is vital to normal human development but can also play a role in the growth of cancer. Their work has shown that suppressing telomerase activity appears to inhibit the growth of cancer cells.

Telomerase, which contains ribonucleic acid (a single-stranded chain built from units similar to DNA’s basic building blocks), is active during prenatal development. Without telomerase, telomeres get shorter, chromosomes become unstable and cell aging occurs. But once development proceeds far enough, telomerase activity is turned off “in almost all human tissues, and only returns with cancer,” says Jerry Shay, a professor of cell biology at the University of Texas Southwestern Medical Center at Dallas.

The reactivation of telomerase in cancer cells allows the cells to continue to change and become more malignant, Dr. Shay says, so targeting telomerase could be an important new approach to treating cancer. Telomerase also may “have utility in cell and tissue rejuvenation medicine,” in essence slowing or preventing aging, says Dr. Shay.


SHINYA YAMANAKA, Director and Professor of the Center for Induced Pluripotent Stem (iPS) Cell Research and Applications, Kyoto University

Senior Investigator, Gladstone Institute of Cardiovascular Disease

Dr. Yamanaka is known for his work on creating embryonic-like stem cells from adult skin cells. Stem cells are characterized by their ability to turn into different kinds of specialized cells, like heart or brain cells.

The ability to “reprogram” adult cells back into an earlier, undifferentiated state by inserting four genes has the potential to reshape the ethical debate over stem-cell research, because the cells no longer have to be taken from an embryo. Robert Lanza, chief scientific officer of Advanced Cell Technology and an adjunct professor at the Institute for Regenerative Medicine at Wake Forest University, says Dr. Yamanaka’s work “is likely to be the most important stem-cell breakthrough of all time. The ability to generate an unlimited supply of patient-specific stem cells will revolutionize the future of medicine.”

Today, Dr. Yamanaka, along with another researcher, was awarded the prestigious 2009 Albert Lasker Basic Medical Research Award.



PETER W. HIGGS, Professor of Physics, Emeritus, University of Edinburgh

ROBERT BROUT, Professor of Physics, Emeritus, Université Libre de Bruxelles

FRANÇOIS ENGLERT, Professor of Physics, Emeritus, Université Libre de Bruxelles

Drs. Brout, Englert and Higgs clarified how particles get mass. In particular, they discovered how mass can be generated under different conditions for the particles that carry fundamental forces such as electromagnetism and the weak nuclear force, which underlies radioactivity, work that has helped the field better understand fundamental interactions.

Their theoretical work postulated an as-yet-undiscovered particle, known as the Higgs particle, whose detection would validate the basic mathematical structure of the particle-physics model that explains three of the four fundamental forces in physics. “It’s often said that the Higgs particle is the missing last piece in this standard model,” says Peter Jenni, a senior staff physicist at CERN, the European Organization for Nuclear Research.

SUMIO IIJIMA, Professor of Materials Science and Engineering, Meijo University


DANIEL KLEPPNER, Lester Wolfe Professor of Physics, Emeritus, Massachusetts Institute of Technology

Dr. Iijima is known for his early work on carbon nanotubes, which are tiny, extremely hard and strong tube-like structures.

The material is “about as small as you can make a wire out of atoms that has this tremendously fascinating set of things you can do with it,” says Joseph Serene, a theoretical condensed-matter physicist and an operating officer of the American Physical Society. Carbon nanotubes can have a broad range of electronic properties depending on how they are put together, from insulators to semiconductors to metals, which can be used in making smaller and smaller electronics, says Dr. Serene.

Dr. Kleppner’s work in atomic physics includes the development of the hydrogen maser, a device that produces electromagnetic waves, as well as the physics of Rydberg atoms, which are atoms with very excited outer electrons. The hydrogen maser has been “essential” for technical applications like timekeeping, such as in atomic clocks, according to Kate Kirby, an atomic and molecular physicist who is an executive officer of the American Physical Society. The ability to keep precise time has allowed for precision in terms of identifying geographical position, leading to such technology as the Global Positioning System.

–Ms. Wang is a staff reporter in The Wall Street Journal’s New York bureau.


Full article and photo: http://online.wsj.com/article/SB10001424052970204683204574358170813148330.html#mod=WSJ_hpp_MIDDLTopStories

Techs and the city

Lab by lab in and around San Francisco


SAN FRANCISCO conjures up images of hippies and of free love, the psychedelic 60s and leftist politics. A member of Jefferson Airplane, a rock band, described it as “49 square miles surrounded by reality”. It has always had that air. In a letter written in 1889, Rudyard Kipling wrote of “a mad city, inhabited for the most part by perfectly insane people.”

But as someone who writes about science (and in the interests of full disclosure, practices it for a living), I see a different side of San Francisco and the broader Bay Area around it. I don’t see a region full of people looking to escape reality; I see scientists and engineers at universities, companies, and national labs probing and investigating that reality on a daily basis. Instead of mind-altering drugs, I see the world-altering technology that flows out of Silicon Valley.

A city built on science

Plutonium was first discovered in a Berkeley lab (as were the aptly-named berkelium and californium). The Bay Area is the birthplace of “big science” and of the atom smashers that have told us so much about the fundamental building blocks of matter. Quarks were first discovered just down the peninsula at the Stanford linear accelerator.

In the 1970s, two professors from Stanford and the University of California, San Francisco (UCSF) figured out how to use bacteria to clone segments of DNA. In the process, they gave birth to genetic engineering and the modern biotechnology industry. South of the city, Silicon Valley gave us the personal computer, the mouse, and the verbs “to google” and “to tweet”. Sit down in any local coffeeshop and you’re just as likely to end up next to someone nursing a startup as you are someone nursing a cappuccino. Now, the Valley’s venture capitalists are hoping that their magic will work just as well on the clean-technology industry.

The Bay Area hosts the world’s biggest laser (at the National Ignition Facility in Livermore), the world’s most intense X-ray source (at the LCLS at Stanford’s national lab) and an institute devoted exclusively to the scientific search for extraterrestrial intelligence (the SETI Institute in Mountain View). NASA’s outpost here just launched a probe that will slam into the lunar surface in search of water.

Between them, Stanford, Berkeley, and UCSF employ some 50 Nobel laureates spanning the full range of scientific disciplines. True, a handful of these are for economics, but we’ll cut the dismal science some slack.

Science and technology are to the Bay Area what finance is to New York and what cars are (or were) to Detroit. They underpin the region’s economy, influence its culture and shape the very character of this region as much as its notoriously active seismic geography does.

Over the next four days, I intend to explore a few of the different faces that science and technology present to residents here. From the stem cell research that promises to revolutionise medicine, to the science of growing and making the best wine, to the science-fiction sounding search for extraterrestrials, we’ll be taking a scientific road trip around the San Francisco Bay Area. Think Thelma and Louise meets Watson and Crick.


I WAKE up slightly disoriented at 5:45am. Waking in darkness makes me feel more like a farmer than a scientist, but perhaps that’s appropriate for the task at hand today. I’m on my way to the University of California, Davis for their annual RAVE conference, a gathering of scientists, winegrowers and winemakers meant to share the most recent advances in the disciplines of viticulture and oenology.

Gulping down a large coffee, I head east on I-80 across the Bay Bridge and through the sprawl of the East Bay. I pass the exit for Highway 37, which winds its way north and west to Napa and Sonoma, the heart of California’s wine country. In Napa alone, over 40,000 acres of vines produce an annual crop of grapes worth $400m. I manage to resist the pull of the wineries and instead follow I-80 east into the brightening dawn.

The wine before the bottle

You may not realise it when you pop the cork on a nice bottle of cabernet sauvignon, but many scientists spend their lives studying every facet of wine, from the best pruning and watering techniques for growing the tastiest grapes to the genetics of the bacteria used in their fermentation. And UC-Davis is one of the world’s great centres for wine science.

Its Robert Mondavi Institute for Wine and Food Science, founded with a donation of $25m from the father of California’s wine industry, boasts 75,000 square feet of state-of-the-art labs, kitchens, and sensory-testing equipment. Inside, the halls literally smell of wine and the researchers seem to be having a lot more fun than the typical science PhDs.

Studying wine seems like a far cry from curing cancer or weaning the world off fossil fuels, but the scientists who do so are no slouches. They make use of the latest techniques in biochemistry and biotechnology. Their analyses are sprinkled with complex mathematics and multivariate statistics.

As I learn later in the morning, “whole genome shotgun sequencing”, originally developed for the Human Genome Project, was put to work on pinot noir in 2007. Besides shedding light on fundamental issues of plant evolution, wine scientists hope the grapevine genome will reveal some of the pathways that control wine flavour and resistance to various pests.

The packed program includes lectures on viticultural practices, techniques for drying grapes into raisins, the perils of something called “berry shrivel” and how microbes contribute to flavour during fermentation. We hear about genomics, proteomics, and the “wired vine”, where all aspects of growing are monitored and controlled by sensors. Terpenes, norisoprenoids, oak lactones—the biochemical jargon comes thick and fast and eventually overwhelms me.

But what comes through is a sense, as one speaker puts it, that wine is truly “chemistry in a glass”. Wine contains hundreds of complex chemical compounds, some of which are active in startlingly small amounts. Methoxypyrazine, which gives sauvignon blanc a slight bell-pepper odour, can be detected by the nose at less than two billionths of a gram in an entire bottle.
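
That figure is striking enough to be worth checking. A quick back-of-the-envelope calculation, assuming a standard 750 ml bottle and a wine density of roughly 1 g/ml (assumptions of mine, not figures from the conference), converts the detection threshold into a concentration:

```python
# Back-of-the-envelope check: how dilute is a just-detectable amount of
# methoxypyrazine? Assumes a standard 750 ml bottle and a wine density
# of about 1 g/ml; these are illustrative assumptions, not conference data.
threshold_g = 2e-9            # "two billionths of a gram" in the whole bottle
bottle_ml = 750
bottle_g = bottle_ml * 1.0    # grams of wine, at ~1 g/ml

concentration = threshold_g / bottle_g         # grams of compound per gram of wine
parts_per_trillion = concentration * 1e12

print(f"{parts_per_trillion:.1f} parts per trillion")  # roughly 2.7 ppt
```

In other words, the nose can pick out this compound at a concentration of a few parts per trillion, which helps explain why the field leans so heavily on sensitive analytical chemistry.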

To the purist, all of this measuring and quantifying might destroy the beauty of a perfectly balanced bottle paired with a delicious meal. But I think of the child who looks up at the night sky and grows up to become an astronomer. Science begins with and returns to beauty and wonder.

As I hit the road back to the city, I think about the theory that it’s better to give grapes slightly less water than they want in order to stress them and to concentrate their intense flavours. Out of great struggle comes great wine—and great science.


TONIGHT I’ve got two of the hottest tickets in town. As the bouncer checks my ID, I can hear the low bass emanating from the DJ’s turntables inside the glass doors. The crowd is dressed in slinky skirts, tight jeans, and sport coats. This is not the hippest new club in the city, but the normally staid halls of the California Academy of Sciences. My girlfriend and I head off for a stiff gin and tonic at one of the many bars (though not the one sitting beneath the watchful eye of Tyrannosaurus rex).

To most people, the words “science” and “nightlife” don’t usually go together. This spring, however, the Academy opened its doors for a series of boozy evenings intended to give the residents of this young, tech-savvy city another view of the science museum. “NightLife”, as the event is called, has been selling out, with more than 3,000 people attending each week.

It was only last September that the Academy returned to its home in Golden Gate Park. After the 1989 Loma Prieta earthquake damaged the aquarium here, the Academy undertook a complete rebuilding project that took $488m and the better part of a decade. Designed by Renzo Piano, the museum is now the largest public building in the world to have a LEED Platinum rating. Its design, which melds modern glass and steel with the classical architecture of the original building, reminds me of science itself—a combination of the new and modern with the solid, tested principles of the past.

Inside, an exhibit demonstrates some of the building’s environmentally friendly features. Recycled blue jeans are stuffed into the walls to serve as insulation (which seems fitting, as San Francisco is home to both Levi’s and The Gap). Half of the building’s cement was made with recycled waste products from coal combustion and steel production, and the glass canopy outside houses 60,000 photovoltaic cells. Instead of using treated freshwater for the aquariums, water is pumped in directly from the Pacific at the other end of the park.

We continue our stroll past the 90-foot diameter glass dome that houses a living rainforest. Next to the DJ, people are gaping through a glass window at scientists in white coats working on specimens—perhaps a nod to the traditional view of scientists in a museum.

Downstairs, the Steinhart Aquarium is packed and people are noticeably tipsier. An alligator drifts towards the thick glass, having recently sent its albino tankmate Claude to the hospital with a nasty bite on the toe. A scantily clad girl sticks her tongue out at a lizard in its tank. It responds in kind and then lazily drops off its branch.

We head back upstairs and onto the museum’s “living roof”, which is planted with native Californian grasses and flowers. They help reduce runoff and, from a distance, cause the building to mirror the hilly landscape of the city around it. A line is patiently snaking its way to a docent with a telescope trained on Saturn’s rings and the moon Titan.

After we’ve had our fill of stargazing, we spill out into the beautiful evening and stroll out of the park. I’m left with the inescapable feeling that this taste of the nightlife has been high on style but a little light on the scientific substance. But that’s no terrible thing. Science will survive and grow, as this museum has.


IT IS a classic San Francisco morning. The downtown skyline is shrouded in a blanket of fog. By noon the sun has finally burned its way through, but the fog will likely roll back in with the cool evening breeze. It’s a bit like scientific progress, actually—an endless ebb and flow from haziness to clarity and back again.

Today I’m downtown to cover a town hall meeting hosted by the California Institute for Regenerative Medicine (CIRM). From the subway I head to one of the Palace Hotel’s elegant chandeliered ballrooms. It holds around 300, and eventually fills to standing capacity.

Though it seems like a euphemism, “regenerative medicine” does not refer to plastic surgery (that is, dare I say, an Angeleno rather than a San Franciscan pastime). From its office in San Francisco’s Mission Bay, CIRM oversees California’s $3 billion investment in stem cell research.

Soldiers awaiting orders

In November 2004, California voters passed Proposition 71, a ballot measure allowing the state to fund research into human embryonic stem cells. Overnight, California became one of the largest backers of stem-cell research in the world. At a time when the federal government was unwilling to invest in regenerative medicine, the message from the state’s voters was clear: the incredible therapeutic promise of stem cells outweighs the moral objections to using them.

That therapeutic promise, the meeting’s three panellists explain to us, comes from stem cells’ chameleon-like ability to turn into any of the cells that make up the body’s tissues and organs. Most cells are tailored to perform a particular function. Heart cells are good at beating, neurons transmit electrical signals and pancreatic islet cells produce insulin. While they all contain the full set of instructions of the human genome, each uses only the small subset that directs its particular task.

A stem cell, on the other hand, is a cellular jack-of-all-trades. Given the right signals, it can become a brand new heart cell or neuron or insulin-producing cell. Bruce Conklin, a professor at the University of California, San Francisco and the second speaker of the evening, plays us a dramatic video of 2,000 human heart cells that had been derived from embryonic stem cells. Sitting in their Petri dish, they wriggle and beat, just like a human heart.

Embryonic stem cells were first isolated in 1998, and since then the pace of progress has been furious. Much work has gone into figuring out how to reliably and efficiently generate the different cell types that doctors would like to use in patients. In addition, as the speakers emphasise, understanding exactly when and how implanted stem cells can go awry and cause tumours remains an essential research task that confronts all potential therapies.

Such therapies are slowly, but surely, making their way towards the clinic. In January, Geron, a biotechnology giant, got FDA approval to conduct the first clinical trial testing the safety of an embryonic stem cell therapy. It will work with patients with severe spinal cord injuries. For its part, CIRM is hoping to get ten to 12 human stem cell trials going in the next four years. In December, it will award $20m to researchers and their corporate partners with that goal in mind.

After the speakers finish their presentations, the moderator opens the floor to questions from the audience. From the front row, a young girl raises her hand. In a high-pitched, slightly faltering voice, she asks a deeply personal question: “I was burned very badly in August 2008. How might this help me, and how can I help in your research?”

After the dry PowerPoints and data-filled charts, the scientists seem slightly taken aback by the raw emotion. They stammer through some answers, but none seems satisfying. Despite stem cells’ promise, the science just isn’t quite there yet. This moment brings home both the deep hopes and the urgent desperation that surround what are undoubtedly the early days of regenerative medicine.


ONCE again I’m braving the early morning traffic on I-80, heading out of the city past Oakland and Berkeley. But just before I reach Davis, I veer north onto Interstate 5. It’s not the earthly delights of carefully cultivated varietals and nuanced terroir that concern me today. I’m heading into the mountains to get a tour of the Allen Telescope Array (ATA), a collection of 42 large telescopes that have just begun scanning the heavens for radio transmissions from intelligent extraterrestrials. Yes, you read that right—aliens.

Three hours later, my small Toyota begins the climb into the mountains of Lassen National Park. Eureka, Whiskeytown, Old Oregon Trail—the road signs here recall the miners and pioneers who trudged through during California’s mid-19th century gold rush. The two-lane road I’m driving on used to be a trail for rattling stagecoaches.

The San Francisco radio stations faded hours ago, and now only a few talk stations break through the static. Maybe I’ve lived in Haight-Ashbury for too long, but as I make a right turn into the observatory, Timothy Leary is in my head: “Turn on, tune in, drop out.” Here in Hat Creek, which is nearly devoid of manmade sounds, the ATA just turned on for science operations in May. For many years to come, it will tune into the radio sky to study the evolution of galaxies, the properties of black holes, and one of the most profound questions of all—whether we’re alone in the universe.

My tour guide this afternoon is Garrett Keating, a former cop turned astronomer. We walk out towards one of the 42 telescopes, a gleaming aluminium dish six metres in diameter. Mr Keating opens a trap door and we poke our heads inside. The main dish reflects incoming radio waves onto a smaller dish off to our left. That in turn bounces them onto the telescope’s main receiver, a long pyramid with different sized antennas poking off of it.

The antennas pick up an extremely wide range of frequencies, from those used for broadcast television on the low end up through the ones that transmit satellite television. In between is the emission frequency of hydrogen gas—the most common element in the universe and the raw material for the formation of stars and galaxies.

Off in the distance, we hear the rumbles of an approaching storm, and several lightning bolts streak across the sky. Mr Keating insists we return to the lab. The antennas, he reassures me, are well grounded. I don’t tell him that it wasn’t the antennas I was worried about.

Inside, fibre-optic cables carry the signals from the dishes to enormous racks of computers. By using the computers to combine data from each individual dish, the ATA is able to mimic a much larger telescope for a fraction of the cost. An initial donation of $25m from Paul Allen, the co-founder of Microsoft, and $25m from other sources financed these first 42 dishes. Eventually, the team hopes to collect enough funding to get up to 350.
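The payoff from combining many modest dishes can be sketched with a toy averaging model (the numbers are illustrative, not the ATA’s; real interferometry also correlates the dishes to sharpen angular resolution, which this ignores). Noise that is independent at each dish averages down roughly as the square root of the number of dishes, so a faint steady signal emerges from the static:

```python
import math
import random

random.seed(1)
N_SAMPLES = 2000            # readings used to estimate the noise level
WEAK_SIGNAL = 0.05          # a source far below each dish's own noise

def dish_reading():
    """One noisy sample of the sky from a single dish."""
    return WEAK_SIGNAL + random.gauss(0.0, 1.0)

def snr(n_dishes):
    """Signal-to-noise after averaging n_dishes independent dishes."""
    combined = [sum(dish_reading() for _ in range(n_dishes)) / n_dishes
                for _ in range(N_SAMPLES)]
    mean = sum(combined) / N_SAMPLES
    var = sum((x - mean) ** 2 for x in combined) / N_SAMPLES
    return mean / math.sqrt(var)

# Noise shrinks roughly as sqrt(n_dishes), so 42 dishes beat one.
print(snr(1), snr(42))
```

A single dish leaves the 0.05 source buried in unit noise; averaging 42 dishes cuts the noise by about a factor of 6.5, which is the sense in which the array "mimics" a larger, more sensitive instrument on the cheap.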

Operating together, the telescopes are quite sensitive. And they need to be, since a single mobile phone located on the moon would give off a much stronger signal than almost every astronomical object in the radio sky. In addition to its sensitivity, the ATA also views a large patch of the sky all at once. Most other radio telescopes are like telephoto lenses, zooming into a tiny region of space. The ATA, however, is the first that can take snapshots with a wide-angle lens.

Just outside the sliding glass door to the control room, I notice a doormat with a bug-eyed alien and the caption “welcome all species”, a reminder of the ATA’s second mission. This telescope array represents a great leap forward for the enterprise known as SETI, the search for extraterrestrial intelligence.

In the past, SETI has had to squeeze precious observation time out of existing telescopes around the world. With the ATA, the search for signals from intelligent life elsewhere in the universe will be carried out constantly, right alongside the astrophysics.

So what exactly is SETI looking for? Essentially, something that seems not to belong—an odd man out in the cosmic radio haze. One possibility is a very powerful signal confined to a tiny frequency band, like the manmade transmissions that are continually leaking off of earth. As Mr Keating explains, “nature doesn’t produce pure tones”. In addition, if the signal really is extraterrestrial, its broadcast frequency should drift, as the alien planet orbits its own star.
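Mr Keating’s "pure tones" remark can be illustrated with a toy sketch (the sample count and frequency bin are assumptions for the demo, not SETI’s actual pipeline). A weak narrowband carrier buried in broadband noise still piles its power into a single bin of a discrete Fourier transform, while the noise spreads across every bin:

```python
import cmath
import math
import random

N = 512                     # number of samples in the toy observation
random.seed(0)

# Broadband noise stands in for the natural radio background;
# a weak tone at bin 100 stands in for an artificial carrier.
samples = [random.gauss(0.0, 1.0) + 0.5 * math.sin(2 * math.pi * 100 * n / N)
           for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of the naive discrete Fourier transform at bin k."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x))))

# Skip bin 0 (the mean) and scan the rest for the odd man out.
spectrum = [dft_magnitude(samples, k) for k in range(1, N // 2)]
peak_bin = 1 + spectrum.index(max(spectrum))
print(peak_bin)  # the carrier's bin dominates despite the noise
```

The tone is weaker than the noise at any instant, yet it dominates its one frequency bin by a wide margin; nature’s broadband emission never concentrates power that way, which is exactly what makes such a spike a candidate signal.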

Over its lifetime, the ATA hopes to survey 1m promising candidate stars within a thousand light years of earth, and ten billion more in the central region of our own Milky Way galaxy. And as computers and algorithms improve, so will SETI’s ability to look for more complex alien transmissions in this mountain of data.

Black holes, exploding stars, clouds of swirling hydrogen gas light-years across the galaxy—this is hallucinatory stuff. Yet if the little green men finally arrive, San Francisco—built as it is on science, tolerance and the counterculture—would seem like a natural first port-of-call.


Full article and photos: http://www.economist.com/displayStory.cfm?story_id=14113332&source=hptextfeature&CFID=70299326&CFTOKEN=65073551

Scientists Find a Microbe Haven at Ocean’s Surface


Little Kilo Moana skims thin layers from the ocean’s surface near its namesake, a University of Hawaii research vessel. Scientists have learned that the top hundredth-inch of the ocean is an ecosystem all its own.

The world’s oceans are like an alien world. The National Oceanic and Atmospheric Administration estimates that 95 percent of them remain unexplored. But the mysteries do not start a mile below the surface of the sea. They start with the surface itself.

Scientists are now discovering that the top hundredth-inch of the ocean is somewhat like a sheet of jelly. And this odd habitat, thinner than a human hair, is home to an unusual menagerie of microbes. “It’s really a distinct ecosystem of its own,” said Oliver Wurl, of Canada’s Institute of Ocean Sciences.

This so-called sea-surface microlayer is important, scientists say, in part because it influences the chemistry of the ocean and the atmosphere. “One of the most significant things that happens on our planet is the transport of gases in and out of the ocean,” said Michael Cunliffe, a marine biologist at the University of Warwick in England. The ocean stores a large fraction of the global-warming gases we produce; at the microlayer, the gases are pulled down.

“It’s the ocean breathing through its skin,” Dr. Cunliffe said.

Sailors have long known that the surface can be covered with oily slicks (hence the phrase “pouring oil on troubled waters”). But when scientists began studying the surface in the mid-20th century they found it vexing. A scientist cannot just dunk a bucket into the ocean without dredging up deeper water as well. “Even defining the surface is hard, since it’s moving up and down,” said Peter Liss, a professor of environmental sciences at the University of East Anglia in England.

So scientists had to invent some tools to skim the surface. Dr. Liss and his colleagues, for example, chill a piece of glass with liquid nitrogen and lower it into the sea, freezing water it contacts.

These tools have allowed scientists to discover that the top hundredth of an inch is chemically distinct. It is loaded with molecules carried up by air bubbles and concentrated at the surface.

Recent surveys carried out by Dr. Wurl and his colleagues have revealed that the microlayer has a rich supply of sticky clumps of carbohydrates. These carbohydrates are made by phytoplankton, single-cell organisms that live lower in the ocean and use the sticky material to bind together in colonies. Eventually the carbohydrates break off the phytoplankton and clump together. Dr. Wurl’s studies indicate that many of them rise to the microlayer, forming a film.

“I really imagine it as tiny pieces of jelly floating on the ocean,” Dr. Wurl said.

It may be hard to imagine such a fine coat of slime holding together for long on top of the heaving ocean. But Dr. Wurl has found that it is quite durable. “We have collected microlayer samples with wind conditions of 16 to 18 knots,” he said. “It’s not pleasant to be in a small boat at that wind speed. That tells us the microlayer is pretty stable.”

Dr. Wurl and his colleagues report the findings in a paper to be published in the journal Marine Chemistry. He suspects that when waves disrupt the jellylike microlayer, air bubbles deliver sticky material back to the surface.

Dr. Cunliffe, who has replicated Dr. Wurl’s results, argues that these studies mean that the microlayer is a special kind of habitat for microbes. The gelatinous film calms the turbulence in the microlayer, which may make it easier for bacteria to attach to the particles and feed on the molecules flowing past.

To document the sort of microbes that live in the microlayer, Dr. Cunliffe and other researchers are collecting surface water, breaking open the cells it contains, and sequencing the genes they hold. They compare the microlayer residents to the microbes that live a few inches deeper.

“We’re finding consistently different communities,” Dr. Cunliffe said. The microlayer communities are dominated by groups of microbes well known for forming biofilms on more familiar surfaces, like rocks in streams, our teeth and the insides of sewer pipes.

“They’re always the usual suspects,” Dr. Cunliffe went on. “If our hypothesis is correct, it makes complete sense.”

Dr. Liss called the finding “a really interesting result, because it shows that the microlayer is a really different environment.”

Scientists say it is important to become better acquainted with this mysterious ocean skin, because it may play a critical role in the environmental well-being of the planet. Studies have shown, for example, that pollutants like pesticides and flame retardants can be trapped in the microlayer.

Dr. Cunliffe and his colleagues have identified bacteria in the microlayer that devour important chemicals like methane and carbon monoxide. The microlayer is also crucial to the ocean’s ability to absorb carbon dioxide, a potent greenhouse gas.

“It’s actually sucking the carbon dioxide down into the water column,” Dr. Cunliffe said.

Dr. Liss said the microlayer was “clearly important, because it’s where the ocean and the atmosphere interact.”

“But it’s difficult to study,” he added, “so it hasn’t received as much attention as it ought to.”


Full article and photo: http://www.nytimes.com/2009/07/28/science/28ocea.html

Dear, I love you with all my brain

Dopamine brings people together and oxytocin keeps them attached, studies show. Is love just chemistry?

For centuries, love has been probed — and of course celebrated — mostly by poets, artists and balladeers. But now its mysteries are yielding to the tools of science, including modern brain-scanning machines.

At State University of New York at Stony Brook, a handful of young people who had just fallen madly in love volunteered to have their brains scanned to see what areas were active when they looked at pictures of their sweethearts. The brain areas that lighted up were precisely those known to be rich in a powerful “feel-good” chemical, dopamine, which brain cells release in response to cocaine and nicotine. Dopamine is the key chemical in the brain’s reward system, a network of cells that is associated with pleasure — and addiction.

In the same lab, older volunteers who said they were still intensely in love after two decades of marriage participated in the experiment as well. The same brain areas lighted up, showing that, at least in some lucky couples, the honeymoon feeling can last. But in these folks, other areas lighted up too — those rich in oxytocin, the “cuddling” chemical that helps new mothers make milk and bond with their babies, that is secreted by both sexes during orgasm and that, in animals, has been linked to monogamy and long-term attachment.

It’s way too soon (and, we can hope, always will be) to say that brain scientists have translated all those warm and fuzzy feelings we call romantic love into a bunch of chemicals and electrical signals in the brain.

But they do have a plausible hypothesis — that dopamine plays a big role in the excitement of love and that oxytocin is key for the calmer experience of attachment. Granted, the data are preliminary. But the findings so far are provocative. And it’s conceivable that, as Emory University neurobiologist Larry J. Young pointed out in the journal Nature this year, once scientists understand the chemistry of love, drugs to manipulate the process “may not be far away.”

Better interactions

A new study published this year in Biological Psychiatry supports that idea, showing that oxytocin may help human couples get along better. Swiss researchers gave 47 couples a nasal spray containing either oxytocin or a placebo. The couples then participated in a videotaped “conflict” discussion. Those that got oxytocin exhibited more positive and less negative behavior than those given the placebo. Oxytocin was also linked to lower secretion of cortisol, a stress hormone.

In the Nature paper, Emory’s Young also noted that nobody knows yet whether drugs used to treat problems such as depression and sexual dysfunction can affect relationships by changing brain chemistry. But, he noted, both the antidepressant Prozac and the erection enhancer Viagra appear to affect the oxytocin system.

In the initial love study at Stony Brook, 10 women and seven men in intense, “early-stage” love were put into a functional MRI brain scanner, which can detect activity in specific parts of the brain. They were then shown pictures of their loved one or a neutral person.

In these lovebirds, one dopamine-rich region in particular — the ventral tegmental area — consistently lighted up upon viewing the loved one, but not the neutral person, according to the research, published in 2005. The intensity of the brain’s response to falling in love, says co-author Lucy L. Brown, a neuroscientist at Albert Einstein College of Medicine, suggests that it “is not just an emotion but a drive, a real goal like food or water.”

In a second experiment, the team found the same brain areas at work in people recently rejected by a loved one. Perhaps loss of love triggers the same kind of craving as withdrawal from cocaine or cigarettes, suggests Helen Fisher, a biological anthropologist at Rutgers University who also worked on the study.

In new data presented at scientific meetings in 2008 and 2009, Bianca Acevedo, now a post-doctoral fellow in social neuroscience at UC Santa Barbara but formerly at Stony Brook, focused on 10 women and seven men still in love after 21 years of marriage. Like the young lovers, when these volunteers were put in scanners and shown pictures of their partners, their dopamine-rich areas lighted up.

“But in contrast to those newly in love,” Acevedo says, other brain regions did too, including areas rich in oxytocin, vasopressin (a similar chemical) and serotonin, a brain chemical associated with well-being and calmness.

The link between long-term attachment and oxytocin has long fascinated researchers, among them, Sue Carter, a neuroendocrinologist at the University of Illinois at Chicago.

Carter’s work has centered on prairie voles, known for their enduring bonds. Compared with other rodents, prairie voles — part of the only 3% of mammals that form monogamous bonds — have a more active oxytocin system. Moreover, brain cells with receptors that specifically latch onto oxytocin lie in the very brain regions believed to be important in forming attachments, Carter says.

Other researchers have shown that when mice (not known for their monogamous ways), are injected with a gene containing instructions for making the receptor of oxytocin, the mice cozy up to their mates like voles.

Lack of oxytocin is important too. For instance, if female animals are stressed by being isolated, their oxytocin drops. In humans, Emory University research shows that women who were seriously abused as children have low oxytocin levels as adults.

Choosing partners

One question emerging from all this is whether knowing the chemistry of love can help in picking a compatible partner in the first place.

Fisher, the Rutgers anthropologist, who consults for the dating websites Match.com and an affiliate, Chemistry.com, thinks so. She thinks that certain personality types correspond to the preponderance and ratios of specific chemicals in the body; her team is examining blood, urine and saliva samples to test her theory.

Creative, risk-taking personalities, which she calls “explorers,” may have more active dopamine systems, as well as more activity of another brain chemical, norepinephrine, she says. In a study that involved 28,000 people using Chemistry.com, Fisher built personality profiles based on people’s answers to a long questionnaire. She sorted people into different types and then followed their dating experience to see which types were attracted to which other types.

She found that explorers are particularly drawn to other explorers. People she calls “builders,” conventional, calm, conscientious folks, may have more active serotonin systems and may also be drawn to other builders.

By contrast, Hillary Clinton types — “directors” — who are analytical and tough-minded may be high in testosterone and are regularly drawn to their opposites, the “negotiators” like Bill Clinton, who may be fueled by estrogen and oxytocin, Fisher says.

Whether this love chemistry will pan out in the new research is still an open question. In the meantime, remember those prairie voles — they get what Fisher calls “life’s greatest prize — an enduring mate and partner.”


Full article: http://www.latimes.com/features/health/la-he-love22-2009jun22,0,6897401.column

New Glimpses of Life’s Puzzling Origins


A START In one view of the beginnings of life, depicted in an animation, carbon monoxide molecules condense on hot mineral surfaces underground to form fatty acids, above, which are then expelled from geysers.

Some 3.9 billion years ago, a shift in the orbit of the Sun’s outer planets sent a surge of large comets and asteroids careening into the inner solar system. Their violent impacts gouged out the large craters still visible on the Moon’s face, heated Earth’s surface into molten rock and boiled off its oceans into an incandescent mist.

Yet rocks that formed on Earth 3.8 billion years ago, almost as soon as the bombardment had stopped, contain possible evidence of biological processes. If life can arise from inorganic matter so quickly and easily, why is it not abundant in the solar system and beyond? If biology is an inherent property of matter, why have chemists so far been unable to reconstruct life, or anything close to it, in the laboratory?

The origins of life on Earth bristle with puzzle and paradox. Which came first, the proteins of living cells or the genetic information that makes them? How could the metabolism of living things get started without an enclosing membrane to keep all the necessary chemicals together? But if life started inside a cell membrane, how did the necessary nutrients get in?

The questions may seem moot, since life did start somehow. But for the small group of researchers who insist on learning exactly how it started, frustration has abounded. Many once-promising leads have led only to years of wasted effort. Scientists as eminent as Francis Crick, the chief theorist of molecular biology, have quietly suggested that life may have formed elsewhere before seeding the planet, so hard does it seem to find a plausible explanation for its emergence on Earth.

In the last few years, however, four surprising advances have renewed confidence that a terrestrial explanation for life’s origins will eventually emerge.

One is a series of discoveries about the cell-like structures that could have formed naturally from fatty chemicals likely to have been present on the primitive Earth. This lead emerged from a long argument between three colleagues as to whether a genetic system or a cell membrane came first in the development of life. They eventually agreed that genetics and membranes had to have evolved together.

The three researchers, Jack W. Szostak, David P. Bartel and P. Luigi Luisi, published a somewhat adventurous manifesto in Nature in 2001, declaring that the way to make a synthetic cell was to get a protocell and a genetic molecule to grow and divide in parallel, with the molecules being encapsulated in the cell. If the molecules gave the cell a survival advantage over other cells, the outcome would be “a sustainable, autonomously replicating system, capable of Darwinian evolution,” they wrote.

“It would be truly alive,” they added.

One of the authors, Dr. Szostak, of the Massachusetts General Hospital, has since managed to achieve a surprising amount of this program.

Simple fatty acids, of the sort likely to have been around on the primitive Earth, will spontaneously form double-layered spheres, much like the double-layered membrane of today’s living cells. These protocells will incorporate new fatty acids fed into the water, and eventually divide.

Living cells are generally impermeable and have elaborate mechanisms for admitting only the nutrients they need. But Dr. Szostak and his colleagues have shown that small molecules can easily enter the protocells. If they combine into larger molecules, however, they cannot get out: just the arrangement a primitive cell would need. If a protocell is made to encapsulate a short piece of DNA and is then fed with nucleotides, the building blocks of DNA, the nucleotides will spontaneously enter the cell and link into another DNA molecule.

At a symposium on evolution at the Cold Spring Harbor Laboratory on Long Island last month, Dr. Szostak said he was “optimistic about getting a chemical replication system going” inside a protocell. He then hopes to integrate a replicating nucleic acid system with dividing protocells.

Dr. Szostak’s experiments have come close to creating a spontaneously dividing cell from chemicals assumed to have existed on the primitive Earth. But some of his ingredients, like the nucleotide building blocks of nucleic acids, are quite complex. Prebiotic chemists, who study the prelife chemistry of the primitive Earth, have long been close to despair over how nucleotides could ever have arisen spontaneously.

Nucleotides consist of a sugar molecule, like ribose or deoxyribose, joined to a base at one end and a phosphate group at the other. Prebiotic chemists discovered with delight that bases like adenine will easily form from simple chemicals like hydrogen cyanide. But years of disappointment followed when the adenine proved incapable of linking naturally to the ribose.

Last month, John Sutherland, a chemist at the University of Manchester in England, reported in Nature his discovery of a quite unexpected route for synthesizing nucleotides from prebiotic chemicals. Instead of making the base and sugar separately from chemicals likely to have existed on the primitive Earth, Dr. Sutherland showed how under the right conditions the base and sugar could be built up as a single unit, and so did not need to be linked.

“I think the Sutherland paper has been the biggest advance in the last five years in terms of prebiotic chemistry,” said Gerald F. Joyce, an expert on the origins of life at the Scripps Research Institute in La Jolla, Calif.

Once a self-replicating system develops from chemicals, this is the beginning of genetic history, since each molecule carries the imprint of its ancestor. Dr. Crick, who was interested in the chemistry that preceded replication, once observed, “After this point, the rest is just history.”

Dr. Joyce has been studying the possible beginning of history by developing RNA molecules with the capacity for replication. RNA, a close cousin of DNA, almost certainly preceded it as the genetic molecule of living cells. Besides carrying information, RNA can also act as an enzyme to promote chemical reactions. Dr. Joyce reported in Science earlier this year that he had developed two RNA molecules that can promote each other’s synthesis from the four kinds of RNA nucleotides.

“We finally have a molecule that’s immortal,” he said, meaning one whose information can be passed on indefinitely. The system is not alive, he says, but performs central functions of life like replication and adapting to new conditions.

“Gerry Joyce is getting ever closer to showing you can have self-replication of RNA species,” Dr. Sutherland said. “So only a pessimist wouldn’t allow him success in a few years.”

Another striking advance has come from new studies of the handedness of molecules. Some chemicals, like the amino acids of which proteins are made, exist in two mirror-image forms, much like the left and right hand. In most naturally occurring conditions they are found in roughly equal mixtures of the two forms. But in a living cell all amino acids are left-handed, and all sugars and nucleotides are right-handed.

Prebiotic chemists have long been at a loss to explain how the first living systems could have extracted just one kind of the handed chemicals from the mixtures on the early Earth. Left-handed nucleotides are a poison because they prevent right-handed nucleotides from linking up in a chain to form nucleic acids like RNA or DNA. Dr. Joyce calls the problem “original syn,” a pun on the chemist’s terms syn and anti for the structures in the handed forms.

The chemists have now been granted an unexpected absolution from their original syn problem. Researchers like Donna Blackmond of Imperial College London have discovered that a mixture of left-handed and right-handed molecules can be converted to just one form by cycles of freezing and melting.

With these four recent advances — Dr. Szostak’s protocells, self-replicating RNA, the natural synthesis of nucleotides, and an explanation for handedness — those who study the origin of life have much to be pleased about, despite the distance yet to go. “At some point some of these threads will start joining together,” Dr. Sutherland said. “I think all of us are far more optimistic now than we were five or 10 years ago.”

One measure of the difficulties ahead, however, is that so far there is little agreement on the kind of environment in which life originated. Some chemists, like Günther Wächtershäuser, argue that life began in volcanic conditions, like those of the deep sea vents. These have the gases and metallic catalysts in which, he argues, the first metabolic processes were likely to have arisen.

But many biologists believe that in the oceans, the necessary constituents of life would always be too diluted. They favor a warm freshwater pond for the origin of life, as did Darwin, where cycles of wetting and evaporation around the edges could produce useful concentrations and chemical processes.

No one knows for sure when life began. The oldest generally accepted evidence for living cells consists of fossil bacteria 1.9 billion years old from the Gunflint Formation of Ontario. But rocks from two sites in Greenland, containing an unusual mix of carbon isotopes that could be evidence of biological processes, are 3.83 billion years old.

How could life have gotten off to such a quick start, given that the surface of the Earth was probably sterilized by the Late Heavy Bombardment, the rain of gigantic comets and asteroids that pelted the Earth and Moon around 3.9 billion years ago? Stephen Mojzsis, a geologist at the University of Colorado who analyzed one of the Greenland sites, argued in Nature last month that the Late Heavy Bombardment would not have killed everything, as is generally believed. In his view, life could have started much earlier and survived the bombardment in deep sea environments.

Recent evidence from very ancient rocks known as zircons suggests that stable oceans and continental crust had emerged as long as 4.404 billion years ago, a mere 150 million years after the Earth’s formation. So life might have had half a billion years to get started before the cataclysmic bombardment.

But geologists dispute whether the Greenland rocks really offer signs of biological processes, and geochemists have often revised their estimates of the composition of the primitive atmosphere. Leslie Orgel, a pioneer of prebiotic chemistry, used to say, “Just wait a few years, and conditions on the primitive Earth will change again,” said Dr. Joyce, a former student of his.

Chemists and biologists are thus pretty much on their own in figuring out how life started. For lack of fossil evidence, they have no guide as to when, where or how the first forms of life emerged. So they will figure life out only by reinventing it in the laboratory.


Full article and photo: http://www.nytimes.com/2009/06/16/science/16orig.html

Snakes’ Locomotion Appears a Matter of Scales

A corn snake on a smooth surface would have trouble slithering.

A snake’s slithering — how it translates wiggling motion into forward movement — has always been a bit of a mystery. Over the years, researchers developed the idea that an undulating snake drove its flanks laterally against small objects, like rocks and twigs, to propel itself.

“But that didn’t explain how snakes can move in areas where there isn’t anything to push on,” said David L. Hu, of Georgia Institute of Technology and New York University.

Now Dr. Hu and colleagues have come up with an alternative explanation, one that doesn’t rely on external objects. The secret, they report in The Proceedings of the National Academy of Sciences, is in the snake’s scales, which create different amounts of friction depending on direction.

The researchers did experiments with milk snakes and other species, and used the results to develop a model for slithering mechanics. They put snakes on extremely smooth surfaces, and in other instances wrapped them in cloth. In both cases, the snakes were unable to move forward, no matter how much they wriggled.

A snake’s scales, Dr. Hu said, resemble overlapping Venetian blinds and tend to catch on tiny variations in the surface they lie on. The resulting friction is lower in the forward direction than in the sideways direction, much as it is with wheels and ice skates, and that frictional difference propels the snake forward as it undulates. (It also explains the failed experiments: placing a snake on a slippery surface or wrapping it in cloth makes the friction the same in all directions.)
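The scale mechanism lends itself to a toy calculation. The sketch below is only an illustration, not the model from Dr. Hu's paper: the sinusoidal wave shape and the friction coefficients are invented. Coulomb friction on each body segment is resolved along the segment's tangent and normal with different coefficients; averaging the forward component of that force over a wave cycle yields a net thrust when the coefficients differ, and exactly zero when they are equal, mirroring the slippery-surface and cloth-wrap experiments.

```python
import numpy as np

def mean_propulsive_force(mu_t, mu_n, amp=0.1, k=2 * np.pi, omega=2 * np.pi):
    """Time- and body-averaged x-force from Coulomb friction on a body
    undulating as y(s, t) = amp * sin(k*s - omega*t), held at zero net
    translation.  mu_t and mu_n are the tangential and transverse friction
    coefficients (the body's weight is normalized out).  Toy model only."""
    s = np.linspace(0.0, 1.0, 401)                # points along the body
    phases = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
    total = 0.0
    for wt in phases:
        phi = k * s - wt
        vy = -amp * omega * np.cos(phi)           # material-point velocity (x-component is 0)
        ys = amp * k * np.cos(phi)                # local body slope dy/ds
        norm = np.sqrt(1.0 + ys**2)
        tx, ty = 1.0 / norm, ys / norm            # unit tangent
        nx, ny = -ys / norm, 1.0 / norm           # unit normal
        sgn = np.sign(vy)                         # unit velocity is (0, sgn)
        # Coulomb friction opposing motion, weighted by direction:
        fx = -mu_t * (sgn * ty) * tx - mu_n * (sgn * ny) * nx
        total += fx.mean()
    return total / len(phases)
```

With `mu_t == mu_n` (the cloth-wrapped or slippery-surface snake) the averaged force vanishes and no wriggling produces net motion; with a lower tangential coefficient a net force appears, directed opposite the wave's travel, which matches a real snake sending waves backward along its body to move forward.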


Full article and photo: http://www.nytimes.com/2009/06/09/science/09obsnake.html?ref=science

Microbes found miles beneath Greenland ice given new life

Discovery raises hopes of lifeforms enduring harsh conditions on other planets.

An image of the bacterium Chryseobacterium greenlandensis, which was buried under the Greenland ice sheet for nearly 120,000 years.

Tiny microbes that have been buried below nearly two miles of ice for at least 120,000 years have been revived in the laboratory, in a study that raises the prospect that similar life forms could have survived on other planets.

Scientists have found at least two species of miniature bacteria, many times smaller than normal microbes, in an ice core bored deep beneath the surface of a Greenland glacier where they have been trapped for tens of thousands of years.

The researchers managed to isolate and grow the bacteria in the laboratory, where they have established thriving colonies of purple-brown microbes so small that they can pass straight through the conventional medical filters used for sterilisation.

The microbes’ ability to survive the harsh environment of a Greenland glacier for such a long period of time suggests that extraterrestrial life forms – if they exist – could survive in equally extreme environments on Mars or Europa, the ice-covered moon of Jupiter where extraterrestrial life is thought to be possible.

“These extremely cold environments are the best analogues of possible extraterrestrial habitats,” said Jennifer Loveland-Curtze of Pennsylvania State University, who led the study.

“The exceptionally low temperatures can preserve cells and nucleic acids [DNA] for even millions of years,” said Professor Loveland-Curtze, whose study is published today in the International Journal of Systematic and Evolutionary Microbiology.

The ultra-small size of both bacteria – which are up to 50 times smaller than the E. coli bacteria found in the human gut – may have helped them to survive in the small liquid veins between ice crystals, she said.

“They were trapped in the ice for at least 120,000 years and possibly much longer. We’re not sure how much metabolism was going on during this time but there was probably some activity in the cells to preserve their DNA,” Professor Loveland-Curtze said.

“They may have divided perhaps once every 100 years or 1,000 years. We don’t know. A lot of organisms are out there in environments that we consider harsh and extreme.”

The study has so far revived two kinds of bacteria, which have been formally named as Herminiimonas glaciei and Chryseobacterium greenlandensis. H. glaciei is about half the size of C. greenlandensis and can pass through the ultra-fine filters used to sterilise chemical solutions – although it is not harmful, Professor Loveland-Curtze said. Both bacteria were recovered from ice cores drilled from a depth of 3,042 metres (10,000 ft) below the surface of the glacier. The ice at this depth is at least 120,000 years old but it could be up to several million years old, Professor Loveland-Curtze said.

“Studying these bacteria can provide insights into how cells can survive, and even grow, under extremely harsh conditions, such as temperatures down to minus 56C, little oxygen, low nutrients, high pressure and limited space,” she said.

Similar bacteria have now been recovered from a range of extreme environments on Earth, including ice cores drilled deep beneath the surface of the Antarctic ice sheet. In 2000, scientists claimed to have revived bacterial spores that had been trapped in salt crystals 2,000ft below ground.


Full article and photo: http://www.independent.co.uk/news/science/microbes-found-miles-beneath-greenland-ice-given-new-life-1705311.html

The mother of all inventions

From V2 rockets to antibiotics, innovations have changed the world. But which is the greatest in history?

The Science Museum, favourite haunt of aspiring astronauts and eccentric professors, celebrates its centenary this month. To mark the occasion, it is today launching a public vote to choose the most important scientific invention of the past few centuries. Curators have selected the 10 objects they believe to be most significant in the history of science, engineering, technology and medicine, and are inviting the public to decide the winner. Voting will take place over the summer for the innovation judged to have had (or to promise) the greatest impact on the past, present or future.

The iconic objects are being organised into a Centenary Journey trail, which will open at the museum later this month. The winning object will be announced in October.

Tim Boon, chief curator of the museum, admitted the idea of scientific “progress” was controversial. “Some of the objects may divide opinion. Would we be better off if some of the ‘icons’, which have had negative consequences, had not been invented? We are looking forward to a great debate.”


Stephenson’s Rocket
By winning the Rainhill trials and achieving record-breaking speeds, Rocket changed the future in 1829. Its design principles set the standard for the steam locomotives that would carry people and goods around the globe in the next 150 years.

Early Newcomen water-pumping steam engine, Oxclose, Tyne & Wear.

V2 rocket engine
From missiles to satellites to men on the Moon, the successors of this rocket engine, invented in 1942, took us into space. Developed during the Second World War, the V2 left a complex legacy but its impact is undeniable.

Cooke and Wheatstone’s five-needle telegraph
Charles Wheatstone and William Cooke changed the future in 1837 by patenting the first successful electric telecommunications device. The invention was the first practical use of electricity for long-distance communication and led to the first public telegraph service in the world.

Reynolds’ X-ray set
The discovery of the X-ray in 1895 led to a radical new diagnostic tool for doctors. Reynolds’ X-ray set, one of the oldest sets in the world, provided some of the first glimpses inside the body without the need for the surgeon’s knife.

Model T Ford
By applying mass production techniques on a vast scale, the Ford Motor Company changed the future in 1908. The affordable Model T brought motoring within the reach of a huge new market.

Penicillin
Alexander Fleming’s observations of penicillin changed the future in 1928. A decade later, work began at Oxford, and then in industry in both the UK and the US. By the mid-1940s the first modern antibiotic was in use, providing an effective way of tackling infectious diseases.

Pilot ACE (Automatic Computing Engine)
One of the earliest general-purpose electronic computers, Pilot ACE changed the future on 10 May 1950, when it ran its first programme. At the time it was the fastest computer in the world. Pilot ACE was part of the first generation of electronics and computing, the technologies that now play a complex role in our lives. It is the ancestor of the PC, the Mac and the iPhone 3G.

Crick and Watson’s DNA molecular model
Until Crick and Watson proposed this now familiar structure, scientists could only wonder at the mechanism by which living organisms were reproduced. The model of DNA worked out in 1953 showed how genes could be replicated and led to many far-reaching discoveries, including the mapping of our own genetic code.

Apollo 10 capsule
Forty years ago, in this cramped ‘pod’, three men travelled around the Moon as a rehearsal for the Apollo 11 Moon landing, two months later. The Apollo missions changed the world in 1969 by taking us to an alien world and, with the ‘Earthrise’ photographs, giving us a new sense of our own planet’s fragility in a vast universe.


Some of the museum’s supporters have already made their choice. Trevor Baylis, inventor of the clockwork radio, is voting for the V2 rocket engine: “It’s one of the greatest achievements of our time because it led to space exploration, and then satellite development, which then led to mobile phones and the astounding communication services we enjoy today.”

Musician Nitin Sawhney chooses penicillin: “As an asthmatic recovering from a debilitating bout of pneumonia, I am painfully aware of how important a role penicillin has played in curing my lung infection. In this regard I’m hardly alone.”

Television presenter and biologist Alice Roberts’s vote is going elsewhere: “As a doctor and anatomist, I’m championing the X-ray machine. X-rays provided the first possibility of looking inside someone’s body without cutting them open.”

Television presenter James May votes for the Apollo 10, “as it represents the furthest reach to date of manned exploration.” Broadcaster Adam Hart-Davis believes the steam engine to be “the most important step forward in technology of all time.”


Full article and photos: http://www.independent.co.uk/news/science/the-mother-of-all-inventions-1700718.html

The End of Medical Miracles?

Scientific discoveries are neither inevitable nor predictable.

Americans have, at best, a love-hate relationship with the life-sciences industry—the term for the sector of the economy that produces pharmaceuticals, biologics (like vaccines), and medical devices. These days, the mere mention of a pharmaceutical manufacturer seems to elicit gut-level hostility. Journalists, operating from a bias against industry that goes as far back as the work of Upton Sinclair in the early years of the 20th century, treat companies from AstraZeneca to Wyeth as rapacious factories billowing forth nothing but profit. At the same time, Americans are adamant about the need for access to the newest cures and therapies and expect new cures and therapies to emerge for their every ailment—all of which result from work done primarily by these very same companies whose profits make possible the research that allows for such breakthroughs.

Liberals and conservatives appear to agree on the need to unleash the possibilities in medical discovery for the benefit of all. But it cannot be ordered up at will. It takes approximately ten years and $1 billion to get a new product approved for use in the United States. Furthermore, only one in every 10,000 newly discovered molecules will lead to a medication that will be viewed favorably by the Food and Drug Administration (FDA). Only three out of every ten new medications earn back their research-and-development costs. The approval success rates are low, and may even be getting lower—30.2 percent for biotech drugs and 21.5 percent for small-molecule pharmaceuticals.

It is the very nature of scientific discovery that makes this process so cumbersome. New developments do not appear as straight-line extrapolations. A dollar in research does not lead inexorably to a return of $1.50. Researchers will spend years in a specific area to no avail, while other areas will benefit from a happy concatenation of discoveries in a short period. It is impossible to tell which area will be fruitless; so many factors figure into the equation, including dumb luck. Alexander Fleming did not mean to leave his lab in such disarray that a stray mold would drift onto his culture plates and kill the bacteria growing there, yet that is how penicillin was discovered. Conversely, if effort and resources were all it took, then we would have an HIV/AIDS vaccine by now; as it stands, the solution to that problem continues to elude the grasp of some of the most talented and heavily funded researchers.

Scientific discoveries are neither inevitable nor predictable. What is more, they are affected, especially in our time, by forces outside the laboratory—in particular, the actions of politicians and government bureaucracies. The past quarter-century has offered several meaningful object lessons in this regard. For example, in the 1980s, the Reagan administration undertook a number of actions, both general and specific, that had a positive effect on the pace of discovery. On the general front, low taxes and a preference for free trade helped generate a positive economic climate for private investment, including in the rapidly growing health-care sector. More specifically, the Reagan administration engaged in new technology transfer policies to promote joint ventures, encouraged and passed the Orphan Drug Act to encourage work on products with relatively small markets, and accelerated approval and use of certain data from clinical trials in order to hasten the approval of new products. All of these initiatives helped foster discovery.

That which the government gives, it can also take away. As the 1990s began, a set of ideas began to gain traction about health care and its affordability (it seems hard to believe, but the first election in which health care was a major issue was a Pennsylvania Senate race only eighteen years ago, in 1991). Americans began to fear that their health-care benefits were at risk; policymakers and intellectuals on both sides of the ideological divide began to fear that the health-care system was either too expensive or not comprehensive enough; and the conduct of private businesses in a field that now ate up nearly 14 percent of the nation’s gross domestic product came under intense public scrutiny.

A leading critic of Big Pharma, Greg Critser, wrote in his 2007 Generation Rx that President Clinton picked up on a public discomfort with drug prices and “began hinting at price controls” during his first term in office. These hints had a real impact. As former FDA official Scott Gottlieb has written, “Shortly after President Bill Clinton unveiled his proposal for nationalizing the health-insurance market in the 1990s (with similar limits on access to medical care as in the [current] Obama plan), biotech venture capital fell by more than a third in a single year, and the value of biotech stocks fell 40 percent. It took three years for the ‘Biocentury’ stock index to recover. Not surprisingly, many companies went out of business.”

The conduct of the businesses that had been responsible for almost every medical innovation from which Americans and the world had benefited for decades became intensely controversial in the 1990s. An odd inversion came into play. Precisely because the work these companies did was life-saving or life-enhancing, a certain liberal mindset ceased to deem it a special value worth the expense. Rather, medical treatment came to be considered a human right to which universal access was required without regard to cost. Because people needed these goods so much, it was unscrupulous or greedy to involve the profit principle in them. What mattered most was equity. Consumers of health care should not have to be subject to market forces.

And not only that. Since pharmaceuticals and biologics are powerful things that can do great harm if they are misused or misapplied, the companies that made them found themselves under assault for injuries they might have caused. It was little considered that the drugs had been approved for use by a federal agency that imposed the world’s most rigorous standards, and was often criticized for holding up promising treatments (especially for AIDS). Juries were convinced that companies had behaved with reckless disregard for the health of consumers, and hit them with enormous punitive damages claims.

The late 1990s also coincided with an unpredictable slowdown in the pace of medical discovery, following a fertile period in which new antihistamines, antidepressants, and gastric-acid reducers all came to market and improved the quality of life of millions in inestimable ways. A lull in innovation then set in, and that in turn gave opponents of the pharmaceutical industry a new target of opportunity. An oft-cited 1999 study by the National Institute for Health Care Management (NIHCM) claimed that the newest and costliest products were only offering “modest improvements on earlier therapies at considerably greater expense.”

The NIHCM study opened fresh lines of attack. The first came from the managed-care industry, which used it as a means of arguing that drugs had simply grown too expensive. Managed care is extremely price-sensitive, and its business model is built on cutting costs; executives of the industry were well represented on the board of the institute that put out the report. They were, in effect, fighting with the pharmaceutical companies over who should get more of the consumer’s health-care dollars.

The second came in response to the FDA’s 1997 decision to permit direct-to-consumer advertising of pharmaceuticals. The marketing explosion that followed gave people the sense that these companies were not doing life-saving work but were instead selling relative trivialities, like Viagra and Rogaine, on which they had advertising dollars to burn that would be better spent on lowering the cost of drugs. The third element of the mix was the rise of the Internet, which gave Americans a level of price transparency they had not had before regarding the cost differentials between drugs sold in the U.S. and those sold in Canada and other Western countries.

These three factors precipitated a full-bore campaign by public interest groups that bore remarkable fruit over the next several years. By February 2004, Time magazine was publishing a cover story on pharmaceutical pricing, noting that “the clamor for cheap Canadian imports is becoming a big issue.” Marcia Angell, a fierce critic of the pharmaceutical industry and the FDA, wrote in the New York Review of Books in 2004 that, “In the past two years, we have started to see, for the first time, the beginnings of public resistance to rapacious pricing and other dubious practices of the pharmaceutical industry.”

Harvard’s Robert Blendon released a Kaiser Family Foundation poll in 2005 in which 70 percent of Americans said that “drug companies put profits ahead of people” and 59 percent said that “prescription drugs increase overall medical costs because they are so expensive.” Overall, noted the foundation’s president, Drew Altman, “Rightly or wrongly, drug companies are now the number one villain in the public’s eye when it comes to rising health-care costs.”

A cultural shift had taken place. Pharmaceutical manufacturers, once the leading lights of American industry, had become a collective national villain.

The life sciences are among the most regulated areas of our economy, and are constantly subjected to significant policy upheaval from Washington. Because these products are so expensive to develop, the regulatory and policy whims of Washington tend to have a disproportionate impact on investment in the industry. Without investment, there is no research, and without research, there are no products. According to Ken Kaitin of the Center for the Study of Drug Development at Tufts, new drug approvals from the Food and Drug Administration are not keeping pace with rising research-and-development spending, which means that recent spending has not been leading to results. This raises the question of how long such investments will be sustainable if they do not provide sufficient return for investors.

FDA approval is not the only hurdle for products making their way to market. Manufacturers and investors need to deal with the Department of Health and Human Services at four levels in order to get a product to market and reimbursed. Basic research begins at the National Institutes of Health (NIH), a $30-billion agency that often partners with the private sector on promising new areas of research and that has just received a $10-billion boost from the stimulus package. The FDA then handles approvals of products. Once a product has been approved, someone must pay for it in order for the product to be used. The Centers for Medicare & Medicaid Services (CMS) determines which products will be paid for by Medicare. Because CMS is the largest single payer in the health-care system, its decisions often help to determine which products will eventually be covered by private insurance companies as well. Finally, the Agency for Healthcare Research and Quality is in the process of increasing its role in conducting post-market product evaluations. These bureaucratic and evaluative hurdles have injected far too much uncertainty into the process and have dried up investment capital for the industry as a whole.

The issue constantly plaguing the industry is cost. Biologic treatments can cost hundreds of thousands of dollars, which can lead both Medicare and other insurers to refuse to cover certain treatments (although Medicare typically does not take cost into account during coverage decisions). When available, these treatments can bankrupt individuals and accelerate the impending bankruptcy of our Medicare trust funds. As a result, politicians on both the left and the right are examining various schemes for controlling costs.

One of the perennial ideas in this area is the re-importation of drugs from foreign countries. Most countries around the world impose price limits on pharmaceutical and other medical products. The United States does not do so, and as a result, prices for brand-name pharmaceuticals are higher in the United States than in other countries. Although there has been little appetite for the imposition of direct price controls in the United States, there have been some indirect efforts, including the re-importation of drugs from other countries, especially Canada.

This is a problematic notion. First, it is not clear that it would save money; the states that have created their own importation programs have generated little interest. Second, there is the matter of safety. Americans ordering drugs from abroad have no guarantee of the provenance of the products. We have an FDA precisely in order to guarantee the safety of the products sold in the United States; purchases from abroad carry no such guarantee, and, indeed, clever counterfeiters have been known to stamp “made in Canada” on products of questionable Third World origin, from places Americans would never think of buying from.

But there is also a philosophical reason to avoid importation schemes. We have a market economy, and this system allows U.S. firms to put in the dollars for research and development that make innovative new products possible. Elizabeth Whelan of the American Council on Science and Health has observed that the U.S. “produces nearly 90 percent of the world’s supply of new pharmaceuticals.” The pricing structure in the United States, which allows American firms to recoup research costs, makes this industry the dominant player in the global medical-products market. Without it, innovation could grind to a halt, and future generations might not benefit from life-saving, life-extending cures just over the horizon.

And this loss is not just a theoretical one. It can be quantified. A study by the Task Force on Drug Importation convened by the Department of Health and Human Services found that the loss of profits caused by re-importation could lead to between four and eighteen fewer drugs per decade. There is no way to know which promising enhancements would be lost.

At the same time, as Sally Pipes of the Pacific Research Institute has shown, the American system also lets its consumers obtain many products far more cheaply than consumers in other nations. This is because our competitive system allows for low-cost generic drugs that drive down prices. Brand-name products are more expensive here, but generics, which are available after patent protections expire, are cheaper and more widely available in the United States than elsewhere.

Another factor that reduces the availability of research-and-development investment is the growth of lawsuits. According to a Pacific Research Institute study, “American companies suffer over $367 billion per year in lost product sales because spending on litigation curtails investment in research and development.” A recent analysis by the Washington Post found that “courts have been flooded with product liability lawsuits in recent years, and statistics show about a third are against drug companies.”

The pain reliever Vioxx alone has prompted over 27,000 lawsuits against Merck, its manufacturer. Merck spent $1 billion defending itself before reaching an almost $5 billion settlement. This figure is actually relatively small compared with the $21 billion Wyeth spent as a result of the recall of the diet drug combination fen-phen. Most famously, perhaps, Dow Corning was forced to declare bankruptcy after being flooded with over 20,000 lawsuits and 400,000 claimants over its silicone breast implants, despite the fact that the evidence has finally demonstrated definitively that they are not harmful.

The results are unmistakable. Dow Corning was forced to remain in bankruptcy for nine years. The uncertainties introduced by the impact of lawsuits and the changes in the policy environment have led to firms banding together through mergers. Wyeth was recently taken over by Pfizer; Merck has merged with Schering-Plough; Roche is buying up Genentech. These mergers mean that, by definition, there will be fewer laboratories working in a competitive way to develop new drugs.

The situation as it stands now is bad enough. But it would be made worse by the proposed elimination of an FDA policy called preemption, which holds that when federal laws come into conflict with state or local laws on drug matters, federal laws prevail. This is particularly important for the FDA, which sets federal safety standards. Preemption makes the FDA standards supreme as long as the manufacturers adhere to FDA guidelines. This policy prohibits trial lawyers from suing manufacturers in circumstances where the manufacturer adhered to the federal guidelines but where state law differs. A recent Supreme Court case, Wyeth v. Levine, opened up this issue by ruling that the FDA’s approval of a medication does not protect the drug’s maker from state-level lawsuits, thereby limiting preemption’s scope.

The case concerned a musician who received an injection of the anti-nausea medication Phenergan. The medication label has a warning that it can cause gangrene if it strikes an artery, which it did in this case, and the musician had to have her arm amputated. She sued Wyeth, the drug’s maker, and a jury awarded her $7.3 million, even though the FDA had approved the medication with a label warning of the drug’s dangers.

This decision presents manufacturers with increased vulnerability to litigation, and will likely encourage them to be far more cautious, and to seek new products that minimize risk rather than ones that maximize benefits. Such increased caution will lead drug makers “to pester the FDA with even more requests to augment safety warnings, reinforcing an existing tendency toward over-warning rather than under-warning,” notes Jack Calfee of the American Enterprise Institute. The result of over-warning is likely to be fewer drugs approved with any kind of risk profile, which will also limit the scope of potential benefits.

The policy consequences of Wyeth v. Levine could be far reaching. The Supreme Court had earlier ruled in favor of preemption, but that decision involved medical devices. By clarifying that preemption now only applies to that area, the court may have created a situation in which the FDA adds years and billions to the costs of new drug approvals on one end and then is given little weight to provide a definitive word on safety on the other.

All of this comes at a time when new drug approvals are far lower than they were a decade ago, and approvals of products in Phase III, the last level of assessment before products go to market, have declined in recent years—indicating that FDA officials may be nervous about being second-guessed. One of the Obama FDA’s first actions has been a new initiative to review the safety and efficacy of 25 medical devices marketed before 1976, a curious act that will have the agency devote precious resources to reexamining old technologies rather than reviewing new ones.

These activities raise the troublesome possibility that the FDA will adopt the old motto of the cautious bureaucrat: “You won’t be called to testify about the drug you didn’t approve.” But when innovation is squelched, who can testify for the life that would have been saved? FDA timidity is especially problematic and often detrimental to public health when it comes to risky new drugs for cancer or new antibiotics for increasingly resistant infections.

Then there is the looming shadow of health reform. One of the great requirements of a systemic overhaul is controlling costs, which were $2.5 trillion last year and growing at a rate triple that of inflation. It is clear that Congress and the administration will have to cut costs in order to come close to paying for an ambitious plan. How they do so could have a devastating impact on medical innovation.

Attempts to universalize our system and pay for it with cost controls that could stifle innovation contradict their own goal, which is, presumably, better health. They also embrace the notion that you can get something for nothing—namely, that innovative new discoveries and better health outcomes can somehow be had without paying for those discoveries to come into being.

We forget the power of the single-celled organism. For most of man’s existence on earth, the power of a single-celled organism to snuff out life was an accepted—and tragic—way of the world. Human beings could be wiped out in vast communicable plagues or simply through ingesting contaminated food or water. In the last century, the advent of the antibiotic has changed all that. For millennia, the only cure for an infection in humans was hope. Today, antibiotic use is so common that public health officials struggle to get people not to overuse antibiotics and thereby diminish their effectiveness.

Just as there is potential danger from the way in which Americans take the power of the antibiotic for granted, so, too, one of the greatest threats to our health and continued welfare is that Americans in the present day, and particularly their leaders, are taking for granted the power, potency, and progress flowing from life-saving medical innovations. And in so doing, they may unknowingly prevent the kind of advance that could contribute as vitally to the welfare of the 21st century as the discovery of the antibiotic altered the course of human history for the better in the century just concluded.

Mr. Troy, deputy secretary of the United States Department of Health and Human Services from 2007 to 2009, is a visiting senior fellow at the Hudson Institute.


Full article: http://online.wsj.com/article/SB124389153780873939.html

Why Are Humans Different From All Other Apes? It’s the Cooking, Stupid


Human beings are not obviously equipped to be nature’s gladiators. We have no claws, no armor. That we eat meat seems surprising, because we are not made for chewing it uncooked in the wild. Our jaws are weak; our teeth are blunt; our mouths are small. That thing below our noses? It truly is a pie hole.

To attend to these facts, for some people, is to plead for vegetarianism or for a raw-food diet. We should forage and eat the way our long-ago ancestors surely did. For Richard Wrangham, a professor of biological anthropology at Harvard and the author of “Catching Fire,” however, these facts and others demonstrate something quite different. They help prove that we are, as he vividly puts it, “the cooking apes, the creatures of the flame.”

The title of Mr. Wrangham’s new book — “Catching Fire: How Cooking Made Us Human” — sounds a bit touchy-feely. Perhaps, you think, he has written a meditation on hearth and fellow feeling and s’mores. He has not. “Catching Fire” is a plain-spoken and thoroughly gripping scientific essay that presents nothing less than a new theory of human evolution, one he calls “the cooking hypothesis,” one that Darwin (among others) simply missed.

Apes began to morph into humans, and the species Homo erectus emerged some two million years ago, Mr. Wrangham argues, for one fundamental reason: We learned to tame fire and heat our food.

“Cooked food does many familiar things,” he observes. “It makes our food safer, creates rich and delicious tastes and reduces spoilage. Heating can allow us to open, cut or mash tough foods. But none of these advantages is as important as a little-appreciated aspect: cooking increases the amount of energy our bodies obtain from food.”

He continues: “The extra energy gave the first cooks biological advantages. They survived and reproduced better than before. Their genes spread. Their bodies responded by biologically adapting to cooked food, shaped by natural selection to take maximum advantage of the new diet. There were changes in anatomy, physiology, ecology, life history, psychology and society.” Put simply, Mr. Wrangham writes that eating cooked food — whether meat or plants or both — made digestion easier, and thus our guts could grow smaller. The energy that we formerly spent on digestion (and digestion requires far more energy than you might imagine) was freed up, enabling our brains, which also consume enormous amounts of energy, to grow larger. The warmth provided by fire enabled us to shed our body hair, so we could run farther and hunt more without overheating. Because we stopped eating on the spot as we foraged and instead gathered around a fire, we had to learn to socialize, and our temperaments grew calmer.

There were other benefits for humanity’s ancestors. He writes: “The protection fire provided at night enabled them to sleep on the ground and lose their climbing ability, and females likely began cooking for males, whose time was increasingly free to search for more meat and honey. While other habilines” — tool-using prehumans — “elsewhere in Africa continued for several hundred thousand years to eat their food raw, one lucky group became Homo erectus — and humanity began.”

You read all this and think: Is it really possible that this is an original bit of news? Mr. Wrangham seems as surprised as we are. “What is extraordinary about this simple claim,” he writes, “is that it is new.”

Mr. Wrangham arrives at his theory by first walking us through the work of other anthropologists and naturalists, including Claude Lévi-Strauss and Darwin, who did not pay much attention to cooking, assuming that humans could have done pretty well without it.

He then delivers a thorough, delightfully brutal takedown of the raw-food movement and its pieties. He cites studies showing that a strict raw-foods diet cannot guarantee an adequate energy supply, and notes that, in one survey, 50 percent of the women on such a diet stopped menstruating. There is no way our human ancestors survived, much less reproduced, on it. He seems pleased to be able to report that raw diets make you urinate too often, and cause back and hip problems.

Even castaways, he writes, have needed to cook their food to survive: “I have not been able to find any reports of people living long term on raw wild food.” Thor Heyerdahl, traveling by primitive raft across the Pacific, took along a small stove and a cook. Alexander Selkirk, the model for Robinson Crusoe, built fires and cooked on them.

Mr. Wrangham also dismisses, for complicated social and economic reasons, the popular Man-the-Hunter hypothesis about evolution, which posits that meat-eating alone was responsible. Meat eating “has had less impact on our bodies than cooked food,” he writes. “Even vegetarians thrive on cooked diets. We are cooks more than carnivores.”

Among the most provocative passages in “Catching Fire” are those that probe the evolution of gender roles. Cooking made women more vulnerable, Mr. Wrangham ruefully observes, to male authority.

“Relying on cooked food creates opportunities for cooperation, but just as important, it exposes cooks to being exploited,” he writes. “Cooking takes time, so lone cooks cannot easily guard their wares from determined thieves such as hungry males without their own food.” Women needed male protection.

Marriage, or what Mr. Wrangham calls “a primitive protection racket,” was a solution. Mr. Wrangham’s nuanced ideas cannot be given their full due here, but he is not happy to note that cooking “trapped women into a newly subservient role enforced by male-dominated culture.”

“Cooking,” he writes, “created and perpetuated a novel system of male cultural superiority. It is not a pretty picture.”

As a student, Mr. Wrangham studied with the primatologist Jane Goodall in Gombe, Tanzania, and he is the author, with Dale Peterson, of a previous book called “Demonic Males: Apes and the Origins of Human Violence.” In “Catching Fire” he has delivered a rare thing: a slim book — the text itself is a mere 207 pages — that contains serious science yet is related in direct, no-nonsense prose. It is toothsome, skillfully prepared brain food.

“Zoologists often try to capture the essence of our species with such phrases as the naked, bipedal or big-brained ape,” Mr. Wrangham writes. He adds, in a sentence that posits Mick Jagger as an anomaly and boils down much of his impressive erudition: “They could equally well call us the small-mouthed ape.”


Full article: http://www.nytimes.com/2009/05/27/books/27garn.html?hpw

Photo: http://www.economist.com/science/displaystory.cfm?story_id=13139619


See also:

From Studying Chimps, a Theory on Cooking

Richard Wrangham, a primatologist and anthropologist, has spent four decades observing wild chimpanzees in Africa to see what their behavior might tell us about prehistoric humans. Dr. Wrangham, 60, was born in Britain and since 1989 has been at Harvard, where he is a professor of biological anthropology. His book, “Catching Fire: How Cooking Made Us Human,” will be published in late May.


Richard Wrangham

He was interviewed over a vegetarian lunch at last winter’s American Association for the Advancement of Science meeting in Chicago and again later by telephone. An edited version of the two conversations follows.

Q. In your new book, you suggest that cooking was what facilitated our evolution from ape to human. Until now scientists have theorized that tool making and meat eating set the conditions for the ascent of man. Why do you argue that cooking was the main factor?

A. All that you mention were drivers of the evolution of our species. However, our large brain and the shape of our bodies are the product of a rich diet that was only available to us after we began cooking our foods. It was cooking that provided our bodies with more energy than we’d previously obtained as foraging animals eating raw food.

I have followed wild chimpanzees and studied what, and how, they eat. Modern chimps are likely to take the same kinds of foods as our early ancestors. In the wild, they’ll be lucky to find a fruit as delicious as a raspberry. More often they locate a patch of fruits as dry and strong-tasting as rose hips, which they’ll masticate for a full hour. Chimps spend most of their day finding and chewing extremely fibrous foods. Their diet is very unsatisfying to humans. But once our ancestors began eating cooked foods — approximately 1.8 million years ago — their diet became softer, safer and far more nutritious.

And that’s what fueled the development of the upright body and large brain that we associate with modern humans. Earlier ancestors had a relatively big gut and apelike proportions. Homo erectus, our more immediate ancestor, had long legs and a lean, striding body. In fact, he could walk into a Fifth Avenue shop today and buy a suit right off a peg.

Our ancestors were able to evolve because cooked foods were richer, healthier and required less eating time.

Q. To cook, you need fire. How did early humans get it?

A. The australopithecines, the predecessors of our prehuman ancestors, lived in savannahs with dry uplands. They would often have encountered natural fires and food improved by those fires. Moreover, we know from cut marks on old bones that our distant ancestor Homo habilis ate meat. They certainly made hammers from stones, which they may have used to tenderize it. We know that sparks fly when you hammer stone. It’s reasonable to imagine that our ancestors ate food warmed by the fires they ignited when they prepared their meat.

Now, once you had communal fires and cooking and a higher-calorie diet, the social world of our ancestors changed, too. Once individuals were drawn to a specific attractive location that had a fire, they spent a lot of time around it together. This was clearly a very different system from wandering around chimpanzee-style, sleeping wherever you wanted, always able to leave a group if there was any kind of social conflict.

We had to be able to look each other in the eye. We couldn’t react with impulsivity. Once you are sitting around the fire, you need to suppress reactive emotions that would otherwise lead to social chaos. Around that fire, we became tamer.

Q. Your critics say you have a nice theory, but no proof. They say that there’s no evidence of fireplaces 1.8 million years ago. How do you answer them?

A. Yes, there are those who say we need archaeological proof that we made fires 1.8 million years ago. And yes, thus far, none have been found. There is evidence from Israel showing the control of fire at about 800,000 years ago. I’d love to see older archaeological signals. At some point, we’ll get them.

But in the meantime, we have strong biological evidence. Our teeth and our gut became smaller about 1.8 million years ago. This change can only be explained by the fact that our ancestors were getting more nutrition and softer foods. And this could only have happened because they were cooking. The foraging diet that we see in modern chimps just wasn’t enough to fuel it.

Q. I understand that you once embarked on a chimpanzee diet. What was that like?

A. In 1972, when I was studying chimpanzee behaviors in Tanzania, I thought it would be interesting to see how well I could survive on what chimps ate. I asked Jane Goodall, the director of the project, if I could live like a chimp for a bit. She said O.K. Now I wanted to be really natural and truly be a part of the bush and so I added, “I’d like to do it naked.” There, she put her foot down: “You’ll wear at least a loincloth!”

In the end, I never did the full experiment. However, there were times when I went off without eating in the mornings and tried living off whatever I found. It left me extremely hungry.

Q. What do you usually eat?

A. Oh, ordinary Western industrialized food. I won’t eat an animal I’m not prepared to kill myself. I haven’t eaten a mammal in about 30 years, except a couple of times during the 1990s, when I ate some raw monkey the chimps had killed and left behind.

I wanted to see what it tasted like. The black and white Colobus monkey is very tough and unpleasant. The red Colobus is sweeter. The chimps prefer it for good reason.

Q. You ate raw monkey for science?

A. Yes. I feel that by getting under the skin of a chimpanzee, you get insights that you don’t otherwise get. That’s how I came to this understanding about the role of cooking.

Q. Since you believe that the raw fare of prehistory would leave a modern person starving, does that mean we are adapted to the foods that we currently eat — McDonald’s, pizza?

A. I think we’re adapted to our diet. It’s that our lifestyle is not. We’re adapted in the sense that our bodies are designed to maximize the amount of energy we get from our foods. So we are very good at selecting the foods that produce a lot of energy. However, we take in far more than we need. That’s not adaptive.


Full article and photo: http://www.nytimes.com/2009/04/21/science/21conv.html?ref=books


See also:

What’s cooking?


Women’s menstruation genes found


The genes were found on chromosomes six and nine

Scientists say they have begun to crack the genetic code that helps determine when a girl becomes a woman.

A UK-led team located two genes on chromosomes six and nine that appear to strongly influence the age at which menstruation starts.

The Nature Genetics study also provides a clue for why girls who are shorter and fatter tend to get their periods months earlier than classmates.

The genes sit right next to DNA controlling height and weight.

The two- to three-year transition from childhood to adult body size and sexual maturity
A complex, multi-staged process involving growth acceleration, weight gain and the appearance of secondary sexual characteristics
Can happen earlier in overweight and obese children
Early puberty is associated with increased risk of obesity, diabetes and cancer

A second paper, published in the same journal, also concludes that one of the two genes highlighted by the first study plays a key role in the timing of puberty in both girls and boys.

Reproductive lifespan is closely linked to the risk of developing conditions such as heart disease, breast cancer and osteoporosis.

It is thought that the female sex hormone oestrogen – produced at higher rates during a woman’s reproductive life – raises the risk of these diseases.

Therefore, the earlier a woman goes through puberty, the more risk she may be at.

So the researchers say their work not only improves our understanding of the genetics underpinning development, it may ultimately aid the fight against disease.

However, they also accept that the onset of puberty is influenced by factors such as nutrition and exercise, and the effect of a single gene is likely to be relatively small.

Developing earlier

In the western world children are reaching puberty at younger and younger ages – some girls at the age of seven.

Many blame rising obesity rates because, generally, girls who reach menstruation earlier in life tend to have a greater body mass index (BMI) and a higher ratio of body fat than those who begin menstruating later.

From its analysis, the team, led by Exeter’s Peninsula Medical School, predicts that one in 20 females carries two copies of each of the gene variants that result in menstruation starting earlier – approximately four and a half months earlier than in those with no copies of the variants.

In collaboration with research institutions across Europe and the US, they studied 17,510 women from around the world, including women of European descent who reported reaching menstruation between the ages of nine and 17.

When they split the women up according to the age they began menstruating, certain gene patterns appeared.

Scanning the whole genome enabled them to home in on these differences and pinpoint the genes most likely responsible.

Researcher Dr Anna Murray said: “This study provides the first evidence that common genetic variants influence the time at which women reach sexual maturation.

“Our findings also indicate a genetic basis for the associations between early menstruation and both height and BMI.”

Biological mechanisms

Co-worker John Perry said: “Understanding the biological mechanisms behind reproductive lifespan may also help inform us about associated diseases that affect a lot of women as they get older, including diabetes, heart disease and breast cancer.”

The second paper, led by the MRC Epidemiology Unit at Cambridge, analysed genetic information from thousands of people.

It linked a specific variant of one of the two genes highlighted by the Exeter team – LIN28B – with earlier breast development in girls, and earlier voice breaking and pubic hair development in boys.

Lead researcher Dr Ken Ong said: “LIN28B works by controlling whether or not other genes are active.

“There are a number of such ‘master switch’ genes known, but this is the first evidence linking such a gene to growth and physical maturation.”

Dr Aric Sigman, psychologist and fellow of the Royal Society of Medicine, said: “Early menstruation is a health issue because beyond being an inconvenient surprise for a girl and her parents, it’s also associated with a higher risk of a variety of diseases and psychological problems.

“Girls maturing earlier are more likely to become depressed, delinquent, aggressive, socially withdrawn, and to suffer sleep problems, drinking, smoking, drug abuse, lower self-esteem and suicide attempts.

“They’re also more likely to exhibit poor academic performance in high school than on-time or later maturing peers.

“It is important that we understand why early menstruation occurs and these findings bring us closer to explaining this in some girls.”

Three other papers, also published in Nature Genetics, throw up other candidate genes which appear to be involved in the onset of puberty.


Full article and photos: http://news.bbc.co.uk/2/hi/health/8046457.stm

Their Calling Is Defending Rats, Yet These Folks Aren’t Lawyers

Animal Lovers Go to Bat for Lab Rodents; Karen Borga Has Something Up Her Sleeve

In her fight for the rights of some of the smallest creatures, Stephanie Ernst offers a video of a frolicking, fluffy mammal snuggling up with a pet cat.

“Rats don’t get a fair shake,” she writes in an introduction to the video on her animal-rights blog. “This one is quite adorable and may lead you to see rats a little differently.”

Unfortunately, stripped along the bottom of the video is an ad for an exterminator automatically generated by the YouTube.com service hosting the online clip.

“Immediate rat solutions!” it reads. “Free inspection the day you call.”

Says Ms. Ernst: “It’s horrible.”

Ms. Ernst, a resident of St. Louis, is most concerned about the welfare of lab rodents. Animal advocates say rats and mice account for 90% of the animals used in testing at university laboratories and other research facilities in the U.S. In 2002, the Animal Welfare Act was amended to exclude rodents from protections offered to bigger lab animals including dogs, monkeys and even guinea pigs.

“Rats and mice tend to get a bad rap” that influences people from the time they are children, says Ms. Ernst. “We just have these biases built in that are not really representative of who they are.”

Animal-rights advocates in the U.S. have scored coups in recent years for an assortment of uncuddly animals. A new law requires bigger cages for egg-laying chickens in California. Foie gras, a delicacy made from the livers of fattened geese and ducks, has been banished from some restaurant menus.

But public sympathy for rats and mice hasn’t grown much in the three decades since the animal-rights movement first organized in the U.S. Viewed as pests and greeted with shrieks, rats are much less likely to attract public sympathy than, say, the furry bunnies that serve as the poster critters for cutting back on animal testing.

So, rat lovers have a tough job. Researchers who use federal funds are asked to adhere to basic guidelines for rodents, such as avoiding overcrowded cages. But privately funded research labs are legally bound by no rules in their testing of rats and mice.


“I used to see rats and think, ‘Ew.’ Now I see rats and think, ‘Those rats have probably got a family somewhere.'” — Chad Sandusky

“You see people shut down if you talk about how a rat can suffer,” says Chad Sandusky, director of toxicology and research at the Physicians Committee for Responsible Medicine, a group that fights for animal rights and advocates vegetarianism.

Years ago, during his doctorate research on allergic reactions in humans, Mr. Sandusky experimented on and euthanized many rodents. The 64-year-old pharmacologist and toxicologist now works to persuade chemical and pesticide companies to carry out effective experiments using computerized tests or other means that don’t involve animals.

“I’m working off my bad karma,” he said.

Mr. Sandusky’s transformation came gradually as he reviewed studies involving rodents and other animals for the Environmental Protection Agency. He concluded animal studies were too expensive and time-consuming, and the results didn’t merit the sacrifice.

“I used to see rats and think, ‘Ew,’ ” Mr. Sandusky said. “Now I see rats and think, ‘Those rats have probably got a family somewhere.’ ”

Mr. Sandusky and other activists have succeeded in getting companies to listen to their concerns about using rodents in experiments. But rarely does anyone actually stop using rats and mice altogether. So activists are left to seek a better quality of life for the rats and mice in the lab.

“These animals are in full view 24-7, and they don’t have any ability to do anything other than drink water and eat pellets and, well, you can imagine,” says Mr. Sandusky.

Activists at People for the Ethical Treatment of Animals, who say they want “empty cages not bigger cages,” nevertheless have come up with a list of guidelines they shop to private companies in hopes of changing their treatment of caged mice and rats.

The 15-page “environmental enrichment” document calls for nesting materials that rodents can shred for stress relief. It also advocates fresh bedding (but not too fresh because some male rodents like the familiar smell of home), and elevated wire lids, a sort of cathedral ceiling, for cages.

Some activists point to research detailing how toys such as the Translucent Small Animal Shoe, the Toob-a-Loop and the Mouse Igloo can keep the animals happy. Opaque plastic structures provide “a modicum of privacy” for the rodents, as described in the PETA document.

When Jessica Sandler, director of PETA’s regulatory testing division, engages companies to push for better treatment of laboratory rodents, she talks about “animals” instead of “rodents.”

“I certainly don’t emphasize the fact that they are mice and rats,” Ms. Sandler says. “A lot of these tests are also done on rabbits and guinea pigs, so I lump them in. I know a lot of people will empathize more with a cute rabbit.”

At a meeting a couple of years ago with representatives at General Electric Co., which contracted with companies that use rodents for safety tests in GE’s plastics division, activists laid out their simple request: a solid surface for rodent cages.

“To a layperson like me and you, you may think, ‘Well, do we really need to do something like that for these animals?’ But our scientists certainly thought it was important and relatively easily done,” said Gary Sheffer, a GE spokesman. “It wasn’t something we dismissed as being ridiculous.”

GE complied with the request. It has since sold off its plastics division.

Amber Alliger is writing her dissertation for a doctorate in psychology at Hunter College in New York on the benefits to science from improving animal welfare in laboratories.


“Inside my shirt they feel secure… That’s how they get really used to their human.” –Karen Borga

Once they get to know their handlers, rats are so gentle that some people let them lick their eyelashes clean — and even their teeth, Ms. Alliger says.

“It’s amazing how gentle they are,” she says. “You get nipped a few times and if you say ‘Ow’ loud enough, they’re like, ‘Oh, that hurts’ and will stop biting.”

Ms. Alliger has found homes for dozens of rodents she has used in her experiments.

A friend, Karen Borga, adopted seven of them — black “hooded” rats so named because they look like they’re wearing black hooded coats on their white bodies. Ms. Borga totes them in a blue portable cat carrier she has outfitted with a rat hammock and a Schweppes Ginger Ale box to create an extra bedroom.

As she explained the accommodations, three of the rats, named Seven, Eight and Eleven (Ms. Borga kept their laboratory names) scuttled across a couch. Seven hopped over a hurdle created out of plastic tubing, then crawled up Ms. Borga’s sleeve.

“Inside my shirt they feel secure… That’s how they get really used to their human,” she says. “They love it.”


Full article and photos: http://online.wsj.com/article/SB124243142041325619.html

Reconstructing the Master Molecules of Life


The molecules at the beginning of life were probably made of RNA, a close chemical cousin of DNA. RNA can both act as an enzyme, to control chemical reactions, and record biological information in the sequence of bases along its backbone.

But how could the first RNA molecule have emerged from the organic chemicals thought to have been present on the primitive earth?

Chemists can plausibly show how each of the three components of an RNA nucleotide – a base, a sugar, and a phosphate group – could have formed spontaneously. But the base cannot attach to the sugar, known as ribose, because the reaction is energetically unfavorable.

Researchers have been stuck at this roadblock for 20 years. Chemists at the University of Manchester, led by John D. Sutherland, have now provided a way around it.

The diagram above shows, in blue, the reaction that doesn’t work and, in green, the new work-around.

Both reactions start off with simple chemicals believed to have been present on the primitive earth. They are glyceraldehyde (10), cyanamide (8), cyanoacetaldehyde (5) and cyanoacetylene (7).

These chemicals will naturally form the base cytosine (3) and ribose (4). But the cytosine cannot be made to join to the ribose under natural conditions.

Working through all possible chemical combinations for 10 years, Dr. Sutherland’s team discovered a different and quite unintuitive route. Their reaction system, shown in green, combines the carbon-nitrogen chemistry that leads to the bases with the carbon-oxygen chemistry that makes the sugars. They make a half-sugar/half-base (11), add another half-sugar (12) and then a half-base to make an intermediate (13) that easily becomes ribocytidine phosphate.

Ultraviolet light converts ribocytidine to the uracil-containing nucleotide. The Manchester team has not yet found how to make the other two nucleotides of which RNA is composed, but this should not be an insuperable problem.

Once the four nucleotides have been formed, they would fairly easily zip together to make an RNA molecule. If Dr. Sutherland’s work is correct, it provides for the first time a plausible explanation of how an information-carrying biological molecule like RNA could have arisen on the primitive earth.


Full article and photo: http://www.nytimes.com/2009/05/14/science/14visuals-web.html


See also:

Chemist Shows How RNA Can Be the Starting Point for Life



Chemist Shows How RNA Can Be the Starting Point for Life

An English chemist has found the hidden gateway to the RNA world, the chemical milieu from which the first forms of life are thought to have emerged on earth some 3.8 billion years ago.

He has solved a problem that for 20 years has thwarted researchers trying to understand the origin of life — how the building blocks of RNA, called nucleotides, could have spontaneously assembled themselves in the conditions of the primitive earth. The discovery, if correct, should set researchers on the right track to solving many other mysteries about the origin of life.

The author, John D. Sutherland, a chemist at the University of Manchester, likened his work to a crossword puzzle in which doing the first clues makes the others easier. “Whether we’ve done one across is an open question,” he said. “Our worry is that it may not be right.”

Other researchers believe he has made a major advance in prebiotic chemistry, the study of the natural chemical reactions that preceded the first living cells. “It is precisely because this work opens up so many new directions for research that it will stand for years as one of the great advances in prebiotic chemistry,” Jack Szostak of the Massachusetts General Hospital wrote in a commentary in Nature, where the work is being published on Thursday.

Scientists have long suspected that the first forms of life carried their biological information not in DNA but in RNA, its close chemical cousin. Though DNA is better known because of its storage of genetic information, RNA performs many of the trickiest operations in living cells. RNA seems to have delegated the chore of data storage to the chemically more stable DNA eons ago. If the first forms of life were based on RNA, then the issue is to explain how the first RNA molecules were formed.

For more than 20 years researchers have been working on this problem. The building blocks of RNA, known as nucleotides, each consist of a chemical base, a sugar molecule called ribose and a phosphate group. Chemists quickly found plausible natural ways for each of these constituents to form from natural chemicals. But there was no natural way for them all to join together.

The spontaneous appearance of such nucleotides on the primitive earth “would have been a near miracle,” two leading researchers, Gerald Joyce and Leslie Orgel, wrote in 1999. Others were so despairing that they believed some other molecule must have preceded RNA and started looking for a pre-RNA world.

The miracle seems now to have been explained. In the article in Nature, Dr. Sutherland and his colleagues Matthew W. Powner and Béatrice Gerland report that they have taken the same starting chemicals used by others but have caused them to react in a different order and in different combinations than in previous experiments.

Instead of making the starting chemicals form a sugar and a base, they mixed them in a different order, in which the chemicals naturally formed a compound that is half-sugar and half-base. When another half-sugar and half-base are added, the RNA nucleotide called ribocytidine phosphate emerges.

A second nucleotide is created if ultraviolet light is shone on the mixture. Dr. Sutherland said he had not yet found natural ways to generate the other two types of nucleotides found in RNA molecules, but synthesis of the first two was thought to be harder to achieve.

If all four nucleotides formed naturally, they would zip together easily to form an RNA molecule with a backbone of alternating sugar and phosphate groups. The bases attached to the sugar constitute a four-letter alphabet in which biological information can be represented.

“My assumption is that we are here on this planet as a fundamental consequence of organic chemistry,” Dr. Sutherland said. “So it must be chemistry that wants to work.”

The reactions he has described look convincing to most other chemists. “The chemistry is very robust — all the yields are good and the chemistry is simple,” said Dr. Joyce, an expert on the chemical origin of life at the Scripps Research Institute in La Jolla, Calif.

In Dr. Sutherland’s reconstruction, phosphate plays a critical role not only as an ingredient but also as a catalyst and in regulating acidity. Dr. Joyce said he was so impressed by the role of phosphate that “this makes me think of myself not as a carbon-based life form but as a phosphate-based life form.”

Dr. Sutherland’s proposal has not convinced everyone. Dr. Robert Shapiro, a chemist at New York University, said the recipe “definitely does not meet my criteria for a plausible pathway to the RNA world.” He said that cyanoacetylene, one of Dr. Sutherland’s assumed starting materials, is quickly destroyed by other chemicals and its appearance in pure form on the early earth “could be considered a fantasy.”

Dr. Sutherland replied that the chemical is consumed fastest in the reaction he proposes, and that since it has been detected on Titan there is no reason it should not have been present on the early earth.

If Dr. Sutherland’s proposal is correct it will set conditions that should help solve the many other problems in reconstructing the origin of life. Darwin, in a famous letter of 1871 to the botanist Joseph Hooker, surmised that life began in “some warm little pond, with all sorts of ammonia and phosphoric salts.” But the warm little pond has given way in recent years to the belief that life began in some exotic environment like the fissures of a volcano or in the deep sea vents that line the ocean floor.

Dr. Sutherland’s report supports Darwin. His proposed chemical reactions take place at moderate temperatures, though one goes best at 60 degrees Celsius. “It’s consistent with a warm pond evaporating as the sun comes out,” he said. His scenario would rule out deep sea vents as the place where life originated because it requires ultraviolet light.

A serious puzzle about the nature of life is that most of its molecules occur in only one of two possible mirror-image forms, right-handed or left-handed, whereas nonbiological chemistry produces mixtures of both. Dr. Joyce said he had hoped an explanation for the one-handedness of biological molecules would emerge from prebiotic chemistry, but Dr. Sutherland’s reactions do not supply any such explanation. One is certainly required because of what is known to chemists, by a play on chemical words, as original sin. The left-handed sugars prevent the right-handed ones from joining together in a chain, so they must somehow be destroyed before life can begin.

Dr. Sutherland said he was working on this problem and on others, including how to enclose the primitive RNA molecules in some kind of membrane as the precursor to the first living cell.


Full article: http://www.nytimes.com/2009/05/14/science/14rna.html?ref=global-home