Thursday, July 30, 2009

Darwin's Rubbish

We're in Maine for a few days for a meeting in Bar Harbor. It's a celebration of the 50th anniversary of 'short courses' at Jackson Labs, an organization that focuses on genetic research, primarily having to do with health. They've developed and maintain numerous model mouse strains for research -- diabetic mice, obese mice, mice susceptible to cancer, and so on. If you work in a mammalian genetics lab, you've ordered mice from Jackson Labs.

The meeting tomorrow is the culminating day of the "Symposium celebrating the 50th Annual Short Course on Medical and Experimental Mammalian Genetics", with a focus on the future of genetic research and an emphasis on personalized medicine. Speakers are charged with addressing the question, "What are the scientific, technical, social and legal implications of 21st century medicine?" Mario Capecchi from the University of Utah will be talking on "The Future of Development", Janet Davison Rowley on "The Future of Cancer", Richard Axel on "The Future of Neurogenetics", Francis Collins on "The Future of Individualized Medicine", and so on. Ken was asked to speak on "The Future of Evolution", but has changed the title (to "Darwin's 'Rubbish': 150 Years of Evolution and Counting"), as the future of evolution seemed obvious (it will happen, but its course is unpredictable) and wouldn't fill a half hour.

Ken's new title refers to a letter Darwin wrote to Asa Gray on Sept 5, 1857, enclosing a brief sketch of his species theory, two years before On the Origin of Species appeared. Gray was a famous botanist at Harvard, and a long-time correspondent of Darwin's. In the letter, Darwin wrote,
“This sketch is most imperfect; but in so short a space I cannot make it better. Your imagination must fill up many wide blanks. Without some reflection, it will appear all rubbish; perhaps it will appear so after reflection.”
More to come.

Monday, July 27, 2009

Ethical issues in personalized 'genomic' medicine

Ken was just at a very interesting conference on ethical issues that will pertain to the use of genomic data to 'personalize' medicine in the 21st century. Of course, all such conferences are presumptuous in the sense that nobody can really know what things will be like. The record of pundits and experts (scientific, economic, or political) is so poor it's a wonder there is employment for any of us. In biomedicine and genetics, we've made so many false promises that if we were Pinocchio we'd all be sprouting giant Sequoias for noses.

Nonetheless, using individual genotypes to try to predict or diagnose disease is going to be a fad for at least some years, until it either proves to be a bonanza for health care or a bust with little definitive power. So much is going to be invested in the effort that if it doesn't pay off it will be for good reasons -- that is, we'll have learned a lot about biology in the process of not improving medicine very much.

Likewise, if individual genotypes do prove to be of high predictive or diagnostic value, that will mean we know the genes and hence will be able to figure out why they lead to disorders and then, presumably, we'll be able to engineer some prevention or therapy.

'Personalized medicine' is a lobbying phrase, in that medicine has always been personalized, and adding the term 'genomic' is also a lobbying phrase for support to attempt to boil your and my health (and who knows what else?) down to estimable, powerful, genetically based effects. The fact that it's a catch-phrase to sell personal direct-to-consumer genetic advice, or other clinical or commercial products, does not mean it's bad or won't work. But at this stage, we need to be aware of the vested-interest component. In principle (but not in America), we could keep quiet until we actually knew it would work before we started selling predictive genomic services.

In fact, there are many traits for which an aberrant gene is indisputably known. Many investigators are working on trying to understand them. They are the 'Mendelian' diseases that are almost always due to aberrant function in the same gene (like cystic fibrosis or sickle-cell anemia), and fractions of more complex diseases in which some cases are due to a single gene (like breast cancer associated with mutations in the BRCA1 and 2 genes) but most cases aren't. Personalized medicine, whether predictive, diagnostic, or clinical, is quite important in these instances, and the main ethical issues are things like whether prenatal screening or abortion are justified, etc.

Ethical issues abound, however, in the case of most traits, where genotypes are only vaguely known or have weak predictive power. There the question is what the relationship between a known genotype and actual risk is, and at what level of risk something should be done about it. Or whether, if nothing can be done about it, it is useful to worry people.

Who gets to see the information in either case is important. Can or should it be used to force treatment or preventive measures on people as a condition of insurability? Or to adjust health insurance premiums? Or to screen relatives? Or decide about employability?

Weak predictive power often means such incomplete knowledge that the genotype may not, in fact, have a reliable or replicable effect on risk. The problem here, which we are already beginning to see, is that genotyping leads people to confront their physicians with diagnoses that the physician may or may not agree with (or understand, since much of this area is quite complex), but may feel obliged to do something about.

In a country in which the health care system is already overburdened, and will become even more so as the population ages, personalized genotyping can lead to large-scale over-diagnosis, new and unnecessary testing, and lifelong costly maintenance such as multiple screening checkups, preventive medication, and so on. Much in the way of profit to the system, much in the way of distraction for lawyer-wary doctors, but not much in the way of additional health. Indeed, overdiagnosis leads to overtreatment and hence actual increase in risk.

These and other issues, like the value (or not) of 'racial profiling' in medicine, were discussed at this meeting. There aren't answers, exactly, but at least the issues are being raised. Whether the issues are being examined deeply enough is another matter. The deeper questions -- for example, whether some of these activities should be legal at all, or how much research money should be invested (or wasted, depending on your viewpoint) -- get less discussion, because our system places vested interests across the spectrum of people, from commercial to academic to clinical.

Certainly, as everyone agreed, the genomics perspective is here to stay at least for a while. Probably, much of the promise and hype will prove to be false and will simply fade away. What is discovered that is useful will become part of standard practice and a source of better diagnosis and treatment. It is usual that most of what people claim at this or any time proves to be rather worthless. That's likely to be the same in this instance. But, as is also usual, some gains are made and they set the stage for the next wave of ideas about how genes work and how health can be improved.

Friday, July 24, 2009

New Yorker caption contest winner -- priceless


“O.K., let's slowly lower in the grant money.”


Talking trees

A number of years ago, some wacky botanists suggested that trees might communicate with each other by releasing chemicals into the air, presumably for the purpose of signaling danger. Most other botanists dismissed this out of hand, saying that these substances would be so diluted by the time they wafted to other trees that there would be no signal left, not to mention that other trees couldn't possibly have receptors for these signals.

Well, evidence for just this has been growing over the past decade, with work showing that plants do indeed release many substances that do things like facilitate communication between the organs of a single plant and between plants, as well as repel destructive herbivores. A recent paper in Current Opinion in Plant Biology (Protective perfumes: the role of vegetative volatiles in plant defense against herbivores, Sybille B Unsicker, Grit Kunert and Jonathan Gershenzon, 2009, 12:1-7, discussed this week on the BBC World Service radio program, Science in Action with Jon Stewart) adds to the understanding of the role of such chemical release in plant defenses.

Details of the plant immune system have been known for a long time, but these have generally concerned how the plant launches a cascade of internal defensive responses to attack. When a bug bites, the plant responds by killing the area around the attacker, and launching a systemic chemical defense. Now it seems that volatiles released by plants can actually repel bugs that come to feed, attracted to this huge meal that can't get away. At least one volatile mimics a compound produced by aphids when they've been attacked, warning other aphids to stay away. Other volatiles inhibit pests from depositing their eggs on or in leaves. Indeed, more mechanisms are being documented all the time.

The system is more complicated than just repelling herbivores, however. Some of these substances actually attract herbivore predators and parasitoids. For example, researchers attached artificial caterpillars to the leaves of trees infested with autumnal moth, and birds attacked these caterpillars much more frequently than those on uninfested trees, presumably because of the volatiles being released that attracted them. Other attackers are more susceptible to parasitoids when they are on an infested tree, again presumably attracted by the substances being released by the tree.

Among other things this teaches us, it is yet another beautiful example of the importance of signaling in evolution. We find this a fascinating area of research, and yet more support for the idea that if you are open to looking for cooperation, you'll find it everywhere.

Wednesday, July 22, 2009

Bacteria R Us

Coincidentally, Olivia Judson had a piece on our bacterial fellow travelers in the New York Times yesterday.

Tuesday, July 21, 2009

We are our viral load

A very interesting paper in the July 10 issue of Cell caught our attention. Written by three immunologists and titled "Redefining Chronic Viral Infection", the paper discusses the 'virome', the viruses that are a constant part of our 'metagenome', and their role in health and disease. The idea that the multitude of bacteria colonizing our bodies inside and out, many of them essential for our survival, should rightly be considered part of our own genome (hence the idea that our own DNA is just one part of the metagenome that keeps us alive) is not new, and is discussed widely, as well as in our books. But the extension of this idea to the viral load that most of us carry for most of our lives has not yet been widely considered.

It's been estimated that a human being carries 10 times as many bacterial cells on or in it as it does cells of its own. And roughly 8% of our genome appears to be sequences that were incorporated from infecting viruses. The number of viral genomes occupying a human, if the metagenome data are correct, will be many times the number of copies of our own genome.

Some viruses can cause chronic infection, while retroviruses can integrate into our chromosomes, and the effects can range from severe disease and death to no apparent disease at all. The same viral load in a person with a compromised immune system can cause severe disease, while the immune system of a healthy person will keep infection at bay, suggesting that the immune system constantly battles these viruses, and does so for a lifetime. Interestingly, Virgin et al. argue that this constant surveillance by the immune system "may fundamentally alter the response of the host to new infections, vaccines, or neo-epitopes that emerge during immune selection of viral variants."

It's worth quoting the paper at length here, not for the details per se, but with respect to the effects that living with the virome has on our immune system and our subsequent susceptibility to non-viral infection.
New evidence in animals indicates that chronic virus infection can fundamentally alter innate immunity to nonviral pathogens. IFN-γ expression during herpesvirus latency can symbiotically protect the host from infection by the bacteria Listeria monocytogenes and Yersinia pestis, the causative agent of plague (Barton et al., 2007). Thus, the détente developed between herpesviruses and their hosts over tens of millions of years of coevolution may offer benefits to the host. This protection may come at the cost of enhanced autoimmunity (Peacock et al., 2003). In addition, the prolonged presence of Sendai virus viral nucleic acids in mice is associated with IL-13-dependent NKT cell activation that can, in turn, contribute to reactive airway disease (Kim et al., 2008). Further, abnormal interferon secretion by plasmacytoid dendritic cells predisposes to secondary infection during chronic LCMV infection (Zuniga et al., 2008). It has been proposed that chronic activation of innate immune responses contributes to immune dysfunction in HIV and SIV infection (Mandl et al., 2008). Together, these examples provide a convincing case for a significant immunologic imprint of chronic viral infection on the nature of innate immune responses. Much remains to be done to define the balance between immunologic benefit and immunologic harm for chronic infection of humans by viruses that seldom cause overt disease.
Like much else that we discuss in this blog, new discoveries regularly add to, but rarely if ever subtract from, the complexity of factors that contribute to biological traits. This is relevant, as we have many times said, to the use and interpretation of genetic epidemiological approaches, such as GWAS (trying to associate specific genetic variants in individuals to their traits). (Indeed, this paper points out that using the sterile, virus-free laboratory mouse as a model for disease may be futile and irrelevant to the actual context within which humans develop disease.) It is worth repeating that while this complexity makes prediction from genotype to phenotype problematic in most instances (with some very clear-cut truly genetic traits, including diseases, excepted), it does not suggest that genes are irrelevant.

It is just that with the complexity of genotypic, microbial, and other environmental factors, along with a hefty dose of chance, there are many ways to arrive at a given trait, normal or disease. That is the stiff lesson Nature is teaching, if we care to listen.

So, this is yet another example of how further knowledge makes the story of health and disease more complex, not less so, and undoubtedly more interesting if less aligned with a natural yearning for simplicity and the hopes that, in regard to disease, simple cures will be just around the scientific corner. And, seeing the interaction of the organism with the virome as essentially an example of cooperation rather than a battle, adds an important new piece to the view of the organism as an ecosystem that relies for its survival on cooperation between its intrinsic and extrinsic parts.

Friday, July 17, 2009

Francis Collins and the NIH

Francis Collins, long-time director of the US National Human Genome Research Institute (part of our National Institutes of Health), is reported to be President Obama's choice to be the next NIH director. This is a curious choice and it is receiving comment from both Nature and Science this week (not to mention many blogs), and for good reasons.

First, on the surface it suggests a complete and total victory for the genetic view of life. That might have been fine for the Genome Institute, but seems much less so for NIH overall, because many if not most problems in both medicine and public health are not about genes or genetic variation (though they involve them at least indirectly) but are about environments, many kinds of therapies, prevention, and so on. One doesn't have to ignore the fact that genetics is certainly fundamental to life, and that molecular biology will become increasingly important, to know that (for example) most common diseases have little to do with genetic variation in any sensible way. Forcing things through a genetic conceptual lens will distort them, and root them in glamorous technology rather than concept or accountability for results.

As director of NHGRI, Dr Collins did a lot of things that can be praised and others that can be criticized. First and foremost among the former, perhaps, are that he nobly and successfully struggled to keep genome sequence and variation data in the public domain. But he also directly or indirectly intimidated other NIH agencies to get into the genome game, or even to contribute to NHGRI efforts. That did, and still does, coopt funds that could be used for other things instead. Partly, Francis rode a fashion, partly an explosion of genuinely new and important knowledge, and partly the Washington money game.

The other issue is Dr Collins' brand of Christian fundamentalism. He says he had a conversion experience and believes in a 'personal' God, and he's written a book about it. In the US context, that has implications for policy, not all of them good (and not all of them Constitutional, perhaps). He has done medical missionary work in Africa, as we understand it, but we don't know what kind of missionary work. If it was out in the sweaty hinterlands where people really need help, then he deserves and should get all the praise that is available for being consistent with the basic Christian principles that not all Christians follow (to say the least).

But a personal God intervenes in the world, and that is inconsistent with all the precepts of science. For example, one never expects experiments to be affected by God (or by prayer). Does he, or should we, pray for God to cure a loved one's disease? Science rests on the assumption that the world is a strictly material place to be understood in terms of its universal laws (like gravity, chemistry, etc.). And Dr. Collins clearly asserts that religions should basically not make scriptural claims about the world, because science will show that they are wrong. But what, then, is a personal God? These seem like incompatible views, which can be worrisome for someone in a position of science leadership in our culturally cloven society.

And what about evolution? What are his views of human origins, and the genetic relationships we bear with other species? Was that process mechanical, or God-directed? It makes a difference in how we interpret our own variation (not to mention racial variation), the differential impact of the 'curse' of disease, and the relevance of animal models. If humans are not 'just' genes, is that because God made us unique? If so, should we be licensed to torture animals in research? We do have to say that the NHGRI did not get involved in religious entanglements during Dr Collins' stewardship, but it is certainly fair to ask these things, as they have potential policy implications--especially if a fundamentalist chunk of the country will expect it.

Like the grilling of a prospective Supreme Court justice, one would like to know how all our health needs will be addressed, not just one particular component of them. That will be of interest to those who are not just in genetics. Likewise, the religious aspect of Dr Collins' life is at least potentially relevant in our society, especially because genetics is inextricably and properly understood in the context of evolution; regardless of his views on evolution, religious literalists may expect him to use religion to filter research and policy decisions.

Supreme Court justices sometimes play the political role desired by those who appointed them, which is a shame since courts should be as objective as possible in the realities of human society. But sometimes they surprise. Dr Collins could go either way -- who can say? He could use his Christian religious beliefs to foster all sorts of health improvements for the needy here and abroad, rather than (as most genetics does) for the privileged. He could help international medical services (like Doctors Without Borders), a kind of US contribution to a medical Peace Corps. Or, he could stay close to his genetics worldview, narrowing the field directly or by bureaucrats being intimidated or chameleon-like in their struggle for cuts of the NIH pie.

In fact, there is no danger whatever that genetics research is going to be neglected; there are far too many commercial, academic, and bureaucratic careers that depend on it. We need to support basic genetics, but, for cost reasons, that should be done through NSF, not NIH. If anything is in danger of neglect, it is work on therapeutic approaches to the hundreds of truly, seriously genetic diseases; much less should go to the huge genomics studies aimed at genetic prediction of common diseases (like the targets of GWAS and biobanks). Those are public-health scale undertakings, yet quart-size Cokes and fries, not genes, are the major public health problems.

How Dr Collins will do will be interesting to see.

One weary monkey!

Ah, immortality! the ultimate dream of a sentient being who knows that this coil is, in fact, mortal. It's no surprise that we have always wanted to keep ourselves going as long, and as well, as we can. Many promises of longer lives, often quite self-interested in material ways and irresponsible, have been made over the last 20 years based on genetics. It's not worth reciting them here, and if we could drag out quotes it might be--it certainly should be--embarrassing to those highly placed scientists who made them.

There are many reasons why we should be careful what we wish for. It might come true! But if we could somehow use interventions to live for centuries instead of our 80 years today, it would likely be a very mixed blessing: a society made up of feeble, bored, or crankily nostalgic and reactionary multi-centenarians, using up all the resources and developing every empty lot for a Seniors home.

But we write on this now because, following all the media attention to the subject, columnist Roger Cohen wrote a column a few days ago in the NY Times aptly titled The Meaning of Life, about the Miracle Science Story of the Week!, the new finding that 30% calorie restriction works on primates as it does on mice. Monkeys on restricted caloric diets lived longer, with less frank morbidity, than monkeys fed normally (in captivity, protected, and on standard Monkey Chow). This has long been asserted but previously not as well tested. Mr Cohen is familiar with primate research because his father studied baboons when he (Roger) was growing up in South Africa. If (and we have to say if) the pictures in the Times article are representative, we have to agree thoroughly with him; life on a restricted diet is not at all appealing.

No matter--Methuselah here we come!

But hold on -- look at what that might mean. Judging from the photograph, the monkey on the restricted diet may be alive, but only in body, not in spirit. Moderation in all things may be wise advice, but it's hard to believe one can do without 30% of a normal diet (30% of some American McDiets perhaps!). But the poor spiritless creature on the restriction diet may be a walking (well, barely) advertisement for not aspiring to that.

There's another issue. It has to do with research ethics. After a variety of abuses over decades, Institutional Review Boards were established to protect research subjects (including animals) from suffering and abuse. A lot of us know that IRBs are not always so watchful or protective. What IRB approved this study? At what point shouldn't the authors have seen that they were producing listless animals, and stopped their study?

One weary monkey -- or perhaps wan weary monkey!

Wednesday, July 15, 2009

The way to a man's heart is .... through simpler nutrition labeling?

A headline today from the BBC: "Lower IQ 'a heart disease risk'". This is a report of a paper in the European Heart Journal in which researchers looked at the life experience of more than 4000 US Vietnam vets and concluded that IQ alone explains 20% of the risk of heart disease. They controlled for known risk factors such as smoking, diet, exercise and socioeconomic status (heart disease risk is already well-known to be higher among people with lower income and education levels, and smoking, diet and exercise are associated with SES as well), and still found an IQ effect.

Clearly, however, if this association is real, low IQ is not the direct 'cause' of heart disease--unless someone truly believes that 'brain' genes affect heart function, such as the clogging of arteries! Instead, IQ is a confounder, a measured variable that is not directly causal but is correlated in the sample with some unmeasured, truly causal exposure factor, be it genetic or otherwise.
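To make the distinction concrete, here's a toy simulation (a sketch of ours, in Python, with invented numbers -- not the study's data) in which an unmeasured factor drives both test scores and disease risk, so that IQ 'predicts' heart disease without causing anything:

```python
import random

random.seed(1)

# Hypothetical model: an unmeasured exposure (call it deprivation) lowers
# measured IQ AND raises heart disease risk; IQ itself does nothing causal.
counts = {"low": [0, 0], "high": [0, 0]}  # [cases, total] by IQ group

for _ in range(100_000):
    deprived = random.random() < 0.3                          # unmeasured cause
    iq_low = random.random() < (0.6 if deprived else 0.2)     # depresses scores
    disease = random.random() < (0.10 if deprived else 0.04)  # raises risk
    group = counts["low" if iq_low else "high"]
    group[0] += disease
    group[1] += 1

for label, (cases, total) in counts.items():
    print(f"{label}-IQ group: heart disease risk = {cases / total:.3f}")
# The low-IQ group shows clearly elevated risk, even though IQ never
# appeared in the causal model -- it merely tracks the true exposure.
```

Adjusting for the confounder here would make the IQ 'effect' vanish; the trouble in real studies is that the truly causal exposure is unmeasured, or unsuspected.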

But confounder for what? The lead researcher suggests that the problem may be that people with low IQ may have trouble heeding health advice. It needs to be simpler and easier to understand.

"For instance, we often read about how some types of alcohol are good for you while others, or even the same ones, are not. The messages can be difficult to interpret, even by knowledgeable people."

This is an interesting quote because it (presumably unwittingly) points out that health advice is often contradictory, not to mention highly cultural. And thus impossible to interpret meaningfully, no matter your IQ. Some alcohol is good for you--except when it's not. Hard to think how you could make that into advice that's easier to follow. The other interesting bit about this quote is that the alcohol he's talking about is red wine, more generally the drink of choice among the middle and upper classes.

That aside, what could IQ be a marker of? Many people suspect IQ scores to be nothing but cultural markers--indeed, it's well-known that they vary by race, and have been changing rapidly over recent time, for largely cultural reasons. So IQ would be a marker of race, but the BBC story doesn't mention race, because the paper itself doesn't, either. Unbelievable!

One can hardly imagine an aware US epidemiologist who would not think right off the bat that race may well be the real risk factor here, for which IQ is a correlated marker. Heart disease risk is higher among African Americans, and presumably, if this particular study group follows known trends in IQ scores, African Americans would be more heavily represented in the lower IQ scoring/higher heart disease risk group. Thus, IQ is merely a marker for race here, and the risk associated with race.

But, let's run with the idea that intelligence might actually be a risk factor for heart disease. Numerous studies have been done to try to identify genes 'for' IQ, but with little replicable success. But let's suppose there are genes 'for' intelligence--indeed, if the IQ/heart disease association is real, mapping studies looking for genes associated with heart disease should have at least identified IQ genes. (But determining whether genes are IQ genes or not would be difficult because 70-80% of genes, no matter where else they are expressed or what their known function(s), are expressed in the brain--are they all 'brain genes' or genes for intelligence?). Indeed, genes 'for' poor circulatory function may affect ability to study and learn, so the reverse should also occur: IQ mapping should pick up cardiovascular genes!

Further, if this association is real, it raises the question of causality. But what 'causes' AIDS? Is it the HIV virus? But, HIV/AIDS rates are highest among the poor, so was Thabo Mbeki, the former president of South Africa, at least partly correct when he insisted that HIV/AIDS is caused by poverty? So if low IQ is a marker of race, and SES, do poverty and racial discrimination cause heart disease?

This study was carried out by British researchers. One wonders if their less nuanced understanding of American race and class issues erroneously led them to conclude that an effective way to lower heart disease is to make nutrition labeling easier to understand. Or could they be from such a middle-class environment as to be insensitive to these kinds of issues?

Or could it be that the general operating notions of causation, based as they are on stereotypical study designs that look for statistical associations, are a barrier to understanding?

As usual, genetic causation today and genetic causation as the result of evolution are similar. If natural selection favors some trait, then any genetic variation that produces the favored state will be favored. If variation in IQ genes led our ancestors to pick low-fat fruit (say), then those genotypes would be favored by selection just as fat-metabolizing genes would. If they could be detected by the kinds of searches for evidence of selection that many are doing these days, much experimental effort could be wasted trying to show how those genes were involved in lipid metabolism.

This is a subtle world today, and it's been that way throughout our ancestry.

Science is based on cause and effect, which is not identical to correlation and effect. The meaning of 'cause' has been central to philosophical thinking since Aristotle. Perhaps we should be less driven by methodology and pay more careful attention to that in science today.

Tuesday, July 14, 2009

The mind boggles

Schizophrenia is one of those important human traits that has eluded understanding despite heavy research investment. It is elusively variable and hence challenging to diagnose as a single entity or to decide how to split it up into causally distinct subsets. It seems highly familial in terms of its increased risk among family members, and hence seems clearly to have a genetic component. But the specific genes have been elusive--they must be there in the genome, but where are they?

A recent paper in Nature ("Common polygenic variation contributes to risk of schizophrenia and bipolar disorder", The International Schizophrenia Consortium, published online 1 July 2009) looked at large amounts of data on schizophrenia from several study populations. The authors did an extensive amount of genotyping and then various kinds of analysis: they looked, for example, at about a million variable sites (SNPs) in the genome, to identify regions where a particular variant marker was found more often in some 3322 cases than in 3587 controls--pretty large studies for this kind of trait.

No really strong signal, that is, one that explained a high fraction of the disorder, was found. But through a series of analytic approaches, including computer simulations to test a range of possible genetic causal models to see which fit best, the authors (and this is one of those papers with a huge list of authors) concluded that many thousands of genes (classically they'd be known as 'polygenes') contribute to the trait. Most of the contributing variants are rare, but more importantly, they have individually very small effects.
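To see why thousands of tiny effects are invisible one at a time yet real in the aggregate, here's a toy model of our own (not the Consortium's analysis; every parameter is invented):

```python
import random

random.seed(2)

N_LOCI, FREQ, EFFECT = 500, 0.2, 0.05   # invented: many loci, tiny effects
N_PEOPLE = 10_000

people = []
for _ in range(N_PEOPLE):
    # genotype = risk-allele count (0-2) at each locus
    geno = [(random.random() < FREQ) + (random.random() < FREQ)
            for _ in range(N_LOCI)]
    liability = EFFECT * sum(geno) + random.gauss(0, 1)  # genes + everything else
    people.append((geno, liability))

cut = sorted(l for _, l in people)[int(0.99 * N_PEOPLE)]  # top 1% are 'cases'
cases = [g for g, l in people if l > cut]
controls = [g for g, l in people if l <= cut]

def allele_freq(group, locus):
    return sum(g[locus] for g in group) / (2 * len(group))

print(f"locus 0 frequency: cases {allele_freq(cases, 0):.3f}, "
      f"controls {allele_freq(controls, 0):.3f}")
print(f"mean total risk-allele count: cases "
      f"{sum(map(sum, cases)) / len(cases):.1f}, controls "
      f"{sum(map(sum, controls)) / len(controls):.1f}")
# No single locus separates cases from controls beyond sampling noise,
# but the summed 'polygenic score' shifts visibly between the groups.
```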

Regardless of the details of the study, which could include all sorts of artifacts or be affected by the methods and assumptions of the authors, the study seems convincing that schizophrenia is like many other traits of a polygenic nature. The authors confirmed current ideas that bipolar disorder may involve many of the same genes, as well.

There are good evolutionary and biological reasons why this makes sense. In a nutshell, it's because so many processes are involved in brain development and function, each of them subject to mutational variation, that there are many ways to end up with the same trait. Natural selection only prunes those who can't reproduce as successfully, but the effect is distributed across these many parts of the genome, and hence acts only very weakly against any one of them. The result is an accumulation of variation whose effect at each individual region is essentially undetectable. The frequency of the individual variants changes over generations (and over geographic space in our species) mainly by chance (genetic drift).

The individual components have to work together--the 'cooperation' that is at the core of life as we outline in our book The Mermaid's Tale, but there is plenty of tolerance for variation, what we refer to as functional 'slippage'. It all makes sense biologically, evolutionarily, and causally.

In addition to its consistency with evolutionary expectations, this flies in the face of current predominant thinking about the prospects for what is being called 'personalized medicine', that is, medicine based on each individual person's genotype. If genotypes are poorly predictive, as in this case they seem to be, then they are of no real use to a clinician. In fact, as with so many similar studies, the total identified effect was small: based on various assumptions, the polygenic component identified by this genomewide search accounted for only 3 to 20% of the total disease risk, and the disease itself affects only about 1% of the population! Schizophrenia is an important problem (1% of the population is a lot of people), but clearly the predictive power of these gene-sets is modest, and this assumes that environmental effects will retain their current overall nature and impact (many of the genes probably have effects that vary depending on environment).
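A back-of-envelope calculation of ours (using a standard liability-threshold model and the paper's round numbers, so an assumption-laden sketch, not the paper's own analysis) shows what such figures mean for an individual patient:

```python
from statistics import NormalDist

norm = NormalDist()
prevalence = 0.01   # roughly 1% of people affected
r2 = 0.03           # score explains ~3% of liability variance (the low end)

t = norm.inv_cdf(1 - prevalence)   # liability threshold for being affected
for pct in (0.50, 0.90, 0.99):
    s = norm.inv_cdf(pct)          # standardized polygenic score percentile
    # P(liability > t | score), with liability = sqrt(r2)*score + residual
    risk = 1 - norm.cdf((t - r2 ** 0.5 * s) / (1 - r2) ** 0.5)
    print(f"score at the {pct:.0%} percentile -> absolute risk ~ {risk:.1%}")
# Even at the 99th percentile of the score, risk only moves from about
# 1% to roughly 2.5% -- informative in aggregate, useless in the clinic.
```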

Many researchers will try to develop synthesizing methods to make individual sense of polygenotypes, so that treatment might be varied accordingly. How well they succeed only time will tell. But this is another case in which extensive study of a trait based on modern high-intensity technology has documented the nature of complex traits.

Monday, July 13, 2009

Rules of nature

Genes and DNA sequences that are crucial to survival are preserved by natural selection, so looking for sequences conserved among many species or phyla will clue you in to which genes or regulatory regions have been important in evolution. Or at least that's the conventional wisdom. A News Focus in last week's Science, however ("Genomic Clues to DNA Treasure Sometimes Lead Nowhere", by Don Monroe), suggests that it's not that simple.

For a number of years researchers have been trolling the genomes of numerous species, looking for conserved or 'ultraconserved' DNA sequences (ultraconserved sequences are stretches of DNA that are the same in mice, rats, and humans and perhaps other species, and often similar in fish), assuming that they represent regions with important regulatory or coding functions, which is why they've been maintained by natural selection. The idea is that these stretches of DNA are so crucial that without them, the organism would die. However, often when these regions are knocked out (experimentally deactivated), there is no effect on the animal. This has surprised many people, given the well-accepted equation of conservation=importance.

From an evolutionary perspective, the problem is that mutations are always occurring and no bit of DNA is invulnerable. Given that, over time every nucleotide will experience mutational hits; most of the new alleles may disappear by drift, but not all of them will. Eventually, there will be no recognizable sequence left (that is, if the corresponding sequence in distant descendants could be identified, it would bear no similarity). Selection can maintain sequence if it is functional, and if therefore most mutational changes are harmful. But otherwise, other than by unlikely aspects of chance, how can deep conservation occur?
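The expectation behind this reasoning is easy to put numbers on. Here's a rough sketch, using assumed textbook-style rates (the Science article gives no such calculation, so treat the figures as illustrative):

```python
import math

# Assumed round numbers: neutral substitution rate and human-mouse divergence.
rate = 2.2e-9        # substitutions per site per year at unconstrained sites
years = 2 * 80e6     # both lineages, ~80 million years each since the split
length = 200         # a typical ultraconserved element, in base pairs

per_site = rate * years
p_untouched = math.exp(-per_site * length)  # Poisson chance of zero changes

print(f"expected substitutions per site: {per_site:.2f}")
print(f"chance a {length}-bp element stays identical by luck: {p_untouched:.0e}")
# With ~0.35 expected hits per site, perfect identity over 200 bp is
# astronomically unlikely without something weeding the changes out.
```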

The conservation=importance equation thus makes sense and fits evolutionary theory well--but unfortunately for evolutionary biologists, nature doesn't always follow the rules. There are several possible explanations for this. Linear DNA sequence is less important in many contexts than the three dimensional conformation a stretch of DNA folds up into in the cell. It's this three dimensional shape that determines what other stretches of DNA or proteins can bind to it, and thus its function(s), and there might be multiple ways to attain the same shape.

And sometimes it's not the shape of the molecule that's so important but some other characteristic -- for example, the acidity of an encoded protein, which is determined by its amino acid composition, and which determines what the protein can bind to. There are many nucleotide and amino acid combinations that will produce the same acidity. That means that the specific DNA sequence may not be conserved, even though other characteristics of what that stretch of DNA encodes may be.

Many gene knock-out experiments in mice, even if the gene isn't one that is ultraconserved, have shown no effect on the mouse. Even large regions with presumably essential genes or regulatory sites have been knocked out and the mice seem none the worse for wear. This is perplexing, except that the mice are only observed in ideal conditions in a laboratory setting, while the missing DNA may be important when the mice are in other contexts.

Finally, whatever works is what nature does, and whatever works in a given context can and does vary widely. There are no steadfast rules for how to get from here to there, and there are exceptions to every generalization. Natural selection isn't always the explanation, nor do specific genes and their evolutionary histories work the same way in every situation, and so on.

To us, this story is a reminder that any rule an evolutionary biologist can come up with, nature can break.

Thursday, July 9, 2009

Articulate nuns, dementia and belief

Here's a brief follow-up on our post the other day about Alzheimer's disease. It's a story that's been dribbling out for the past few years about a study of nuns and dementia. The first results from this study suggested that nuns who wrote the most articulate application letters when they were 20 were the least likely to have dementia as they aged. So, the association of early language ability with risk of dementia reported in this story from the BBC is not new--whether or not it's 'real' or actually predictive, given that we can surely all think of very articulate people who went on to develop dementia (the story itself mentions the British novelist, Iris Murdoch, perhaps the most famous example).

So, although this possible association raises many questions (Is senile dementia the quantitative end-point of a life-long trait that begins at birth? Can articulateness be learned, or is inarticulateness a characteristic of a person that is also a predetermined product of a doomed brain? Is this association meaningful?), that's not what caught our attention about this story.

What interests us is the assumed association of plaques with dementia. As the story says,

Dementia is linked to the formation of protein plaques and nerve cell tangles in the brain.

But scientists remain puzzled about why these signs of damage produce dementia symptoms in some people, but not others.

Why are scientists convinced that these signs of damage are what is producing dementia? This assumption is being questioned by some (e.g., The Myth of Alzheimer's: What You Aren't Being Told About Today's Most Dreaded Disease, by Peter J Whitehouse and Daniel George), but not by many. Indeed, a quick search of the literature suggests that this 'puzzling' finding is frequently reported, but it is assumed to represent 'pre-dementia' in those with plaques but without confusion. And, it's impossible to refute, as it can always be said about a clear-thinking person with plaques that they died before the disease became manifest. A few early observations led to an equation in people's minds of plaque and disease, and it became entrenched.

But it's an assumption, a belief. It may fall out of favor, but that will take time as beliefs can fall hard. In fact, one prominent Alzheimer's researcher we've spoken with, a non-believer in plaques as causal, or even in the idea that a single disease called Alzheimer's exists, says that she thinks the field of dementia research is due for a real shaking up. It's time for a new model of dementia, and it will have to allow for complexity.

Although we all like to think that science is based squarely on fact, not belief, this is another example of how that's not as completely so as we'd like to think.

Wednesday, July 8, 2009

Origin of species?

2009 is being celebrated as the 150th anniversary of Charles Darwin's famous book On the Origin of Species . . . whose full title continued by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. Actually, 2008 was a more legitimate anniversary to celebrate, because it was a year earlier, in 1858, that Darwin's and Alfred Wallace's papers suggesting that species arose by the action of natural selection were read before the Linnean Society in London.

We rightly celebrate Darwin's contribution to science, which was clearly among the most incisive, sweeping, and transformative scientific revolutions that have ever occurred. The theory of evolution by natural selection has become the clear core of most of the life sciences, and its ideas have been borrowed by social and physical sciences--even by cosmology and astronomy (yes! where universes are seen as competing ecologies of galaxies, coming and going via black holes, based on their basic properties, etc.).

But was the Darwinian theory correct?

Natural selection is certainly a phenomenon of life, and it can lead to changes in traits whose basis is heritable (generally, this means 'genetic', or encoded in DNA). Darwin equated that with the process that leads to speciation. Over time, organisms become differentiated by virtue of the adaptive differences that arise by natural selection in different environments, and these adaptive differences make for new species. Clearly this can in principle lead to mating incompatibility, the criterion usually accepted as a definition of species, and once mating no longer occurs the populations, which started out as one, can diverge more and more. Hence, over very long time periods, we have sea creatures diverging into fish, reptiles, and mammals. Indeed, we have plants and animals diverged from a single ancestral species.

But the relevant questions these days have to do not with divergence from common ancestry, nor how traits might evolve, nor even the definition of species, but with the process of speciation itself. That is still not well answered, and facile Darwinian explanations don't work nearly as well as they are said to. In fact, in many ways they are as vague and assumption-bound -- and perhaps as wrong! -- as they were in Darwin's day, and for the same reason.

When populations are separated for long time periods, genetic differences arise among them. Mutations occur locally in each population, but they are relatively rare and basically unique at the DNA level. That's because the very same mutation, say an A to a G at some specific spot in DNA, only occurs once in every ten to hundred million parent-offspring transmissions, roughly speaking. Populations in each region occupied by a species will accumulate such differences across their entire genomes. These changes will have a range of effects -- some none at all, others affecting the organism's traits. Selection may or may not prefer one version over the other.
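A quick sanity check on 'basically unique', with purely illustrative numbers (a per-site mutation rate of about 10^-8 per transmission is our assumed round figure):

```python
u = 1e-8  # assumed chance of one specific base change per transmission

for births_per_gen in (1_000, 100_000, 10_000_000):
    recurrences = 2 * births_per_gen * u   # two transmitted genome copies per birth
    print(f"{births_per_gen:>10,} births/generation -> the same mutation "
          f"recurs about once every {1 / recurrences:,.0f} generations")
# In small, separated populations the identical change essentially never
# happens twice, so each region accumulates its own private variants.
```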

The upshot is that regional differences arise. Darwin thought these were mainly due to selection's screening of the variants, leading to different local adaptations in populations of what had been a single species, and hence to mating barriers.

This is true only if the changes affect mating compatibility, because sperm fails to fertilize eggs, or the individuals don't choose to mate, etc. But just having, say, different shaped beaks doesn't mean you can't or won't mate (if you're a bird). In fact, humans occupy the proverbial ends of the earth: those in Tierra del Fuego have been isolated from those in southern Africa for fifty to a hundred thousand years (or more), they look very different, and their genomes are so different that one would never mistake a Fuegian's for a San's. Yet they are sexually compatible. The same is true of baboon species that have been separated for millions of years. In both these primate examples, the regional genetic differences are genome-wide, not just in a gene here and there. And these are just a couple of many examples.

Yet the opposite can also be true. Ring species are those occupying a long linear region, in which individuals from adjacent parts of the range are mating-compatible, but individuals from the ends of the range are not. Yet, these are considered the same species. Ring species show the subtle nature of speciation (and, by the way, humans have not become a ring species despite long separation).

At the same time, single mutations can make mating incompatible in what are otherwise clearly the same species. Known mutations of this type are called 'hybrid sterility' mutations and several examples have been studied. Single mutations or chromosomal changes can lead to mating incompatibility, and hence effectively set up different species, with no other 'adaptive' changes in the Darwinian sense. Likewise, the substantial fraction of infertile human matings, even within a single population (e.g., infertile marriages), shows that the usual kinds of physical and behavioral traits need not arise by Darwinian processes in order for new species to form. Unless, of course, one wants to 'save' classical Darwinism as a dogma by defining the responsible mutations as being 'adaptively' different. Nothing we've said here invalidates the ideas of common ancestry and the potential of natural selection to mold traits, and mating-incompatibility mutations may literally be viewed as 'adaptations', but that distorts the meaning of adaptation and natural selection.

These are profound facts. They show that there are still many important problems, central problems, to work on in biology. Despite Darwin's brilliant insights, some of his basic reasoning and objectives were not as correct as they have been viewed for 150 years.

Monday, July 6, 2009

Health story of the day -- Java jolt cures Alzheimer's!

Here's a particularly cruel health result of the day, it seems to us. The BBC reports that coffee may reverse the symptoms of Alzheimer's, at least in mice force-fed the equivalent of 5 cups a day (or 2 lattes, 14 cups of tea, or 20 soft drinks -- wait, didn't we recently learn from the BBC that people who drink that much cola are susceptible to hypokalemic periodic paralysis?).

The new story says that caffeine seems to prevent plaques from forming in the brain, the 'hallmark' of the disease, or reduce those already there. Oddly, other researchers have found that these plaques are neither always found in people who had been given a diagnosis of Alzheimer's, nor are they always associated with dementia when they are found. So reporters aren't doing their job and/or researchers enjoying the limelight aren't coming clean (assuming they at least know their job).

This is a perfect example of a story about a scary disease reported prematurely because it will sell. Everyone fears losing their memory as they age, and quick and easy cures are surely eagerly sought by caretakers and the affected alike. Many have seen the awfulness of close and loved relatives with this disorder.

But mice are not people, and the dementia bred into an inbred strain of mice cannot be assumed to be the dementia any of our grandparents suffer or suffered from. As with all 'simple' diseases, the more researchers learn about dementias in people, the more questions they have--indeed, 'Alzheimer's' has always been a diagnosis of exclusion (that is, impossible to confirm until after death), and the signature amyloid plaques in the brain that were assumed to confirm the diagnosis on autopsy have been shown to be non-confirmatory after all. The idea of Alzheimer's itself as a single definable disease is fading within the research community as more is learned about dementias (unless perhaps your lab is committed to the line of inbred Alzheimeric mice it took you years to breed).

So, once again, scientists prematurely rush to the press with a story that may well cause caretakers to force-feed elderly patients with caffeine at best, and at worst cruelly encourage hope of a cure where in fact there is none. We write often about oversimplifying genetic determinism, but this is a case of oversimplifying environmental determinism--a problem that actually predated genetic determinism in the history of 20th century epidemiology. Surely by now both researchers and media should know better than to hype such claims.

There are many ways to get dementia. There are also many ways to get your morning energy boost, even if tea, say, won't help your state of mind without drowning you or making you park all day in the bathroom. And there are many ways to spin a story to the news media.

But don't forget to have your coffee....or you'll forget to have your coffee!

Right.

Friday, July 3, 2009

Natural selection and the human genome -- the last 10,000 years

Many researchers have been interested in combing the human genome for evidence that natural selection shaped modern humans. The idea is that selection would have favored those of our ancestors who were best able to adapt to changing environments over the past 100,000 or even 10,000 years, as they began to wend their way into the entire diversity of the world's environments, most of them very different from our African savanna homeland. How could we not have faced selective environments? And, look at us--at how different peoples native to different continents appear! How else but by natural selection?

And what about the effects of dense, settled populations after the invention of agriculture, including the strong selective force of disease, which could more easily spread with people living close together? Not only that, but with hundreds of thousands of sequence variants across the genome chosen for genotyping because of their variation rather than their function (most probably functionless, or without effect on fitness), it should be easy to identify genome regions that show evidence of selection, by virtue of manifesting greater than average differences among world regions.

The availability of large sets of data on variation in the genome, from populations across the globe, as well as methods for doing large-scale analysis of these data has meant a field day for people interested in this question. They have developed and applied a number of methods for the purpose, to distinguish the statistical chaff from the selective grain: when one does many thousands of tests across the genome one is bound to find 'signals' that arise just by the chance aspects of population sampling. A number of reports have come from these studies, some finding that human evolution is speeding up, and others finding evidence of strong selection at a few sites in the genome. Some claim many positive findings, others that the overall amount of such evidence, given the expectation, is rather small.
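The statistical chaff is easy to demonstrate with a sketch (our illustration, not any group's actual scan; the test count and threshold are invented):

```python
import random

random.seed(3)

n_tests = 100_000   # e.g., markers scanned genome-wide (illustrative)
alpha = 1e-4        # a seemingly stringent per-test significance threshold

# Under the null hypothesis, p-values are uniform on (0, 1).
false_hits = sum(random.random() < alpha for _ in range(n_tests))
print(f"{false_hits} 'signals' from pure noise (expected {n_tests * alpha:.0f})")
# A scan with nothing real in it still returns a handful of candidates,
# which is why these methods must model the genome-wide null carefully.
```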

A paper in the June issue of PLoS Genetics by Jonathan Pritchard at the University of Chicago and collaborators reports that selection has not been nearly as strong as many people expected. These results are not a total surprise given that most traits are affected by many genes, that environments change rapidly enough that one era's successful adaptation may be another's maladaptation, and that a fundamental survival tactic of all organisms is the ability to adapt behaviorally to changing environments as they happen. Humans above all organisms have the added benefit of culture, which can buffer ecological changes. We make fire, and clothes, to protect against the cold, we invent weapons to substitute for weak teeth in hunting prey, and we have language to organize group responses to threats, the need for food and shelter, and so on.

That doesn't remove humans from the effects of selection, and crowded, permanent agricultural settlements might have been particularly vulnerable to new forces, such as infectious pathogens.

But even then, and even if selection is particularly strong (and a 1% selective advantage, often literally too small to be detectable from samples at any given time, is considered strong), it may be difficult to detect selection at the gene level. If many genes affect a trait, and given the clear fact that in a population there are many sequence variants at any gene (and the nearby parts of DNA that regulate its cellular usage), the impact of selection is distributed across these many possibly advantageous variants.

What is favored or disfavored are combinations of variants at many genes. Individually, they have rather trivial effect. And most selection is weak, at least in this sense, as we've written before. When the net advantage of a variant is very small, its frequency changes largely by chance (genetic drift). As a result, when regions of the world are compared, there is usually no difference between the pattern of random markers and genes that have been affected by selection.
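A small Wright-Fisher simulation makes the point; this is the standard textbook model, and the population size, starting frequency, and selection coefficients here are all invented:

```python
import random

random.seed(4)

N, GENS, P0 = 500, 200, 0.05   # diploid size, generations, starting frequency

def wright_fisher(s):
    """Final frequency of an allele with selective advantage s."""
    p = P0
    for _ in range(GENS):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))                     # selection
        p = sum(random.random() < w for _ in range(2 * N)) / (2 * N)  # drift
        if p in (0.0, 1.0):
            break
    return p

for s in (0.0, 0.001, 0.01):
    finals = [wright_fisher(s) for _ in range(20)]
    lost = sum(f == 0.0 for f in finals)
    survivors = [round(f, 2) for f in finals if f > 0]
    print(f"s={s:<6} lost in {lost}/20 runs; surviving frequencies: {survivors}")
# At s=0.001 the outcomes are indistinguishable from the neutral runs, and
# even at a 'strong' 1% advantage the spread across replicates is wide.
```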

Occasionally, a variant has a strong effect in its current environments, and can be quickly advanced in frequency (sickle cell as protection against malaria is one of only a handful of truly convincing examples). But such variants are usually very rare, and won't be seen in the kinds of small population samples that have been used for geographic studies of human variation. A rare variant that is quickly favored will rise to high frequency, and then its effect on the favored trait will become close to average. But at least the pattern of variation in the gene and its nearby chromosome region will show less variation within its source population, and greater differences between populations, than is typical of the rest of the genome. That is the kind of signal people hope to find, but that seems, for reasons just described, to be rare in humans.

So, even from a strongly Darwinian point of view, 'signatures' of selection will be expected to be weak, as the authors of this paper found. Darwin's work was about phenotypes (traits), not genotypes. The focus on genes may be showing that his was the correct viewpoint from which to understand evolution, whether selection is strong or weak, persistent or occasional.

Wednesday, July 1, 2009

Vegetables and you

Today's big health news (or, at least, headline)? A vegetarian diet prevents cancer! A study of 60,000 people in the UK, published in the British Journal of Cancer, reports that vegetarians get less cancer of the blood, bladder and stomach. Of 100 meat eaters, 33 will eventually get cancer, while of 100 vegetarians, 29 will do so. We feel a need to point out that this difference is rather unimpressive; still, the study does seem to have been independent--at least, it was not paid for by the carrot and broccoli industry! But vegetarians were only about half as likely to get cancers of the blood and lymph, although the actual number of most cancers, in this sample, was quite small.
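In the usual epidemiological terms (our arithmetic on the reported figures, taking them at face value):

```python
risk_meat, risk_veg = 33 / 100, 29 / 100   # lifetime cancer risks, as reported

relative_risk = risk_veg / risk_meat
absolute_reduction = risk_meat - risk_veg
nnt = 1 / absolute_reduction               # 'number needed to treat'

print(f"relative risk {relative_risk:.2f}; absolute reduction "
      f"{absolute_reduction:.0%}; about {nnt:.0f} lifelong vegetarians "
      f"per cancer avoided, if the association were causal")
# 'Vegetarians get 12% less cancer' and '4 fewer cancers per 100 people'
# describe the same data; only the second conveys how modest it is.
```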

The vegetarian diet isn't always protective, however--cervical cancer is higher among vegetarians than among meat eaters, though the number of cases was very small, and bowel cancer was slightly higher among those who don't eat meat (contrary to decades of reports that meat-eating, for various reasons having to do with bacterial metabolism of animal fat, increased colorectal cancer).

What mechanism do the authors suggest to explain their findings? Perhaps there are viruses or mutation-causing compounds in meat, or protective compounds in vegetables. Indeed, at least cervical and stomach cancers are known -- and this knowledge does seem to be real! -- to be caused by infectious agents (a virus in the case of cervical cancer, a bacterium in the case of stomach cancer).

However, something that at least the BBC write-up of the story doesn't point out -- a notorious problem with these kinds of studies that should always be pointed out right at the top -- is environmental confounding, in which one measured factor is correlated with an unmeasured one. In that case, it is wrong to attribute causation to the former.

It should be clear even to the most obtuse that vegetarians and meat-eaters probably have different life styles in all sorts of ways, which may increase or decrease their risk of exposure to causative environments, having nothing at all to do with diet itself. In this case, diet is merely a marker of life style and risk, not a causative factor, and indeed the study does nothing to control for any such differences. Maybe the dedication of vegetarians to things like Zen meditation affects cancer risk!

Most interestingly, one of the authors of the study was interviewed on the BBC radio program, Newshour, this morning with a fascinating lead-in. Owen Bennett-Jones, the interviewer, pointed out that dietary findings come and go, often being contradicted by subsequent findings--red wine protects against cancer, or it doesn't, dietary fat causes breast cancer, or it doesn't--so why should we believe the results of the vegetarian diet study?

The author himself acknowledged that the findings are not earth-shattering, and may eventually be contradicted, and may only apply to vegetarians in the UK, and these were small numbers of each cancer anyway. He quite burst his own balloon, albeit with the help of his interviewer. His rationale is that after smoking, everything else is a minor risk factor. But, apparently this shouldn't stop researchers from spending large amounts of taxpayer money to look for these minor effects anyway. And hyping them to the media. Hmm. Maybe the vegetable industry should pay for this research!

Bennett-Jones played this story well. He had a science journalist on along with the scientist, and he asked her why so many unreliable stories appear in the media. She said the explanation is very easy--health news sells, especially when it's scary and about cancer. She blames the hype on scientists for wanting to publicize their iffy findings, the industry for wanting to promote the latest food that will prevent cancer, and the media for having to fill the papers and airwaves on deadline. Literally every day, she said, she gets calls from the food industry, or scientists, wanting to tell her about their latest findings.

So, follow a vegetarian diet if you choose to. But don't do it because of the promise that it will prevent cancer.

Fittingly enough, there's a story in the New York Times today, by Gina Kolata, updating us on yesterday's big health news, which suggested that C-reactive protein (CRP), a marker of inflammation, causes heart disease. People were already developing tests for CRP, counting on the promise that CRP, rather than cholesterol, was causative and thus would need to be tested for in everyone, multiple times. But a new study in the Journal of the American Medical Association suggests that, yes, CRP is associated with heart disease, but is not causative.

Interestingly, this study uses a relatively new epidemiological method called Mendelian randomization to show that some people are genetically predisposed to make more CRP than others, but that genetically elevated CRP is not associated with heart disease -- evidence against CRP as a cause. This is an appropriate use of genetic data.
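The logic of Mendelian randomization is worth a sketch -- here a toy simulation with invented numbers, not the JAMA study's data. Because genotype is dealt out at conception, it is free of the lifestyle confounding that contaminates measured CRP:

```python
import random

random.seed(5)

obs = [[0, 0], [0, 0]]   # [cases, total] by measured CRP (low, high)
gen = [[0, 0], [0, 0]]   # [cases, total] by genotype (other, CRP-raising)

for _ in range(200_000):
    g = random.random() < 0.5    # CRP-raising variant, randomized at conception
    u = random.random() < 0.3    # confounder (say, a smoldering inflammatory state)
    crp_high = random.random() < 0.2 + 0.3 * g + 0.4 * u  # both raise measured CRP
    disease = random.random() < 0.02 + 0.06 * u           # only u causes disease
    obs[crp_high][0] += disease
    obs[crp_high][1] += 1
    gen[g][0] += disease
    gen[g][1] += 1

def risk(cell):
    return cell[0] / cell[1]

print(f"observational: risk {risk(obs[1]):.3f} with high CRP "
      f"vs {risk(obs[0]):.3f} with low")
print(f"by genotype:   risk {risk(gen[1]):.3f} with the raising variant "
      f"vs {risk(gen[0]):.3f} without")
# Measured CRP 'predicts' disease because both track the confounder; the
# genotype that raises CRP lifelong confers no excess risk -- not causal.
```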

Well, we have to go now. It's time for lunch. We wonder what's on the menu today.....