Friday, October 30, 2009

On stepping back

We have often commented on what we call 'tribalism', or ideology as it applies to human affairs, science included. Some commenters on those posts, and elsewhere, seem to infer that we are saying that science is not reality-grounded but is just an emotional club that defends its interests.

We sometimes get asked whether we 'believe' in evolution, or genes, or global warming, or religion -- and in fact belief is often the framework in which these issues are discussed. The implication is that because we try to step back and see how human social structures work generally, and how that applies to science, we must not accept the findings of science.

Nothing could be further from the truth! Yet even an attempt to step back and try to be objective not about the facts or theories themselves, but about how we hold our views and why, seems to be a kind of heresy. That is exactly the point.

In any area about which we write (or about which you may read or in which you may work), we depend on others' ideas, statements, assertions, and work. No one of us knows very much about the actual data in genetics first hand; we learn most of what we know by reading someone else's work (or from what we find in databases on the web, but that we did not generate ourselves). Even our own data were produced by a group of technicians, students, and collaborators.

This means that much of science, however objective its methods may be, rests on a subjective decision by each of us as to what work we accept and why. Whom do we trust?

This is a serious question when it's clear that scientists, like others, defend points of view (not to mention vested interests such as their grant base, their tenure, their reputations, &c). It doesn't mean that the DNA sequence I publish is false or made-up. But there may be mistakes: DNA sequencers do make errors. Or I may make assertions that go beyond the data, and I may know enough about the data to color my analysis in ways you would have a hard time disentangling unless you're also an expert.

The struggle in science to be objective, which most of us are engaged in, does not mean that we are fully objective. But it can mean that we tend to believe our own kind of evidence rather than someone else's.

So, yes, we may be predisposed to believe that evolution is a history of life on earth, with genes as important central causal elements. We may believe that the wisest thing is to accept the evidence that major global climate changes are due in large part to human activities that for our future's sake should be changed. And we may not accept the evidence advanced for many kinds of religious claims.

But these are personal judgments. They'll be smiled upon by others with the same beliefs, but not by those with other beliefs. Thus, our circumspection about some kinds of genetics, such as the heavy investment in biobanks or GWAS, is approved by a minority of people who agree with us, and denigrated (or, mainly, ignored) by others who either are vested in those approaches or who, for whatever reason, think they are good things to be doing.

To understand a particular science, like genetics, evolution, or developmental biology, is to understand a subset of knowledge gained largely through methods for collecting and analyzing it. But that doesn't remove the activity from the elements that are part of human social behavior. And from time to time in these posts, that is what we are trying to understand.

Stepping back to attempt to do this neither implies nor rules out any particular position that we may personally take on the subject matter at hand.

Thursday, October 29, 2009

The rain in Spain falls mainly on the grain

We have several times in this blog pointed out the problem of too much hubris in science, the attitude that we already know everything important and therefore can be very sure of what we say. But maybe, like Eliza Doolittle in My Fair Lady, we scientists could use a bit of polishing before we speak so confidently, if we think we deserve to be taken seriously. Maybe this is an example (at any rate, it's interesting):

The Oct 21 episode of the BBC World Service radio program, Discovery, included a discussion of 'bioprecipitation' (we'd link you to the podcast, but it has already been taken down; here's a link to an older version of the story on BBC Radio 4). The term 'bioprecipitation' was coined some 25 years ago by Dr David Sands at Montana State University, though it pretty much fell on deaf ears. With increasing evidence now, however, the idea that the biosphere may have a significant impact on climate is gaining followers.

Most rain, and all snow, begins as ice, but the water in clouds doesn't freeze into crystals at 32 degrees F; it freezes some degrees colder, and almost always around ice-nucleating particles. These particles can include dust and pollutants, which have been the textbook explanation. But Dr Sands and colleagues found, as reported in a Science paper last year (Ubiquity of Biological Ice Nucleators in Snowfall, Christner et al., Science 29 February 2008: Vol. 319, no. 5867, p. 1214), and recently on BBC radio, that some bacteria serve the same function, but they do so at warmer temperatures.

Christner et al. examined snow and rain for the presence of nucleators. As reported in the Science paper, they found that 70 to 100% of the nucleators in snow that were active at higher temperatures were biological, and a majority of those were live bacteria. It turns out that most of these bacteria are plant pathogens, specifically Pseudomonas syringae. They infect, but don't necessarily kill, plants, spending much of their time there, but they can be swept up into the atmosphere when conditions are favorable, where they drift with the winds aloft, later to form the core of raindrops or snowflakes, and then fall back down to Earth. Those that land on plants reproduce there, and then can be blown back up into the atmosphere, where they can become part of the precipitation cycle all over again.

So, these plant pathogens may contribute to rainy -- or dry -- conditions anywhere in the world, and there may be feedback loops. If an area is suffering from drought, vegetation may be sparse, and thus populations of these rain-inducing bacteria may be sparse, thus leading to less rain, less vegetation, and so on. It's possible that this kind of feedback is in part responsible for the droughts in Africa and Australia, according to Sands and colleagues. That means they can have continental, and perhaps in turn even global, climate effects.
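To see how such a feedback could lock a region into drought, here is a toy numerical sketch in Python (entirely our own illustration -- the functional forms and parameters are invented, and this is in no sense a climate model):

# Toy illustration of the feedback described above: rain supports vegetation,
# vegetation hosts ice-nucleating bacteria, and airborne bacteria help seed
# the next round of rain.  All numbers are arbitrary.
def simulate_rain(initial_rain, steps=30):
    rain = initial_rain
    for _ in range(steps):
        vegetation = rain**3 / (rain**3 + 0.4**3)  # vegetation responds nonlinearly to rainfall
        rain = 0.1 + 0.9 * vegetation              # non-biological rain plus a bacteria-seeded component
    return rain

print(round(simulate_rain(0.5), 2))   # a mild drought recovers toward the 'wet' state (~0.93)
print(round(simulate_rain(0.15), 2))  # a severe drought locks in near the 'dry' state (~0.13)

The only point of the sketch is that a rain-vegetation-bacteria loop can, in principle, have more than one stable state, so a big enough disturbance could keep a region dry.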

Christner et al. conclude their paper by saying:
Unearthing a role for biological [ice-nucleation] in the precipitation cycle has implications for deciphering feedbacks between the biosphere and climate, improving climate forecast models, and understanding atmospheric dissemination strategies of plant pathogens and other microorganisms.
This is a beautiful example of the interconnection of the biota on Earth in unexpected ways. But there's a further twist to the story. The ice-nucleating property of this bacterium is known; it's a protein on the cell surface of P. syringae. Intact bacteria, then, at the right temperature and given wet conditions, can nucleate ice formation directly on the surface of plants, causing potentially costly frost damage to crops. A form of the bacterium without the ice-nucleating surface protein, called 'ice-minus', occurs in the wild, but many more have been genetically engineered and introduced into fields, where it's hoped they will have a man-made selective advantage and out-compete their ice-nucleating cousins--and so do less crop damage.

Nice, neat story and a triumph for science and (we hate to admit it) big agribusiness, too. Right? Not necessarily!

Given the role of P. syringae in precipitation cycles worldwide, in ways not yet fully understood or even characterized, the possibility of disturbing it is rather alarming. If we seed the atmosphere with ice-minus bugs, and they out-compete the currently-prevalent strain, then there will be less crop damage but there will also be less cloud and rain formation. Cloud and rain formation affects climate, climate affects areas of drought and surfeit of rain, which can affect both climate on a large scale, and of course agricultural productivity. And if the patterns are altered the effect could, like the famous Lorenz butterfly effect of chaos theory, proliferate on a much larger scale.

How large? Enough to affect global climate or climate cycles? Could crop patterns already have had some effects on this interacting system, that we never suspected? Could this go back into history? Could global warming or other recent climate changes be due not to greenhouse gases alone but to agricultural patterns or technology in some significant way? These effects could, of course, turn out to be trivial, "a lot o' nowt wi' no clout" as Eliza Doolittle might say. It could even turn out that 'ice-minus' bacteria really do have a good effect without the bad. On the other hand....

The fact that bacteria are frequently the nuclei around which ice crystals form, and hence a source of clouds and precipitation, illustrates the concept of 'dark matter', which first arose in physics and refers to things not suspected by science until in some way (usually by chance) they are discovered. Before that, we have our theories, which are shoes into which science tries to force the feet of the data we already know about, sometimes causing serious bunions. The lessons of the history of science are a repeated warning to us. We can't develop theories resting on evidence we don't know about, of course, and not everything we discover turns out to be important. But we can't be too sure of ourselves.

Like Eliza Doolittle, we can't win a prized companion if we can't speak the right language. Her pompous tutor Henry Higgins was cock-sure of what that right language was, but the outcome was ambiguous--as it often is in science, where the right language may always be changing, and we need to look constantly to Nature to learn what that is.

Wednesday, October 28, 2009

And then there's phlebotomy -- and GWAS!

Our last post dealt with the way we cling to unproven, often unprovable, and sometimes downright already-disproven hypotheses. Perhaps you've gotten the "Flu and the ONION" email that promises that you'll not get the flu if you put onions in your socks. We mentioned a few reasons that people keep believing things like this, in spite of the less than convincing evidence. From aquatic apes to Big Foot, these are a particular kind of it's-always-over-the-next-hill hypothesis. Given the huge territory in which he's said to have been sighted, it's dashed hard to prove that Big Foot doesn't exist!

And then there's phlebotomy: therapeutic bleeding, on the theory that it will cure almost any disease. That's because disease was held to represent an imbalance among vital factors (such as the four 'humours' of classic Galenic medicine). Such theories hang on for centuries (now, with a faster pace to life, decades may be the normal turnover time). Afterwards, such as now, we laugh heartily at the foolish simpletons who actually believed in such things--in the face, we'd say, of the overwhelming evidence that it simply doesn't work, you Dufus! If only they'd looked at the evidence--that is, if they'd taken our idea of 'evidence-based medicine'--they'd readily have seen how foolish their ideas were.

But that's not quite so, and it's not quite fair. Those Dufi (that's the plural of Dufus) had the same IQ points that you and we have. They were absolutely aware of and discussed the evidence. Even Hippocrates, roughly 400 BC, wrote voluminously about evidence!

Then why did they believe in their system? Because, by their lights, it worked! It's all very nice for us to say it was bunkum, but physicians had their theories of why it worked (e.g., the four humours) and, after all, the vast majority of their patients recovered! What better evidence can you ask for? Is that not evidence-based medicine? And, even today a lot of grandmothers survive the flu because onions soak up the germs.

Why the four humours (or onions) seemed like such a plausible explanation, of course, is a combination of the power of a belief system and the kinds of explanations that they had for the failures (some patients did die, after all). For those, post hoc explanations would be offered (too far gone by the time of treatment, body too weakened with mistreatment -- grog or carousing -- to recover, God was calling, etc., etc.). Sometimes, just an honest "I don't know, but the treatment doesn't always work." These are not refutations of the theory, but exceptions. Wisdom and experience (and Galen's texts) were the criteria for interpreting the evidence.

What we have now in cases like this are formal statistical criteria, and a probabilistic view of efficacy. Probabilities leave room for failure that is 'explained' by the fact that the therapy only works with some probability. We often, if not usually, say we don't know why. But our formal statistical tests ask whether the treatment works relative to no treatment, or to some alternative treatment. We ask whether more people recovered than would have by chance if the treatment had no effect, and set some chosen level of statistical significance, often p = 0.05 (a 5% chance of seeing such a result if the treatment really had no effect).
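To make the comparison concrete, here is a minimal sketch of such a test in Python (the counts are invented for illustration, and this is just one standard way of doing it, not a claim about any particular study):

# Did more treated patients recover than we'd expect by chance if the
# treatment had no effect?  A Fisher's exact test on made-up counts.
from scipy.stats import fisher_exact

#            recovered, did not recover
treated   = [42, 8]
untreated = [30, 20]

odds_ratio, p_value = fisher_exact([treated, untreated])

alpha = 0.05  # the conventional significance level discussed above
print(f"p = {p_value:.4f}")
if p_value < alpha:
    print("Reject 'no effect' at the 5% level -- by this criterion, the treatment works.")
else:
    print("No significant difference -- by this criterion, no better than chance.")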

The p-value is subjective, but it gives us at least an accepted, or conventional, criterion for success. We know this approach is not perfect, but it sets a kind of standard that is at least somewhat more objective than 'wisdom'. So if the treatment passes this test, it works. In many ways, it's a formalization of the kind of subjective evaluation by which phlebotomy and the four humours were judged, but being more standardized it does seem to improve the confidence we can have that our theory works.

In fact, there are some aspects of the four humours theory that are right. Things are out of 'balance' when we get sick: we're out of our physiologic equilibrium, or homeostasis. We're just a lot more specific: blood pressure or glucose level may be too high, rather than 'the sanguinous factor is excessive.' Blood-letting in the evening can, in physiological fact, lead to feeling improved the morning after. And there is the psychological lift one gets from being treated and cared for.

But the post hoc justification for what one is doing is not just something in the benighted past. It's part of our own hopeful thinking today. GWAS and Huge Comparison Studies (like biobanks) have been notoriously incomplete, to be kind about it. But many papers pour forth raving about their success. This is about the same as the successes of phlebotomy: whatever works is credited as a 'hit', and the failures are ignored, downplayed, hand-waved, or in other ways dismissed as exceptions that don't undermine the wish.

This similarity applies even though GWAS and biobanks use rigorous (that is, conventionally subjective) statistical criteria for interpreting results. If the achieved p-value is not quite the nominal one, we call the result 'suggestive' and soldier on in the same direction. If it is small and convincing, but accounts for only a small fraction of cases, that's treated as encouraging. We may wish to call it very different from and much better than the bad old days, but the story has closer resemblances than we like to think.

There are many stories of theories believed for reasons of stubbornness or worse. There are explanations of the unobservable past or future, whose plausibility is a matter of our own experience and culture, or a belief system to which we often tend to cling, such as that 'it must be due to natural selection for X', but for which proof is elusive.

And then there's phlebotomy--and GWAS!

Tuesday, October 27, 2009

Is tribalism genetic?

Philosophers have a word for the situation in which the available data aren't sufficient to allow a decision between competing theories -- that is, when two or more theories fit the data equally well (or equally poorly). They call this underdetermination, and it can be applied to many situations. Many theories of causation in epidemiology are underdetermined, for example -- numerous studies support the view that asthma or multiple sclerosis or heart disease have a genetic cause, but there are also numerous studies showing that these diseases have an infectious origin. Global warming is another example -- is it man-made or just natural cycling?

Anthropology seems to be the intellectual home of many theories of uncertain interpretation. Perhaps this is because human behavior and evolution are notoriously difficult to assign 'true' causation to. Are humans innately violent? Are the behavioral differences between men and women culturally determined? Why, if evolutionarily the important things in life are survival and passing on our own genes, do people commit infanticide, or blow themselves up in suicide bombings?

Some theories are not underdetermined, they are just plain wrong, but that doesn't stop them from having believers, even believers who claim that scientific evidence supports the theory. ID adherents, of course, would claim that much of the same data that evolutionary theorists use to support the common origin of all life on Earth instead point to a divine origin.

Anthropology can claim many unsupported theories, such as that humans are genetically programmed to fear snakes, or that West Africans are fast runners because they were cattle thieves millennia ago, and had to be able to run fast to escape with their quarry, but most of these are better placed in the category of evolutionary Just-So stories, rather than underdetermined theories. They might be true, but how would we ever know? Can we rule out all other reasonably plausible explanations? Given the vagaries and weird one-off happenings over large areas and vast numbers of generations, how can we rule out something we haven't thought of?

Some questionable theories can claim actual scientific evidence in their support, including the aquatic ape hypothesis that Holly wrote about last week. A number of such theories continue to have legs in the Anthropology realm, even with overwhelming evidence against them. One is the idea that the closest living relative to humans is the orangutan. This is not only a minority view, but the molecular and morphological evidence in favor of the 'alternative' hypothesis, that chimps are our nearest relatives, is overwhelming and has been for decades. Jon Stewart gave this theory probably as much credibility as it deserves on The Daily Show back in August. Mark Stoneking has a commentary on the orang hypothesis in this month's BioEssays, in which he nicely refutes the 'evidence' as published in a recent peer-reviewed paper and covered in New Scientist. He concludes by saying:
Finally, what are we to make of the fact that a paper whose arguments about the relative value of molecular genetic versus morphological evidence for phylogenetic analyses can be so readily dismissed gets published in a peer-reviewed, scientific journal? An accompanying editorial offers the illuminating insight that the paper "... comments on a topic of such keen general interest and therefore may well gain wide attention in the scientific and popular press." That it did, as the journal's website proudly points to coverage of the paper in The New Scientist. The editors also admit that although the reviewers were not convinced by the paper, nevertheless it "...was felt to be a contribution worth putting out to the test of further scientific scrutiny," even though "...this perspective might superficially appear to be nonsensical to the majority of molecular anthropologists and systematists..." Yes, sometimes the conventional wisdom is overturned, and alternative views do deserve to be heard - but if publication in a peer-reviewed journal is to have any meaning at all, editors and reviewers have a responsibility to ensure that well-established contributory evidence is not dismissed in a superficial way.
A second Anthropological controversy that won't die has to do with interpreting the fossil remains of hominins found not long ago on the Indonesian island of Flores. Named Homo floresiensis, these remains are, on the majority view, those of small-bodied hominins who lived some 18,000 years ago, with various features that identified them to the discoverers as hominins, but perhaps belonging to a separate species that evolved on Flores. Others have argued that the remains don't deserve species status, but instead represent humans who happened to have been microcephalic--that's a harmful disorder, not just a description. Among other reasons to doubt this interpretation, however, are the results of a comparison of these remains with skulls from modern individuals known to have been microcephalic. The fossils do not look diseased to these researchers, and indeed, this 'controversy' is starting to spawn other unlikely disease possibilities. At this point, it is starting to look as though the dissenters simply refuse to be proven wrong.

Why do unlikely theories like these thrive? As Mark Stoneking points out, marketing interests can keep some hypotheses alive that should have died long ago. But so can egos and career interests and so on. Often, which theory one chooses among competitors depends on one's prior beliefs, which can mean that some pieces of information are overlooked in favor of other data that seem more supportive of one's favored theory.

Everybody loves it when the circus comes to town. The clowns. The elephants. The stilt-walkers, and the side-show freaks. Kids of all ages snap up the tickets. Sometimes the 'circus' is the annual anthropology meetings, where a room is packed to the point of people (not monkeys) hanging off the rafters to see the food-fights. Another kind of circus, often, are the popular science magazines and television programs. They do, after all, have to sell ads the way circuses have to sell tickets.

Sometimes, we have no real idea of how to interpret data that would allow us to choose a theory -- we've got thousands of years worth of data on violence in human societies, but just how would we conclusively determine whether the cause is genetic or cultural? We clearly don't know, or we'd have sorted this out long ago. And sometimes we don't know how to ask the question in a way that would get us closer to an answer -- we may be better at understanding the causes of diseases like asthma or heart disease when we're better at understanding complexity.

While most theories in a field like Anthropology don't have much direct impact on how people live their lives, this doesn't prevent people from clinging ferociously to one interpretation or another.

Are we tribal for genetic reasons, or is it cultural?

Monday, October 26, 2009

The genome in three dimensions

We all learn about DNA as a string of letters, of A's, C's, G's, and T's, so it's not surprising that we tend to think of DNA as linear. But inside the cell, where it really matters, DNA is actually wound up into a three-dimensional ball. That this is so has been known for a long time, but little has been known about the organization of that structure. A paper in the Oct 9 Science (Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome, Lieberman-Aiden et al., p 289-293), discussed on the BBC website here, begins to correct this. (The figure to the left is from the paper, via the BBC.)

Using a series of clever molecular techniques to cut and sequence neighboring pieces of DNA, Lieberman-Aiden et al. generated a compendium of interacting bits of DNA. That is, for each part of the genome, they were able to determine its neighbors in three-dimensional space. Among other things, they show that distant DNA sequences interact with and regulate each other in ways that aren't easily envisioned when we conceive of DNA as linear sequence alone. In addition, they determined 'contact probabilities' for parts of the genome as a function of genomic distance (number of basepairs away), finding that intrachromosomal contact probabilities are greater than interchromosomal ones. Further, interchromosomal contact is most likely between small, gene-rich chromosomes.
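As a rough illustration of what a 'contact probability as a function of genomic distance' calculation looks like, here is a minimal Python sketch over a hypothetical binned contact matrix (this is not the authors' actual pipeline, and the matrix below is random stand-in data):

import numpy as np

# 'contacts[i, j]' holds the observed contact count between genomic bins i and j
# on one chromosome; 'bin_size' is the width of each bin in basepairs.
def contact_probability_by_distance(contacts, bin_size):
    n = contacts.shape[0]
    total = contacts.sum()
    distances, probabilities = [], []
    for d in range(1, n):
        pairs = np.diagonal(contacts, offset=d)                   # every bin pair separated by d bins
        probabilities.append(pairs.sum() / (total * len(pairs)))  # average share of all contacts per pair
        distances.append(d * bin_size)
    return np.array(distances), np.array(probabilities)

# Random symmetric matrix standing in for a real contact map
rng = np.random.default_rng(0)
toy = rng.poisson(5, size=(100, 100))
toy = toy + toy.T
dist, prob = contact_probability_by_distance(toy, bin_size=1_000_000)

In real data the resulting curve falls off with genomic distance, and the shape of that fall-off is the kind of thing analyses like the paper's examine.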

The current study is, of course, following up on earlier results of similar investigations, but based on a more powerful molecular method. Conventional wisdom has been that DNA has to be tightly wound in order to fit in the cell -- unwound, it's about 2 meters long, so it has to be compacted somehow to fit into the cell, never mind the nucleus of eukaryotic cells.
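For a sense of the scale involved, a quick back-of-the-envelope calculation (ours, not the paper's) recovers that 2-meter figure:

# Rough arithmetic behind the "2 meters of DNA per cell" figure.
bp_per_diploid_genome = 6.4e9   # about 3.2 billion basepairs per haploid set, two sets per cell
meters_per_bp = 0.34e-9         # roughly 0.34 nanometers of rise per basepair in B-form DNA

dna_length_m = bp_per_diploid_genome * meters_per_bp
nucleus_diameter_m = 6e-6       # a typical nucleus is a few micrometers across

print(f"Total DNA length per cell: ~{dna_length_m:.1f} m")                      # about 2.2 m
print(f"Linear compaction needed: ~{dna_length_m / nucleus_diameter_m:,.0f}x")  # a few hundred thousand fold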

But DNA seems to be packaged in a very orderly way, and the packaging reflects whether or not genes in a particular spot are open for expression. The packaging also seems to be reproducible, if the new paper is correct. And this brings to mind a number of questions, including whether it explains why it has been so difficult to track down regulatory regions for many specific genes. Is the pattern of inter- and intra-chromosomal contact the result of functional constraints, specific sequence-based interactions, or natural selection? Or is it just how DNA winds itself up, given its overall structure? How sensitive is the cell to its packaging? If someone's DNA doesn't roll up right, are they selected against?

If selection is important, there must be many opportunities for--that is, need of--cooperation that enables the DNA to fold up and around itself, as Lieberman-Aiden et al. demonstrate; not only does winding of DNA into this tight ball fit it into the cell, but it also facilitates contact between pieces of chromosomes, enabling cooperative interactions such as gene regulation. If there are functional reasons why these alignments occur, then there would be co-evolutionary constraints that maintain their compatibility--what we refer to in our book as cooperation.

As we emphasize in our writing, cooperation is a fundamental principle of life, and these results reinforce that view. We'll have more to say on this subject after Ken and a colleague give a seminar on the Lieberman-Aiden et al. paper.

-Anne and Ken

Friday, October 23, 2009

In the Beginning. . . . .an age-old story?

What if religious claims about the real world were correct? Well, let's take the Bible as an example. It says a lot of things that those who wrote it could not know from their own direct experience, and it is claimed that their source was direct inspiration or information from God.

If that were true, the facts of the world would jibe with the story. Since the authors of The Book couldn't have known this in any other way, it would reinforce their claim of divine inspiration. Science would have to accommodate. The correspondence of facts to the Story would lead us to believe the source was true, and we'd have to fit our understanding of the material world to the explanations. Like the age of the earth.

In fact Darwin worried about his version of the young earth--not Bishop Ussher's estimated 6000 years, but physicists' and astronomers' estimates of how long an earth-sized molten rock would take to cool down to its currently observed temperature. That might be millions, or even a couple of hundred million, years. But was it enough for evolution to have happened as Darwin thought it had?

How old is old enough?

This is a question that reveals much about the nature of science. If we were to accept a Biblical explanation, the age is what we're told it was in Genesis. We would have to fit every other explanation to that assumed truth. The key word is assumed. Science is empirical, but it is also theoretical because we try to generalize from what we've seen to what we haven't yet seen. We build an axiomatic system in which we take some things as true and use them to explain real world data. Where things don't fit, we either must abandon or modify our axioms, or develop better explanations that are consistent with them.

So, if we accept 'evolution' as true--that is, that life today is the result of a history of descent from common ancestry--then we have to fit the facts to that assumption. What that means is that, however old the earth is, it is old enough! The 'how old is old enough' question is really not a scientific question.

We simply must fit our theoretical understanding of life's diversity, the fossils, radioactive dating, geology, mutation rates and comparative genome structure and variation to our assumption that evolution is a fact. The success of this integration is overwhelmingly in favor of that assumption. But where current ideas are wrong, where things don't seem quite to fit, we simply have to question whether we've really understood our data or measured it accurately, or hunt for some missing or errant aspect of our theory.

If evolution happened, then -- it happened! Our job is not to ask if it could have happened if we assume it happened. It's to understand how it happened.

Sadly, the facts of the world don't fit with Biblical explanations. That's the reason, and not atheistic hostility to believers, that science does not accept the truth of literal Biblical explanations. It is why some scientists would say rather uncharitably that it is ridiculous to cling to such explanations anyway. It's why such persistent beliefs in religious dogma are often sneeringly likened to beliefs in the Loch Ness monster or Santa Claus.

And that's why the question whether complex traits could have evolved without an Intelligent Designer is not a scientific question. If evolution happened, then it must have happened somehow and we have to figure out how. If we have things we can't easily explain, well, that just means we have work to do.

The history of human knowledge shows clearly that things that seemed obvious may turn out to be wrong. Revolutions in scientific thinking do occur, and the discovery of evolution was one of them. But there is nothing in prospect that suggests that evolution itself is on shaky grounds.

Our inability today to explain everything we observe suggests that there could be some phenomenon we've not discovered, or principles we've misunderstood. But a lack of completeness in our understanding is not in itself an argument for some specific alternative explanation such as a Biblical one -- or that humans evolved from aquatic apes. It doesn't justify funding research to prove those claims as plausible alternative hypotheses, because they have already been found wanting. For example, the earth itself tells us that it is not just 6000 years old.

But if there is nothing to shake our assumption that evolution happened, the great amount that we don't yet understand should shake any complacency that we already know it all and should warn us not to become as dogmatic as science's critics are.

Thursday, October 22, 2009

Ignoring the Aquatic Ape Hypothesis

Humbled by the previous post, here's my contribution to this week's discussion on what science is and is not...


After the umpteenth friend sent me the link to Elaine Morgan’s TED Talk about the Aquatic Ape Hypothesis (AAH), I finally watched it today and I thought I'd share a few things that have been on my mind since last April.


That’s when the annual meetings of the American Association of Physical Anthropologists were held in Chicago. As usual, I heard many fascinating talks including one where I was accused of ignoring the Aquatic Ape Hypothesis.


It wasn’t just me. The speaker admonished the entire ballroom for ignoring the “best idea in science,” which explains uniquely human traits by hypothesizing that they evolved during an aquatic phase of our past. The AAH was first described in an article in New Scientist by Alister Hardy in 1960 and has been championed ever since by Elaine Morgan in her books and most recently in her piece in New Scientist.


The speaker explained how years ago he read about the hypothesis, fell in love with the "best idea in science," and then went to graduate school to work on it - not a wise framework for a research talk. You’re not supposed to be outwardly infatuated with your hypothesis. By doing so, the speaker removed a healthy amount of objectivity from his research. The audience was now suspicious, which was a shame for the presenter's sake because this was one of the few times someone had actually attempted to test the AAH. (The study measured locomotor efficiency of people walking in water. Yes, there are theoretical problems with this, but if done from an objective point of view, and with a well-designed study, it could be useful.)


Sure, you’ve got to be dedicated to asking the questions and to finding the best possible evidence to answer those questions. Why else would you toil away? But once you admit your personal bias toward one particular outcome, no one will take your results as seriously as you would like. They may even dismiss them out of hand. They may ask, how can we be sure that this person did not throw out all the data that falsified their idea?


Science can't move forward very well if we have to repeat everything that everyone does. So system checks, like upholding objectivity and enduring peer-review, are built into the scientific process to make sure knowledge advances effectively.


After celebrating a laundry list of scientific progress (e.g. realizing the heliocentric solar system, discovering the DNA molecule, walking on the moon), the speaker expressed his frustration with the paleoanthropological community. With all the time we have spent trying to explain the evolution of bipedalism, he asked us, why haven’t we found the answer to “why”? (Elaine Morgan took the same tack about human "nakedness" in her latest New Scientist piece linked above.)


My answer to his challenge is that it's not our failure, but the constraints on historical sciences. We will probably never know why the first apes walked upright. We will only come closer to knowing as we figure out what anatomical and genetic changes occurred, and in what sequence, during the process. With increasing knowledge about the paleoenvironments in which hominins lived, we will understand better the selective pressures they experienced, but it will be difficult to link these definitively to the origin of bipedalism (especially since it's just a continuum anyway... other primates already use bipedal postures and behaviors so what separates the first bipeds from those other guys isn't a whole lot!).


Paleoanthropology is not rocket science where you can control events in the present or future. We face an entirely different challenge: reconstructing events that happened millions of years ago. Do you think that some creature who lands on the moon millions of years from now and finds the astronaut bootprints and asks "why?" will guess that the answers are "because it was neat and to collect rocks and to put up a flag and to hit a golf ball (miles and miles and miles) and to beat the Russians, etc..."?


The how of moonwalking would be tough to piece together. But the why might be downright impossible to fully understand. And so far according to the evidence we’ve got to work with, the how of bipedalism is coming together but the why is pretty slippery.


Supporters of the AAH, like the presenter at the conference and like Morgan, are fond of claiming that the scientific community’s critical reaction is actually some sort of visceral, emotional, inexplicable, unfounded tantrum.


Here is one way of testing whether the problem that scientists have with the AAH is scientific or just some collective bias. What if somebody presented another idea, not the AAH, the way that the AAH has been presented? What if I presented something like this?


It’s mind boggling that nobody has figured out what the earliest ape looks like. Some of you think Proconsul is one, but others think it’s a monkey. What’s wrong with you? Well, when I read Alan Walker’s interpretation of Proconsul being an ape, it inspired me to apply to graduate school to work with him so that I could prove that Proconsul was an ape. So I did my dissertation on the feet and hindlimbs of some new fossils from Rusinga in order to show everybody what a great idea Alan Walker had about Proconsul being the earliest ape and not a monkey. Here’s what I did. And here are my results. Clearly Proconsul is an ape. Any questions?*


Would you like it if I had made a presentation like that? If your answer is no, then, along with a majority of the audience, some of whom were quite vocal, you probably wouldn’t have liked the AAH presentation that I heard at the conference.


The AAH gets much more play in pop culture than it does in classrooms or professional meetings. The audience’s charmed reaction to Elaine Morgan’s lovable personality during her TED Talk explains much of the AAH appeal.


I don’t intend to write a thorough or even partial critique of the AAH. John Langdon has offered one and there is an enormous website dedicated to critiquing it (both linked in 'Further Reading' below). But there are some points in Morgan’s TED Talk that are worth discussing here. Morgan lists five human traits that, according to comparative anatomy and behavior, support the AAH. (Morgan’s evidence leads each item below; my comments follow it.)


All hairless mammals are aquatic or have aquatic ancestors, except for burrowing animals like the naked mole rat. What about elephants and rhinos, you ask? New discoveries and interpretations of the fossil records for elephants and rhinos support her claim. Morgan lists pigs that wallow as part of her rule linking nakedness and water, but hardly anyone would suppose that selection for wallowing drove fur loss in pigs. And what about hairless cats and dogs? We managed to make them from non-aquatic ancestors. Morgan says that people can always find exceptions to the rules in arguing against the AAH, and I admit that I’m guilty of that. However, making hominins the only aquatic primate is a pretty big exception. So which is it: do we follow general patterns, or do we take exception where it fits our preferred explanation?


Nonhuman primates walk bipedally to cross bodies of water or to feed while standing in water. They also walk or stand bipedally to do many other things on land, including threatening, mating, foraging, practicing vigilance, carrying things, and throwing.


Humans have a fat layer under the skin that is unique among monkeys and apes and is like whale blubber. Some other explanations for our fat layer have to do with feast/famine survival adaptations and with supporting our large expensive brain, particularly during infancy when growth is intense.


Humans can hold their breath on purpose, and the only other animals that do this, she says, are diving animals and diving birds. She links this to speaking ability in humans (not the other animals), and it’s not clear which comes first, speaking or controlled breathing/diving. I’m sure it’s explained on one of the websites.


Humans have a “streamlined” body. Morgan asks us to imagine what a gorilla might look like diving into the water. This is supposed to illustrate how silly that sounds and, thus, how made-for-water we are. No one can deny that this is charming.


Next Morgan says that paleoanthropologist Philip Tobias, science philosopher Daniel Dennett, and naturalist Sir David Attenborough have all “come over” to the AAH. Although I do not doubt her claims, nor know what their positions are, I have noticed a pattern in what I have read on the internet: if anyone acknowledges any one of the AAH arguments, then they can be considered a convert. And when the hypothesis covers the full spectrum of hypothetical ancestors, from fully aquatic all the way down to occasional aquatic forager, and this ancestor could have existed, seemingly, at any point along the hominin lineage, that’s a broad enough net to catch nearly every reasonable person.


This “us” and “them” perspective seems to be what's feeding Morgan’s beef with science - its metamorphosis into what she calls a “priesthood”.


From this perspective, it’s somebody else's fault that a hypothesis is weak, instead of first assuming that the hypothesis itself is flawed. Sure, people are vectors of science, but when a hypothesis doesn’t hold weight, it doesn’t mean then that the “priesthood” is out to get the hypothesis or its creator. If you're reminded of the Intelligent Design movement here, you're not alone.


Maybe someone clever could come along and change the fate of the AAH and Morgan is right to say so. However, in the meantime, a person must participate in the scientific process before they can criticize and analyze the evaluators (who haven’t had much to evaluate with the AAH).


Science is no different from most social animal play: You have to obey the rules to participate in the game. Then, if you don’t provide a feasible test or actually test a hypothesis, well, you’re going to sit the bench for a while. Everyone feels ignored until they score, or at least until they get on base.




*In the feet, Proconsul looks like both monkeys and apes, which is what you might expect to find in something that existed so close to the evolutionary divergence between monkeys and apes.


Thanks to Kevin Stacey for his terrific input.



Further reading


Langdon, John H. (1997). Umbrella hypotheses and parsimony in human evolution: a critique of the Aquatic Ape Hypothesis. Journal of Human Evolution 33: 479-494.


A website dedicated to critiquing the Aquatic Ape: http://www.aquaticape.org/


A website dedicated to supporting the Aquatic Ape (where you can find Hardy’s original paper): http://www.riverapes.com/



Wednesday, October 21, 2009

The 'God' gene, continued

So, let's assume that some lucky scientist actually finds a scientific rather than faith-based proof of the existence of God. Hey, let's go one better and assume that Anne and I are the lucky scientists (we want the Nobel prize money and the appearances on Colbert)! For the moment, let's just say we proved that God is something person-like (whether or not He/She/It has various unmentionable organs, hands to be on the right side of, form to be the image and likeness of, or eyes for us to be pleasing to the sight of).

Now, what would the impact of that be for science?

Of course, the immediate impact would be great relief that we will, after all, have an afterlife, assuming we don't use our science for greedy or harmful purposes (well, that may exclude a lot of us from the pearly gates).

First, the God we prove might act in a deist way: He/She/It created the Universe, gave it some laws to run by, and let it go off on its own without further intervention. This, after all, is a concession that some scientists, even Darwin, have made to religion as a possibility. In that case, the nature of science would not change one iota. We would just, for self-praise, newsiness, and grantsmanship, change our rhetoric to say that we propose this or that study to understand God's laws--think of the 'significance' section of our grants!! But novel? Not a bit of it! That was what the greats like Newton were explicitly doing centuries ago.

Alternatively, we might say that God did create things-as-they-are, and that our job is to characterize the mind of God in that way, but that occasionally God intervenes in mundane affairs. He/She/It might do so in answer to a prayer, though it's hard to see how so puny a thing as a single human could have such influence on the Eternal Omniscient, and specific tests of the efficacy of prayer have bombed completely.

Nonetheless, what that means is that when an experiment to characterize the laws of Nature--the same experiments we do anyhow--didn't work, our explanations would have to include the possibility (besides that our theory was wrong) that God altered the pH of our reactions. But in practice, that would just be another source of occasional measurement or experimental error in our analysis and interpretation.

Ad hoc interventions by the deity would be singleton events, and science usually is not designed to identify cause in those cases, in the same way that one-off lab errors or sample design flaws often can't be identified. That's why we repeat experiments. We already accept that odd occurrences do happen, and our inference about theoretical validity is, and would remain, statistical in the face of occasional divine speed-bumps. After all, intervening in that way by definition means that there are laws of Nature to intervene with. So nothing in routine science would change on that account.

Even if God always intervened in the same way if we repeated the same experiment (the wily imp!), that would be just another kind of special case of, or change in, the law of Nature that we're trying to understand.

In any case, the evidence is already prohibitively strong that the universe is law-like, so that such interventions cannot be too common. That's why literal religion has such difficulty with scientists in the first place! Indeed, some physicists speculate that the laws of Nature are no such thing; they are just generalizations about our particular place in the universe, or our particular universe in the universe of universes (i.e., the ones we could get to if we could sail through black holes).

And what if the proof sees 'God' only, as we've already been warned, through a glass darkly? What if God is not person-like and can't be interpreted as such, or doesn't respond to human events that way? What if He/She/It is something more abstract, like the Oneness of Buddhism, or the varied origin stories of different cultures around the world? Or what if He/She/It is more remote and distanced from us rather than a conductor bothering to steer our puny earth on its daily mundane way? How could even a science that proved He/She/It existed use that knowledge to interpret things any differently than we already do? Again, the world clearly appears law-like, and science is a set of methods to understand those generalities of how-things-are. If science has imperfections in understanding Nature, maybe the fault is our reliance on current methods and concepts, which we inherited, after all, from the Enlightenment a few centuries ago, and which need not be written in stone.

If some clever person suggested a different way to study what we call transcendental or immaterial realities themselves, and showed that your brain really can communicate with He/She/It (or your long-deceased ancestor or pet dog), that would open up a whole new (and extremely exciting) realm for human research. But it wouldn't change how-things-are in the material world that we already know about.

Overall, knowing that God exists might make a big difference to scientists personally, but would not make any major difference to science, except returning its rhetoric to something that goes back to classical times. Just as accepting evolution doesn't make a major difference to creationists, or even many scientists, in their daily work.

These things are at least worth thinking about--they help clarify the true nature of the science-religion debate as a conflict over cultural resources, rather than one about the physical realities of the world.

Tuesday, October 20, 2009

The Greatest scientific discovery of all times (but naming no names!)

We said yesterday both that Creationist literal claims are clearly bunk and that many advocates for teaching evolution in the schools are not all that sophisticated about evolution. In the latter case, and certainly naming no names!, they may have an over-simplified idea of the role and strength of natural selection in evolution, or may not appreciate the subtle relationships between drift and selection, or have little idea about the sophisticated kinds of evidence coming out of molecular biology labs that so beautifully confirm evolution, and so on.

The blunt fact is that some of what we say in evolutionary biology, especially when dumbed down for public or textbook consumption, is also bunk. The proliferation of Just-So stories in evolutionary biology, even at the highest level, is extensive and well-known. So we're told confidently such things as that our body parts evolved 'for' such-and-such a purpose, and our physiological responses were selected 'for' a reason, and we have genes 'for' all of it.

In fact, anything that's here today can be given such explanations with little in the way of constraint much less testability. Nice, tight stories sell big!

Humility should have a deeper role in science and among us members of the Science-tribe. It seems unassailably true that life today is the result of a history. Darwinian selection also seems undeniably important, though there are other equally important evolutionary forces. And, humans really did come from terrestrial ancestors. We might hope to discover something that refutes such ideas, because it would be so energizing and exciting for science and the public alike! Alas, nothing suggests it will happen.

Still, who knows how much we don't know about life? What factors or forces might be as yet undiscovered but have an effect on biological change? What kind of real 'dark matter' is out there, that today is to us what, say, electrons or infra-red radiation were to Aristotle--or real dark matter to astronomers?

When we discuss the evidence for 'evolution', we should be humble enough to acknowledge that that word isn't so clear as it may seem. There are gaps in what we know, gaps in the fossil record, gaps in our theory (e.g., of how to predict phenotypes from complex genotypes, which we assume are causally connected, or how species evolve, or how to know when selection is acting, rather than drift).

We can't let Creationists cow us into asserting more than we know, or simply adopting a party line, as that would be a kind of victory of non-knowledge over knowledge. What sends us to work every day is the pursuit of what we don't yet know. There is likely to be a lot of that out there. It is hard to imagine something that would be so profound as to change our basic view of life as the product of historical evolution, much less anything that could convert us to a theologically literalist view. But we might discover things as new and exciting as the awareness that Darwin and Wallace brought to us about the nature of life. We should remain open to that possibility.

In fact, many scientists are atheists, some (naming no names!) so aggressively so that it is their way of hyping themselves and rolling in the book-sales and TV appearances. Of course, proving a negative is difficult, but science is not likely to prove, using its current set of principles, that God does not exist (even if He was just kidding about a 7-day creation). Many scientists are proud of their atheism, but let's not be hypocrites. And here is a warning to Creationists, who like to trash scientists for their views:

It is hard to imagine a scientist who would not crave being the one who discovered and could truly prove that the source of our hopes and dreams (naming no names!) really does exist. As the scientific discoverer of God, such a scientist would become the most distinguished human being in the history of our species. Darwin would be a puny dustman by comparison. The reason scientists don't take much to religious arguments is not that they'd not like to show them to be true, but simply, and perhaps sadly, because the evidence doesn't suggest that such a thing is even worth pursuing in science.

--Ken & Anne

Monday, October 19, 2009

Recreating Creationism

Every time you turn around there's some new lecture, forum, or working group on teaching evolution in the schools. Audiences are usually packed. These all sound like they are serious, sober academic discussions about policy and the like. But since this is anything but a new phenomenon, and the issues have been aired countless times, and science invariably wins in the courts, why is it still going on?

It is not really that we have to arm or prepare ourselves in any technical sense to present compelling new facts--the evidence for evolution is solid, and has been for a long, long time. The facts are out there, in public and on daily television. Instead, these meetings are a kind of recreation: again and again it's the re-creation of creationism as a sporting event.

And they are generally preaching to the converted. The audience already knows the facts, and likes to cite them aggressively, needing no cue cards to know when to give a derisive snort. In fact, as sincere as they may be, and as much public service as they may be doing, some advocates for teaching evolution don't really understand evolution at a very deep level relative to the knowledge we now have in genetics and population biology.

These meetings are not being held because creationism has amassed legitimate arguments that threaten evolution and require careful rebuttal. Anyone who is even halfway aware of biology and geology knows that literal creationism is simply bunk, and truthfully, most Intelligently Designed Creationists know this too, in their heart of hearts. They know the Devil didn't plant Ardipithecus to mislead us into lives of sin.

But bashing these benighted people has become something of a self-congratulatory blood sport. Indeed, many biology blogs--that get thousands of hits--cover this subject tirelessly, usually with great glee, but rarely is anything new being said. For that reason we try to avoid evolution-creation 'lectures', and don't regularly follow the food-fight blogs. Our open minds are closed to the point that we've never been to an Intelligent Design-sponsored event, and we'd be willing to bet that anyone reading this who has didn't go to honestly weigh the evidence in favor of intelligent design.

Indeed, we all come at this "debate" with our preconceived notions safely intact--and leave with them untouched. See the Lola comic we posted on Saturday. At these events it is repeatedly said that the organizers want 'dialog', not confrontation, but that's insider-code for "we want to educate these poor souls, so they'll convert to our point of view."

This is the 150th anniversary of Darwin's 1859 Origin of Species, but the evolution-creation debate, with all of its current vituperation, goes back well before Darwin. Darwin in fact tapped into discussions that had been fueled (in the UK) by an 1844 book called Vestiges of the Natural History of Creation, by an amateur naturalist named Robert Chambers, and as Darwin acknowledged in his Preface, it made the environment more receptive to his own book. And even Chambers basically rested on already known facts and ideas.

People often cite figures about the large proportion of the US population that doesn't 'believe in' evolution as evidence that science education is failing in this country. But, this is not about educating the uneducated. If our kids, or your kids, were taught ID in school, we, and you, would unteach what they'd learned as soon as they got home. Symmetrically, when evolution is taught, this is what fundamentalists do when their kids get home. Every child is largely home-schooled, or at least home-acculturated--in spite of the number of hours they spend in class.

And even if science education demonstrably is lacking, and should be seriously upgraded, the divide is not about the facts anyway. Most creationists readily accept that animal breeds can be molded by artificial selection, that bacteria can evolve antibiotic resistance, and so on--it's not about whether evolution per se can happen. It's partly at least about not being able to accept that humans aren't above Nature, and weren't landed intact on Earth by a Creator.

But, really, how much difference does it make? Theodosius Dobzhansky famously said, in the context of the school debate in 1973, that nothing in biology makes sense except in the light of evolution. This statement is cited almost religiously by evolution proponents. But it's not really true--much of biology can be done very productively with no reference to evolution at all. Many life scientists, especially for example in biomedical research, have little more than a cardboard cutout understanding of evolution and do their work perfectly, or imperfectly, well without it.

And whether or not creationists accept the truth of evolution is not an issue with many practical implications to speak of--it's not like not accepting vaccinations or the importance of clean water.

We said above that this was only 'partly' about the fact of human origins. No matter how manifestly uneducated the uneducated truly are in this area, this is a clash of world views, a conflict over cultural and even economic power, a battle against fear (of death), for feelings of belonging to a comforting tribe, and so on. And the often vehement intolerance of biologists, equally convinced about their own tribal validity, is of a similar nature.

Perhaps it's easy for us to be cavalier about this because science always wins in court in the long run. Indeed, we followed the 2005 Dover trial here in Pennsylvania with interest, and we, like others, found Judge Jones's decision to be brilliantly argued; we routinely assign it to our students, generally when we also teach about the somewhat symmetric misconceptions surrounding the 1925 Tennessee Scopes trial (see Ken's article in his Crotchets & Quiddities column: you may be surprised about that trial!).

Many if not most senior evolutionary biologists alive today were trained with evolution-free textbooks in their high school biology, in public schools in which they (we) recited a morning prayer every school day from grades 1 through 12. Evolution had deliberately been purged by the textbook publishers to prevent loss of sales to states in certain regions of our country. Lack of early evolutionary training clearly doesn't hamper later learning. And early evolutionary training manifestly doesn't guarantee understanding. Other factors are involved.

If the question is "Why can't they be like us???", the answer is unclear. Maybe it's a selfish one: let those in the Creationist world have their schools as they want them. That will mean those with correct knowledge, your children and ours, will face fewer competitors for the desirable jobs in science and other areas where properly educated people are required.

At least, those who can't stay away from the Creationist food fight should recognize that they are largely exercising their egos--recreating Creationism over and over, just to knock it down again and again. It's their tribal totem-dance, war paint and all. Serious resolution will come at a much higher price than blogging, no matter how many followers anti-creationist bloggers pull in, and it will come in a form as yet unknown. But that form will almost surely have to involve increased teacher salaries, higher SAT standards for education majors, and teaching certificates granted to those whose major was science rather than education.

-Anne and Ken

Saturday, October 17, 2009

Lola on evolution

The Lola comic strip for Oct 17, 2009: http://news.yahoo.com/comics/lola

(Apparently, now that Jennifer is busy with a whole new set of baby goats, she has time to find comics again! Thank you, Jennifer!)

Friday, October 16, 2009

I'll get right on it...as soon as I've finished my grant application

Ken writes:
I was about to work on a second installment of a response to Peter Lawrence's very clear commentary on the state of the grant system in science, at least in some countries like the UK and US. But I had to run off to hear a seminar. When I got there, I learned that the seminar had been canceled at the last minute. When I asked why, the organizer told me it was because the speaker had to get a grant application finished before a deadline.

This is in every way typical, a daily kind of event. It used to be said, with some truth, that when two people (well, two men) got together, it didn't take long for their discussion to turn to sex. Such a satisfying Darwinian fitness-related notion is, sadly, no longer true. Now, the subject quickly turns to grants. Moaning, or bragging (indirectly, and without seeming to, of course) about funding.

We wanted to write a second post on this, to discuss possible solutions, but, as we said yesterday, as long as the system is based on fiscal competition and exponential growth, there is no real fix beyond some kind of economic collapse--and that would be more hurtful than what we already have.

The exponential growth required to sustain the system can't continue. We simply cannot go on each of us training a steady stream of graduate students--and being pressured for our own career's sake, our status, and, yes, our further funding, to keep doing so. As long as 'more' = 'better', the system will not change. Yet, understandably, administrators are rewarded by a system that endlessly seeks 'more'.

Even the idea that we'll produce many PhDs, of whom but a few succeed, doesn't fly. That might be a stable, if harsh, system, but we are driven to increase our funding level per capita as well, and that too requires exponential growth.
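To make the arithmetic concrete (a back-of-the-envelope sketch, with made-up illustrative numbers, not figures from any actual funding agency): suppose each funded investigator trains n students over a career who themselves go on to seek funding, and that per-capita grant support is expected to rise by a factor r each academic 'generation'. Then after g generations the total funding required is roughly

$$ \mathrm{Budget}_g \approx \mathrm{Budget}_0 \times (n\,r)^g . $$

Even with modest numbers--say n = 3 and r barely above 1--the required budget more than triples every generation, which is the sense in which the growth simply cannot continue.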

At some point the paying and investing public may tire of exaggerated promises and decide something else is worthier of their investment. We can think of some candidates--energy, global climate, combating infectious disease--that would arguably pay off much better for society.

If the health care bill becomes too large, or we finally accept that genetic variation is not the cause of all human ills, then money may be shifted to better uses. But that will not be good for the science system, and in particular for research that is not about, or that is skewed to seem to be about, health.

The shorter applications, reliance on track record rather than on descriptions of proposed work, longer funding periods, and the like that Peter suggests sound great, and should be great. But they are at odds with anthropological facts about human societies and how they work.

We think that the only self-imposed, as opposed to catastrophe-imposed, solutions involve one thing above all: system-wide self-restraint. There would need to be a cap on how many applications, and how much funding, any one person could have, and some way to prevent this being gamed by various kinds of collaborative groups. Likewise, research institutions (departments, universities, institutes) would have to be capped. Projects would have to have duration limits.

Investigators would have to have funding based on real accountability--achievement in relation to what is important (not just paper or dollar counts), and success in delivering what they promised the benefits of their research would be. Program bureaucrats would have to show real societal benefit from the projects they sponsor, not just flurries of statistical obfuscation, or their portfolios would be reduced.

But nothing like this will happen. Those in power are certainly as entrenched as, say, the health insurance or investment banking industries are in the US today. Radical change is not on. Follow the money: who benefits the most from the system the way it is? Universities and government bureaucrats. Scientists are lured into the system with the carrots of promotion, tenure, and raises if they bring in funding, and certainly these are real benefits; but what scientists do for their university administrators (in the form of overhead money) and for the bureaucrats at the granting agencies (in the form of portfolios) is much more important in maintaining the system as it is.

We think that's too bad, but we also believe that changing the system is just not in our current cultural makeup. Of course, imperfect as it is relative to the above kinds of ideals (or pick your own set), life does go on, science does progress, and inequity isn't exactly novel to human affairs.

More broadly speaking, science is thriving. More scientists are doing more projects than ever before in history, by far. In genetics, at least, knowledge is accumulating at break-neck speed. There have been some important new findings in genetics, though it must be said that our basic 'paradigm' is not really different from what it was decades ago. But overall, in these senses, we're in a boom time. Opportunities for women and minorities are of course much better than, say, 50 years ago. More universities than just the traditional elites have first-line faculty members doing research projects.

Every generation complains that things were better in the 'good old days'. But that means that in the good old days, people were complaining about the imperfections of still earlier eras. In a sense, the current complaints, though correct, just happen to be this era's imperfections.

Still, while science is booming, the complaints we have discussed, and that Peter Lawrence describes, are real, and palpable. Just because every system is imperfect does not mean we cannot and should not try to reduce the imperfections of our own time.

Finally, I repeat, there are many honorable ways to have a life in science besides working in a high-octane research factory. Teaching affects more people by far than most research, and there are of course many, many jobs teaching science.

Thursday, October 15, 2009

What I already did or won't actually do but will promise....if only you'll gimme a grant

The problem
An important commentary appeared in the September PLoS Biology, though we have only just stumbled across it. It has already been viewed close to 30,000 times but seems to have generated little discussion, either on the PLoS website or in the blogosphere. Indeed, we wonder why so few have wanted to comment, since the article points to serious problems that we all know about, and describes them accurately. We think the subject deserves more attention.

In his paper, with a title that says it all--"Real Lives and White Lies in the Funding of Scientific Research: the Granting System Turns Young Scientists into Bureaucrats and then Betrays Them"--Peter Lawrence (Department of Zoology, University of Cambridge, and Medical Research Council Laboratory of Molecular Biology, Cambridge, United Kingdom) makes a strong case for the need for drastic change in the way science is funded. Lawrence is a highly regarded developmental biologist, with a long and distinguished research record on patterning in general and the genetics of development of the fruit fly in particular. He has also confronted problems in the politics of science head on, many times. He has no qualms about telling it as he sees it; his is a very welcome and needed voice. And since he has not been a research failure, his views can be taken seriously: they are not just sour grapes.

In the paper, he argues that the status quo is not good for young scientists. The incessant need to apply for research money takes far too much time, encourages conventional thinking, and even encourages lying. As he says,
To expect a young scientist to recruit and train students and postdocs as well as producing and publishing new and original work within two years (in order to fuel the next grant application) is preposterous. It is neither right nor sensible to ask scientists to become astrologists and predict precisely the path their research will follow—and then to judge them on how persuasively they can put over this fiction. It takes far too long to write a grant because the requirements are so complex and demanding. Applications have become so detailed and so technical that trying to select the best proposals has become a dark art. For postdoctoral fellowships, there are so many arcane and restrictive rules that applicants frequently find themselves to be of the wrong nationality, in the wrong lab, too young, or too old.
And, he tells his own story:
After more than 40 years of full-time research in developmental biology and genetics, I wrote my first grant and showed it to those experienced in grantsmanship. They advised me my application would not succeed. I had explained that we didn't know what experiments might deliver, and had acknowledged the technical problems that beset research and the possibility that competitors might solve problems before we did. My advisors said these admissions made the project look precarious and would sink the application. I was counselled to produce a detailed, but straightforward, program that seemed realistic—no matter if it were science fiction. I had not mentioned any direct application of our work: we were told a plausible application should be found or created. I was also advised not to put our very best ideas into the application as it would be seen by competitors—it would be safer to keep those ideas secret.
The implications
This will resonate with anyone who has written a grant proposal--or taken a class on 'grantsmanship'. The process is less about good ideas than about gaming the system, and this takes more and more time. Science, we are proud to boast publicly, rests on truth and trust--that's why plagiarism or fudged experiments are treated so harshly.

But what about the routine kinds of dishonesty that our system fosters? People rarely admit it publicly, but it is absolutely routinely acknowledged in private that, along with the dissembling Peter describes, proposals are submitted for large amounts of funds for work that has already been done, or that the investigator knows won't deliver what is promised (and, often, simultaneously hyped in the public media). What about dissembling by the manipulation of data to present it in a technically honest way that nonetheless biases the impression of the importance of the data and of the authors' conclusions?

These issues are but the tip of a potentially destructive iceberg. The current system builds big empires with large, long-term, and hence unstoppable, entrenched budgets. The system encourages large teams of workers, hires large numbers of post-docs as its cogs, and actively lobbies publicly and privately to secure its funding base. Peter discusses how large groups can cover for the low yield of many of their members.

Now, there's nothing wrong with wanting funds for your research. But when fund-seeking becomes institutionalized, is based heavily on competition, and careers (and university budgets) depend on a steady stream of grants, the pressures unite in the direction of grantsmanship: gaming the system first to secure funding and then, if there's time, doing something innovative. And when you're preoccupied with securing the funds, there's much less time, or even incentive, to think about the real science questions.

Fundability often means safety, and that means predictability, which in turn often means incremental rather than major advances. Research in some areas of biology is very expensive, to be sure, and new technologies definitely do help reveal facts we could not otherwise obtain. But sequestering resources in a few hands, or for a few technologies, deprives other avenues of resources. The safety-first system encourages (forces?) most investigators to use the big technologies, for various understandable reasons: being fashionable, hiding a lack of ideas behind the predictability of at least some descriptive results whenever new technology is applied, and equating large scale with importance. This systematically rewards such investigators, and of course pleases their Deans, who get the overhead.

To be sure, a lot of good science is being done! No system can guarantee that more than a fraction of science will have lasting value. Most papers are hardly ever cited other than by their own authors, and the shelf-life of most research in these overheated days is very short. The distribution of quality has probably always been skewed towards a majority of trivia. And it is reasonable that we have some ways to weed out sluggish or useless yet costly research, especially when there are more claimants than funds--an excess that is itself one direct consequence of the system we have now.

But when the rewards of successful grantsmanship are great, they lead to manipulation of the system, as we have seen over the past few decades, and everyone's research becomes "paradigm shifting" on the proverbial cutting edge.

Peer review, designed as one means of reducing the clubbiness of the Old Boy system, has done some of that, but people are hierarchy builders and long ago figured out how to build new kinds of Old Boy systems. Bureaucrats, too, want their portfolios of funded clients, as that helps them (the bureaucrats) build their own careers. But portfolios are jeopardized if there isn't continuity, so investigators already in a portfolio have relatively clear paths to continued funding...whether or not they have generated really solid ideas.

We also have a crazily proliferating number of journals, and they allow piles of 'supplemental information' (often sloppily written) to be included. This leads to a tsunami of content that we simply can't keep up with, even within specialty areas--we've commented on this before on this blog. And the pressures militate against teaching, because the rewards are for funded research. Scientists aren't stupid: we go where the rewards are!

The solution?
We undoubtedly must live with inefficiency in science. If you're really exploring the unknown, you can't know what you'll learn, and most ideas don't pan out. But a kind of evolutionary ecology perspective is worth taking: an ecosystem tends to be most robust to environmental change when it is most diverse. A larger number of funded scientists, perhaps each with less funding, with high-end resources housed in shared technology service centers, could foster a higher probability and faster flow of really new ideas.

The problems are deeper and more subtle even than all of this, though. It's become a positive-feedback system. Status depends on having students, and the more the merrier. Rather than being sane, and mutually paring back to, say, generational replacement levels in which we each train only one or two students in our careers, the system encourages us to take more graduate students, to help us write more papers and get more grants, and then to do the work on the grants. That means more competition for fewer jobs and grants. We can't taper back because everyone would have to agree to that, and we're not in an altruistic mode these days. So, the squeeze is on--mainly on the young aspirants to science careers. Our university, like your university, wants more!

There is no easy solution for a positive-feedback system, particularly because universities have become so fundamentally dependent on overhead money from grants. Professor Lawrence proposes shortening grant applications. But daily life out here in the field immediately suggests that such changes simply encourage many more applications per person, since each one is less work and they can be parceled, packaged, slightly modified, and so on, in many ways. If anything, the overall probability of funding will probably go down. The US Stimulus grants showed this: because the applications were relatively easy, some 20,000 or so were submitted for what was expected to be around 200-300 awards--a success rate on the order of 1%. And the administrative overhead of preparing and processing applications can't change much per application, so in total it will go up and up and up, eating further into the useful amount of funds.

One can say that this is a harsh system but that, like natural selection, it screens out the worst and favors the best. That's true certainly to some extent. But who says that human life must be made harsh, to feed the self-interest of a few?

The burden will fall on the new people who are entering, or hope to enter, the fields of science. We owe it to them to resist a system that systematically grinds the spirit, or even the careers, out of so many.

We may all have dreams, and may all seek dream jobs. But not every dream is fulfilled, and not every dream job turns out to be a dream. Some, even as students, look at their research professors' lives and say "not for me!" Others are lured by the status system into high-pressure, grant-dependent careers that turn out to be relentlessly tense, by which time it may be too late to taper back and find a less intense job.

But the system as it now exists is structured to make you feel like a failure if you don't have grants, don't publish frequently in 'High Impact' journals, or -- heaven forbid -- like to teach! Nobody should feel disappointed, disillusioned, or like a failure because they didn't live up to somebody else's -- the System's -- notion of success, a notion that is in its interest but perhaps not yours. But it's very hard to resist the allure of illusion.

Beyond shorter grant applications, what other solutions does Lawrence propose? Smaller, less costly labs; longer-lasting grants (5 years minimum); the option of being judged on past research rather than on future plans; less reliance on citation counts. Others have proposed more radical changes, such as giving all researchers a research allowance that is automatically renewed for those doing good work -- the obvious problem being that this is readily gameable too.

These issues are deep and many, and need to be discussed. The system needs to reward good science again, and young researchers need to be able to expect to retain their love of science long into their careers.

Peter Lawrence lays out the problems in an honest and straightforward way. If you're a scientist, particularly if you are just embarking on your career, you owe it to yourself to read, and talk about, his paper. We'll write more about this in our next post, but meanwhile, your thoughts and comments are most welcome.

-Anne and Ken