Thursday, June 30, 2011

More hype. The microbiome. Sigh.

For the last two decades, the answer to why we get sick, and how to prevent it, was said to lie in the genome.  The answer may have been elusive, always just out of our grasp, but once we had the whole genome sequence, or understood what non-coding DNA did, or had catalogued all common variants, or when we discovered variable copy number or epigenetic mechanisms like methylation, or had catalogued all rare variants, we'd have the answer.  And the hundreds of millions spent on the search would be justified, and we'd all live forever -- even if the ethical and societal implications of that consequence were never thoroughly explored.

Some people understood pretty quickly after the human genome sequence was announced in 2003 that the promised answers were unlikely to be there, and so they broadened the search.  Maybe we'd be done if we understood the function of gene networks or everything in the cell or if we thought in terms like systems biology -- or any omic you cared to come up with, from the nutriome to the connectome to the microbiome.

And that microbiome is an interesting case: its cataloging is now well under way.  The BBC World Service radio program, Discovery, explores the issue this week.

The Human Microbiome Project webpage explains the project's mission:

  • This initiative will begin with the sequencing of up to 600 genomes from both cultured and uncultured bacteria, plus several non-bacterial microbes. Combined with existing and other currently planned efforts, the total reference collection should reach 1000 genomes.
  • The initiative will continue with metagenomic analysis to characterize the complexity of microbial communities at individual body sites, and to determine whether there is a core microbiome at each site. Pilot studies will implement shallow and then deep 16S rRNA sequencing, progressing into deep metagenomic sequencing. Several body sites will be studied, including the gastrointestinal and female urogenital tracts, oral cavity, naso-pharyngeal tract, and skin.

The second stage of the project will explore the relationship between the microbiome and disease, and at the risk of sounding like a broken record, we have to say that the leaders of the project interviewed by the BBC sound a whole lot like the leaders of the Human Genome Project did at its inception -- microbes are likely to explain everything, from heart disease to asthma to cancers and autism, arthritis, Alzheimer's and much more, even aggression (we wrote about that particular issue here, and here's a list of the demonstration projects already funded and underway).  And once we've got the microbiome from those 5 different body sites sequenced, we'll understand how.  And we'll be able to prevent the 'microbiome imbalance' that makes us sick.  Or, presumably, aggressive, or whatever other undesirable behavior that imbalance might cause.  (Hmmm, that sounds like Galenic medicine, in which you are well unless your humours are out of balance--only now it's your microbiomic contents!)


Yes, of course some diseases are caused by single genes, and some diseases are caused by microbes, in any sense of the term, and more are sure to be found.  And those can be legitimate targets of technical medicine (or, we note, of non-technical environmental or lifestyle modifications).  The problem is the way the hype machine is yet again rolled out to generate the promises that have gone into getting this project funded, and keeping it going.  Microbes are as unlikely to explain all disease -- and unwanted behavior -- as genes are, but the promises must be made so the money can be gotten and the work can go on.

To see that this is not (just) another rant on our part, consider that the microbes that matter are those that interact directly or indirectly with our cells.  Pure fellow travelers neither help nor harm us.  That means that their genomes are in many ways our genomes as well (and vice versa).  We know empirically that our phenotypes are by and large complex--and that holds whether the contributing genes are in our cells or our microbiome's.  The major thing that can happen with the latter is that a nasty microbe can reproduce (much like a cancer precursor cell) and have a major effect.  But for that same reason, we already know the ones that do that (bad E. coli strains, cytomegalovirus, etc.).  We have active, successful, major research programs to track them, and to study their pathogenicity and their evolution relative to us as hosts.  There are real battles ahead in this area, and regular science will work--if we're lucky--in this context.  Otherwise, just as with GWAS, what we're promising to find is mainly likely to be the shifty complex of minor contributors.

We have allowed a Battle of the Omics to be waged by almost unrestricted acceptance of each omics claim.  Bloating and puffery are the current way of life.  Of course, the dust will eventually settle, but nobody will know what better might have been done with the money than waging this kind of unrestricted warfare.

Wednesday, June 29, 2011

How to turn the interesting into the stupid: a magnetic field sensing gene?

How migrating and other species navigate or orient themselves in the world is a fascinating subject.  There are all sorts of potential cues that birds, fish, bees, and butterflies might use.  They include the elevation or declination of the sun, odors, scent-imprinted or visually memorized trails, visual monitoring of celestial position (stars, moon, etc.), and gravity.  And there have long been studies and speculations about whether magnetic navigation, using the Earth's magnetic field, might explain some migratory patterns.

Interesting work is reported in a NY Times story about this.
Many animals rely on the magnetic field for navigation, and researchers have often wondered if people, too, might be able to detect the field; that might explain how Polynesian navigators can make 3,000-mile journeys under starless skies. But after years of inconclusive experiments, interest in people’s possible magnetic sense has waned.
That may change after an experiment being reported last week by Steven M. Reppert, a neurobiologist at the University of Massachusetts Medical School, and his colleagues Lauren E. Foley and Robert J. Gegear. They have been studying cryptochromes, light-sensitive proteins that help regulate the daily rhythm of the body’s cells, and how they help set the sun compass by which monarchs navigate.
Because the DNA sequence of one of the butterfly's cryptochrome genes is similar to that of a human gene (CRY), they transplanted the human gene into fruit flies and reported their results.

[W]e show using a transgenic approach, that human CRY can function as a magnetosensor in the magnetoreception system of Drosophila and that it does so in a light-dependent manner. Thus, human CRY has the molecular capability to function as a light-sensitive magnetosensor, and this finding may lead to a renewed interest in human magnetoreception. 
Whether it does or not, however, is still under contention.  But as is so typical, the authors and reporter BS-up their story with claims that this may explain how humans find their way.  For example (of course), how those Polynesian sailors did it (we think that has clearly been demonstrated in terms of celestial navigation, and isn't a real question any more, but maybe we've misremembered).  

But it is simply preposterous, we think, to give this the usual hyperbolic spin.  Humans move too slowly for much of this to make sense, even if the gene does code for a useful magnetosensor.  As with any other 'chaotic' process, if our direction were even slightly off, we'd be wildly off over any distance--as off as if we had no magnetic sense.  Going very long distances would not be practical with this sense, and we know from centuries of direct evidence that sailors got lost and drowned because, without real instruments (even with the stars available), such navigation wasn't possible.  No fitness advantage there, til the astrolabe-gene came along.  Did the magneto-gene evolve in a very non-Darwinian way, millions of years ago, to lie in wait til the Age of Sail?

Our own personal (lack of) sense of direction (one of us could get lost just going out to pick up the paper) is also clear proof of the Bollocks Principle: don't believe what you read about science without first thinking seriously about it.

Tuesday, June 28, 2011

To eat or not to eat, that is the question: was Malthus right?

The subject of the most recent BBC 4 radio program, In Our Time, is Thomas Malthus and his ideas on population.  These ideas strongly influenced Charles Darwin as he developed his theory of evolution by natural selection.

Thomas Malthus
Malthus (1766 - 1834) was an English preacher and scholar.  His most influential contribution was his book written in response to the ideas of utopianists like William Godwin, an anarchist who believed in the 'perfectibility of society' -- the idea that society's potential was essentially limitless: women would eventually be equal to men, and aristocracy would disappear along with the monarchy, indeed any form of government.  Once the evils of society were gone, men would no longer be evil, as it was social pressures that made them so.  Godwin was one of many Europeans arguing in this way, which was quite threatening to monarchies (and look what happened in France!!). (Godwin was, incidentally, married to Mary Wollstonecraft and the father of Mary Godwin Shelley, who wrote Frankenstein.)

Malthus argued, in contrast, that it was a law of nature that society could never improve because when times were good, population would rise only to be checked by famine, war and disease.  As he wrote in his famous book, An Essay on the Principle of Population, 
Assuming then my postulate as granted, I say, that the power of population is indefinitely greater than the power in the earth to produce subsistence for man.
Population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetical ratio. A slight acquaintance with numbers will shew the immensity of the first power in comparison of the second.
By that law of our nature which makes food necessary to the life of man, the effects of these two unequal powers must be kept equal.
This implies a strong and constantly operating check on population from the difficulty of subsistence. This difficulty must fall somewhere and must necessarily be severely felt by a large portion of mankind.
William Godwin
There was no getting ahead, and poverty was inevitable and constant. But Malthus's argument was not a generic one.  It was an explicit ad hominem attack on Godwin's ideas, and his powers of reasoning.  For example (one of many), in chapter 10 of the Essay, which is devoted to a critique of Godwin, Malthus says,
In reading Mr Godwin's ingenious and able work on Political Justice, it is impossible not to be struck with the spirit and energy of his style, the force and precision of some of his reasonings, the ardent tone of his thoughts, and particularly with that impressive earnestness of manner which gives an air of truth to the whole. At the same time, it must be confessed that he has not proceeded in his inquiries with the caution that sound philosophy seems to require. His conclusions are often unwarranted by his premises. He fails sometimes in removing the objections which he himself brings forward. He relies too much on general and abstract propositions which will not admit of application. And his conjectures certainly far outstrip the modesty of nature.
Godwin rose to the bait, much against his better judgement, and wrote a rebuttal to Malthus's treatise.  He didn't buy the idea that population rose geometrically, and he demanded proof.

And Godwin was right.  Indeed, as discussed by the experts on In Our Time, Malthus basically made up the idea of geometrically increasing population vs arithmetically increasing food supplies out of whole cloth.  The idea was that population doubles every generation because the number of offspring in the next generation is proportional to the number of parents in this generation: each parent more than reproduces him/herself.  But agricultural output on a fixed amount of acreage increases only gradually--even if by some amount per year, it's not as fast as population growth (Malthus said).  The argument was bolstered by little more than vague data and lots of hand-waving.
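
To make the contrast concrete, here is a minimal toy sketch of Malthus's claim (the doubling rate, the fixed food increment, and the starting values are his assertions and our arbitrary choices, not data -- he offered none):

    # Malthus's claim in toy form: population grows geometrically (doubling
    # each generation) while food grows arithmetically (a fixed increment).
    # The rates and starting values are illustrative assumptions, not data.
    def malthus_toy(generations=10, pop=1.0, food=1.0, food_step=1.0):
        for g in range(generations + 1):
            print(f"generation {g:2d}: population {pop:6.0f}  food {food:4.0f}")
            pop *= 2           # geometric: proportional to current numbers
            food += food_step  # arithmetic: same fixed gain each generation

    malthus_toy()  # by generation 10: population 1024 units, food 11

The gap explodes within a few generations -- which is precisely why the assertion is so rhetorically powerful, and why it so badly needed the evidence Godwin demanded.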

Yet from then to now, through his inspiration of Darwin and Wallace as a justification for natural selection, the claim is still believed.  Even in Malthus's own time, population was not in excess of resources.

This shows how close examination can reveal things that casual acceptance misses: we ourselves, as evolutionary biologists, have read, re-read, and marked-up Malthus, and read Darwin's and Wallace's comments about Malthus, yet were so blinded that we did not question the evidence for the differential growth assertions.

Overpopulation is a reality, but many specialists debate what, when, where, and how that is the case, and not everyone today accepts Malthus' basic tenets.  Indeed, pro-business interests say that industry can feed everyone and it's just politics that prevents the distribution.

Darwin and others extended--on their own initiative, and not really taking it from Malthus--the idea that the struggle was not just the population against the environment, but among members of the population because of the environmental constraint.  Again, this is a whole-cloth extension, basically.  Indeed, Wallace was more about populations (species) against the environment than about mano a mano competition.

Only if it were a true Law of Nature, and not something that just happened now and then, would overpopulation force natural selection.  It is manifestly and obviously true that in almost all circumstances the vast, vast majority of individuals are not on the very edge of survival, scrawny, scrambling for the last scrap so they can screw one more time and perpetuate their genotypes.  There are competitive elements in Nature, to be sure, but they are much less law-like and ubiquitous, and there are other ways than just selection for evolution to occur.

The canonization of Malthus, who was largely defending the privileged monarchy against Utopian idealists, shows how an idea that is easy, simplistic and appealing--even if badly flawed at the time--can come to dominate and drive even an important area of scientific theory, inhabited by people as intelligent and well-trained as there are.  It isn't the first time in the history of science that poorly supported theory has survived (Galen, phlebotomy, celestial spheres, Genesis).  It's more the rule than the exception.  So why, then, are we so convinced by this one?

This does not imply that population can't or doesn't often press up against carrying capacity, nor that this cannot sometimes induce competition.  But it does imply that such is not a fundamental law of life, and it forces us--if we recognize the truth of it--to think more subtly about the many forms of differential reproduction that as far as we know must be possible for the diversity and adapted nature of life.

And it should force us to be a little more humble about our own wise insights and theories.....

Monday, June 27, 2011

Evolutionary cascade

Extinctions always happen, and many if not most species either disappear entirely or change to become something different.  We say many if not most, because as we've posted before, there is the curious phenomenon of species staying morphologically (as seen in fossils) static for tens or hundreds of millions of years, while nonetheless accumulating genetic distance from related species in amounts corresponding to their fossil-based times.

Humans will become extinct, too, probably to the great relief of whatever else is left.  And that raises today's point.  Normally, extinctions (species lifespans) have a kind of regular, probabilistic distribution, but times of rapid large-scale extinction lead to much less stable change for the survivors.  The advent of humans has led to an acceleration of species changes: the demise of many prey, pathogen, and other species (smallpox, passenger pigeons, buffalo), and the growth of others (cows, chickens, poodles, parakeets).

We know that our impact on the world is non-trivial, but a recent report by the International Programme on the State of the Ocean, reported on the BBC website, suggests that as far as the oceans are concerned, it's much worse than even global-warming catastrophists have feared.  As the report summary says:
The current inadequate approaches to management of activities that impact the ocean have lead to intense multiple stressors acting together in many marine ecosystems. 
The impact of such stressors is often negatively synergistic meaning that the combination of the two magnifies the negative impacts of each one occurring alone. This is already resulting in large-scale changes in the ocean at an increasing rate and in some regions has resulted in ecosystem collapse. The continued expansion in global population exerts ever increasing pressures on scarcer ocean resources and tackling this issue needs to be a part of the solution to current concerns. 
The changes in the ocean that are coming about as a result of human CO2 emissions are perhaps the most significant to the Earth system particularly as they involve many feedbacks that will accelerate climate change. 
The resilience of many marine ecosystems has been eroded as a result of existing stressors, leading to increased vulnerability to climate change impacts and a decreased capacity for recovery. An example is coral reefs, the most biodiverse marine ecosystem and one of the most valuable in socioeconomic terms to humankind.
The depletion of ocean species diversity from pollution, climate change, and over-fishing will be serious, because even if we are not tree-huggers we depend on the ocean for many things--food not being the only one.  Politics is never far away from these kinds of assessments, and the BBC story's use of words like the 'shocking' decline of ocean life ("The findings are shocking," said Alex Rogers, IPSO's scientific director and professor of conservation biology at Oxford University) is an emotive form of lobbying, whether we agree with the writers or not.  If everything announced by scientists were really 'shocking' we would all be wandering through life as stunned zombies.  But let's try to go beyond manipulative rhetoric...

Evolutionary change usually involves proliferating consequences.  This is because ecosystems are built up over eons, as countless species become adapted to food-chains, niche specialization, and so on.  Removing one or another species may have little effect, because someone else already existing will adapt to the vacated niche (or may have been responsible for it having been vacated).  The system is usually robust enough to tolerate such things---because they are always happening and always have been.

But if the system suffers major eco-quakes, the result can be chaos, or oscillations of species relationships that swing way out of control.  Even 'rapid' settling down may take what is, as far as humans are concerned, an effectively infinite amount of time: say, hundreds of thousands of generations.  So this is serious.

Maybe the consequences of current oceanic changes will take generations to materialize, and we can simply enjoy the luxury of not having to worry about it all that much ourselves (although we'd have to do without swordfish or squid or whales).

But if the cascade of implications--one species going, leading to the starving out of other species, and on ad infinitum--is serious, we or our children may live to regret all those international vacations, high thermostat settings, and drives to the mall that neither we, nor the Earth, needed.

Friday, June 24, 2011

The Twilight Zone, Part III: Is there life in space, outer or inner?

We've written a couple of posts (here and here) about the chances that there is life in space, besides ourselves.  We were stimulated to do this by an article in American Scientist that we referred to in those posts.  We said we thought a lot of the astronomical musings in that article were reasonable, and interesting, but that the evolutionary discussion was naive.  We explained why.  But there are some other issues that may or may not have any bearing on the subject, even if they are at least captivating to think about.

Many cosmological arguments say that the universe is effectively infinite, perhaps including other universes through black holes and things of that sort.  Now, there are several levels or degrees of infinity.  The smallest is the number of integers 0,1,2,3.....  But between each pair of integers are infinitely many in-between numbers (the 'real' numbers, which form essentially a continuum).  The number pi, relating diameters to circumferences of circles, for example, is 3.141.... to a never ending sequence of digits.  Now, if the universe is discrete--made up of particles such as atoms, photons, gravitons, electrons, and so on--its infinity might be of the 'smaller' kind (we don't know what astronomers would say about this).  Still, if it's infinite, then so long as there is any non-zero probability of something existing in the vastness of space, it must exist.  And for the same reason, it must exist infinitely many times.

If one were to say that space is huge but not really infinite, then, as we noted in our previous post, to guess at how many other planets had intelligent life on them we'd have to know the probabilities of every attribute that was essential for (any form of) intelligent life.  Whatever these probabilities are, we can estimate from them how likely it is that they all occur for a given space rock.  Suppose each probability is on the order of 1 in a million and there are 10 such vital criteria.  Then the chance of life existing would be 0.000001^10 (a millionth to the 10th power, or 10^-60), an exceedingly small number.  If there were only a finite number of rocks, even if that's a huge number, the tiny probability could mean that life simply is unlikely to exist anywhere.
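
To spell out that arithmetic in a minimal sketch (the one-in-a-million figure, the ten criteria, and the count of candidate rocks are all illustrative assumptions, not estimates):

    # Toy version of the finite-universe argument: the joint probability that
    # all 'vital criteria' hold on one rock, and the expected number of
    # life-bearing rocks.  Every number here is an illustrative assumption.
    p_per_criterion = 1e-6   # assumed chance that any one criterion holds
    n_criteria = 10          # assumed number of independent vital criteria
    n_rocks = 1e24           # assumed number of candidate 'space rocks'

    p_life = p_per_criterion ** n_criteria  # 1e-60, if criteria are independent
    expected_rocks = p_life * n_rocks       # 1e-36: effectively zero

    print(f"P(life on a given rock):     {p_life:.0e}")
    print(f"Expected life-bearing rocks: {expected_rocks:.0e}")

On those made-up numbers the expected count is astronomically below one, which is the whole point: with finitely many rocks, multiplying enough small probabilities drives the expectation to effectively zero.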

Of course, space is not uniform, so multiplying such probabilities as if they had the same meaning everywhere and every-when is undoubtedly not accurate; nor do we have more than crude guesses at what even their average values might be.  But the idea holds that the chance of life depends in some ways on the size of space.  This does not count the chance that we could actually detect that life.  Or that our 'intelligence' and theirs would bear enough resemblance that communicating with each other even made sense.  Or that we could communicate even if we thought in the same way and wanted to, given the vastness of space relative to the speed of communication (the speed of light), or that we'd be contemporary in any relevant way.

So we should recognize as weird and pure Hollywood all this talk about--and investment in--searches for life in space.  A waste of money to treat it as if it's serious science, and much cheaper to do with video graphics in a movie studio.  And, of course, we can't even think seriously about interacting with the universes that might exist on the other 'side' (so to speak) of the navel of black holes.

In this sense, the infinity game is a misleading or even stupid one to engage in.  But, oddly, there is a kind of infinity game that serious scientists do play, about things right here at home.  And many, apparently, take seriously the idea that there are infinitely many universes, almost as similar to, but also as different from, our own in any and every way that one can specify--literally!

This is the conclusion of the Multiverse theory of physical reality.  You can Google this subject or find it on Wikipedia to get a sense of the idea.  It is based on decades of experiments with photons shone through slits in barriers, onto flat detectors, in which single photons generate wavelike interference patterns in the detectors.  If a single photon is a discrete unit, the only way it can generate such patterns is if it is interacting with other particles that behave just like photons, but which we don't detect in any other direct way--because our experiment only released one actual photon, not a sea of them.

The idea, incredible or not, is that these shadow photons are members of parallel universes that in this case are almost identical to our own.  We release a photon and they're releasing shadow photons at the same time!  The wavelike pattern reflects the different interacting universes of which there must be an infinite number since the waves of interference are essentially continuous.

Advocates of the Multiverse theory, who are apparently mainly sane and taken seriously and considered mainstream physicists, argue that probability is itself a misleading concept.  That is, if you flip a coin, there are infinitely many coin-flips occurring in the Multiverses, and in half of them the coin will come up Heads.  It is this, rather than any true probability, that gives us the notion that things are actually random.

This affects the space-infinity game, because the infinity game in the Multiverse sense says essentially that every possible universe, including others only infinitesimally different from our own, actually exists.  We don't (yet?) know how to communicate with them by exchanging telegrams, but physicists studying 'quantum computing' seem to be doing that in some ways (that we don't understand).

If the Multiverse view is correct, searches for life way out there in space are a bit off the mark (even if it exists), because there are an infinity of universes running directly parallel with our own, right here and now, and close at hand.

In that sense, evolution is not a random process at all.  Instead, probabilities such as of mutation, gene transmission, or survival in competition, are illusionary: these things that seem probabilistic are instead what happened in our own fully deterministic universe, out of the infinite distribution of what could happen--and did  happen in the infinity of other also fully deterministic universes running along parallel with ours.  If Mendelian segregation says (in our benighted understanding) that there is a 50% chance an Aa parent will give a child her A variant, what's really happening is that the transmission of A (or a) is fully deterministic in each universe, but A-transmission happens in half of them.  Weird to think about!
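
A toy way to picture that reading of Mendel (purely our illustration, not the physicists' formalism): enumerate every deterministic 'branch' for a few transmissions, and the familiar 50% appears only as a proportion across branches, never as a chance within any one of them.

    from itertools import product

    # Multiverse reading of Mendelian segregation, in toy form: for three
    # transmissions from an Aa parent, list every deterministic 'universe'.
    # Each branch is fixed; probability appears only across branches.
    branches = list(product('Aa', repeat=3))  # 8 equally-weighted universes
    for branch in branches:
        print(''.join(branch))
    frac_A_first = sum(b[0] == 'A' for b in branches) / len(branches)
    print(f"Universes where the first child gets A: {frac_A_first:.0%}")  # 50%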

Rather than being Alone in the Universe (the title of the article that triggered these thoughts), the Multiverse is infinitely crowded, not just with other life, but with other our-life, too--you're reading a nearly identical blog in those universes, or maybe in some you're not, or in some you already died, or in some of them this word was omitted or mis-spelled, etc.

We, not being astronomers or cosmologists, surely have many concepts and details wrong, and surely this doesn't matter to what we do as evolutionary biologists here on our own Earth.  If we could get in touch with them--with their us--what would we do or say?  Would that contact itself cause some or all of the universes to change or self-destruct?  It is interesting to think about anyway, because if it's true, we should feel far less lonely--even if we're not able to contact the other life that exists in the infinitely many versions of our particular universe!

Thursday, June 23, 2011

The Twilight Zone, Part II: Is there life out there in space?

This is the second in our 3-part series on the idea of ETI (extra-terrestrial intelligence), and the theories and ideas that astronomers and cosmologists have typically advanced.  We're specifically triggered to do this by an article we referred to yesterday, by Howard Smith in the latest American Scientist.  See Part I for more on the source, and for background.

Within his area of expertise, Smith makes many valuable points about why he believes that either there is no intelligent life 'out there', or why even if there is it's moot to talk about because we'll never detect it.  However, we think he is typical of astronomers who seem to feel no restraint in leaving their own field, which they know about, to speculate very naively about evolution,  for reasons we'll discuss below.  But let's briefly consider his more robust points about ETI first.

First, he's only talking about 'intelligent beings'.  Micro-organisms that can't communicate with us are irrelevant.  If there's only 'primitive life' out there, for all intents and purposes we're still alone -- the life he's interested in has to have something equivalent to a radio technology with which to send signals.

That life has to be close enough to Earth to allow a signal to get here, or for ours to get there, before the universe ends.  It has to be within the 'cosmic horizon', the constraint being how far light can travel within the age of the universe.  Since the universe is expanding and distances are getting larger, this further restricts the possible planets from which signals can reach us.  And distant signals of course represent not life as it exists now, but life as it existed perhaps eons ago when the signals were sent, so we may never catch up with ETI in anything like real time.  But this means that that distant life would have evolved a lot sooner than life on Earth.

And that distant planet has to be stable; that is, its host star must be of stable size, with stable enough radiative output to have given life time to evolve, and of the right age -- not so young that life hasn't had time to evolve, and not so old that the star's luminosity, which increases with time, has become great enough to overwhelm and destroy the planet (many of these criteria might in fact be called the Goldilocks criteria).

The planet's orbit must be just right vis-à-vis its star, and planetary mass must be "massive enough to hold an atmosphere, but not so massive that plate tectonics are inhibited, because that would reduce geological processing and its crucial consequences for life."

And the planet must "contain elements needed for complex molecules (carbon, for example), but it also needs elements that are perhaps not necessary for making life itself but that are essential for the environment that can host intelligent life: silicon and iron, for example, to enable plate tectonics, and a magnetic field to shield the planet's surface from lethal charged winds from its star."

But now, Smith works out some--indeed many--of his calculations from what seems to be an evolutionary point of view.  Life has to start and then evolve to become intelligent.  Here, let's take for granted that we know what 'intelligent' means.

In a nutshell, Smith asks how many planets have liquid water.  How many have ample carbon.  How many would be in systems where gravitational and other forces tilt the planet at an angle so it has seasonality of climate.  How many have physical and gravitational properties that will lead to an atmosphere.  How many have radioactive cores or other means by which they have plate tectonics to shift around the hardened crust and renew or recycle needed elements for life.  How many would have the billions of years needed for intelligence to evolve, which has to do with the age of their solar systems.  How many would circle around only one star rather than two, so their gravitational status was stable and so they did not have long cycles of being too hot or too cold.  In how many could DNA evolve and function, leading to intelligent beings.  And so on.

It all sounds sensible, and of course the questions are relevant.  But what they ask, in essence, is: how many planets are just like earth, with life just like earth-life?  And that is far from the central question--are we alone in the universe?

The criteria he uses are post hoc--they say this is how earth-life evolved, and so this must be how other life has to evolve, so let's see how likely it is that just the same conditions exist elsewhere.

Playing the infinity game we discussed in Part I, one could say that if the universe is infinite, then the same conditions must exist (infinitely many times!) in the universe.  But if the universe is finite--no matter how big, and no matter how numerous its stars--then things become different.  That's because if you multiply guesstimates of the probabilities of all the just-like-earth conditions, the net probability of all of them being true becomes infinitesimally small.  Then, even if there is a planet that's just-like-earth, the likelihood of its being contemporaneous (in the galactic communication sense) with us would be so small as to lead to the conclusion--his conclusion--that we'd better take care here, because we're the only here there is.

Of course, there is no way to know how different conditions on these earthy planets could be and still be compatible with life, and again the issue is the probability of earthy viability times the number of existing space objects that could, in principle, host life.  If the product is greater than one, then the statistical expectation is that there is at least one such planet somewhere.
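
That threshold is just expectation arithmetic, and it is exquisitely sensitive to the guessed probability, as this minimal sketch shows (the object count and the candidate probabilities are purely illustrative assumptions):

    # The 'at least one such planet' test: expected count = N * p.  The
    # verdict flips entirely on the guessed per-object probability p; N and
    # the p values below are illustrative assumptions only.
    N = 1e22  # assumed number of objects that could, in principle, host life

    for p in (1e-10, 1e-22, 1e-60):
        expected = N * p
        verdict = "expect at least one" if expected >= 1 else "probably none anywhere"
        print(f"p = {p:.0e}: expected = {expected:.0e} -> {verdict}")

Shift the guessed p by a few orders of magnitude--well within our ignorance--and the answer swings from 'life is everywhere' to 'life is nowhere', which is why such calculations settle nothing.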

On the other hand, this is exceedingly naive evolutionary thinking.  Astronomers should stick to their telescopes.  The core facts of evolution as a process are divergence from shared origin and adaptive change.  There are also some other basic aspects of earthly life that Smith doesn't mention, but that are the main points in our book The Mermaid's Tale, such as the sequestering of interacting but semi-independent units, combinatorial information transfer (signaling), and so on.

There is no reason that life elsewhere has to 'evolve' in the way we understand from what happened here on earth, but let's not quibble about that.  Even then, there is no prior reason to think life must be based on water, or carbon (some have suggested that life, even earth-like life, could be based on silicon, for example), or be restricted to particular temperatures (after all, we have Inuit and Saharan nomads, even under our own conditions), or that evolution would have a DNA or any other particular information-inheritance basis--Lamarckian inheritance of some form might evolve on some planet somewhere, for example.  There is no reason evolution would take 4 billion years to become 'intelligent', nor that our kind of intelligence or our ideas about communication would evolve.

Even if we accept the two core premises, there is simply no way to work out how, or how long, the evolution of intelligence would take.  In a sense, the whole point of evolution is that it works under the conditions it finds, and it would work in its own way under conditions that to us here would seem harsher, or more variable, than those Smith considers to preclude life.  We can't rule out non-earthlike chemical bases for life; we only know what earthlike 'life' requires.  Natural selection can work in ways we may not know of--if inheritance is different, for example.  It's not that hard to think of other ways.

If life does exist elsewhere, does it have to be 'intelligent' in our sense to count?  Is the ETI question really about earth-replicate life--life that could, and would want to, communicate with the likes of us in the way we (currently) do it?

We think his article is very interesting and useful, but we also think it reveals the cardboard caricature of life and evolution that (as we repeatedly harp in MT) is so widely prevalent outside of a rather limited range even of biologists.

In addition to these ideas, there is the possibility that even if ETI exists, it is so aggressively acquisitive of resources and so on that it destroys itself quickly, reducing the likelihood that two such sources would exist at similar enough times and stages to be able to detect each other.

There are many reasons that ETI may not exist at all, or, if it does, that we'll never detect it.  The conclusion that we're all alone--for all practical purposes--seems rather sound.  Even if one were to suppose that English-speaking life existed, say, a thousand light-years away (that is, nearby in space terms), it would take more than 2000 years to exchange a single message by electromagnetic means (actually more, because of the delay in reading and responding, and the rapid expansion of the universe), assuming neither our nor their life destroyed itself in the interim (and that theirs had not ended before their message got here).  2000 years is the time since the Roman Empire.  How long would it take even to find out, to within a reasonable accuracy, just where the ETs were (and they us), so that travel could be planned or messages accurately directed?  So even if somebody does claim to have discovered ETI in their telescopic data, don't let NASA talk you into voting for a big budget increase.  Indeed, why allow NASA even the expensive sport of SETI, when lots of us here in the US don't even have health care?

So, we're with Smith in saying:  let's take care of what we have, instead of yearning for companionship we can't provide for each other!

There are a couple of other fascinating sides to this story that we'll pursue in Part III.

Wednesday, June 22, 2011

The Twilight Zone, Part I: Is there life elsewhere in the universe?

Figure 1 from the article.
MT is about biology, genetics, evolution, and the anthropology of science (besides digressions, ramblings, rants, and other miscellany).  But sometimes these subjects can get rather far afield.  A case in point is in the current (July-August) issue of American Scientist: a very nice article by Howard Smith, entitled "Alone in the Universe" (the abstract, or if you subscribe, the entire article, is here).  Read it if you have a chance, but here we'll present just some thoughts of our own, partly overlapping with his.  In Part I, here, we will lay out some of the basic case.  In our next post, Part II, we'll go over why this is relevant to evolution and the nature of life here and (maybe) elsewhere--and we'll show how, even in cosmology, naive views about evolution are rife.  In Part III, we talk about the multiverse theory, which posits infinitely many universes right here at hand.

A favorite subject of Science Fiction is how we would deal with, and be dealt with by, ETs, aliens from outer space.  The idea that ETs are real and have contacted, could contact, or are trying to contact earth is irresistible.  NASA panders to it widely to justify spending (wasting?) gobs of money on sending people to find life on Mars.  Even they are at least restrained enough not to claim that there are Little Green Men there.  Still, so distinguished a scientist as DNA co-discoverer Francis Crick suggested that life here was seeded, perhaps by trash if not intent, by alien spacecraft passing by earth (yes, he did!).

Nobody expects to be able to spot a SpaceBus with our usual visual telescopes, but with the advent of radio and other non-optical telescopes the idea that 'intelligent' life may--or must--exist out there has been taken seriously, including the idea that the ETs would know about electromagnetic means of information transfer (radio, for example).  This doesn't mean they know about us or want to communicate specifically with us (why would they, if they knew what kind of beasts we are?).  If so, and if we can intercept signals from al Qaeda, why not from ETs as well?

We don't know what kind of signals they might send, but they should be different from the remorselessly mechanical signals of the broiling, expanding universe.  So if we at least listen, perhaps we can filter out the mechanical to detect the intelligent communications buried within it?  A huge project called SETI (Search for ExtraTerrestrial Intelligence) is one such effort, and has involved co-opting thousands of volunteers' computers to screen incoming electromagnetic radiation.  Wikipedia has an informative page about SETI. Perhaps sadly, but not surprisingly, the result so far can be summarized as: [nothing].

Besides having to guess at what kind of signal would be wafting around in space, detecting ETI requires defining what's mechanical, to see what's left.  But there is much in the order and chaos of space, reflecting many different things: the behavior of galaxies, exploding or coalescing stars, black holes, overall expansion, remnant signals from the Big Bang, and more.  This of course doesn't include such things as anti-matter and so on.

These phenomena vary from object to object or quadrant to quadrant because in the splat! of spatial history each star, galaxy, etc. is behaving differently.  The first task is to be able to identify that, from all directions.  Doing that is not easy since we have no prior theory (if, indeed, we have any theory that everybody agrees on) for what is going on.  Much of what we believe we understand comes from interpreting the signals.  There is a danger of interpreting something as ETI that is really another phenomenon we're misunderstanding.

So for the moment we need some prior thinking to decide why or whether we should make the effort.  The basic reasoning, besides just human interest and curiosity, is a kind of infinity argument that goes something like this:
  1. If there are essentially infinitely many objects in space, and if any non-zero fraction of them contain conditions under which life could exist, there simply must be life 'out there'.  Our existence proves by itself that such conditions can, and do, exist;
  2. If that is so, there must be all possible kinds of life.  No probability is so small that, in an infinite distribution of objects, it will never occur;
  3. Indeed, for the same reason, it must occur an infinite number of times, and with every possible variation;
  4. If life can exist, and therefore must exist, it must exist in infinitely many places;
  5. If life can exist, it can evolve--our human existence proves that, too;
  6. If it can evolve, it can evolve intelligence--ditto here as well;
  7. If it can evolve intelligence, intelligent life can travel;
  8. If it can travel, it can also--indeed must also--communicate;
  9. If we understand the physics of the world, we know that communication must (at least in some forms) involve electromagnetic means.
Electromagnetic communication travels at speeds and takes forms that we understand.  Thus, radio telescopy is a means to detect what must, inevitably by this reasoning, exist in space.  Wherever, and whenever, the signal was sent, it will travel at the speed of light in all directions, and that means in our direction (it need not be directed here specifically). 

If we have sensitive enough instruments, we should be able to detect the emanations.  For them to be intelligent, they must have systematic rather than random structure, and must be different from the other electromagnetic 'noise' in space.  Why?  Because otherwise Spaceship X could not communicate with its home base.
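
As a crude illustration of 'systematic rather than random' (our toy measure only, not how SETI actually screens its data): a repetitive beacon has far lower block entropy than noise, though plenty of purely mechanical sources, pulsars for instance, are also highly regular, which is exactly the difficulty described above.

    import math
    import random
    from collections import Counter

    def block_entropy(signal, k=4):
        """Shannon entropy in bits per k-symbol block; low means structure."""
        blocks = [signal[i:i + k] for i in range(0, len(signal) - k + 1, k)]
        n = len(blocks)
        counts = Counter(blocks)
        return 0.0 - sum(c / n * math.log2(c / n) for c in counts.values())

    random.seed(0)
    noise = ''.join(random.choice('01') for _ in range(10_000))
    beacon = '0110' * 2500  # a repetitive, 'designed-looking' signal

    print(f"noise:  {block_entropy(noise):.2f} bits/block")   # close to 4
    print(f"beacon: {block_entropy(beacon):.2f} bits/block")  # 0.00: pure structure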

None of this implies when a given level of intelligent life will arise, or where.  But if the universe is effectively infinite, there must be some signals that will reach us at any given time (such as today).  If the signal comes from very far away, all we know is what the ETs were like back at the time the signal was sent--not what they're like now.  So while detection doesn't help direct space travel plans, or booking of exotic vacations, it would at least answer the question:  Are We Alone in the Universe?

There have been many arguments, some of which we've given above, why we must not be alone, and thus must be able to detect the life that does exist elsewhere.  On the other hand, there are various things that must be true for a given planet to have life that can communicate in the way we've discussed.  The planet must be suitable for life, and old enough that life there has already originated.  It must have had life long enough for it to evolve intelligence.  The intelligent beings must have been there, and their culture must have evolved to the stage of sending electromagnetic signals.  And long enough ago for the signal to get here.  And these things must be compatible with the age of the universe, and the age of the Earth.

So, for there to be ETI there are issues of habitability, evolution, time, and coincidence.  The latter is perhaps the most vital, because there must be an intersection between the ETI's stage (at the time of signal sending) and our stage today.  Planets alive but only with moss or bacteria won't help.  Planets that have ETI but too recently for their signals to reach Earth won't help either.  Emanations blocked on their way here by other objects or gravitation, or that got sucked into a black hole, will never get here.  Likewise for planets with ET's that haven't discovered radio, or who are isolationists and don't care to venture forth, or who were too far away from us when they got their smarts for the signal to have got here yet.

Still if space is really infinite there must be at least some such coincidences.  Indeed, there must be infinitely many of them!  Emanations we can detect, scanning in any direction, must literally be loaded with their messages.  And that means such messages must be coming at us from all directions right now!

But there are more issues to discuss, that can make you depressed if you're into SETI, or happy if you wish to be left alone.  We'll discuss some of these in our next post.

Tuesday, June 21, 2011

Genetics and modern day transubstantiation?

John Wyclif, philosopher
Well, after the disappointing program last week, this week's In Our Time on John Wyclif and the Lollards reminds us yet again why it's such a fine show.  An eclectic college education, without the homework or exams.


John Wyclif was one of the most important European thinkers of the Middle Ages.  In a Wikipedia nutshell,
John Wycliffe (c. 1328 – December 31, 1384), also known as Wycliffe John, was an English Scholastic philosopher, theologian, lay preacher, translator, reformer and university teacher who was known as an early dissident in the Roman Catholic Church during the 14th century. His followers are known as Lollards, a somewhat rebellious movement, which preached anticlerical and biblically-centred reforms. He is considered the founder of the Lollard movement, a precursor to the Protestant Reformation (for this reason, he is sometimes called "The Morning Star of the Reformation"). He was one of the earliest opponents of papal authority influencing secular power.
Lollards were hunted down and burned at the stake in the 15th century because they adopted many of Wyclif's ideas disputing key teachings of the Roman Catholic Church in England.  Wyclif was a philosopher before he waded into politics or religion, and there was nothing heretical about his teachings in philosophy.  He did, though, hold an unusual Realist view of the world, at a time when Nominalism predominated in philosophy.  The argument was about language.  The explanation given on IOT turned on the meaning of the sentence "Socrates is human".  To a Nominalist, "Socrates" clearly existed, but "human" is only a sound that stands for nothing existing outside the mind.  Realists, such as Wyclif, believed that there are real universals -- "human", for example, stands for a real entity, not just an idea.

This was more than an esoteric philosophical argument, and Wyclif drew political conclusions from it in a way that got him and his followers into trouble with the Church.  To Wyclif, universals were more important than individuals, and things that were held in common were more important than any individual's possessions.  Thus, he was an early believer in a communist ideal, and that led him to believe, among other things, that the Church should not hold wealth.  But this was only one of his criticisms of the Church.

Before Wyclif's time, there was very little dissent in England against the Roman Catholic Church, although there was in Europe, and the little that there was was brutally suppressed.  So, Wyclif's dissent was not imported but his own.  He questioned the nature of authority in the Church, and its power over secular segments of the State, its control over so much wealth, and paying taxes to the Papacy in Avignon, at a time when France was England's mortal enemy.  Wyclif didn't believe the church should be taking from the poor, and he believed that good acts were much more important in a religious life than faith alone.  After the devastation of the Black Death in the 1300's when traditions were challenged and questioned, he struggled to understand what it meant to be Christian in a rapidly changing world, arguing that at the end of the 14th century, the Church was not a true reflection of the church described in the Bible.  

John Wyclif, heretic
Many agreed wholeheartedly with him when it came to criticizing the requirement to pay taxes to a Pope in France, but when he began to question the Eucharist, the consecration of bread and wine, turning it into the body and blood of Jesus Christ, he began to make enemies.  The Eucharist was the central rite, the hub of Christian devotion, but Wyclif the Realist couldn't see how bread could become something else, including the body of Christ.  Oddly, he didn't argue that Christ wasn't present in the ritual, but that the bread was still bread, at the same time that it was the body of Christ.  That is, he believed in consubstantiation rather than transubstantiation.  

And now he was really stepping out of bounds.  Turning bread and wine into the body and blood of Christ was the most powerful thing the clergy did, and if the Church accepted that the Eucharist wasn't what they claimed, the clergy would be stripped of any real power.  And taking communion was very important to the laity as well, so he was also threatening their belief.  To make matters worse, he preached in English rather than Latin, so everyone understood his argument.  It wasn't an esoteric question discussed among the powers-that-be, but among real people as well.  And a lot of real people didn't like it, nor did the Church.  

Wyclif is probably best known today for having translated the Bible into English, although there is some question about whether he actually did, or whether he did so alone, because his name is not on the Bibles that remain today; nor was the translation itself at all heretical.  In any case, starting in about 1390, the Scripture was readable in English, and accessible to a much larger group of people than ever before.  And more and more English readers were then able to think about the meaning of Scripture independently, and many began to say that they weren't finding described in the Bible the opulent and powerful Church that they knew.

This was true heresy, although it took the Church and the State some time to organize their response.  Politics and religion were intertwined at the time, so these powerful entities organized their response together.  The first heretic was burned in 1391, but burning at the stake became commonplace in the early 1400's, in a crackdown against heresy throughout England, as well as on the continent.  We tend to think (with a rather superior attitude) that the auto-da-fé (burning at the stake) happened only in benighted Spain, but it was rife in the UK as well.  A very long Book of Martyrs by John Foxe, published in 1563, documented this in great detail (and let's not forget the persecution of witches in the American colonies in the 1600s).  It was routine, and Wyclif's era inaugurated it well before the Reformation began--Luther's theses came in 1517.

On the face of it, it is completely absurd that people could have been burned at the stake for believing that the bread and wine they were offered for communion wasn't truly the body and blood of Jesus.  How could such a small idea have been such a large challenge?  Of course, its power was that it was an emperor-has-no-clothes kind of idea, and it challenged the purpose of parish priests, and the belief parishioners had in the meaning of the service, the dogma of the established church, and its material foundations as well.  

It's no longer heresy to question transubstantiation -- people may believe it literally, or they may take it metaphorically, or not believe it at all, but not believing is no longer a threat to the foundations of the Church (and won't again become so in the future, we can hope). 

Modern day transubstantiation
But, there's perhaps a parallel argument going on in genetics today.  Is there a gene for criminality?  Or voting behavior?  Or for diabetes or heart disease or asthma?  Indeed, do genes become traits?  Do they transubstantiate into traits?  In a more direct analogy, is the HBB DNA sequence (that codes for part of the hemoglobin protein) transubstantiated into hemoglobin?  Or perhaps to be true to Wyclif,  do genes consubstantiate, co-exist along with the trait?  The genetic Eucharist -- the transcription factor, like the priest, consecrates the gene and it becomes flesh?  

It's a silly thought, and, like the transubstantiation of bread and wine, there's no obvious physical mechanism that would explain how a gene becomes aggression, or liberal voting behavior, or hemoglobin.  Or it would be a silly thought if it weren't so close to what so many people believe about how genes work.  After all, evolutionary theory in many if not most scientists' (and lay) hands treats the gene as if it is the trait: the gene exists and evolves for that reason.  People are genotyped or sequenced to find out if they have a given version of a gene, and if they do, it is assumed they thus do or will have the trait it codes for.  Development is the 'bioeucharist' that transubstantiates one form into the other.  Of course we know it's not exactly that way, since even at its most deterministic, the gene codes for, rather than becomes, the trait.  But the difference, conceptually, is perhaps more subtle than it may seem.

If what people believe about genetic determinism weren't, as in Wyclif's time, so often tied to their political beliefs, this would remain a subtle and perhaps trivial scientific issue; but as a rule, people's beliefs about genetic determinism are clues to their politics, and vice versa.  Therein lies the real danger.  Questioning determinism becomes heretical in the same way that Wyclif's challenges to the Church were: a threat to the genetic establishment, a threat to grant-getting, to tenure, and hence to livelihoods if not life itself.

And the tie can even become a flirtation with eugenics, and we all know what that can lead to, because the 'heretics' -- those with the bad genes, and hence inherently unworthy, so to speak -- have been abused and killed for centuries.    We don't burn people at the stake, and that's a tremendous advance, but people have been gassed to death in living memory, among other things, for their purported genes.  And, however you feel about elective abortions, often they are based on genetic testing.

The Wyclif story was new to us, and we found it interesting in its own right, but also a parallel to what is happening in genetics today.  The polarization of opinion about genes and what they do and how they do it is based more on ideology than scientists generally like to admit.  And it's all more than just rhetoric.

Monday, June 20, 2011

Eugenics is back...and YOU are paying for it!

It didn't take a genius to predict that the fervid ideology driven by genomic technology would lead to a revival of the geneticizing of every human trait, and once behavior was allowed back into the tent that we'd see eugenics not far behind.  And a story in the NYTimes suggests that criminality is already back in apparent good graces. 
The tainted history of using biology to explain criminal behavior has pushed criminologists to reject or ignore genetics and concentrate on social causes: miserable poverty, corrosive addictions, guns. Now that the human genome has been sequenced, and scientists are studying the genetics of areas as varied as alcoholism and party affiliation, criminologists are cautiously returning to the subject. A small cadre of experts is exploring how genes might heighten the risk of committing a crime and whether such a trait can be inherited.    
 It's couched in pious only-for-social-good kind of rhetoric, along with 'oh, no, your genes don't determine your future penal state, only give some suggestions about .....'

But this is eugenics by other names.  Once it is believed that genes affect your risk of being a criminal (or whatever other kind of undesirable), the acknowledged fact that environments can modify that probability tends to be swept away, because your genotype can be measured at birth and used to label you.  If you have an above-average 'tendency' to become a sociopath (of types other than those of the neo-eugenicists in modern genomics--now, as before, in major universities), then you or your kids and friends may be labeled, watched, shadowed, pressured (on pain of things like no insurance or jobs) to take preventive measures (e.g., be doped by psycho-meds).

It's understandable for a host of reasons, all of which were invoked by the first round of eugenicists, because we know that genetic variation affects variation in basically any trait you can name.  But the determinism is usually very weak, and the bulk of social problems, like criminality, could be cured not by professors at prestige universities with large grants, but by more social equity and integration, and so on--things we understand imperfectly but perhaps far better than we understand genes.  But who will know, and what will they be allowed to do with that knowledge?  Will there be forms of subtle intimidation applied (insurance, jobs, school admissions, imposed preventive measures....)?  And what traits will, incrementally, be added to the test list?  We all know from history--including some contemporary history--how it goes.

The current step sounds innocent, as it's about crime, as discussed by the NIJ (the National Institute of Justice; the 'J' is supposed to be about justice, but they have the money so they can define that however they want).  Big deal?  After all, how different will genomic surveillance be from the communication surveillance that is apparently being expanded?

The work is going to be done,  since nobody has the will to stop it....and YOU are paying for it.

Sad and Disgusting Department item

Well, we have previously posted on the utterly disgusting and, to us, immoral -- or at least sad -- advert scaring investigators into paying a grotesquely high price for a crib book on how to fill in NIH grant applications and (the company suggests but doesn't guarantee) raise your funding chances.  It is a sad commentary on the commercialized, gold-digging, fear-mongering, rat-raced nature of a lot of science today.  No grant, no tenure, no self-respect, no job!  It is sad that we have fallen into such a state where we'd even be susceptible to such things.

Well, another sick come-on crossed our email transom today.  As before, we're sorry, but we won't grace it by giving you even a link.  This is a company of self-proclaimed 'experts' who will increase your chances of getting your paper published!  They'll give you simulated peer review, help with your English or writing, or provide other kinds of editing 'services'.  For several hundred, up to about a thousand(!), dollars they'll give you a review, editing, or even recommend a journal for you (you got a PhD but can't do these things?  Maybe you should contact the university that gave you the degree and ask why they did so if you weren't ready).  The price also depends on how many anxious days you're willing to wait for the lifesaver service to be returned to you.

As with grant review, any scientist who thinks that passing a paper by some reviewer(s)--paid or through the usual peer review system--protects them from the whims and chances of subsequent reviewers doesn't understand the system.  There is unpredictability and stochasticity involved.  But if your paper is so bad that the paid reviewers really trash it, it will likely be trashed by other reviewers too: your work isn't yet ready for prime time, or is of the LPU (least publishable unit) nature.  On the other hand, it may not be good for business for DesperateScientists, Inc. (not the scurvy company's real name) to really trash you....and charge you for it!  So they may be tempted to soft-pedal, to get another (paid) look at your work, or to see your next papers.  So even the service may not be such a service.

These exploitations are symptomatic of the way science has become populated by fearful drones, needing to grind out 'product' non-stop, endlessly striving, like trees in the tropical forest, to be noticed by the sun more than any other tree.

Too bad.  Even worse is that you can probably legally charge this service to your grant--that is, get the public to pay for your vulnerability, poor training, or ambitions.

Either we choose, as a culture, to be rats on a treadmill driven by our administrators, in a university imitation of capitalism rather than a non-profit, educational mission, or we jointly work to change the system and back off, making science a more civil and savory way to earn a living.

Friday, June 17, 2011

Infectious diseases in our time

As regular readers know, we usually very much enjoy the BBC Radio 4 program, In Our Time.  We were quite looking forward to last week's show on the origins of infectious disease, as it promised to touch on a number of our interests, but we were sorely disappointed.

It should have been good, and indeed some of it was quite informative in a very basic way -- appropriately, for a show for a general audience.  The guests were geneticist Steve Jones, professor of infectious disease epidemiology Roy Anderson, and professor of microbial genomics Mark Pallen.  Presenter Melvyn Bragg doesn't know this field, but he had some good questions about such things as when infectious diseases first began to affect humans, and where they came from and indeed still come from.

The program began with the history of the bubonic plague, and Steve Jones spoke about its origins and natural history.  He said that the growth of population led to the spread of disease, and that the rise of an epidemic often coincides with a change in proximity to an animal, such as the rat in the case of the plague.

As he has done before, Jones talked about three epochs of human history: the age of disaster, the 99% of human history in which most people were killed by tigers or cold; the age of disease, the rise of population from the beginning of agriculture 10,000 years ago, which brought the rise of epidemics; and the age of decay, at least in the western world, where we now die of old age rather than infection.

While he didn't talk about this in the current program, he has said previously that he believes that we were molded by selection during the first two epochs, but that natural selection will no longer be a force that molds us now that we mostly survive to reproduce and pass on our genes.

But what an odd thing to say, as infectious diseases are still a primary killer, especially of children in much of the world, and aren't going away anytime soon.  And, more and more diseases seem to have an infectious component, so it may indeed turn out to be much more influential now, even as we 'decay', than Jones suspects.

Further, even before large population sizes and agriculture, much of our species was probably exposed to a continual variety of parasites, infections in injuries, and the like.  We are immersed in all sorts of viruses and bacteria, even if the clearly infectious diseases that we easily notice have to do with large, dense populations and close association with large herds of domestic animals.

And of course new infections arise all the time, and indeed Roy Anderson believes that the coming century is going to see a huge increase in emerging diseases, given global population size and the ease with which people globe-trot and thus spread disease.  The idea that humans -- or any species until it goes extinct -- have stopped evolving is just wrong.  Yes, we adapt to changing environments to a great extent, but new alleles arise with every birth, and there will always be change in our species' genotypes.  What is new -- for the moment -- is the rate and nature of that change.

The rate of change will be much lower than in most of our past, so long as population size is so large.  Likewise, the great amount of intercontinental mixing will slow down the rate at which specific genotypes can be screened by selection, because there will be so many more genotypes in the global mix.  Even mass deaths, by modern standards, will usually be trivial in relation to whole population sizes -- that is, won't make much of a population bottleneck that would purge variation.  Variation very specific to serious disease susceptibility could, of course, disappear fast, but positively adaptive genetic variants will be much slower to advance in frequency than in most of our past.
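To make the point concrete, here is a minimal sketch, in Python, of a haploid one-locus Wright-Fisher model.  The population size, starting frequency, and selection coefficient below are invented for illustration, not estimates for any real trait; the point is only that a weakly favored allele creeps along even over hundreds of generations.

```python
# A toy Wright-Fisher model: weak selection moves an allele's frequency
# only slowly, however large the population.  All numbers are invented.
import numpy as np

def wright_fisher(n, p, s, generations, rng):
    """One locus, two alleles; return the favored allele's trajectory."""
    freqs = [p]
    for _ in range(generations):
        # Selection re-weights the favored allele by (1 + s)...
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        # ...then drift: binomial sampling of the n offspring.
        p = rng.binomial(n, w) / n
        freqs.append(p)
    return freqs

rng = np.random.default_rng(1)
# A 0.1% fitness advantage in a population of a million: after 500
# generations (very roughly 10,000 years of human history) the allele
# has crept from about 5% to somewhere near 8%.
traj = wright_fisher(n=1_000_000, p=0.05, s=0.001, generations=500, rng=rng)
print(f"start {traj[0]:.3f} -> end {traj[-1]:.3f}")
```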

Medical care and other perks and protections of modern society (until such society collapses of its own excesses) will change the nature of selection, but we'll still evolve.  Where our evolution will be most interesting, though, may be in how pathogens evolve and adapt to us, because they can evolve very fast, especially when confronted with the strong selective effect of antibiotics and the like.

Bragg asked if it's possible to generalize about where most viruses come from.  Anderson said that Homo sapiens is half a million years old (though it's not clear where that figure came from), and that we acquired all our infectious agents from wild animals before we domesticated animals, after which we acquired disease from livestock.  Usually infection traveled in one direction, animals to humans, though several examples of infection going the other way did come up during the program -- we gave TB to cattle, and leprosy to armadillos, for example.  An interesting twist, since sometimes these pathogens now mutate and get transferred back to us!

Jones was asked about the effect of animals on epidemics.  He said, as Anderson did, that indeed most diseases come from animals, but he then said that in fact the most dangerous animals are our closest relatives -- which is another odd thing to say on the face of it, as he'd just told us that the bubonic plague, one of the biggest killers ever, came from rats via fleas, and we get malaria from mosquitoes.  We get relatively few such diseases from primates, so whether pigs and chickens are our 'closest' relatives in this context isn't clear.  But it is clear that population structure and habitat are important.

Jones also said that only recently, at the beginning of the 20th century, did medicine stop killing more people than it cured.  But this is meaningless, as the denominator -- the number of people who access medicine, and what they go for -- has changed dramatically.  It's comparing apples to, well, ugli fruit.  Without affordable health care, few in the 19th century would have gone to the doctor for a common cold; people went only when they were already deathly ill.  And probably most afflictions people see their doctors for now are self-limiting anyway, or at least aren't fatal, so of course a much smaller fraction of the people who see doctors now die.  Which is not to say that medicine hasn't improved, because of course it has.

It is not at all clear that in the first age of humanity disaster killed more people or shortened life more than parasites of various kinds, nor that, overall, we're not still in the age of infection.  Certainly we suffer the slings and arrows of aging these days, but this is less clear outside of the 'developed' world, and indeed some of these slings and arrows may be laced with pathogens.  Also, in evolutionary thinking one might argue that even with rampant infection still so widespread, there are vastly more people alive today than ever before.  So whatever the changes, from an evolutionary point of view they've been good for our species, no matter how many suffer from whatever causes.

Pallen ended the program by saying that after they finished recording he wanted to talk with Jones about which theory was more important for our understanding of the world and how we affect it: evolutionary theory or the germ theory.  What he meant, we assume, was that understanding how germs infect and kill, and how they behave in populations, has allowed us to control them in ways that evolutionary theory could never do.  But in fact much of our immune defense evolves, along with our parasites, in the bodies of infected people, so it seems an odd kind of comparison.  Evolutionary theory works far better and more clearly for such rapid adaptive changes in our parasites than for those that happen in ourselves.  But asking whether evolutionary theory or the germ theory is more important is like asking which is more important, brakes or drapery.  They are both important in their own way.  And evolutionary theory is a framework for understanding life, not a prescription.

Thursday, June 16, 2011

Oh, what a tangled web we weave

Walter Scott is responsible for the famous line:  Oh, what a tangled web we weave, when first we practice to deceive.

It sounds profound, but is it wise words, or just bollocks?

Here, at least, is the latest from the 'evolution was just like this' department.  A study on reasoning, part of a special issue on the subject in the journal Behavioral and Brain Sciences, concludes that reasoning evolved for deception rather than for truth-telling.  At least the authors of this new just-so story don't claim it's genetic: deceptive rhetoric evolved socially.

As the NYTimes reports, it has long been assumed that reasoning evolved to enable us to search for and determine Truth.  But,
[n]ow some researchers are suggesting that reason evolved for a completely different purpose: to win arguments. Rationality, by this yardstick (and irrationality too, but we’ll get to that) is nothing more or less than a servant of the hard-wired compulsion to triumph in the debating arena. According to this view, bias, lack of logic and other supposed flaws that pollute the stream of reason are instead social adaptations that enable one group to persuade (and defeat) another. Certitude works, however sharply it may depart from the truth.
The idea, labeled the argumentative theory of reasoning, is the brainchild of French cognitive social scientists, and it has stirred excited discussion (and appalled dissent) among philosophers, political scientists, educators and psychologists, some of whom say it offers profound insight into the way people think and behave. 
Now whether this is true in all societies -- a uniform cultural evolution or a parallel one (similar in, say, Pacific Islanders and African Ashanti or San, but of independent social-evolutionary origin) -- is a valid question.  In fact, much as we hate to take sides with the genetic-evolutionary view (!), there have been numerous arguments that human language is basically an extension of eons-old display tactics designed to intimidate or deceive, as in mating competition.  Such things are not new to humans, or even to primates, so if the trait is biological it predates our species and the explanation is more general.

However, let's ignore whether it's cultural or biological.  The same people who fervidly see competition everywhere -- the arch-darwinian view of life -- will love the idea that we reason in order to deceive.  If you're looking for a rival under every bed, you'll certainly go along with the idea that we use language (reasoning, persuasion, rhetoric) to distract, derail, or deceive potential competitors: you really do, as the article says, want to win rather than to inform others.

This certainly is one way language is used (at least in cultures in nation-states, where we have daily evidence).  But is it 'the' truth?  Is it part of culture, or only of some cultures....or did cultures evolve reasoning 'for' deception per se?

It is easy to see a polar opposite to the latest assault of selectionism.  If you deceive, you can cause others to come to loss or grief.  Why should they not have a long memory, that they'll use to even the score later?  Why isn't truth good for the group, and deception a way to make everyone vulnerable?  Our ancestors--including primates--lived in very small local groups.  They might be very vulnerable to internal misinformation.  Why is telling the truth institutionalized in many if not most cultural norms--what children are taught, for example, even if we're not perfect at it?  Is that because if everyone is convinced you're truth-telling, it's easier for you to mislead?  Or is it because truth-telling really is what's important, and you mutually have to rely on it for your survival?

Further, what is truth and how do you know what people's motives are (whether they are even aware of them or not)?  Since 'truth' is what we think we perceive, and since we always have imperfect data, imperfect perception, and imperfect intelligence, why should we assume that 'reasoning' is false rather than flawed?  After all, even 'experts' in most fields related to behavior (to wit: economists, pundits) are grossly wrong presumably because of ignorance or bias rather than intentional deception.  One might suggest that the journal article's authors' interpretation of the intent in reasoning is more a reflection of their ideologies and biases, than it is of the underlying truth (or are we just saying that to make you believe us??).

Anyway, why should our ability to reason have evolved for only one purpose?  Did our hands evolve just to let us hunt?  Like most traits, reason is multi-purpose, and can be as useful for cooperation as for competition -- and many other things.  It's how we express how we assess our environment and our circumstances, often in verbal expression that imperfectly represents our internal thoughts.  We can reason our way from footprints to what kind of prey we're likely to find if we hang out long enough at the waterhole, or to the conclusion that a plant is poisonous, or to a mathematical proof, just as we can clap with our hands, help another across a creek, and caress a child's cheek.  And it's not just that the functions other than winning arguments are exaptations (purposes beyond the one the trait first evolved 'for') of the ability to reason -- after all, even ants and bees can reason, in some minimal sense.  Or is deceptive reasoning limited, by social constraints, to certain but not all topics of conversation?

The choice of a single overriding purpose for our ability to reason says more about those doing the choosing than it does about the trait.

Wednesday, June 15, 2011

Not getting older but wearing thin: the death of the anti-oxidant theory of aging

For 30 or 40 years we've been fed a steady diet of claims that anti-oxidants were vital for reversing the oxidative damage to tissues that was widely taken to be the most important aspect of biological aging.  Eat your veggies and live forever (unless of course they are contaminated with E. coli)!  Take your vitamin E and vitamin C!

There have been other theories of aging, but proponents of this one convinced NIH and other agencies to fund huge and expensive studies of anti-oxidant dietary components and longevity, in humans and other animals.  How much disease and earlier-than-necessary death such dietary changes would prevent was never quite clear, but advocates of the theory naturally wanted to say it was huge -- even most or all of what goes wrong in aging.  Anti-oxidants even protected against mutations, and hence cancer.

Well, a new paper in BioEssays by John Speakman and Colin Selman shows that the various kinds of evidence we now have suggest that antioxidants have little if any measurable effect on aging, and likewise for the genes with antioxidant (or equivalent) functions.

Given the statistical, institutionalized, miracle-promising funding and research landscape we have developed, this is actually not a surprising set of findings.  Debunking established ideas is rather commonplace (yet we still eat up the headlines proclaiming the next miracle).

Of course, we still age and die.  There are many reasons, and oxidative damage may be one of them.  Mutations are another.  Wear and tear is another.  Even telomere loss may be one!  For some reason, most species have typical lifespans related to their body size or position in the phylogenetic tree of species.  Such findings fueled the belief, obviously correct in major ways, that genomic factors were responsible for aging patterns.  Many theories were offered of how selection for lifespans -- for 'death' -- would have operated.

Yet there have always been exceptions: species with lifespans far too long or short for their phylogenetic position.  And dietary reduction, telomeric experiments, and so on, have produced laboratory results that fueled these theories in one way or another.

But in real life, out here on the streets, things are somewhat different.  We know that moderation in all things (Hippocrates' recommendation around 400 BC) will lead to a longer, healthier life.  We also know of risk factors that affect only some causes of death or disease (smoking and lung cancer, for example), and that most people die of a disease, not of old age (the latter was pretty much given up as a meaningful cause of death decades ago).  Death, in real life, is timed by the assemblage of these various system failures in the population, with the particulars different for each individual.  Longevity and its net selective effects have probably always pertained, with no one cause evolving, or being necessary for the march of evolution.

Sorry, no miracle pills.

Tuesday, June 14, 2011

Cold chills and statistical mischief

In a Beeb story, Dr Phil Jones, a climatologist, suggests that we now have evidence that global warming is real.  This is appropriate for MT because it reflects some strange, yet almost universally accepted, criteria for deciding what is really 'real'.  It is an issue that biologists -- evolutionary, genomic, or otherwise -- along with anthropologists and others dealing with human variation, should do more than just be aware of; they should integrate it into their thought processes.

Previously, says Dr Jones (as quoted by the reporter), we didn't have enough information to be sure,
"but another year of data has pushed the trend past the threshold usually used to assess whether trends are 'real'".  Dr Jones says this shows the importance of using longer records for analysis.
Now what could this mean?  How can something become 'real' just because we have another set of data?  Dr Jones explains:
"The trend over the period 1995-2009 was significant at the 90% level, but wasn't significant at the standard 95% level that people use," Professor Jones told BBC News.
If this doesn't strike you as strange, it should!  How can another year make something 'real'?  How can a 95% 'level' make something real?  Or if it makes it 'significant', does that make it real?  Or is there a difference?  Or if it's 'significant', is it important?

This is uncritical speaking that makes science seem like a kind of boardwalk shell game.  Find the pea and win a teddy bear!

In fact, what we mean is that, by convention (that is, by a totally subjective and voluntary agreement), if something is likely to happen 'only' by chance once in 20 times, and it actually happens, we judge that it's due to factors other than chance.  One in 20 means a 5% chance.  We call that the p value of a significance test.  (This is the complement -- the same meaning -- of the 95% level referred to in the story.)  And here 'significance' is a very poor, if standard, word choice.  We would do better with a more neutrally descriptive term like 'unusuality' or 'rareness'.
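Here is a minimal sketch, in Python, of what that convention actually buys you.  We simulate many 'experiments' in which nothing at all is going on; by construction, about one in 20 of them will still cross the p < 0.05 line.  The sample sizes are arbitrary, chosen just for illustration.

```python
# The 1-in-20 convention at work: with no real effect anywhere, about
# 5% of tests will come up 'significant' at p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_alarms = 0
for _ in range(n_experiments):
    # Two groups drawn from the *same* distribution: no real difference.
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_alarms += 1
print(false_alarms / n_experiments)  # hovers around 0.05
```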

In fact, global warming is either real or it's not (assuming we can define 'global warming').  Regardless of the cause -- or the real significance for worldly affairs -- the global temperature is always changing, so the question really is something like: 'on average, is the global mean temperature rising more than usual, or in a way reflecting a long-term trend?'

Further, for those who 'believe' in global warming -- who are convinced on various diverse grounds that it's happening -- the 'mere' 90% level of previous years' data did not convince them that global warming wasn't taking place.  Indeed, if before now we didn't have data showing the trend at the 5% level, how on earth (so to speak) did anyone ever think to argue that this was happening?

There is absolutely no reason to think that very weak effects, ones that can never be detected by standard western statistical criteria, are not 'real'.  They could even be 'significant': a unique mutation can kill you!

Perhaps a better way for this story to be told is that another year of data reinforced previous evidence that the trend was continuing, or accelerating, that its unusuality got greater, and that this is consistent with evidence from countless diverse sources (glacial melting, climate changes, biotic changes, and so on).

Suppose that no single factor was responsible for climate change, but instead that thousands of tiny factors were, and suppose further that climate change was too incremental to pass the kind of statistical significance test we use.  Global warming and its effects could still be as real as rain, but not subject to this kind of cutoff-criterion thinking.
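A toy version of the Jones episode makes the arbitrariness plain.  Below is a Python sketch of a series that is genuinely rising throughout, with a slope and noise level invented so that, in expectation, the p value sits just above 0.05 with 15 years of data and below it with 16 (real temperature records, and the analyses behind them, are of course far more involved).  Nothing about the underlying process changes between the two runs; only the verdict of the 5% ritual may.

```python
# A genuinely rising series that can fail the p < 0.05 test over
# 1995-2009 and pass it once 2010 arrives.  Slope and noise values are
# invented; any particular noise draw will wobble around expectations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
years = np.arange(1995, 2011)
temps = 0.01 * (years - 1995) + rng.normal(0, 0.08, size=years.size)

for last in (2009, 2010):
    keep = years <= last
    fit = stats.linregress(years[keep], temps[keep])
    print(f"1995-{last}: slope {fit.slope:+.4f} per year, p = {fit.pvalue:.3f}")
```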

This is precisely (so to speak) the problem facing GWAS and other aspects of genomic inference, and of reconstructing evolutionary histories.  p-value thinking is a rigid, century-old criterion of convenience, with no bearing on real-world causality -- on whether something is real or not real.  It may be that we would say that if an effect is so weak that its p-value is more than 0.05, it's not important enough to ask for a grant to follow up the finding.  Hah!  We have yet to see anyone act that way!  If you believe there's an effect, you'll argue your way out of unimpressive p-values.

And, again, even if the test criterion is not passed, the effect could be genuine.  On the other side, we've got countless (sad) GWAS-like examples of trivially weak effects that did pass some p-value test, a fact then used to argue that the effect is 'significant' (hinting that that means 'important').
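The mirror image is just as easy to sketch.  Here, a hypothetical 'genotype' shifts a trait by a hundredth of a standard deviation -- trivial by any practical measure -- yet with half a million subjects per group (sizes and effect invented for illustration) the test will all but surely declare it 'significant'.

```python
# GWAS-style mischief: make the sample big enough and an effect too
# tiny to matter sails under p < 0.05.  All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500_000
carriers = rng.normal(0.01, 1, size=n)     # trait shifted by 0.01 sd
noncarriers = rng.normal(0.00, 1, size=n)  # no shift at all
print(stats.ttest_ind(carriers, noncarriers).pvalue)  # typically far below 0.05
```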

Statistical inference has been a powerful way that institutionalized science can progress in an orderly way.  But it is a human, cultural approach, not the only possible approach, and it has serious weaknesses that, because they are inconvenient, are usually honored in the breach.