Friday, April 29, 2011

Bird brained?

A story on the BBC site this week reports that birds with bigger brains are better able to adapt to city living.  In a paper published in Biology Letters, researchers describe their study of 82 species of birds from 22 families, in which they conclude that birds that have large brains compared to their bodies are more adaptable to living in urban areas.  Their motivation was that since the earth is becoming more and more urbanized, it's the species that are better able to adapt to city life that will succeed in the long term. They say in the paper:
These data support findings linking relative brain size with the ability to persist in novel and changing environments in vertebrate populations, and have important implications for our understanding of recent trends in biodiversity.
Harpy eagle and chick
Although they don't explain the correlation, they do say that brain size has been "repeatedly linked to the ability of animals to adapt to novel or changing environmental conditions, as well as to innovative behavior".  This is brain size, not relative brain size, or the brain/body ratio, so it's hard to know what to make of this study.  Is it that birds with bigger brains are more adaptable?  In which case why don't harpy eagles, with their six-foot wingspans and legs the size of a toddler's arm, live in cities, and why do hummingbirds come to urban feeders?

And it's never been clear what brain size in itself is all about.  Male and female brains vary by a clear (average) amount -- roughly 10% in humans but varying among species with their sexual dimorphism.  But the intelligence of males and females is comparable.  And one can argue about whether urban or rural environments, with their different threats and opportunities, vagaries and resources, are more demanding of 'intelligence'.  Does being fed by old folks in the park take more intelligence than finding food while worrying about hawks?

Or is it the brain/body ratio that's important, in which case, do crows and tits and wrens in the countryside have the same size brains as their city-dwelling cousins?  And what is it about bigger brains relative to body size that would make an animal more adaptable?  And is adaptability really correlated with intelligence?  Amoebae adapt to changing environments, after all.
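
The standard way to get at 'relative brain size' in studies like this is to regress log brain mass on log body mass across species and take the residuals: species above the allometric line are relatively big-brained for their bodies.  A minimal sketch of that calculation, with invented illustrative numbers (not the paper's data):

```python
import numpy as np

# Hypothetical brain and body masses in grams -- invented for illustration,
# not the values from the Biology Letters paper.
species    = ["crow", "wren", "pigeon", "harpy eagle"]
brain_mass = np.array([9.0, 0.5, 2.0, 45.0])
body_mass  = np.array([450.0, 10.0, 300.0, 7500.0])

# Fit the allometric scaling line: log(brain) = a + b * log(body).
log_brain = np.log10(brain_mass)
log_body = np.log10(body_mass)
b, a = np.polyfit(log_body, log_brain, 1)

# 'Relative brain size' is the residual from that line: how much bigger
# (or smaller) a species' brain is than its body size alone predicts.
residuals = log_brain - (a + b * log_body)
for sp, r in zip(species, residuals):
    print(f"{sp:12s} relative brain size: {r:+.2f}")
```

On this definition a wren can be 'bigger-brained' than an eagle, which is why the absolute-versus-relative distinction matters for interpreting the result.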

But whichever it is, if the underlying assumption is that adapting to city living is the ultimate in adaptability, this suggests that other birds are somehow less adapted or adaptable.  Yet all birds are adapted by definition, and given that environments change all the time, they've all managed to adapt to change.  And there's probably an ascertainment bias here anyway -- a lot more observations have been made on urban birds than non-urban birds, so we have a lot more evidence of this presumed superior adaptability in urban birds than we do for birds that we don't see nearly as much.

A few more things are worth considering here.  One is that if this result is indeed correct, it would reflect organismal selection rather than Darwinian natural selection: birds who like the urban environment go there, birds who find it unpleasant or confusing don't.  Genes related to environmental response could be partitioned in this way, but there need not be any differential reproductive success.

Of course, lifestyles in cities and country, including diet, amount and nature of exercise, and who knows what else, may differ during the growth and development of the birds.  This would not pertain if the country relatives of the urban species the authors studied are also bigger-brained -- but then that would imply that country life was originally responsible.

The authors also found small-brained birds in the urban areas but (clinging to their hypothesis) tried to explain that away by invoking ad hoc (or post hoc) special explanations.  That means that the rule, if true, is only a sometime-thing.

In any case, the study (if it can be confirmed in a systematic way, and if what it actually means about brain size can be clarified) is interesting.

And whatever the reason, crows have certainly adapted to city dwelling.  But then, crows are in a class of their own.

Thursday, April 28, 2011

Tornado tragedies: science advances, but prediction of Nature still found wanting

Today's sobering headlines involve the recent spate of deadly tornadoes that have cut swaths of destruction across the South.  The death toll alone, not counting injuries and economic damage, is nearing 200.  Here we are in our technological age, with AccuWeather (located here at Penn State,  one of the Meccas of meteorology).  But people are still swept up to Oz.



I was for a few years a weather forecaster in the US Air Force.  That was some time ago, when technology was much less advanced than it is today.  We studied severe storms such as the tornadoes so typical of spring in the US.  They're due to warm, muggy air from the Gulf of Mexico colliding with cooler, drier air sweeping down from Canada (the first map below, from AccuWeather).  The denser air mass undercuts the warmer one, pushing it up, where it cools, condenses, and releases tremendous amounts of latent energy.  The whole system is steered rapidly along to the north and east, like corks floating along a river, by the jet stream, a strong wind high in the atmosphere.

For a number of known reasons, these mass collisions also typically involve zones where air converges on a line or point.  In these places, swirling convergence leads to thunderstorms of majestic power, and they can spawn the trailing intense vortices that are the tornadoes.

All of this can nowadays be predicted in general--to an uncanny accuracy relative to what we could do in the bad old days.  But where the most intense energy will be released is still only probabilistically predictable.  Tornadoes are such local phenomena relative to, say, a line of T-storms hundreds of miles long (map below, from AccuWeather), and they are so brief (touching down usually only for a few miles), that the complexity of stormy turbulence makes their specific occurrences unpredictable.  As can be seen, we in Pennsylvania also got some nasty T-storms during the night, and there are ominous skies outside my window right now, but so far we've been spared the tornadoes.


Whether the desired precision can ever be achieved is one of the many questions in science related to understanding complexities in Nature.  Until it can, we may invest heavily and in general successfully in science technologies, but Nature's gremlins may still be able to hide between the bits and bytes even of the fastest computers.

And so long as that is the case, awful tragedies such as what has occurred in Alabama will still happen.  And meteorologists will struggle to do better against what may be theoretical impossibilities.

Wednesday, April 27, 2011

Care....and be smart! Is it in your genes?

A new article in the 'early edition' of PNAS suggests that IQ tests are not so simply interpretable as some might believe, or wish.  Duckworth et al. ('Role of test motivation in intelligence testing') show that not just the raw DNA in a person, or his/her environmental circumstances such as diet and books in the house, but also motivation affects both the score and the usefulness of the test in predicting life success.

The authors studied 2,008 individuals and found substantial increases in IQ test scores by highly motivated participants, greater for those with lower baseline scores.  IQ scores predicted many aspects of life outcomes, but basically not after motivation in taking the test was taken into account.
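
What 'taking motivation into account' means statistically is something like the following toy simulation -- entirely invented numbers, just to show the logic.  If motivation boosts both the test score and the later outcome, the raw IQ-outcome association largely evaporates once motivation enters the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2008  # same sample size as the study, but simulated data

ability    = rng.normal(size=n)             # the latent trait the test aims at
motivation = rng.normal(size=n)             # effort on the test, and in life
iq_score   = ability + 0.8 * motivation + rng.normal(scale=0.5, size=n)
outcome    = 0.1 * ability + 1.0 * motivation + rng.normal(size=n)

# Raw slope: IQ 'predicts' the outcome...
print("IQ alone:", np.polyfit(iq_score, outcome, 1)[0].round(2))

# ...but regress the outcome on IQ and motivation together, and the IQ
# coefficient collapses: much of the prediction was motivation all along.
X = np.column_stack([np.ones(n), iq_score, motivation])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("IQ, motivation controlled:", coef[1].round(2))
```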

No matter how you may feel about the Nature-Nurture debate in regard to intelligence, this paper is yet another example of the complexities of biological causation.  In itself, it's probably a minor study relative to the larger question of how much of what an organism is, is 'inherited' in its genome, and how much is due to many other factors of its life-experience.  The authors themselves temper their interpretation.

However, the relevance to societal issues is great, since much in your life depends on how well you fit into the system that assigns resources and rewards, and because discrimination, positive and negative, results.  This affects your social 'fitness' and one can debate if it affects your evolutionary fitness.  But the question goes far beyond the political.

If organisms can be predicted from their genomes, then environments don't count much.  That is relevant to GWAS and personalized medicine, of course, but also to evolution itself.  If genomes are predictive we can effectively forestall disease.  That would be good (though it's largely not true).

But what about evolution?  The idea of evolution in the adaptive Darwinian sense is that genomes are predictive in the context of the environment!  That is what adaptation is all about--adaptation to the environment.  It is those genomes that, in their environmental context, proliferate the most.

This at its very core means that genomes cannot in themselves be predictive.  Or, put another way, adaptive Darwinian evolution can only occur to the extent that genomes are not predictive of the organism's traits.  In this sense, genomes evolve as respondents, not determinants.  In a static environment genomes might become predictive, or seem predictive, but that would be because of the environment, not in spite of it.
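
A toy illustration of that point -- all numbers invented: below, a genotype's effect on a trait reverses between two environments, so the genotype alone predicts nothing while the genotype in its context predicts very well.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
genotype = rng.integers(0, 2, size=n)     # carry the variant or not
environment = rng.integers(0, 2, size=n)  # two arbitrary environments

# G x E: the variant raises the trait in environment 1, lowers it in 0.
effect = np.where(environment == 1, +1.0, -1.0) * genotype
trait = effect + rng.normal(scale=0.5, size=n)

# The genome alone is useless as a predictor...
print("r(genotype, trait):           ", np.corrcoef(genotype, trait)[0, 1].round(2))
# ...but the genome in its environmental context is highly predictive.
print("r(genotype-in-context, trait):", np.corrcoef(effect, trait)[0, 1].round(2))
```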

The politics of IQ testing and its many, sometimes evil, heads is not our point here.  Again, we just take this story as an illustration of the elusiveness of biological causation, and the subtlety of the Nature-Nurture phenomenon.

Monday, April 25, 2011

Challenging accepted wisdom

We had a good week away.  We were at the Physical Anthropology meetings in Minneapolis, albeit briefly, where it was great to see old students and friends, including Holly -- who so ably kept MT hopping last week.  Not to mention that she gave an excellent talk in which she and her co-authors systematically, carefully and convincingly dismantled a widely accepted hypothesis about the size of the human pelvis and why human infants are so altricial.  She nicely made the point that long-standing ideas can be basically completely wrong but accepted, put in all the textbooks, and so on.  Holly and her collaborators have a different idea about this subject, but like most evolutionary stories, it is hard to prove and opens other questions.

We went from snowy and grey Minnesota to sunny New Mexico and the Santa Fe Institute, where people sit around and think about complexity.  Ken's been an external faculty member there for some years, and has had a number of good visits.  True, all academics are in effect paid to think, but there are major distractions -- classes to teach, students to advise, committees to sit on, plus often those thoughts must have some practical side.

Not so at SFI.  Trying to think out of the proverbial box is in a sense what they are there to do.  Of course, that itself can become another 'box' to think inside of.  Explaining complexity in physical, biological, and social Nature is their objective, and it is certainly relevant to our concerns in genetics and evolution.  So it was fun to spend hours talking science, mostly agreeing, sometimes not, but converging on shared ideas that we hope to build on in the months to come.

We try consistently to ask the question: "What if the accepted wisdom is not true?  What would that imply?", and we hope to keep asking it.  Most who ask this question come up with zero new answers, because accepted wisdom is always based on a lot of truth.  Yet new or better answers are almost always possible, at least in the history of science so far.  So we keep on trying, and trying to be (constructively) skeptical about current stories--especially those advanced with great confidence, hubris, or vested interests.  But to assert that working with the excellent people at SFI will lead us to do better would be the same kind of hubris, so we won't!

We took advantage of a free day to hike at Kasha-Katuwe Tent Rocks National Monument, southwest of Santa Fe, where we took this picture.  The whole area was formed by multiple volcanic eruptions, with prominent distinct layers still visible.  The 'tent rocks' or 'hoodoos' were formed when a harder rock on top of lower softer layers didn't erode when the strata around it did, thus protecting the lower layers.

Besides great fun, every trip West leaves one awed, wishing s/he could have become a geologist, and wondering at Nature.  It is an important reminder of the nearly imperceptible slowness and complexity of Nature, perhaps the greatest challenge in evolutionary science and genetics.

Tuesday, April 19, 2011

You are what your mother eats

A story on the BBC site this week reports research that has found DNA modification in infants that affects their prospects for increased body weight.
A mother's diet during pregnancy can alter the DNA of her child and increase the risk of obesity, according to researchers.
The study, to be published in the journal Diabetes, showed eating a lot of carbohydrate changed bits of DNA.
It then showed children with these changes were fatter.
This is not new, but is another of the current themes in genetics, to look for environmental effects that can be inherited.

Epigenetic changes alter DNA's chemical configuration by modifying the nucleotides (the A's, C's, T's, and G's), rather than by mutating--changing--them.  Modification of the DNA in this way can change the chemical machinery in cells that affects which genes the cell does or doesn't use.  Modifications like this are inherited when the cell divides, until re-set by later mechanisms.

Epigenetic changes in metabolism can affect how much weight a person gains.  If the changes are 'remembered' by the individual's cells and also imposed on that person's gametes or offspring, they can be inherited and can appear genetic.  But they are environmental (unless the person's DNA sequence makes it more vulnerable than someone else's to undergoing this epigenetic change in a particular environment).
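
A cartoon of the mechanism just described, in code -- the gene names and the single 'reprogram' step are invented for illustration: the sequence never changes, marks silence genes, marks are copied at cell division, and a later reset erases them.

```python
class Cell:
    def __init__(self, sequence, methylated=None):
        self.sequence = sequence                 # the A's, C's, T's, G's: untouched
        self.methylated = set(methylated or [])  # silencing marks, not mutations

    def expressed(self, genes):
        # A gene is used only if it carries no silencing mark.
        return [g for g in genes if g not in self.methylated]

    def divide(self):
        # Marks are inherited at cell division, along with the sequence.
        return Cell(self.sequence, self.methylated)

    def reprogram(self):
        # Later reset: same sequence, marks erased.
        return Cell(self.sequence)

genes = ["metabolism_gene", "growth_gene"]
cell = Cell("ACGT...", methylated={"metabolism_gene"})   # an induced mark, say
print(cell.expressed(genes))              # ['growth_gene']
print(cell.divide().expressed(genes))     # ['growth_gene'] -- mark inherited
print(cell.reprogram().expressed(genes))  # both genes back on -- mark erased
```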

Although yet another must-do approach, or fad, it is not yet clear how important epigenetic changes will prove to be.  If they are inherited in the sense that they reflect the infant's experiences in gestation, they may not necessarily be transmitted to the next generation.  But they may be important in trying to understand traits like obesity, because one typically looks to the person's own life experience -- exercise, diet, and the like -- when the person's mother's experiences during or even before her pregnancy might also be informative.

Of course, overeating is overeating, and it can make you gain weight regardless of the origin of the gene expression patterns involved.

Monday, April 18, 2011

Time's Deep, Woman

This was given to me by a friend of the artist who's a student in my Human Origins class at NEIU.

Friday, April 15, 2011

Travel advisory -- MT going dark

We are off on the first of a number of trips we've got coming up in the next few months, this one to the Physical Anthropology meetings in Minnesota, and then on to Santa Fe to do some collaborative work on the complexity issues in genetics.  We will no doubt be posting rather more haphazardly than usual, but we will post as often as we can.  Check back soon!

Thursday, April 14, 2011

The greatest failure in science?

World news and events have, for us, personalized some of the grief that humans are imposing on each other, as we have referred to in recent posts (here and here).  This leads us to reflect on science, and on the relative triviality of what most of us do each day compared to the tragedies being experienced each day, too.

The period known as the Enlightenment, roughly starting in the early 1700's, led to great hopes.  Thinkers gave up on a priori received knowledge in favor of empiricism.  The power of experiment and dealing with data rather than ideals led to and was stimulated by the development of new technologies such as optics, and these in turn led to quick, even startling, new understanding of the physical world.  The world could be understood by 'science' rather than (in modern terms) 'philosophy', and could be manipulated. Enlightenment science led to physical and economic improvement, empire, wealth, and much psychological satisfaction.  We in today's research world are a part of this 300-year-old legacy.

The same excitement led to various Utopian movements in Europe, which spread to the US and elsewhere.  The cruelties of  the world--harsh, unfair societies, famines, disease, inequities of all sorts--led to the belief that if science could make the material world better, surely by the same approach knowledge should be able to make society better, too!

In this spirit various social-political movements were initiated.  They included socialism, unions, communism, and other utopian visions of how society should be.  Many of these ideas were theoretical, couched in terms of natural law--just as physicists and chemists were doing in their fields.  The theory was that society, like organisms, was evolving, moving toward a time when oppression and inequity would cease.  Human knowledge would lead this evolution inevitably to stable, equitable endpoints.  So what has happened to all that?

Well, the physical sciences did indeed deliver, big time!  No matter our regular complaints here on MT about genetics and evolutionary determinism--our efforts to try to nudge the system to be better, as we motor along doing genetics ourselves--every science, including the life sciences, genomics, and evolutionary biology, has made and is making huge gains in knowledge.  The Enlightenment's empirical, methodical approach to the world works very well indeed when it comes to the physical world, including the genetic world.  (Of course, it wasn't all so Utopian, as the same approach gave us such 'successes' as AK-47s, laser-guided missiles, RPGs, and nuclear weapons.)

But in the social world, the social 'sciences' have, by and large, been abysmal failures.  Despite huge faculties, innumerable students trained, decades of munificent funding, and roles in policy-making circles, relatively little improvement in society can be attributed to the results of that research.  Our society, and world society, are in that sense not much improved (e.g., the end of segregation greatly improved American society, but that was not because of somebody's survey research).

We live longer and, in the wealthy countries, easier lives (Novocaine alone validates the Enlightenment approach!).  But with so many people in wealthy countries needing a variety of personal therapists, with growing social discord and inequity even in these societies, and with global turmoil and horrors unabated, the social sciences have manifestly failed to achieve the utopian goals, even approximately.

Now, some in the social sciences would disagree, saying that Darwin showed that we're all selfish and that the rapine and murder are built in, necessary, and even in that sense 'good'.  They might argue that the social Darwinian idea of society as a manifestation of evolutionary drives, and our nature as their result, explains what we see (even, by a stretch, the idea of suicide bombers) very well.  Equitable societies can at most be temporary aberrations of the underlying truths about life.

The failure of the social sciences to improve societal life, or to understand society and behavior on their own terms, has led social scientists to turn with envy (and avarice?) to genetics even in political science, and to all sorts of high-tech approaches in the social and behavioral sciences, including our own anthropology (here, we're not referring to neurosciences).  This turn, not accompanied by mea culpas, shows that we don't really understand society on terms that enable us to do anything about it, the way understanding physics and chemistry has led to so much success.  Unless it's to have Big Brother dope everyone up with gene 'therapy' so they vote the right way, don't mind being abused, don't become addicted, or whatever else we think we find genes 'for' that affect behavior.

Why this is can be debated.  But the social sciences (like the natural sciences) have become a self-protecting institutional part of universities.  Unless we start pulling the research plug on these fields until they deliver the kind of societal good we demand of the material sciences, in our view we'll just keep on wastefully spinning wheels.

And people, even children, in so many places around the world will keep on quipping about death.

Yes, science is very powerful and has done much good.  But the greatest failure in the sciences is in those that might be most important.

Wednesday, April 13, 2011

A healthy change for health research....or just day dreaming?

The problem, and a suggested solution
A commentary in the March 31 issue of the journal Nature has taken on the challenge of what to do about the NIH, the research behemoth that couldn't--couldn't deliver on its promise, despite decades of public largesse.  (And no, we're not criticizing Nature this time!)  The commentary is by the President of Arizona State University, Michael Crow, and suggests ways to take a huge operation that isn't doing its job in proportion to its funding, and reform it so it might.  Crow has done some other program turnarounds, including a serious reorganization at ASU, which gives him credibility in writing such a commentary.

From Crow, Time to rethink the NIH, Nature 471:569
NIH spends way more than anybody else on health research (on this planet, at least), but Americans have worse health and longevity, consistently and by many different measures, than many other countries.
This model for discovery and application in health care is failing to deliver. A 2009 report found that the United States ranked highest in health spending among the 30 countries that made up the Organisation for Economic Co-operation and Development (OECD) in 2007, both as a share of gross domestic product and per capita. In fact, the country spent 2.5 times the OECD average (see 'Big spender'). Yet life expectancy in the United States ranked 24th of the 30 countries... And on numerous other measures — including infant mortality, obesity, cancer survival rates, length of patient stays in hospital and the discrepancy between the care of high- versus low-income groups — the country fares middling to poor.
And it's not that these other countries exploit our research results better than we do ourselves.  To a great extent it's because our research isn't bureaucratically designed to improve health but to foster the interests of peer-reviewed research.  Crow suggests reorganizing and simplifying, so that as much research attention is paid to actual improvements in health as to basic science, with accountability built into the system -- which is not the case now.  He'd like to see a new NIH restructured around just three institutes: a fundamental biomedical systems research institute; a second institute focused on research on health outcomes, "measurable improvements in people's health" (fancy that!); and a third "health transformation" institute, whose funding would be contingent on success.

Of course, as we note regularly here on MT, the system is a System, interlaced with stubborn vested interests, from NIH's bureaucracy of portfolio-protecting officials and institutes, to investigators dependent on NIH grants regardless of whether anybody's health is improved or not, to universities (dare we say Dr Crow's included?) that hunger for the grants and the overhead which gives their administrators sustenance, to journals and news media who need 24/7 stories to sell, to the biotech industry that feeds on technology-glamorized research, to doctors who like or feel empowered by hi-tech tools and tests (some of which actually work, of course!), to social scientists and epidemiologists who do endless studies to evaluate and re-evaluate the health care system, to politicians who can preen themselves by playing the fear card ("I support research to help your health, so vote for me!").

A more radical solution
How to dislodge such a system and get back to science that works towards basic understanding of the world in a less self-interested way, and make health research about health, is not an easy question. Crow suggests that his ideas are radical, but one doubts that they are nearly radical enough, because truly radical change would have to undercut the bloat in self-proclaimed 'research university' 'research' activities.

Moving agencies like the Genome Institute to NSF would perhaps help.  NSF budgets are typically lower, and their grants pay less or no faculty salary, so tech-hungry investigators and overhead-hungry universities would object.  Many investigators at the NIH trough are paid on grants, not by their universities, a corrupt system that should never have been allowed to take root 30 or 40 years ago, but that has become vital to so many of us today.  That salary dependency leads to wasteful, often dissembling research, in part because of the very understandable need always to have external funding--can't blame investigators for wanting to be paid!

Moving genetics research to NSF would force it to focus on real scientific problems, not ones based on exaggerated or even disingenuous promises of health miracles.  It would force NIH to do research on things that mattered to health (shocking idea!).  Some of that would certainly involve genes, for traits that really are 'genetic', but most would involve less glamorous, non-tech, boring things like nutrition and lifestyle changes (not research about such changes, as we already largely know what would deliver the most health for the buck, and that research, soft to begin with, leads to nothing but the claimed need for more follow-up studies).  NIH budgets for research could be greatly pared down with no loss.

If lifestyle changes were made, then diseases that are seriously genetic would be clearer, and they would be the legitimate targets of properly focused genetic research.  Meanwhile, researchers with reasonable ideas could do basic research funded by NSF--but, with less money available, they (we) would have to think more carefully, and not assume we'll always have a huge stream of funding, or that just more tech or more data meant better science.

Universities would have to phase in a strange policy of actually paying their faculty, would have to provide at least modest lab-continuity support to allow investigators to think before they wrote grant applications, and would have to scale down, gradually having fewer faculty, more stress on actual teaching (another shocking idea!), less dependency on grant overhead, and less emphasis on 'research' (much of which demonstrably goes nowhere).

This could be good for everybody.  Science would be less frenetically and wastefully competitive.  The competition could be more about ideas, less about money and publication-counts.  Such changes could, in principle, put science back onto a path more closely connected to understanding nature than to feeding Nature.  And the journals, including Science and Nature, could phase out the gossip columns (which, in the current careerist system, we naturally read hungrily--they are probably read far more than the science articles themselves) and get back to reporting rather than touting science, in a way more closely connected to their articles' actual import.

Of course, the current system feeds many, and that is probably what it is really about.  So dream-world reforms are unlikely to happen, unless simply forced not by thoughtfulness but by a plain lack of funds.

Tuesday, April 12, 2011

"If I die, I'll send you a text message."

So said a 12 year old Palestinian boy, a student of our daughter's, when she cautioned him to be careful.  It's a week since the actor/director, Juliano Mer Khamis, was killed in Jenin, Palestine, and this boy had heard that a location near his home was rumored to be the target of the latest death threats.

What is the point of education in theater, music, or the arts -- or science, for that matter -- when such realities are a part of daily life?  Yet even people as brutally and degradingly oppressed as the Palestinians are today hunger for knowledge, for aesthetics.  Even as children.  Or perhaps even especially as children.

The Palestine that we read of in the papers is rather statistical and abstract to us in our comfortable academic lives: so many killed or taken maimed to hospital--so many of this religion, or that sect--that it is a matter of numbers and daily column-inches.  A brief 'tsk, tsk, too bad', and we are back to our important grant application, experiment, or publication.  Or even a lecture.

We know we may get ill or accidents may befall us.  But those events are so unlikely in our lives or the lives of people we know that when it happens to us, along with sadness, we may feel resentment that 'real life' has been interrupted.  We take our daily life seriously, as if it matters--as if we matter--and this inconvenience, or tragedy, is only a brief time out, a commercial break.  Then, let's get back to what's really important, our major research--making sure we've GWAS'ed exhaustively enough to find all of the thousand variants that trivially affect your risk of death--in bed, at age 70.

But this child in a different world just casually talking about the dangers he faces every day, with so many of his relatives dead or in jail, can reassure his teacher of his seriousness about doing his work and showing up for his lessons by making a quip about fate, a tacit recognition that the odds aren't nearly as in his favor as they are in ours.  "If I die, I'll send you a text message."  He smiled.

Monday, April 11, 2011

What is peer review? What should it be? Does it work?

Scientific fraud
There was a recent discussion on our favorite medium, the BBC (in this case, the World Service on radio) about the problem of scientific fraud.  How common is it?  How harmful?  How does it get through the peer review process?

Science depends, more than perhaps any other area of human societal life, on honesty.  That's because we each can only do our own experiments, and can't possibly redo all that was done before, which set the stage for what we attempt to do.  Fortunately, real fraud seems to be very rare in science.  It occurs, but when caught is usually punished by strong sanctions.  It is very serious indeed, because investigators can spend precious time, effort, and resources pursuing ideas suggested by published reports of important findings or methods.  The findings are widely cited and used in support of new ideas in, for example, grant applications.  In anthropology, the Piltdown fossil fraud was accepted by many scientists for decades, and built into texts and other frameworks for analyzing human evolution.  The recent Korean cloning fraud misdirected many labs into wasteful cloning experiments.

Major fraud is caught in various ways, perhaps most often by others trying, and eventually failing, to replicate an important result.  Minor fraud may be more common--we know about the major cases because of the problems they cause, but few care about minor results.  And dissembling and exaggeration and self-promotion are rife.  The public may not be aware of it, but scientists usually are (and often justify doing it themselves because 'everybody does it' and 'you have to, to get funding or to be published in Science.')

Puffery has consequences similar to fraud in that skillful hyperbole establishes fads, and most of us, desperate for attention and funding and so on, eagerly jump on band-wagons (especially when a new toy, like fMRI or gene expression arrays, is available).  Overall, it is likely that puffery in fact causes more damage than outright fraud, by diverting funds and energies into areas less promising than is believed.

Peer review
Peer review is the system that supposedly keeps up the scientific standard.  An article is submitted to a journal and, if deemed relevant, is sent to 2 or 3 experts to judge.  They are to judge quality and appropriateness, importance or novelty, and to find factual errors and the like in the reported work.  But, of course, these peers may also be rivals, or (almost always) too busy to pay close attention to the paper they're sent.  Nowadays, with the huge proliferation in the numbers of frenetically competing investigators, and new journals, and tons of online 'supplemental' information allowed, it is nearly impossible to maintain a standard.

When reviewers find unclear statements, things not well explained, or actual errors, then they do the original authors a huge positive service by improving the paper before it is published. 

Peer reviewers were never asked to find fraud, however.  They are hardly ever in a position to do that anyway.  They, like all of us, must in practice assume the honesty of the authors.  So peer review doesn't generally find fraud.

One thing the system of peer review does that we find increasingly irritating--and try to avoid ourselves--is empower the reviewer to insist that the author(s) modify the paper in ways that, when you get right down to it, would make the paper more like what the reviewer would write if s/he were the author.  This leads to often extensive modification in ways that are forced rather than natural, and not what the authors actually wanted to say.  The authors, anxious to get their work published and move on, will then litter their revision with every kind of cumbersome citation, caveat, and implicit rhetorical bow to the reviewers--so they can tell the Editor they responded to all of the comments.

More and more, we feel, the reviewer should find mistakes and unclear aspects of papers, but just let the authors have their own conclusions and interpretations.  If the authors do a bad job, then readers of the paper will do better work, ignore the paper, or whatever.  But if the authors do a good job--including a good job of presenting their case in a style they wish to use--then the paper they wrote is better off unadulterated. 

Peer review is valuable, but could be improved if reviewers were instructed to stick to the important issues.  Let the poets in science out of the lab!

Friday, April 8, 2011

Deep Time: The Movie

The online journal PaleoAnthropology has recently posted the longest boringest movie ever made!

Why would anyone do that? Well, hopefully, you'll see for yourself.


The movie begins with the earliest hominin fossil from between 7-6 million years ago and ....93 hours later... it ends with [SPOILER ALERT!] modern humans.

You can download the movie here, and you can also check out the brief article that accompanies it, called "Deep time in perspective: An animated fossil hominin timeline."

(Notice the author's name?)


Because here on the Mermaid's Tale I can address a broader audience--and I'm also free to write about celestial bodies and microbes as much as I please--I thought I'd post about the movie here.


So stock up on raisinettes, nuke enough popcorn to last 93 hours, find a comfortable chair, and enjoy your journey through deep time....


Darwin and Deep Time

Darwin heavily stressed the concept of deep time in Origin of Species because he knew it was a major obstacle to understanding evolution.
It is hardly possible for me to recall to the reader who is not a practical geologist, the facts leading the mind feebly to comprehend the lapse of time. (Darwin 1872: 294)
Deep, geologic time is absolutely crucial to evolution but it is difficult to grasp let alone represent in a drawing.

Scientists, like everyone else, have no frame of reference and so the lapse of deep time is all but ignored in evolutionary trees, like this one that Darwin drew.

Mr. Croll, a Strip of Paper, and Deep Time
A colleague of Darwin’s, a "Mr. Croll" mentioned in Origin of Species, offered a crafty illustration of deep time.

Take a narrow strip of paper, 83 feet 4 inches in length, and stretch it along the wall of a large hall; then mark off at one end the tenth of an inch. This tenth of an inch will represent one hundred years and the entire strip a million years. But let it be borne in mind in relation to the subject of this work, what a hundred years implies, represented as it is by a measure utterly insignificant in a hall of the above dimensions. (Darwin 1872: 269)
Demonstrations like this offer a glimpse into the expanse of deep time and conjure up a feeling of awe. Everyone should try Mr. Croll’s exercise.
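
The numbers check out: 83 feet 4 inches is 1,000 inches, or 10,000 tenths of an inch, and 10,000 segments of 100 years each is one million years.  A quick sketch, for anyone who wants to rescale the exercise to their own hallway:

```python
strip_inches = 83 * 12 + 4    # 83 feet 4 inches = 1000 inches
segment_inches = 0.1          # one tenth of an inch...
years_per_segment = 100       # ...stands for a century

segments = strip_inches / segment_inches
print(segments * years_per_segment)  # 1000000.0 -- Mr. Croll's million years
```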

Wait. What? You don’t have 83 feet of paper? You’d like to experience more than one million years of deep time?

Well, here’s a solution for you. It’s an animated movie of Mr. Croll’s strip of paper, and the inspiration to make it came from marking off the hominin fossil record on a long strip of register receipt paper in Alan Walker's paleoanthropology lab several years ago.

Making the Movie about Deep Time
Because a movie about the entire fossil record of life on Earth might take a lifetime to make and a lifetime to watch, we focused on the last six and a half million years of human evolution.
We built a database of the majority of significant fossil hominin specimens beginning with the late Miocene Sahelanthropus cranium and going up until about 40,000 years ago.

(This was back in 2005 so we could only include fossils that were published at that time. We could also only use the known dates/ages from that time. We also had to stop adding fossils near the end (the present day) because that part of the record gets enormous and the fossils overwhelmed the movie... you'll see if you watch it all the way through.)

Then, with help from our programmer friends, we created a movie using the software Shockwave® that turned the record of hominin fossil specimens and their ages into an animated timeline.
The movie pauses with each fossil to allow the viewer to see it. Skeleton icons represent complete and nearly complete skeletal remains. Skull icons represent complete and nearly complete skulls and crania. Partial skull icons represent skull fragments. Full tooth row icons represent nearly complete maxillae and mandibles. Partial tooth row icons stand for jaw fragments. A single molar icon stands for an isolated tooth.

Watching the Movie about Deep Time
To watch the movie, you need to go to this link, find my article, and click on "PA20110013_S03.zip." Download it, unzip it or "extract" the files (crucial), and then click on the "timeline," which should run in Mozilla Firefox or another browser. Detailed instructions on how to do this are here.

Viewers of the movie may begin by following these steps:
  1. When you open the movie, it starts rolling immediately. The movie can be stopped at any time by clicking on the red button, which, when clicked again, causes the movie to resume playing. The default rate is 20 years per second, so time ticks by at one generation (20 years) per second.
  2. When you watch the movie at the default rate for at least 2 minutes, you will experience only about 2400 years and will see very few icons. At this rate, watching the entire 6.5 million years of the timeline would take about 93 hours!
  3. For a different perspective, you can click on the drop down menu and set the pace to 50,000 years per second (same as 2,500 generations per second). At this pace you can witness the entire timeline of hominin evolution in a more practical length of time... with the film taking only several minutes to watch from beginning to end. (The arithmetic behind these rates is sketched below.)
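
The viewing times quoted in the steps above follow directly from the rates; here is that arithmetic, assuming (as stated above) a 6.5-million-year timeline and 20-year generations:

```python
TIMELINE_YEARS = 6_500_000
GENERATION = 20  # years

def watch_time_hours(years_per_second):
    return TIMELINE_YEARS / years_per_second / 3600

# Default rate: one generation (20 years) per second.
print(f"{watch_time_hours(20):.0f} hours")    # ~90 hours of pure ticking; the
                                              # pauses at each fossil push it
                                              # toward the ~93 quoted above
print(120 * 20, "years")                      # two minutes covers 2400 years

# Fast rate: 50,000 years per second...
print(50_000 // GENERATION, "generations/s")  # ...= 2,500 generations per second
print(f"{watch_time_hours(50_000) * 60:.1f} minutes")  # start to finish
```
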
A Guide to the Movie about Deep Time
The clock starts at 6.5 Ma with the cranium from Chad (Sahelanthropus tchadensis). Time ticks by on the screen at the chosen rate until 6 Ma, when symbols representing the next known specimens (belonging to the species called Orrorin tugenensis) flash into view. Then specimens belonging to Ardipithecus pop onto the screen, and so on until the proliferation of anatomically modern humans in the Late Pleistocene. This movie offers a different perspective on the hominin fossil record from that of conventional evolutionary tree drawings, where the time span of a species is sometimes estimated based on a single specimen or very few specimens. See the diagram below for just one of the myriad examples (Dunsworth, 2007).


Gaps in the Fossil Record and Deep Time
There are literally tons of fossils in museums around the world, yet it is clear from watching the movie that there are more gaps than there are fossils. How can that be? Part of the answer has to do with the fact that not all creatures fossilize; and, of those that do fossilize, not all preserve to the present; and, of those that do preserve to the present, only a rare few are actually discovered by paleontologists. This enormous Earth is teeming with life. And it has been teeming with life for longer than we can comprehend from our limited perspective.

So not only is the fossil record spotty because of the rare conditions that are required for fossils to form, and the special conditions that enable paleontologists to find them, but the fossil record also has gaps simply because it is so long. The record covers so much deep time that it is taking generations upon generations of scientists to fill in those gaps.

Like your body, like the Milky Way, and like the universe, the fossil record is more empty space than matter.



However, that does not diminish the wealth of information that we can glean from the thousands of hominin fossils and artifacts that have been discovered so far.

In general, hominin fossils are more similar to others that share space and/or time than they are to those found further away in space and/or time. There are a few instances where different types or species of hominins were living practically side-by-side (e.g. robust australopiths and early Homo) and when that is discovered, the hominins are placed on separate evolutionary lineages, marked by separate species or generic designations. Noting similarities and making distinctions are all part of the process that paleontologists use to reconstruct evolutionary history.

In spite of the gaps in the hominin fossil record that are made obvious by the movie, scientists know a great deal about human evolution.

Overall Trends in Hominin Anatomy and Behavior
The traits and behaviors that distinguish hominins at the genus level are evident in their teeth, their skulls, their bones and the artifacts that they leave behind such as stone tools and butchered bones.

Ardipithecus, Orrorin, and Sahelanthropus: As the fossil record stands now, the bipedal abilities and adaptations of the earliest hominins are disputed (as is their inclusion as hominins!). Some of the teeth of these small-bodied apes have thick enamel and their canines are relatively small—both trends that link them to later hominins.

Australopithecus: The main trend in the time of the australopiths is that of bipedal adaptation. It’s during this era that two separate lineages diverged: The robust australopiths (or Paranthropus) and the genus Homo to which humans belong.

Robust australopiths (a.k.a. Paranthropus): With their large molars and large jaws, these hominins adapted to hard, tough diets and then went extinct.

Homo: With their nearly modern (and, later, totally modern) skeleton, these hominins made and used stone tools as they added meat to their diet. It’s during this time that the brain makes a significant expansion. Species in the genus Homo are the only hominins to be discovered outside of Africa.

Lamarck and Deep Time

Prior to Darwin, Jean-Baptiste Lamarck grappled with deep time and described how minuscule the present perspective is in comparison to the vast stretches of time that came before.

There is one strong reason that prevents us from recognizing the successive changes by which known animals have been diversified and been brought to the condition in which we observe them; it is this, that we can never witness these changes. Since we see only the finished work and never see it in course of execution, we are naturally prone to believe that things have always been as we see them rather than that they have gradually developed. Throughout the changes which nature is incessantly producing in every part without exception, she still remains always the same in her totality and her laws; such changes as do not need a period much longer than the duration of human life are easily recognized by an observer, but he could not perceive any of those whose occurrence consumes a long period of time. (Lamarck, 1809)
Thanks to controlled breeding experiments on organisms like bacteria and basset hounds we can witness evolutionary changes. What’s more, that you are not a clone of your parents, that your generation is different from the previous one, is evolution. However, the dramatic changes that occurred over thousands and millions of years before the present, and that will carry on for thousands and millions of years into the future, are beyond our imagination.

Deep Time is beyond Human Imagination
It is impossible to comprehend deep time. Even if we had a time machine and could visit an australopith 3 million years ago, we would only gain a snapshot of the past.

To gain a better perspective on deep time than we have now—that is, to really see evolution taking place—we would have to increase our life spans substantially, which would require a substantial increase in body size as well.

Being as large as Earth would certainly give us a better feel for the lapse of deep, geologic time. Being as big as the Milky Way would be even better. And so on.


This is why so many evolutionary biologists study microbes. To a microbe you are slow and absolutely enormous.


It’s from this present and human point of view that we must continue to work towards the scientific truth about the evolution of all life on Earth, from microbes to mammals. Our inability to truly comprehend deep time does not prevent us from understanding evolution. As long as we accept our limitations we can forge ahead.

Acknowledgments

Alan Walker was the inspiration behind this project which was carried out at Penn State University. Jessica Berry, Stephanie Kozakowski, Gail E. Krovitz, Maria Ochoa, Joseph D. Orkin, and Kelli L. Welker helped compile the fossil hominin database. Brian Shook and Patrick Besong creatively programmed our ideas into the computer. Kevin Stacey, Ken Weiss, and Michael Rogers provided helpful insights, comments, and discussion.


References
Darwin, C. 1872. Origin of Species, 6th edition. John Murray, London.
Dunsworth, HM. 2007. Human Origins 101. Greenwood Press, CT.
Dunsworth, HM. 2011. Deep time in perspective: An animated fossil hominin timeline. PaleoAnthropology 2011: 13-17.

Lamarck, JB. 1809. Additions to Part I, Zoological Philosophy. From 1914 translation.


P.S. Time's Deep, Man...

Thursday, April 7, 2011

Carrots beyond the BioVerse


Friend, and MT reader, Hank Tusinski sent a story that he thought might be a nice addition to our carrot musings of the other day.  He warns that he was in a feverish haze when he remembered this, so it may or may not be true.  But if it's not, it should be.
I recall a Ripley's Believe it or Not, about a couple who got into a spat in a town near where we grew up, and she took off her wedding ring and threw it into the field behind their house. When the remorse set in, and they went hunting for it, of course it was nowhere to be found. Then, however many years later, but many, she pulled a carrot - yes, a carrot - out of the ground, to find it had grown into the ring. How sweet and poetic is that?!?  And yes, I wonder if they gold plated the carrot and memorialized it, or simply grated it and ate it....
Hank, ever the gleaner, searched the web for this story, to no avail.  But he did come close.

Wednesday, April 6, 2011

Senseless tragedies in the Age of Science--what's the point?

Our daughter is a teacher living in Palestine this year.  She loves what she does, she loves the kids and the teaching, but living in Palestine is very hard.  Even for someone who could leave anytime she wanted to.

She told us a chilling story yesterday -- a man associated with the school where she has taught had been killed in cold blood the day before.  He was director of The Freedom Theatre, a drama school in the Jenin refugee camp that he had started with his mother.  He was shot outside his school, with his son looking on.

The story made international news because the man was Juliano Mer Khamis, a well-known actor and activist.  His mother was an Israeli Jew -- an activist for Palestine -- and his father a Palestinian Christian.



He made a film about his mother's work with Palestinian children, Arna's Children.  He talks about it and other issues here.



This is a terrible tragedy for this man and his family, for the children at his school, for the refugee camp, for other children whose worlds he touched, for Palestine and what Mer Khamis called the Third Intifada, the struggle for culture, music, drama, camera, art amidst the occupation.  But the killing of one internationally known man is no more of a tragedy than the killings that happen in Palestine every day, and are not broadcast around the world. Or than the killings that happen in many other places every day, only making the news when there are enough of them (as in Libya or the Ivory Coast at the moment).

Various theories about why Mer Khamis was targeted are emerging -- the fact that some were not pleased that his school violated Islamic law by teaching girls and boys together is one.  But, whatever the  reason turns out to be -- and whatever your politics -- it is still difficult to understand how this kind of senseless killing can happen in the 'age of science'.

How, in a time when we know so much about the world, can so many people -- who have been exposed to knowledge in school, television, or other media -- feel it is perfectly normal to kill in the name of [YourFavoriteGod]?  And for those who recognize this secular world for what it actually is, how can it be that, with all our knowledge, we can feel so cold towards others -- others we've never seen or met -- as to dismember them limb from limb with impunity, or even take all that they have, their lives, from them?

Why is the penetration of knowledge so feeble that these horrors are not abated?  What is the point of schools?  What is the point of all our fancy research, most of which goes nowhere fast (except, perhaps, the research that enables people to blow up others they don't see, or only see on a video screen)?   We often voice disagreements about the way some science is done, but we are all trying to advance our understanding of the world.  But is education only tribal acculturation?   Other than for technology, does all this effort serve no other purpose?

It is common to blame human horrogenic behavior on evolution -- the devil in Darwin makes us do it.  One can say that with resignation (it's just how the world is, like it or not), or as supportive justification (we're helping Nature do its handiwork).  But this kind of mutual objectification occurs among religious believers as well as secularists.  It's all too convenient, and certainly in no way justified by what we know in science (including what we know about evolution).

Shouldn't our huge investment in what we believe to be the value of 'education', including but going beyond science, have some better payoff than this?

Tuesday, April 5, 2011

The GWAS of economics?

Why don't economists understand economies?
To try to understand why economics didn't predict or prevent the recent economic crash, BBC Radio 4 has done a series on the history of the discipline.  The first of the three programs was about economics as a story of morality, the second about economics as a science, and the third about the psychology of economics.  The central question of the programs is, in a nutshell, why economists can't predict the economy.  Because we write a lot at MT about why geneticists or epidemiologists can't predict disease, this seems like an appropriate topic to expand upon a bit here.  The details are all different... but the phenomenon may be similar.

Economics as morality
Economics as a discipline began in Greece, as philosophers like Aristotle thought about the market and how it shaped and reflected morality. Wealth should be used for the Good Life. And indeed many people think of the recent global economic crash as a giant morality tale, with the market forces of greed and evil resulting in harm to millions of innocent victims.  The conclusions people draw about morality, however, seem to vary according to the observer.  To some, the victims weren't all that innocent, or the government should have intervened earlier, or shouldn't have intervened at all, or the bankers should either be stoned or given bonuses, and so forth.  A bit like assessments of success or failure in genetics.

The program discussed the origins of economics as an institution that evolved to build trust among humans, humans being the primates that build trust via "psychological interactions and formal institutions," while, the guest informed us, chimps build trust by coalition management, and bonobos by having sex.

But this seems not only simplistic but a blatantly specious argument because, if economies exist to build trust, why do we need so many laws to regulate economic behavior?  Anti-trust laws, laws against insider trading, laws about how financial institutions work, and on and on, and why is there so much white collar crime?  And didn't the crash happen because there wasn't enough regulation, which is needed because so many economic actors can't be trusted? 

And, if economics is about morality, it's a shifting morality.  Self-interest and selfishness have become acceptable, or even applauded in the last 30 years or so, whereas it used to be shameful to do things for personal profit, and one certainly couldn't boast about it. So clearly much that is real is not hard-wired into human behavior or societal structures.  Of course this is an acute, but largely unheeded lesson for those who yearn for genetic determinism in human behavior, built into our nature by natural selection.

To Adam Smith, how the economy works had to do with what makes humans different from dogs.  We have an inbuilt propensity to barter and exchange, he said, while dogs just fight over bones.  Ours aren't narrow self-interests, but based on our ability to reason. Of course, we know that any human universals--a target of Darwin as well as western economic theorists--are subtle and elusive, if they are even known.

But, don't even lions share?  And ants?  Ok, maybe they don't barter or exchange value for value in human terms, but they do redistribute resources. Some would argue that that's because lions and ants that share are sharing among kin, but even so, this does put a damper on the idea of human exceptionalism.

And still, why did economics get it so wrong about the crash?  Are markets emergent properties that cannot be forecast by, or are chaotic relative to, imperfect measurements of their component activities?

Scientific economics
The second program explored whether or not economics is a science, whether it should be, and why as a science it still can't predict economic behavior.  At the end of the 19th century, economics books began to look like mathematics books, and economists began to talk about laws, forces, and mechanisms.  If economics is a science, it's a science of observations, like astronomy, not a laboratory science.

To illustrate the law-like nature of economics, an inventor, Bill Phillips, proposed to a Cambridge economist in the 1940s that he could build a machine that could predict the economy.  "I don't understand economics," he said, "but I do understand plumbing."  So, using water flow, he built a model of the Keynesian theory of economics, the Phillips machine.  And indeed it solves Keynesian models, but does it model economies?  Here's a video of the machine that may (or may not) clarify the issues.  What such devices (and similar kinds of computer simulations) do is build in some assumptions and work out the consequences.  But if the assumptions are wrong, or the system is sensitive to conditions at any time, the predictions will follow from the assumptions but won't generate what happens in the real world--which, presumably, is what we care about.
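
To make that concrete, here is a toy version of the kind of model the Phillips machine solves hydraulically: the textbook Keynesian cross, where equilibrium output depends on an assumed 'propensity to consume'.  The numbers are purely illustrative, but they show how a small error in one plumbed-in assumption swings the prediction:

```python
def equilibrium_output(autonomous_spending, propensity_to_consume):
    # Keynesian cross: Y = C + spending, with C = c * Y,
    # so equilibrium output is Y = spending / (1 - c).
    return autonomous_spending / (1 - propensity_to_consume)

spending = 100.0  # investment plus government spending, arbitrary units

for c in (0.80, 0.85, 0.90):
    y = equilibrium_output(spending, c)
    print(f"assumed c = {c:.2f} -> predicted output = {y:.0f}")
# c = 0.80 -> 500;  c = 0.85 -> 667;  c = 0.90 -> 1000
# The machine faithfully works out the consequences -- of whatever
# assumptions were built into it.
```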

But, just like defenders of the genetic model of disease, many economists insist that some day their models will be more precise, even though they can't yet estimate with precision, nor predict what the net effect of market forces will be.  Economics, even scientific economics, can't yet predict growth, or explain or predict the business cycle.  But perhaps that's to do with the human factor.  Again, just like genetics: risk of disease, even when there's a genetic component, seems to have much to do with how we live our lives, which tends to make disease outcomes rather unpredictable.

Homo Economicus
Because economics is so poor at prediction, many economists have decided that the human factor may be the reason, and this is what was discussed in the third program.  The answer, on this view, must come from understanding humans and what drives their economic behavior.  And yes, to this branch of economics, the answer could be genetic.

So either human behavior is noble and reasoned, with orderly implications for economic behavior, or economic behavior is at the mercy of human impulse, and if we can just understand that impulse, we'll understand economics.  Do people have sound economic judgment?  Unfortunately, it has been demonstrated numerous times that we often do not.

According to one expert, the problem is that economics was established by apes who evolved on the savanna, in small bands of related individuals.  The psychology those apes brought to the task then simply was not up to the complexity of the system they came up with, and that explains why economies run away from us now.

But to assume that, because some human trait arose when our ancestors were on the savanna, we can't adapt to current circumstances now that we're no longer there, is just wrong.  We can do calculus, can't we?  We invented rockets and penicillin and nuclear bombs long after our brains evolved their present abilities, with presumably no idea of going to the moon.  Successful organisms are nothing if not flexible, adaptable, able to change with changing circumstances, and humans are of course no exception.  And by what reasoning do we go back only to other primates (i.e., other primates alive today)?  The relevant thought processes didn't originate with primates.

John Maynard Keynes believed there was something beastly about our behavior.  He wrote about "animal spirits", referring to the driving force that gets us going in the economy.  Economic models can't explain what makes economies fall into recession, and then what makes them rebound, so Keynes suggested that maybe moods are fundamental to economics.  Populations change their thinking in unpredictable ways, with unpredictable economic results.  The human factor.  This is why Game Theory and even genetics are big in economics today.

The upshot
So, basically, we don't understand what drives economies, nor what drives economic behavior.  Rather akin to how we don't understand disease causation in so many cases, nor know how to predict who will get what.  What we experts do understand is how to keep asking the public to shower resources on us because of our expertise--that's an undoubted skill.  However, as one economist pointed out in the program, if economists really knew anything, planned economies would be more efficient and predictable than the free markets that triumphed over the Soviet Union in the 1980s, but they are not.  Likewise, statistical approaches to genetic diseases should tell us more than they do.  Perhaps, as another guest on the third episode of this series said, we'll eventually understand 9/10ths of the forces that drive markets, but we'll never understand them all.  We'd like to see more geneticists be so accepting of genetic realities.

There is a problem here that goes beyond the clearly empirical fact of the lack of predictive power in many areas of life, including many areas of genetics.  It is that we 'experts' have a lot of knowledge, but a limited ability to actually predict what we are paid to predict.  In that sense, why shouldn't our jobs be taken away and given to people who actually give the public what it wants: sex, music, other entertainment, new kinds of fast foods, video games, and the like?  Since we still have jobs, clearly experts do provide something that society feels is useful!  Expertise is hard-won and clearly real in many ways.

Yet, experts are the priests of secret knowledge to whom we still turn even knowing that their knowledge, while real, is often not sufficient for accurate prediction.  Even in the age of science, we live on future promise, ignoring past records.  This is very curious!

Monday, April 4, 2011

The lessons of the Land: part IV

Our 3-part series last week on lessons learned in modern, genomically based plant breeding was intended to address its conceptual relation to evolution and genetic causation generally, and to problems in human genetics specifically.  We hadn't intended a 4th part, but thanks to conversations with Ed Buckler, the Cornell plant geneticist whose ideas were featured in the Land Report that motivated this series, we wanted to add some further comments.

The idea of genome-based selection (GS) is that you take a sample from a population, of maize or goats, say, and measure phenotypes of interest in each individual, then genotype each individual for a large number of genome-spanning variable sites (markers), just as in genomewide association studies (GWAS).  You use these data to evaluate the contribution that every marker site makes to your trait, thus optimizing a phenotype-predicting score from the genotype.  Then, you use this score to select individuals for breeding.  After a number of generations, you expect an improved stock.
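For concreteness, here is a minimal sketch of that pipeline in Python (with numpy).  The data are simulated, and plain ridge regression stands in for the more elaborate mixed-model machinery actually used in genomic selection; every name and parameter value below is our own invention for illustration, not anyone's published method.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- A simulated training population (stand-in for real field data) -----
n_individuals, n_markers = 200, 1000
genotypes = rng.integers(0, 3, size=(n_individuals, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.1, n_markers)    # unknowable in real life
phenotypes = genotypes @ true_effects + rng.normal(0.0, 1.0, n_individuals)

# --- Step 1: estimate every marker's contribution to the trait ----------
# Ridge (shrinkage) regression keeps the fit stable when markers far
# outnumber individuals, as they always do in practice.
X = genotypes - genotypes.mean(axis=0)
y = phenotypes - phenotypes.mean()
lam = 10.0
marker_effects = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# --- Step 2: score new candidates from genotype alone, and select -------
candidates = rng.integers(0, 3, size=(50, n_markers)).astype(float)
scores = (candidates - genotypes.mean(axis=0)) @ marker_effects
chosen = np.argsort(scores)[-10:]                 # breed from the top 10
print("Selected candidate indices:", chosen)
```

Repeat step 2 on each new generation's offspring and, if the assumptions hold, the predicted scores--and eventually the trait itself--shift in the chosen direction.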

This is very similar in nature to what is done with human populations in some recently advocated methods of using genomewide data to make individualized predictions.  Peter Visscher is probably the author most prominently recognized as developing these methods, though many others are now also involved.

In both human genetics and agriculture, we use a current data set of achieved traits--kernel yield, muscle mass, human stature, blood pressure or disease.  But this is retrospective assessment of genetic associations, and it may only partially reflect genetic causation.  For example, environmental factors may be unmeasured.  Also, much of the variation in natural populations will be captured in one sample but not the next, or will not exist in other populations.  These facts place some limits on the predictive power of genomewide data.

Nonetheless, as with using parents to predict traits in offspring, if the genetic component is substantial (by measures like heritability, or trait correlations among relatives), there must be regions of the genome that are responsible, and that is what this approach finds.  How advantageous it is over measurements of phenotypes in relatives can be debated, as can the amount of contribution to the trait, like disease risk, of large numbers of very rare, never-to-be-seen-again variable sites.  And the prediction is of a net result, which need not be (and often will not be) due to a tractable set of genome sites.  So the biomedical dream of 'personalized genomic medicine' may or may not answer the dreams of its advocates.

The idea should work much better in agricultural breeding, because the population is closed and the desired genetic variation is systematically and strongly favored.  Thus, over a few generations the genetic composition can be strongly shifted in the desired direction--at least under the controlled environmental conditions of the selective breeding.
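A side note for the quantitatively inclined: the standard textbook summary of why closed-population selection works so efficiently is the breeder's equation (our addition here, not something the Land Report itself invokes), giving the response to one generation of selection:

$$R = h^2 S$$

where $R$ is the per-generation response (the shift in the offspring mean), $S$ is the selection differential (how far the chosen parents' mean lies above the population mean), and $h^2$ is the narrow-sense heritability.  In a closed herd or field under controlled conditions, $h^2$ is comparatively stable and the breeder sets $S$ directly, so responses compound over generations; in free-living humans, neither term is under anyone's control, which is part of why the same arithmetic yields so much less predictive traction.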

The discrepancy between breeding experience and the observational setting of human biomedicine--and of evolutionary biology--may, if carefully considered, provide ways in which the former can inform the latter.  There are reasons to think that important changes in view may result.

Friday, April 1, 2011

Three odes to the carrot

As part of MT's sub-category, BioVerse, we like to publish something other than rants and (occasional) raves from time to time.  Poems inspired by questions about life.

Here are three poems to the carrot in response to a discussion about the nature of self-awareness.  We thank Gary Greene, a poetically-inclined MT reader, for his poem, his permission to post it here, and the inspiration it gave us to write our own.

By Gary Greene 

Carrots that feel, beets that pine,
Are they rooted in awareness, 
These veggies of mine?
When arrayed at the store,
Is it dirt that they dish?
Can onions and potatoes
Make a vegetable wish?
Would they choose life in the ground,
Until they lay rotten?
Or to be served up in bowls,
Sauteed or au gratin?
Do they think deeper thoughts,
the deeper they grow?
Do they plan for the future,
or just wait for snow?
When the frost comes early,
do they think warmer things?
And at night, if they dream,
take flight on veggie wings?
We may never know
if veggies are aware,
if they think lofty thoughts,
or simply don't care.
It's so hard to tell,
there's nothing to do,
but pass the green peas,
and the turnip greens, too!

By Ken Weiss
A CARROT IS A THOUGHTFUL THING!

A carrot is a thoughtful thing,
That worries what the rain will bring.
With all its neighbors, sore it grieves,
The gruesome gnawing on its leaves.

Of peas we could the same relate,
And from the lettuce, no debate.
The Fall's last katydid doth moan,
Unanswered calls: it's all alone.

These plaintive cries unnoticed, all,
By humans: blinded to the pall,
And hearing naught, can naught believe.
So, voiceless, plants no pathos leave.

We spare no thoughts for 'thoughtless' being,
Assuming only we are seeing,
Yet trees observe the sun all day,
With leafy retinal display.

We grant no ‘self’ to fish nor fowl,
Hard-wired deem the lion's growl.
Thus poems are writ by fools like me,
Who can’t converse with ant, or tree.

By Anne Buchanan

I just hope
that
carrots don't dream.

But if they do
I hope they don't
dream
of having 
legs
to run away
on.

Legs and hips and feet,
in our image.
And arms to raise themselves
from the earth.

If they
were made in
our image,
not only would 
they
feel fear, but
they would 
have to confront
the ethical dilemma
of whether 
or not
to eat themselves.