The examination of these pre-agricultural bones is further clouded by one of the most important assumptions underlying age-at-death estimates. These determinations are made by comparison with contemporary bones, for which the individual's age at death is more easily ascertained. Estimating age at death from skeletal remains in this way assumes that contemporary dietary habits, exposure to sunlight, vitamin D status, physical conditioning, and general lifeways were approximately equivalent to those of the Paleo-people whose remains have survived for us to study. In light of these factors, such estimates can be little more than educated guesses. Even so, many such estimates for Paleolithic remains place the age at death at about 40 years (1), a much more realistic figure, though, for several reasons, likely still an underestimate.
Further, given our sedentary lifestyles, questionable dietary habits, extensive use of sunscreen, and lives spent largely sheltered from the sun, it is not reasonable to expect our bones to be as healthy as those of our pre-agricultural ancestors. Estimates based on such comparisons are therefore likely to grossly underestimate the age at death of Paleo-people's skeletal remains. Perhaps the most compelling evidence against assertions of 30-year life spans for pre-agricultural humans lies in human population expansion, spread, and reproductive cycles.
Unlike the other primates, humanity has spread to all habitable corners of the world. Some scientists believe that the human migration out of Africa took place somewhat less than 100,000 years ago (2). We populated Australia, New Zealand, many Pacific islands, Asia, Europe, and all of the Americas. Current estimates suggest a world population of about 3 million at 35,000 B.C. (3). By the time agriculture was getting started in the Fertile Crescent, our numbers had increased to an estimated 10 to 15 million (4). In the space of 25,000 years, or about a thousand generations, with human populations spread over vast distances, our population increased three- to fivefold.
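The growth rate implied by these figures is easy to check with a rough calculation. The population estimates come from the passage above; the 25-year generation length is an assumption chosen purely for illustration:

```python
# Rough check of the implied Paleolithic population growth rate.
# Population figures are the text's estimates; the 25-year generation
# length is an illustrative assumption.
start_pop = 3_000_000       # estimated world population, ~35,000 B.C.
end_pop_low = 10_000_000    # low estimate at the dawn of agriculture
end_pop_high = 15_000_000   # high estimate at the dawn of agriculture
years = 25_000
generation = 25             # assumed years per generation

generations = years / generation  # about a thousand generations

# Annual growth rate r satisfying: end = start * (1 + r)^years
r_low = (end_pop_low / start_pop) ** (1 / years) - 1
r_high = (end_pop_high / start_pop) ** (1 / years) - 1

print(f"{generations:.0f} generations")
print(f"annual growth: {r_low:.6%} to {r_high:.6%}")
print(f"per-generation growth: {(1 + r_low) ** generation - 1:.3%} to "
      f"{(1 + r_high) ** generation - 1:.3%}")
```

Under these assumptions, the required growth works out to only a small fraction of one percent per generation. The point is not that growth was fast, but that sustaining even this slow expansion for a thousand generations demands that, on average, each generation reliably raised slightly more than a replacement number of children to reproductive age.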
While these are only estimates, they are based on some hard data combined with a great deal of deduction and guesswork. By contrast, we are quite sure about current world numbers, and we have good evidence for population expansion rates over the last several millennia. When human populations were as widespread and sparse as they had to be in pre-agricultural times, humans must have had many survival advantages; otherwise they could not have spread out of Africa to populate the world, invent agriculture and civilization, and, ultimately, overpopulate the planet.
You may ask: Are these the same advantages we enjoy today? Hardly. The world and humans have both changed a great deal since then. We assume the age of puberty back then based on current and recent past experience. We can be more confident that the impact of prolactin during breastfeeding was similar to today's, as we also see this effect in other mammals (5). This lactation-induced hiatus from fertility would tend to reduce family size among nomadic hunter-gatherers, which is to say virtually all Paleolithic people. We can also be confident that, given the relative helplessness of human infants and their very slow development into less vulnerable adults, there must have been population losses to predators, inclement weather, competition with other animals for food, and the many other hazards of living largely outdoors and following food sources, including premature parental death during a child's first, very vulnerable years of life. All of these and a host of other factors should have kept our numbers tiny, as has been the case for other primates.
So what were our survival advantages? For one thing, our ancestors were not being crushed under the food guide pyramid or its faulty precepts. They had no access to the processes that make grains edible or produce refined sugar. Other than the occasional, hard-won taste of honey, they ate little that was sweeter than seasonal berries and fruits. Where possible, however, they ate what suited their appetites. I find that when I talk with someone who doesn't eat sugar or sugar substitutes, they often say that many common foods, even water, taste sweet to them. Unlike most of us, their taste buds are not spoiled by sugar. Apart from the absence of refined foods, the evidence suggests enormous variation in Paleolithic diets, often depending on the specific environment in which a particular group lived.
Many arguments have been offered regarding the exact constitution of these hunter-gatherer diets, which we discuss elsewhere, but the diets share a single common element: despite their wide diversity, they do not foster the autoimmune, malignant, and cardiovascular diseases of civilization. More importantly, perhaps, combined with the activities required for survival, each of these diverse diets fostered a much healthier, more robust human being: one who was stronger, faster, more agile, more resilient, and better able to battle the bacteria common to their habitat. This means that our ancestors were better able to handle the rigors of giving birth, surviving infections and injuries, and battling predators. Our ancestors were, in the strictest sense, more fit to survive.
Some of them died from the hazards mentioned earlier. Others died in competition with members of their own species for sex, food, or status. We can only say, with certainty, that enough humans survived to reproduce and nurture their young to an age where they, too, could reproduce. Their survival and reproduction were, we know, sufficient to cause a significant increase in their numbers.
If we take the average age for onset of puberty to be around 13 years, then a woman's first child, after 9 months of pregnancy, would be likely to arrive when she was about 14 years old. Advocates of short Paleolithic lifespans suggest that famine was a common challenge. Malnourished women are less fertile, and hence less likely to conceive, but let's assume, for the sake of argument, that these young women became sexually active as soon as they were capable of reproducing, and that they were all uncharacteristically well nourished during their child-bearing years. Let's further assume that most mother-and-child pairs survived the rigors of childbirth.
As long as mothers nursed their children, they were unlikely to conceive, owing to prolactin production, which is stimulated by an infant suckling at its mother's nipples. Prolactin suppresses normal levels of the sex hormone estrogen. This inhibition of estrogen is responsible for the loss of the menstrual cycle, and the consequent loss of fertility, among lactating women. We can only guess how long mothers would breastfeed, but it seems likely that they were aware of breastfeeding's protective effect against pregnancy. Assuming that they continued to breastfeed their children to more than two years of age, and allowing another nine months for the next pregnancy, births should be spaced at least about three years apart.
Infant mortality rates were likely high, but in the absence of an infant to suckle, a woman would likely become fertile again within a very short time. The only major impact of high infant mortality rates, then, would be to lower average life expectancy. Life expectancy after infancy is a very different matter.
The mother would need to continue to nurture, train, and otherwise equip each child until it reached puberty and began to reproduce in turn. An investment of 13 to 16 years would be required to bring a single child to reproductive age. For humans to increase their numbers, each mother would need an average of more than two children; achieving the population increases that occurred, in so hostile an environment, would probably have required an average of four or more children per fertile woman.
High rates of infant mortality combined with life spans of less than 30 years would make the Paleolithic expansion most unlikely: humanity could not have grown and spread around the globe on so limited a lifespan. A first child would likely enjoy its mother's nurturing until puberty, by which time Mom would be 26 to 29 years old. The next child would have a mother aged 29 to 32, meaning she would probably have died before that child reached puberty. If such suggestions about Paleolithic life spans were accurate, our numbers and geographic range should approximate those of other primate populations. Something is obviously wrong with these assumptions about the life span of Paleo humans. The most likely alternative is that our Paleo ancestors lived longer than is currently suggested.
Menstrual cycle evidence suggests greater longevity as well. What selective advantage would be conferred by menopause, which occurs at ages ranging from about 40 to 60? Menopause could only evolve if women were living long enough for those post-reproductive years to create some selective advantage, one most likely rooted in the nurturing benefits enjoyed by the children and grandchildren of women who reached and surpassed the average age of its onset. Regardless of the precise advantage, however, the very fact that menopause evolved as a female trait suggests a life span extending beyond its onset, perhaps into the 60s or 70s. That is a far cry from the assertion of a 30-year life span. Clearly, even if infant mortality rates were high enough to reduce average life expectancy in the Paleolithic, many women were living to ages close to double the 30-year suggestion.