Every undigested protein is an allergen

If someone asked you what you thought was the most fundamental, the most essential, the most important health challenge that we face as modern human beings living in industrialised countries, what would you tell them?

Take a moment. Shift your gaze away from this text, and think about it.

When we read or hear something about health and nutrition in the news, on websites, on blogs, on social media, or even in books, the information we encounter is almost always biased and directed in some way. It is also always restricted in scope. In fact, it is usually very restricted in scope. All this is perfectly natural and expected: whenever we sit down to write, it is usually about something in particular, something specific, some topic we want to address or explore. It’s hard to think of circumstances where this would not be the case.

Moreover, basically everybody who writes anything does so in order to be read, and therefore naturally attempts to appeal as much as possible to their readership, both in content and in style. But maybe the most influential factor is that we have grown accustomed to information packets, to bite-size bullets of information: quick to read, quick to scroll through, and quick to either share or forget. And this has, above everything else, shaped the way information is presented by all those people out there trying to appeal to more readers. Little can be done to counter this tendency. It’s just how it is at this time.

As a consequence, for all these reasons, we are—the whole world is—migrating away from the mindset that encourages inquiry into the global, the general, the underlying aspects of things. Instead, we are migrating towards an ever more concentrated, focused, laser-beam approach to basically everything. This is true to some extent in all fields of study and inquiry. In matters of nutrition, it is particularly noticeable, surely at least in part because we tend to be at once very interested in and highly sensitive to advice about what we should or should not eat. We take such advice very personally, and often react strongly to it.

Our relationship to food is very deep because it is so constant and continuous, so intimately related to our survival. This relationship starts when we come out of our mother’s womb, and persists throughout each day, every day of our life, until this life of ours itself comes to an end. What in addition makes this relationship so close and so intense is that if we don’t drink or eat, even for just a few hours, we get headaches and stomach aches, we get light-headed, weak, and unable to concentrate and function, we get grumpy and irritable. It is very clear and naturally understandable that we therefore tend to be—that we are—very sensitive to advice about what to eat, but immensely more so to advice about what not to eat, especially if we happen to eat those foods about which the advice is given.

Hence the movement to superficial, non-contentious, bite-size bullets of information: ‘blueberries are excellent: they are low in sugar and full of antioxidants’; ‘avocados are amazing: they are not only full of healthy fats but they are also alkalising’; ‘hydrogenated vegetable oils are very bad: they are full of toxic trans fatty acids.’

But what about the essential, the fundamental, the underlying aspects of things?

You have had more than a few minutes to think about it. What would you say, then, to this question of what is most fundamental to our health, of what constitutes the most fundamental health challenge we face? I would say it’s digestion.

Digestion is where everything about us begins and ends. It is in and through the digestive system that we absorb all the nutrients from our food and excrete all solid wastes. It is through the digestive system that we absorb all the constituents of everything that we call body, and excrete all that is toxic, whether it comes from the environment or is produced within through healthy digestive and metabolic processes. Do you find this sufficient to illustrate why digestion is so fundamental? For me it is. But we can go a lot further.

Evolutionary considerations, arguments, and observational evidence are always useful, and usually very powerful, in guiding clear thinking about matters of health. One of the main questions that has preoccupied and continues to preoccupy evolutionary biologists is that of the growth of the human brain. Here, one of the most compelling ideas put forward to explain its evolutionary history is called the Expensive Tissue Hypothesis. I plan to devote much more time to it in the future, but I must refer to it here because of its relevance to digestion.

The Expensive Tissue Hypothesis is based on the fact that there is a strict minimum to the amount of calories any animal requires to survive, the observation that the brain is the most metabolically expensive organ in the body, and the conclusion that it would be very hard for any large, complex animal to sustain two systems as energetically expensive as the brain. Because the gut is the second most metabolically expensive organ, and because the brain and gut together account for a disproportionately large fraction of the body’s caloric needs, an increase in the size of the brain would necessarily come at the expense of the gut, and vice versa. It simply would not be possible to sustain both a large brain and a large gut. Thus, the growth of the brain would have to be accompanied by a shrinking of the digestive system. This is what is observed.
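To make the trade-off concrete, here is a minimal back-of-the-envelope sketch. The figures are my own illustrative assumptions (a basal budget of roughly 1600 kcal per day, a human brain share of about 20% of resting metabolism versus roughly 9% in other primates), not numbers taken from the hypothesis itself:

```python
# Back-of-the-envelope sketch of the brain-gut energy trade-off.
# Assumed (illustrative) figures: a ~1600 kcal/day basal budget; the human
# brain is commonly cited at ~20% of resting metabolism, versus ~9% as a
# rough midpoint for non-human primates.

BASAL_BUDGET_KCAL = 1600.0

human_brain_share = 0.20
primate_brain_share = 0.09

extra_brain_kcal = BASAL_BUDGET_KCAL * (human_brain_share - primate_brain_share)

# On a fixed budget, these extra calories must be saved elsewhere; the
# hypothesis argues they were saved by shrinking the expensive gut.
print(f"Extra daily cost of the enlarged brain: ~{extra_brain_kcal:.0f} kcal")
```

On these assumptions, the enlarged brain costs on the order of 175 extra kilocalories every single day, a cost that a smaller, cheaper gut, fed on nutrient-dense food, would have to offset.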

However, it is important to emphasise that it is the shrinking of the digestive system that allowed for the growth of the brain, not the growth of the brain that precipitated the shrinking of the gut. The growth of the brain would only be possible with a surplus of calories for it to grow and have its increased activity sustained. It is even more important to emphasise that this evolution was the unintended consequence of a shift from a high-fibre, nutrient-poor, plant-based diet to one consisting mainly of low-fibre, nutrient-rich, animal-based foods.

Number two silverback mountain gorilla (Gorilla beringei beringei) of Kwitonda Group, Akarevuro, Virunga Mountains, Rwanda

Male mountain gorilla of the beringei beringei subspecies of eastern gorillas in Rwanda (Source: Time). As you can see from the chest muscle definition, this adult male’s body fat is low. The huge bulging belly that is apparent when they are seated and relaxed houses the very long gut required to process approximately 20 kg of fibrous roots, leaves, and stalks of the plants they eat each day.

It is very interesting—and it is surely related to this evolutionary history—that the gut has the largest number of nerve endings of any organ, second only to the central nervous system itself. Moreover, unlike the other organs and systems of the body, all of which are entirely controlled by the brain, it is the only one that sends directive nervous signals to the brain. Because of this, it is the only organ with a direct influence on the brain. Thus, besides the physical implications, some of which we’ll explore soon, it is quite literally the case that a happy gut means a happy brain. And conversely, a sad, unhappy, depressed brain is very likely to be caused by a dysfunctional gut.

It is a sick, dysfunctional, damaged gut that is the primary characteristic underlying states of disease. This is why I would say that it is a sick, dysfunctional, damaged gut that is the most fundamental health challenge we face today as modern human beings.

I know this might leave you hanging. Especially because we have not yet made any reference to the title. But I promise, we’ll pick up from here next time.


In light of evolution

Every animal in the Natural World is bound to live according to its own nature. This is dictated by 550 million years of evolution since the emergence of complex organisms including the very first ancestor of all animals on the planet today. To live according to its own nature means to live in its natural habitat, to eat what it has evolved to eat, to grow, mature, reproduce, age and die according to the way in which every detail of every aspect of its life has been defined and refined by the environment and living conditions of its ancestors over hundreds, thousands, millions of years.

As conditions change, natural selection ensures adaptation. There are only two possibilities: adapt and evolve, or perish and disappear from existence. And this is not a matter of choice or of willingness: it is a biological adaptation that takes place on its own, without the conscious intervention of the organism subject to the process.

In the Natural World, through the entire history of life on Earth, it has been thus. It has been thus for the very first living microorganisms, for the very first photosynthetic cyanobacteria, for the first eukaryotes and the first multicellular organisms, for all algae, fungi and plants, and for all animals. It has also been true for humans, for the human animal, the human chordate, the human mammal, the human primate. True, until very recently.

This is what I understood on that day when I watched, shocked and amazed, that female sheep swallow up her own placental tissues dripping with blood almost immediately after she had given birth to that helpless lamb that was lying unmoving in the shade of a large oak tree.

The earliest date for which we have evidence that people settled in relatively large settlements and sustained themselves by cultivating cereal grains and tending herds of domesticated animals is about 10 to 12 thousand years ago. Before that, all human populations throughout Africa, Asia and Europe were seasonal nomads who followed the animals on which they relied for food, clothing and tools.

An exception to this might be the south Pacific (Melanesia and Polynesia), lands that were settled some 50-60 thousand years ago, but that, unfortunately for those settlers, had few food resources: no large-seeded grasses, no easily domesticable animals, and very few fruit-bearing trees or edible wild vegetables. They found a starchy tuber that could be eaten after some processing to remove the toxins it contains. So this is what they survived on, and this is primarily what they still survive on today.

Needless to say, even with the intake of minimal amounts of protein from some animal sources (today they have domesticated pigs), these people–all the different clans and tribes that evolved in that part of the world–have always been extremely restricted in their evolution by the need to devote so much time to their most essential requirement for surviving longer than a few weeks or months. Moreover, can anyone be surprised by the fact that this is the part of the world where cannibalism has always been practised, and still is practised to this day?

If you were starved of protein, not just for a few days, a few months or even a few years, but for your entire lifetime, generation after generation, of course you would eat your dead rivals and enemies. There’s no question or doubt about it. It would be a waste not to. Not only that, but you would also certainly go out of your way to find and make rivals and enemies in order to maximise your meat base. No question or doubt about that either. Naturally, this is what we see there: hundreds of small tribes sharing the scarce natural resources of this inhospitable land through intense rivalry and continuous warring. You would do the same. I know I would. You can be sure of that.

The fundamental difference between humans and all other animals is that animals are bound–forced by natural selection–to eat only what they have evolved eating. Humans, although in fact just as bound to this biological framework as all other animals are, are able not only to disregard it, but to eat and live in ways that are completely contrary to what is prescribed to them, to us, by our evolutionary history and by natural selection. This is true whether the constraints are imposed upon us by our environment, climate, geography and available resources, or defined by the beliefs that shape our worldview.

The former dominates in most of the world, but the latter is definitely pervasive in industrialised countries where, for practical purposes, there are no constraints on the availability of foods at any time of the year, or at any moment during the course of one’s whole life, really. And this is what we are addressing here: not the scarcity of food and the restrictions on the dietary regimens of those struggling to get enough food for themselves and their dependents, but the effects on our health of the restrictions we place on our own diet based on beliefs.

This fundamental difference is well illustrated by the fact that large carnivores like jaguars, panthers, tigers and lions eat meat exclusively. They never think about it, they don’t consider what they feel like having for supper, they don’t sometimes go grazing a little grass or other plants here and there: they always eat only meat, and have done so for millions of years. Consequently, these large felines have only sharp teeth, without any flat ones for grinding fibres. They have a shorter and simpler digestive tract that measures about 7 metres (compared to about 10 metres for humans). Their proportionally larger stomachs secrete such strong concentrations of hydrochloric acid that the pH inside them drops after meals to values around 1 (compared to around 2-3 for humans), near the lowest (most acidic) end of the logarithmic pH scale. And their livers have a much greater capacity (about 10 times ours) to concentrate uric acid out of the bloodstream and excrete it in the urine.
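Because the pH scale is logarithmic, the difference between pH 1 and pH 2-3 is much larger than it looks. Here is a minimal sketch of the arithmetic (the pH values are the rough estimates quoted above):

```python
# pH is the negative base-10 logarithm of the hydrogen-ion concentration
# (in mol/L), so each unit of pH is a tenfold difference in acidity.
def h_ion_concentration(ph: float) -> float:
    """Hydrogen-ion concentration in mol/L for a given pH."""
    return 10 ** -ph

feline_stomach = h_ion_concentration(1.0)  # ~0.1 mol/L after a meal
human_stomach = h_ion_concentration(2.5)   # midpoint of the 2-3 estimate

# A stomach at pH 1 is 10-100x more acidic than one at pH 2-3;
# at the 2.5 midpoint the ratio is about 32.
print(round(feline_stomach / human_stomach, 1))  # 31.6
```

In other words, the feline stomach after a meal is some ten to a hundred times more acidic than ours, which is exactly what an exclusively meat-based digestion requires.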

Cows, bison and buffalo; sheep, goats, llamas and alpacas; horses, ponies, donkeys, mules and so many other animals that eat a diet consisting of basically only grass and grass seeds have completely different adaptations. They have large, thickly enamelled flat teeth for grinding, over and over again and for hours on end throughout the day, the tough cellulose structures of the plant that lock in the nutrition they need to extract. They have extra-long digestive systems, some with several stomach-like sacs along the way, that allow the chewed-up grasses to travel back and forth a number of times to maximise the extraction of nutrients. And they have a purely alkaline digestive system, secreting no hydrochloric acid at all, simply because this is what is most suitable and necessary for the optimal digestion and absorption of the sugars, minerals and vitamins present in the grass they live on.

These are just a few examples of evolutionary adaptations to diet: to a diet of only meat in obligate carnivores like the large felines, and to a diet of grasses in herbivorous grazers. They are the most appropriate examples because they pertain to the digestive system, on which every other system is built and on which our health and survival depend most directly.

Like the feline carnivores, herbivores do not think about what they will eat for their next meal, or what they feel like having for breakfast or for lunch. They always eat the same things: grasses and other little leafy plants, and, in the late summer, fall and winter, the seeds of the grasses and other plants that have dried and gone to seed. How much of each depends on where they live and on the climate. It never depends on their thoughts and feelings about what they should eat. And if we were to offer the lion or the tiger something other than fresh meat, a nice big bowl of freshly cut grass or grass seeds like oat kernels, for example, they wouldn’t touch it, because for them it is not food. If we were to offer a cow or a horse a big juicy steak from a gazelle or antelope, they would in exactly the same way not even look at it or sniff it, because for them this is not food.

All animals eat only the foods that they have evolved to eat in order to live healthy for the right amount of time to allow them to reproduce and raise their offspring to the point where the offspring can themselves do the same for the next generation. Over millions of years this process takes place and refines every detail of the unique characteristics of their bodies, of their physiologies and their biochemistries, of their physical aptitudes and their psychological makeup. Animals do not comprehend this: they know it in their natures, they know it in their instincts, they know it in their very bones.

We humans have the ability to comprehend this, at least when it is taught or explained to us. But we think, we analyse, we believe, we rationalise, we justify, and we convince ourselves and others of basically anything we want using more or less clever logic, more or less sound analyses and rationalisations, and, in the end, more or less convincing arguments and justifications. And we excel at this. We excel at it remarkably.

What comes of it? We end up eating and drinking whatever we believe we can or whatever we believe we should, whatever the reason or lack of reason. We eat bread and jam every morning because this is what we’ve always done, because this is what our parents always did, because this is what everyone around us has always done, and because it tastes so good. We eat at McDonald’s, Burger King, Taco Bell or Pizza Hut at lunch because it’s fast, convenient, and also because it tastes so damn good. We feed ourselves and our kids pasta with jarred tomato sauce for supper because it’s the easiest meal we can make, everyone loves it, and it leaves us with a feeling of being full and satisfied. We eat only plant foods. We eat only animal foods. We eat only raw foods. We eat only brown rice. We eat only salad. We eat no fat. We eat mostly fat. We eat no carbs. We eat mostly carbs. We eat in this way or in that fashion. We eat in all sorts of ways for all sorts of reasons, and we somehow never ask ourselves what this body has evolved to eat: what we should eat.

In this respect, the situation between humans and all other animals is, at this stage, radically different. So different it couldn’t be more different: animals instinctively eat only what they have evolved eating and have therefore evolved to eat; we eat only what we feel like eating or what we think or believe we should. We have lost our food instincts and overridden them with beliefs. We do not care to ask ourselves what our evolutionary history, that of our species as well as that of our personal ancestry, tells us about what we have evolved eating, and we trust the word of food “scientists” who tell us preposterous things such as that eating egg yolks and animal fats causes heart disease, that eating large sweet fruits and whole grains is good for us, that we should drink milk to have strong bones, or that a big brain like ours needs lots of sugar. All preposterous. All mistaken. All unfounded. But we believe. And we listen. For decades on end, before the weight of evidence begins to turn the tide. All the while getting fatter and sicker eating inappropriately for our constitution.

There are at least two ways to approach the problem of figuring out what our long-past ancestors would have eaten and preferred eating through the millennia, given the constraints imposed upon them by the environment and climate. The first is to consider the archaeological evidence we have gathered, and combine it with as much as we have learned in the realm of evolutionary biology and physiology, tracing back the evolution of the different systems of the body, in particular the digestive system, coupled with the evolution of our brain. The second is to look at the energetics of survival and work our way through a series of deductions based on what we know and what we can learn from this process itself.

One of the important differences between our closest cousin, the modern chimpanzee, and ourselves is that a chimp eats mostly raw, fibrous plant foods (2/3 stems and leaves and 1/3 small fibrous fruit), and spends many hours each day chewing through these in order to feed itself. As a result, it has very strong jaws and thickly enamelled teeth, together with a long digestive tract through which all these fibrous, nutrient-poor foods must pass as slowly as possible so that as much as possible can be extracted from them. Naturally, this requires a specific and well-developed intestinal flora. As is also natural to expect, and as is in fact the case, the intestinal flora of microorganisms upon which animals depend for proper digestion, and ultimately for survival, develops and adapts to the foods that make their way through the intestines, over the long term, of course, but also over the short term.

What we see in the fossil record is this: following the Miocene, which lasted for about 18 million years from 23 to 5 million years ago and was dubbed the golden age of the apes because they flourished all over the world, there were, in different parts of the world between 13 and 9 million years ago, several genera of hominoids (something between apes and hominids), and the earliest members of our own group lived at the end of the Miocene and the beginning of the Pliocene, between 7 and 4.5 million years ago. Molecular studies of DNA also suggest, from a completely independent line of analysis (the rate of DNA mutations), that our line must have branched off from the common ancestor we share with chimpanzees around 6-7 million years ago. So it is pretty clear that this is the time around which this separation of lineages must have occurred.

There are two lines of structural changes used to evaluate and follow the evolution we are trying to trace from that oldest ancestor to the modern forms in our genus Homo. The first looks at changes in the structure of the skeleton, especially in the hips, legs and feet, but also in the shoulders, arms and hands, that betray evidence of an upright walking posture and manual dexterity, as opposed to structures consistent with knuckle walking and tree climbing. The second looks at changes in the upper spine, skull, jaws and teeth that also indicate upright posture (skull) and less ape-like features, including smaller canines, smaller top and brow ridges, and a flatter and taller face and forehead. Both lines of evolutionary changes lead to the following scenario as the most likely.

Currently, the best contenders for the title of our last common ancestor with the chimp are Sahelanthropus tchadensis (dated at 6-7 million years), Orrorin tugenensis (dated at 6 million years), and Ardipithecus (kadabba at 5.8-5.2 and ramidus at 4.5-4.3 million years). All of these fossil species, no matter how little evidence there actually is in some cases, show strong evidence of evolutionary adaptations to upright walking, based on the shape of the hip bones, femur, feet or skull. Teeth and skulls also show smaller canines and larger, thicker molars, both of which indicate that they ate tougher, more fibrous foods like leaves, stems and roots.

As is very clearly illustrated in the figure below, from the oldest australopithecines (africanus) onward, the trend towards larger, flatter and even more thickly enamelled teeth, wider and stronger jaw bones, and thicker skulls with powerful top ridges and sideways-flaring cheekbones, all constructed to sustain the pressure generated while chewing, continues through the later species and peaks in Paranthropus boisei, believed to be the last of the australopithecines, and probably the most robust of the toughest fibre-chewers ever. And while the trend towards narrower hips, longer femurs, thicker heel bones and higher foot arches, all needed to increase mechanical efficiency in upright locomotion, continues to be evident in the later species, we see a reversal of the trend towards better fibre-chewers, in the shrinking of teeth and jaws, the disappearance of the top ridge and flaring cheekbones, and the decrease of the brow ridge, in the fossils of Homo habilis and in the very well preserved 1.6-million-year-old Turkana or Nariokotome Boy, the best specimen we have of our ancestral species Homo ergaster.

[Figure: hominid evolution of skulls and jaws in three stages]

This is most naturally and sensibly interpreted as follows: in the oldest australopiths, a chimp-like diet based primarily on fruit and other plant foods, with rare feasting on animal flesh from group hunts thought to be important mostly in establishing a clear social order within their hierarchical structure; then a change in diet towards the tougher, more fibrous and naturally less desirable leaves and stems (fallback foods, as they are called) that were available to them after the migration out of the depths of the forest and into the dry savannah; and eventually a shift towards fibre-less animal foods, rich in calories from fats and protein, of which only a very small amount was necessary for survival in comparison to the amount of fibrous and nutrient-poor plant foods.

The implications are clear and also obvious: 1) More fibrous, nutritionally-poor plant foods led to adaptations for chewing them, but also for processing them internally, and must have been associated with a longer, much more herbivore-like digestive tract and system. 2) More nutritionally-rich, fibre-less animal foods led to the loss of the need for large teeth, powerful jaws and thick skulls, and must also have led to a shrinking of the digestive tract and the evolution of digestive adaptations needed to process animal protein and fat, including hydrochloric acid in the stomach to break down protein, bile from the liver to emulsify fats, and a new bacterial flora, which would also have been entirely different depending on the diet. And 3) the more animal foods were eaten, the more the brain grew in volume, both in absolute terms and relative to body size.

These are the most important conclusions from this exploration of our earliest evolutionary history as a species. They also tie in very closely with our reflections about what we choose to eat and the reasons we invoke or construct to justify these choices to ourselves and others, because they show us as plainly and straightforwardly as it is possible to imagine that in order to live healthy and thrive throughout our life over its natural span, we are bound to eat what our ancestors evolved eating, in exactly the same way as all other animals are, and that this is dictated by our anatomy, physiology and biochemistry, independently of what we think and of what we believe.

In the next part, we will explore the question of energetics and food selection, what a hominid would naturally do–what you and I would do–when faced with the need to seek out food for its own survival, and come back to my own story in more practical terms. And if you are interested in reading more about the topics we touched upon in this article, I recommend Ian Tattersall’s Masters of the Planet, Daniel Lieberman’s The Story of the Human Body, Jared Diamond’s The Third Chimpanzee, and Yuval Noah Harari’s Sapiens: A Brief History of Humankind. Darwin’s On the Origin of Species is truly remarkable in scope, in detail, in depth and in foresight. Even if it doesn’t relate specifically to the details of the evolution of our genus Homo, it is the foundation of the broadest context in which we, as intelligent and literate beings, understand the evolution of all species everywhere since the emergence of life on this planet.


The sun, our Earth, and the colour of your skin

Skin colour is the most obviously visible manifestation and expression of our evolutionary history, a history carried over thousands of generations and tens of thousands of years. What we have to understand is that each one of us—as an individual, a person, a self—has nothing to do with the colour of our skin, the colour of our skin has nothing to do with us, and we have no choice in the matter. What we must also understand is that to be optimally healthy, we have to live and eat in accordance with the colour of our skin and the information it carries about our ancestry. All of this is true for you, and it is true for everyone of every colour in the magnificent spectrum of human skin colours as it exists on the planet today. Let me explain why.

[Image: palette of human skin colours]

(Photo credit: Pierre David, as published in this article in the Guardian)

The Sun, like every other star in the universe, formed from the gravitational collapse of a huge cloud of gas. This happened about 5 billion years ago. All the planets, like every other planet everywhere in the universe, formed from the leftover debris that wasn’t needed or used in making the Sun, and that remained orbiting around it in a large, flat accretion disk consisting of 99% hydrogen and helium gas and only 1% solid dust particles. In the blink of an eye, a million years or so, the disk was replaced by a large number of planetesimals. Another couple of hundred million years or so, and the planets of our Solar system were formed.

Beyond the snow line, the radius from the Sun past which water can only exist as ice and where the temperature is below -120 °C, volatiles froze into crystals, and from massive icy cores were formed the gas giants: Jupiter (the king, at 320 times the mass of the Earth), Saturn, Uranus and Neptune. Within the snow line were formed the rocky planets: Mercury, Venus, Earth and Mars. About 4.5 billion years ago, the Solar system was in place. It was in place, but not quite as we know it today. It was fundamentally different in several ways, especially as regards what concerns us here, which is how the Earth was: a fast-spinning, burning inferno of molten rock spewing out of volcanos everywhere and flowing all over the globe, completely devoid of water, oxygen, carbon and other volatile species.

The Earth formed more or less simultaneously with a very close neighbour about the size of Mars. Inevitably, soon after their formation, they collided. This apocalyptic encounter tilted the Earth off its original axis and destroyed the smaller planet, which, in the collision, dumped its iron core into the Earth and expelled about a third of our planet into space. Most of the stuff rained back down, but some of the material clumped into larger and larger lumps that eventually resulted in the moon, our moon. When it formed, the moon was a lot closer (it would have looked twice as large as it does now), and the Earth was spinning approximately five times faster than it does today (a day back then would have lasted only 5 hours). Because of the proximity between them, huge tidal forces would have deformed the liquid Earth on a continuous cycle driven by its super-short 5-hour days. This would have heated the Earth tremendously by squeezing its insides first from one side and then from the other, and caused massive volcanic activity all over the globe.

But this inelastic gravitational interaction, this drag of the moon on the Earth, worked, as it still does, to sap rotational energy from the Earth and transfer it to the smaller and far less rotationally energetic moon. This made, and continues to make, the Earth slow down and the moon speed up, and therefore drift out into a progressively larger orbit. The moon’s drag on the Earth continues to slow the Earth’s spin and enlarge the moon’s orbit, but at an ever slower rate, now 3.8 cm per year. This will continue until there is no more rotational energy to be transferred from the Earth to the moon, at which point we will be tidally locked with the moon: not only will we always see the same side of the moon, as we do today, but the moon will also always see the same side of the Earth. For what it’s worth, this will happen way after the Sun has come to the end of its life, and thus in more than 5 billion years. So, for now, this is definitely not a major issue.

Besides this important difference in the Earth’s spin rate and its relationship with the moon, there were a lot of leftovers from the Sun’s formation that had clumped up into asteroids and comets whirling around in all sorts of both regular and irregular orbits that had them sweeping across the Solar system, from the furthest reaches and most distant places to the inner regions near the Sun and the rocky planets. The Heavy Bombardment lasted for a period of approximately 500 million years, from about 4.3 to 3.8 billion years ago. During this tumultuous early history of our Solar system, many of these asteroids and comets flying past the Earth and the other rocky inner planets were gravitationally captured and pulled in towards the planet, crashing on the surface or just swooping down into the atmosphere, leaving behind all or some of their mostly volatile constituents: water and carbon compounds. The Earth would have been regularly bombarded by massive asteroids, and the energy dumped by the impacts would have made it a hellish place covered in flowing lava, obviously without any crust, but rather only molten rock flowing everywhere and volcanos spewing out noxious gases and spilling out more molten rock that merged into the already flowing streams of lava. Very inhospitable.

But with these brutal hundreds of millions of years of bombardment from asteroids and comets, water and carbon compounds were brought to our planet. Given how hot it was, the water was in the atmosphere as vapour, and so were the carbon monoxide and dioxide, as well as the methane. However, these were now bound to the planet gravitationally and couldn’t escape back into space. Once the bulk of the randomly orbiting Solar system debris had been cleared out and incorporated into the various planets onto which it had fallen, the bombardment came to an end, and the Earth started cooling down. It is believed that the last major sterilising impact would have hit the Earth around 3.9 billion years ago.

Cooling over a few thousand years allowed the formation of a thin crust. Further cooling then brought on thousands of years of rain that dumped most of the water vapour from the atmosphere onto the surface. This formed vast planet-spanning oceans. The whole planet was at this point still super hot, but also super wet, and therefore super humid, with the surface practically entirely underwater, and lots of active volcanos all over the place but otherwise no mountains. Nevertheless, there would have been some slightly more elevated places, like the flanks of volcanos, that would have been dry at least some of the time, leaving some spots where water could accumulate in ponds and stagnate. As soon as these conditions were present, around 3.8 billion years ago, the Earth saw its first microbial life emerge.

Claims for the earliest evidence of life at 3.8, 3.7 or 3.5 billion years ago are still controversial, but it is well established that hydrogen cyanide dissolved in water produces a diversity of essential biological molecules like urea, amino acids and nucleic acid bases; that formaldehyde in slightly alkaline water polymerises to form a range of different sugars; that amino acids, sugars and nucleic acid bases, as well as fatty acids, have been found in carbonaceous meteorites; and that by 3 billion years ago, prokaryotes (organisms made of cells without a nucleus) were widespread.

There was a major problem, a major impediment to life, that had to be overcome. This was the fact that the entire surface of the Earth was exposed during the day to the Sun’s UV radiation, and UV rays destroy biological structures and DNA. The cleverest of tricks would have been to find a way to absorb these energetic photons and use the energy for something.

Nature is very clever: by 3.5 billion years ago, chlorophylls, believed to have developed in order to protect the proteins and DNA of early cells, had appeared, and chlorophyll-containing cyanobacteria—the oldest living organisms and the only prokaryotes that can do this—had developed the ability to absorb light, use that energy to split water molecules, and use the free electron from the hydrogen atom to sustain their metabolism, spewing out the oxygen in the process. Oxygen accumulated in the crust for a billion years before the latter became saturated with it and unable to absorb any more. Evidence of increasing oxygen levels in the atmosphere is first seen at around 2.5 billion years ago. By 2.2 billion years ago, oxygen concentrations had risen to 1% of what they are today.

Increasing concentrations of reactive and corrosive oxygen were devastating for all forms of life, which, at this stage, were all anaerobic: the oxygen combined with everything it came into contact with, creating all sorts of reactive oxygen species (free radicals) that went around causing damage, exactly as they do in our bodies and those of all animals today, and which, in the absence of antioxidants to neutralise them, accelerated ageing and death. This was the only card these simple anaerobic organisms were dealt.

Nevertheless, for another reason entirely, atmospheric oxygen was a blessing, because it turned out to be an excellent UV shield. Not only that, but the splitting of oxygen molecules (O2) into oxygen atoms promoted the recombination of these free-floating oxygens into ozone (O3), which turns out to be an even better UV-absorbing shield. So, the more photosynthesis was taking place on the surface, the greater the concentration of atmospheric oxygen grew. The more molecular oxygen there was in the atmosphere, the more ozone could be formed. And the more ozone there was to protect and shield the surface from the harsh UV radiation of the Sun, the more complex and delicate structures could develop and grow. Pretty cool for a coincidence, wouldn’t you say?

By 2 billion years ago—within 200 million years—the first eukaryotes appeared (organisms made of cells with a nucleus). This makes good sense considering that these simple organisms and independently living organelles had a great survival advantage in getting together in groups to benefit from one another and protect each other behind a membrane, while making sure the precious DNA needed for replication and proliferation was well sheltered inside a resilient nucleus. Note here that they would have been trying to protect themselves both from the damaging UV radiation streaming down from the Sun (it is estimated that DNA damage from UV exposure would have been about 40 times greater than it is today), and from the corrosive oxygen floating in the air (imagine how much more oxidising it is today, with concentrations 100 times greater than they were). And in there, within each of these cells, there were chloroplasts—direct descendants of the first UV absorbers and converters, the cyanobacteria—whose job was to convert the photons from the sun into useful energy for the cell.

In all likelihood unrelated to this biological and chemical evolution of the Earth’s biosphere and atmosphere, a long period of glaciation between 750 and 600 million years ago transformed the planet into an icy snow and slush ball. And with basically all the water on the surface of the globe frozen over, and all organisms under a thick layer of ice and snow, photosynthetic activity must have practically or completely ceased. Fortunately, without liquid water in which to dissolve the atmospheric carbon dioxide into the carbonic acid that in turn dissolves the silicates in the rocks over which it streams and carries them down to the ocean floor for recycling by the active tectonic plates, all the carbon dioxide sent into the atmosphere by the volcanos simply accumulated. It is believed to have reached a level 350 times higher than it is now. This is what saved the planet from runaway glaciation.

Thanks to this powerful greenhouse of CO2, the ice and snow eventually melted back into running streams and rivers, and flowing wave-crested seas and oceans. With water everywhere and incredibly high concentrations of CO2, plant life exploded. And soon after that, some 540 million years ago, complex animals of all kinds—molluscs, arthropods and chordates—also burst into existence in an incredible variety of different body plans (morphological architectures), and specialised appendages and functions. This bursting into life of so many different kinds of complex animals, all of them in the now already salty primordial oceans, is called the Cambrian Explosion. Complex plant life colonised the land by about 500 million years ago, and vertebrate animals crawled out of the sea to set foot on solid ground around 380 million years ago.

Clearly, all plant life descends from cyanobacteria, the first to develop the ability to absorb UV radiation, and without complex plant life, it is hard to conceive of a scenario for the evolution of animal life. The key point in this fascinating story of the evolution of the Solar system, of our Earth, and of life on this planet, as it pertains to what we are coming to, is that the light and energy coming from the Sun are essential for life while being at the same time dangerous for the countless living organisms that so vitally depend on them. In humans and higher animals this duality is most plainly and clearly exemplified by the relationship between two essential micronutrients without which no animal can develop, survive and procreate. These vital micronutrients are folate and vitamin D.

What makes folate (folic acid or vitamin B9) and vitamin D (cholecalciferol) so important is that they are necessary for proper embryonic development of the skeleton (vitamin D), and for the spine and neural tube as well as for the production of spermatozoa in males (folate). Vitamin D transports calcium into the blood from the intestinal tract making it available to be used in building bones and teeth; folate plays a key role in forming and transcribing DNA in the nucleus of cells, making it crucially important in the development of all embryonic cells and quickly replicating or multiplying cells (like spermatozoa).

Here’s the catch: vitamin D is produced on the surface of the skin (or fur) through the photochemical interaction of the Sun’s UV-B rays with the cholesterol in the skin; folate is found in foods, mostly leafy greens (the word comes from the Latin folium, which means leaf), but it is broken down by sunlight.

What this translates to is this: too little Sun exposure of the skin leads to vitamin D deficiency, which leads to a deficiency in the available and useable calcium needed to build bones, which in turn leads to a weak, fragile and sometimes malformed skeletal structure—rickets; too much Sun exposure leads to excessive breakdown of folate, which leads to folate deficiency, and which in turn leads to improper development of the quickly replicating embryonic cells of the nervous system and consequent malformation of the neural tube—spina bifida.

The most important thing of all for the survival of a species is the making and growing of healthy babies and children so that they can make and grow other generations of healthy babies and children. This is true for all living beings, but it is not just true: it is of the highest importance, and it has been—taking evolutionary precedence over everything else—since the dawn of life on Earth. Here is how the biochemistry of the delicate balance between these two essential micronutrients evolved.

Six to seven million years ago, our oldest ape-like ancestors walked out of the forest and into the grassy savannah, most probably to look for food. (Isn’t this what also gets you off the couch and into the kitchen?) It is most probably the shift in climate towards hotter and drier weather and, in response to that, the shrinking of their woodlands, that pushed them to expand their foraging perimeter out into the plains that were growing as the forests were shrinking.

Our first australopith ancestors, descendants of the common ancestor we share with modern chimpanzees, would in all likelihood have been covered in hair with pale skin underneath (just as chimps are today), their exposed skin growing darker in time with exposure to sunlight. Having left the forest cover, they were now exposed to the hot scorching Sun most of the day while walking around looking for food, before going back to the forest’s edge to sleep in the trees.

Natural selection would now favour the development of ways to stay cool and not overheat. This meant more sweat glands, to increase cooling by evaporation of water on the surface of the skin. It also meant less hair, so that the cooling contact of the air with the wet skin would be as effective and efficient as possible. But less hair meant that the skin was now directly exposed to sunlight. To protect the skin from burns and DNA damage, but also to protect folate, natural selection pushed towards darker skin: more melanocytes producing more melanin to absorb more photons and avoid burning and DNA damage.

In these circumstances, the problem was never too little sun exposure; it was too much exposure, and thus sunburns and folate deficiency. So these early hominids gradually—and by gradually is meant over tens of thousands of years—became less hairy and darker-skinned. They also became taller and leaner, with narrow hips and long thin limbs: this gave less surface area exposed to the overhead sun but more skin surface area for sweating and cooling down, together with better mechanical efficiency in walking and running across what would appear to us very long distances, in the tens of kilometres every day, day after day, foraging and hunting, always under a blazingly hot sunshine. This process, described here in a few sentences, took place over millions of years, at least 3 or 4 and most probably 5 or 6 million. The Turkana boy, a 1.6-million-year-old fossilised skeleton, is definitive proof that by that time hominids were already narrow-hipped and relatively tall.

From an evolutionary standpoint it couldn’t be any other way. Keeping in mind that we are still talking about ancient human ancestors, and not modern Homo sapiens, did you, as you were reading these sentences, start to wonder who today would fit such a physical description of being hairless, dark-skinned, tall, lean and narrow-hipped? Naturally: savannah-dwelling modern hunter-gatherers and, of course, the world’s best marathon runners. It makes perfect sense, doesn’t it?

Taking all currently available archaeological, paleontological, anthropological, as well as molecular and other scientific evidence as a coherent whole brings us to the most plausible scenario: that all humans on the planet today descend from a single mother who was part of a community of people living somewhere on the western coast of Africa; that it is this group of modern Homo sapiens that first developed and used symbolic language to communicate and transmit information and knowledge acquired through their personal and collective experiences; and that it was the descendants of these moderns who migrated in small groups, in a number of waves, first into Asia and later into Europe, starting 70 to 100 thousand years ago.

It is very interesting that we also have evidence that moderns had settled areas of the Middle East, in today’s Israel and Palestine region, as early as 200 thousand years ago, and that these moderns shared the land and cohabited with Neanderthals for at least 100 thousand years, using the same rudimentary tools and technologies, without apparently attempting to improve upon the tools they had. Meanwhile, the other group of moderns on the western African coast had far more sophisticated tools that combined different materials (stone, wood, bone), as well as decorative ornaments and figurines.

Thus, although equal or close to equal in physical structure, appearance, dexterity and skills—a deduction based on fossils and evidence that newer and better tools were immediately adopted and replicated in manufacture by moderns to whom they were introduced by other moderns—it is clear that different and geographically isolated communities of moderns ate differently, lived differently, developed differently and at different rates.

This is not surprising, really. Some children start to speak before they turn one, while others do not until they are two, two and a half, or even three. Some children start to walk at 10 or 11 months, while others just crawl on the ground, or even drag their bum in a kind of seated crawl, until they are three or more. And this is for children who watch everyone around them walking all day long, and listen to everyone around them speaking in complex language, also all day long. Now, what do you think would happen if a child grew up without being exposed to speech? Why would they ever, how could they ever, start to speak on their own, and to whom would they speak if nobody spoke to them?

Fossil evidence shows that the structures in the ear and throat required for us to be able to make the sounds needed for refined speech and verbal communication were in place at the very least 200 thousand years ago, tens and even hundreds of thousands of years before the first evidence of symbolic thought (70-50 thousand years ago) and, together with it, it is assumed, advanced language.

Symbolic thinking in abstract notions and concepts is the most unique feature of our species. It is the hallmark of humans. And it is the most useful and powerful asset we have in the evolutionary race for survival. Sophistication in symbolic thought can only come with sophistication in language and in the aptitude for language: it is only by developing and acquiring more complex language skills that more complex symbolic thinking can come about, and more sophisticated symbolic thinking naturally leads to developing a more sophisticated and refined language in order to have the means to express it.

It’s surely essential to recognise that this is as true for our ancestors, those that developed that first symbolic language, as it is for you and me today. The difference is that then, the distinction was between those few moderns that used symbolic language and those that didn’t, whereas today, the distinction is more subtle because everyone speaks at least one language to a greater or lesser extent. Nonetheless, anyone can immediately grasp what is described here by listening to Noam Chomsky lecture or even just answer simple questions in the course of an interview.

As they moved northward, settling in different places along the way, staying for thousands or tens of thousands of years, then leaving their settlements behind, either collectively or in smaller groups, and moving on to higher latitudes before settling again somewhere else, these people encountered a wide range of different climates and geographical conditions: usually colder, sometimes dry and sometimes wet, sometimes forested and sometimes open-skyed, sometimes mountainous and sometimes flat. In all cases, they were forced to immediately adapt their living conditions, building suitable dwellings and making adequate clothing. This we know for sure, because they would simply not have survived otherwise, and it is only those that did survive that are our direct ancestors.

Evolutionary adaptation through natural selection of traits and characteristics arising from small—and, on their own, insignificant and typically unnoticeable—random genetic mutations also took place, as it does in every microsecond and in every species of animals and plants. But this, we know, is a slow process, measured on a timescale of tens of thousands of years (10, 50, even 100 thousand). Now, consider the evolutionary pressure—the ultimate evolutionary pressure—of giving birth to healthy and resilient offspring that will grow up to learn from, take care of, and help their parents. The most pressing evolutionary need at these higher latitudes was for the body to more efficiently make and store vitamin D from the incoming UV-B rays, which (and this is an important detail, often overlooked or underappreciated) make it to the surface only when the Sun is high in the sky and has less atmosphere to shine through. This stringent restriction to the few hours near midday when UV-B can make it to the surface is both constraining and life-saving: constraining because only during those hours can the essential vitamin D be made, and life-saving because continual exposure to this energetic, DNA-damaging UV radiation would in time sterilise the surface of the entire planet.

The higher the latitude, the lower the Sun’s path in the sky throughout the year, and especially during the winter months, and therefore the shorter the season during which UV-B rays reach the surface and during which it is possible for vitamin D to be produced on the skin or fur of animals. The only solution to this severe evolutionary pressure is as little body hair and as little pigmentation as possible (think of the completely white polar bears, arctic wolves, foxes and rabbits). As an aside, what else do you think is advantageous in the cold? The opposite of what is advantageous in the hot sun: more volume for less surface area; a smaller and stockier build that keeps heat better, exactly as we see in the cold-adapted Neanderthal.
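To see why UV-B is restricted to the hours around midday, and to an ever shorter season at higher latitudes, consider how the slant path through the atmosphere grows as the Sun gets lower. Here is a minimal sketch using the simple plane-parallel approximation (my own illustration; the real atmosphere and its ozone chemistry are more complicated):

```python
import math

# The lower the Sun, the longer the slant path through the atmosphere,
# and the more UV-B is absorbed before it reaches the ground.
# Plane-parallel approximation: relative air mass ~ 1 / cos(zenith angle).
def relative_air_mass(solar_elevation_deg: float) -> float:
    """Relative thickness of atmosphere traversed, compared to an overhead Sun."""
    zenith = math.radians(90.0 - solar_elevation_deg)
    return 1.0 / math.cos(zenith)

for elevation in (90, 60, 45, 30, 15):
    print(f"{elevation:>2} deg: {relative_air_mass(elevation):.2f}x")
```

At 15 degrees of solar elevation the path is almost four times longer than with the Sun overhead, which is why UV-B, and with it vitamin D synthesis, is confined to a few midday hours, and at high latitudes to a short summer season.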

Settled in a place that provides what we need to live relatively comfortably, we tend to stay there. This has always been true, and even if it has changed in the last few generations in industrialised western countries, we have witnessed this phenomenon up until very recently on islands like Sardinia, Crete and Okinawa, in remote valleys of the Swiss Alps, the Karakoram, the Himalayas and the Andes, and in other geographically isolated pockets of people with genetic characteristics homogeneous amongst themselves but distinct with respect to other human populations. And thus across the world we find a whole spectrum—a rainbow—of different colours and shades of skin, different colours of hair and eyes, different amounts and textures of body hair, different physical builds and morphologies, different metabolic and biochemical sensitivities, all seen on a continuum, all dependent upon the evolutionary history of the subpopulation in which particular characteristics are present or absent to a greater or lesser extent. And all of this is driven by the evolutionary pressure to adapt and maximise the survival probability of our offspring, our family, our clan, our species, by optimising the amounts of folate and vitamin D through the delicate balance between not enough vitamin D from under-exposure to the UV-B rays that produce it, and not enough folate from excessive exposure to the same UV-B rays that destroy it.

What this tells us is that, for one thing, we have absolutely nothing to do with the colour of our skin, eyes and hair, and nothing to do with any of the physical and biochemical characteristics we have inherited. It tells us that this has nothing to do with our parents or grandparents either, really, because these are particularities that have evolved over tens of thousands of years in a very long line of ancestors who settled in a place, stayed put, and lived at a particular latitude in a particular geographical setting with a particular climate. It tells us, in the most obvious manner, that because this is so, discrimination based on colour or physical features is not just unfounded: it is simply absurd.

If you’re black, you’re black. If you’re white, you’re white. If you’re chocolate or olive-skinned, then you’re chocolate or olive-skinned. If you are “yellow” or “red”, then that’s just how it is. And who cares how you phrase it, or whether you try to be “politically correct” and avoid speaking of it. That’s just silly. All of it is simply just the way it is. In the same way, if you’re short or tall, hairy or not, thin or stocky, it is just the way it is. However you are, and whatever features you consider, there is never anything more or less to it, never anything more or less about any of these features: they are an expression of our genetic ancestry going back not just a few, but thousands of generations.

What this also tells us is that we have to take this information into account in everything we do, especially as regards what we eat, where we live, and how much or how little we expose ourselves to the Sun’s vitally important UV-B rays. Disregard for these fundamentally important details leads to what we see in the world in this modern era, where we all live more or less wherever we want, and find ourselves with our olive or dark brown skin living at high northern latitudes, or with our fair or milk-white skin living near the equator with strong overhead sun all year round, and see the consequent high rates of vitamin D deficiency and rickets in our dark-skinned northern dwellers, together with similarly high rates of folate deficiency and spina bifida in our fair-skinned southern dwellers.

In general, if you are dark-skinned you need to expose your skin to the sun a lot more than if you are fair-skinned, because you will both produce less vitamin D and store less of it. If you are fair-skinned you need less exposure, and you will tend to store vitamin D more efficiently and for longer periods of time. As for folate, we all need to eat (or drink) our leafy greens (i.e., foliage) and green veggies.

However, there is an additional complication that makes matters worse (far worse). That complication is that in this day and age we all live inside, typically sitting all day facing a computer screen, then sitting all evening eating supper and watching TV. Not everyone, of course… but most people. Not only that, but most of us all over the world now eat more or less the same things: highly processed packaged foods, usually high in processed carbs and low in good, unprocessed fats, high in chemicals of all kinds and low in nutrients, with hardly any leafy greens, green veggies, nuts or seeds. And boy do we love our Coke, our daily bread, our fries and potatoes, our pizzas and big plates of pasta, and our sweets and desserts! Not everyone, of course… but most people. Consequently, we are as deficient in folate as we are in vitamin D, and as deficient in unprocessed fats and fat-soluble vitamins as we are in all the other essential micronutrients. How depressing.

But once we know this, once we have been made aware of this situation, we can correct the problem: by switching to a diet of whole foods—of real foods—rich in folate and in fat-soluble vitamins like A, D, E and K2 (the Inuit, for example, get all their vitamin D and the other fat-soluble vitamins from the fat of the whales and seals they eat), and by supplementing adequately to maintain optimal levels of both vitamin D (50-80 ng/ml or 125-200 nmol/L) and folate (>5 ng/ml or >11 nmol/L), especially during conception, pregnancy and early childhood, but also throughout life and into old age.
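Labs report these markers in either of the two unit systems, and the conversion is a simple multiplication. Here is a minimal sketch in Python, using the standard molar conversion factors of roughly 2.5 for 25(OH)D and 2.27 for serum folate, that reproduces the target ranges quoted above:

```python
# Convert lab values from ng/ml to nmol/L using the standard
# molar conversion factor for each molecule.
VITD_FACTOR = 2.5      # 25(OH)D: 1 ng/ml is about 2.5 nmol/L
FOLATE_FACTOR = 2.27   # serum folate: 1 ng/ml is about 2.27 nmol/L

def ng_to_nmol(ng_per_ml: float, factor: float) -> float:
    return ng_per_ml * factor

print(ng_to_nmol(50, VITD_FACTOR))    # 125.0  -> lower vitamin D target
print(ng_to_nmol(80, VITD_FACTOR))    # 200.0  -> upper vitamin D target
print(ng_to_nmol(5, FOLATE_FACTOR))   # 11.35  -> folate minimum
```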

There’s one last thing I wanted to mention before closing, and in which you might also be interested: can we ask whether one is more important than the other, folate or vitamin D, and do we have a way to answer this question from an evolutionary standpoint? Well, here is something that suggests an answer: in all races of humans on Earth, women are on average about 3% lighter in skin colour than men of the same group. For decades, researchers (mostly old men, of course) were satisfied with the conclusion that this was the result of sexual selection, in the sense that men preferred lighter-skinned women and so this is how things evolved over time. Of course, most of you will agree with me that this just sounds like a cop-out, or at best a shot in the dark from a possibly sexist male perspective.

Most of you will surely also agree that considering the question from the perspective of the relative importance of vitamin D versus folate is clearly more scientific in spirit than invoking sexual selection to explain the difference. And if women are lighter than men no matter where we look on Earth, this strongly suggests that good levels of vitamin D are either more difficult to build up and maintain, or more important for ensuring healthy offspring. In today’s world, it is certainly far easier to have good levels of folate: even if you stay inside all day, as long as you eat leafy greens or drink green juice, your folate levels will easily be higher than the optimal minimum of 5 ng/ml, and probably much higher, like mine, which at 25 ng/ml are five times higher than that.

So, for us today, especially if we eat greens, there is no question that we have to pay much closer attention to our vitamin D levels, which tend to be way too low across the board, all over the world. We can hypothesise that if we continue evolving over millennia with this indoors lifestyle of ours, humans everywhere will continue to lose both body hair and pigmentation, even those who live in sunny countries, because they don’t expose themselves to the Sun. I would like to encourage you instead to expose your skin to the amount of sunlight that is in accord with your complexion, drink green juice, monitor your vitamin D and folate levels at least once per year, and take supplements to ensure both stay in the optimal range (I recommend taking A-D-K2 together to ensure balance between them, and better absorption and physiological action). That alone, even if you don’t do anything else, will be of great benefit to you, and, if you are a soon-to-be or would-like-to-be mother, of even greater benefit to your child or children.

And next time you go out, and each time after that, pay attention: look at and appreciate the amazing richness and beauty of all the different skin colours and unique physical features of the people all around you. What you will be seeing is the inestimable richness of our collective human ancestry expressing itself vividly and openly, nothing held back and nothing hidden, for everyone to see and appreciate.

If you are interested in reading more about the topics touched upon in this article, its contents draw from the books Life in the Universe, Rare Earth, Masters of the Planet and The Story of the Human Body, and from the Scientific American special issue Evolution, which features the article, entitled ‘Skin Deep’, that prompted me to write this post. And please share it: we all need to do what we can to help overcome discrimination based on race and appearance.


Living healthy to 160 – insulin and the genetics of longevity

Among the most remarkable discoveries of the last 15 years, discoveries that might well turn out to be among the most remarkable of the 21st century, are those of the telomere—a little tail at the end of our chromosomes whose length indicates how much longer our cells can keep dividing and living—and of the enzyme telomerase—the specialised protein whose job it is to repair the telomeres so that the cells (and we) can live longer and, from an evolutionary perspective, increase the probability that we’ll have more babies. This and the other research into the biology of ageing, the transcription of DNA, and the expression or suppression of genes is truly fascinating. I will turn to it in time, but think it would be jumping the gun to do so now.

What is definitely one of the most remarkable discoveries of the 20th century pertains to the hormone insulin. I am not, however, referring to the fact that its discovery revolutionised medicine by saving countless diabetics from highly premature and painful deaths, usually preceded by torturous amputations of their feet or legs and all of the horror and misery brought on by these seemingly barbaric and radically extreme measures. (And don’t for one second imagine that such amputations are a thing of the past: I know for a fact—heard directly from the mouth of a practising orthopaedic surgeon—that amputations are part of his everyday reality; he sometimes performs two in a single day.) Nor am I, at least this time, talking about insulin as the master metabolic hormone that regulates the storage into our cells of the nutrients circulating in the bloodstream. What I am referring to as one of the 20th century’s greatest discoveries in regards to insulin is that of its role in regulating the rate of ageing.

Something almost as remarkable is that we hardly ever hear or read about this. For me, that’s really strange. But I’m not going to speculate about why that is. Insulin as a regulator of the rate of ageing is what we’ll look at in this article.

Why do mice live two years but bats fifty? Why do rats live three years, but squirrels fifteen? Why do some tortoises live hundreds of years? Why do the smallest dogs, like Chihuahuas, live about twenty years, while the largest, like Great Danes, live only five to seven years? And why do we humans live around 80 years, rarely making it to 90, and very rarely to 100 years of age? It was this line of questioning that led a geneticist working in evolutionary biology to hypothesise, in the late 80s and early 90s, and for the first time, that ageing could be genetically regulated, at least to a certain extent.

At that time, evolutionary biologists were coming to the realisation that a large number of fundamental cellular processes and mechanisms, regulated by a variety of genetic expressions, were common to widely different organisms. Because all animal life necessarily shares a common ancestor, it is not only logical that the most fundamental functions of cells, and especially the ways genes express themselves under the influence of hormones essential for life, could be the same across species; it should, to a great extent, be expected to be that way. And even though these considerations may seem obvious in retrospect, the fact is that there was only one person with this knowledge who was asking these questions and who had the means to seek out answers to some of them. That person was Cynthia Kenyon, Professor at UCSF.

The subject was quick to choose: the tiny worm that Kenyon had already been studying for years, C. elegans, was perfect because it is simple to work with and yet a complete animal, and because it has a short natural lifespan of about 30 days. The first step was also clearly defined: find at least one long-lived individual. What seems very surprising from our current vantage point is that she couldn’t readily get started: she couldn’t convince anyone to join her in this endeavour. Everyone at that time was convinced that ageing was something that just happened: things simply wore out and deteriorated with use and with time; nothing to do with genes. But how could this be if different species—some very physically similar—have such widely different lifespans? It just had to be genetic at some level, Kenyon thought. Eventually, after a few years of asking around and searching, she found a young PhD student who was up to it, and set out to find a long-lived mutant.

A number of months down the road a long-lived mutant was found and immediately identified as a ‘DAF-2 mutant’. This mutation made the DAF-2 gene—a gene responsible for the function of two kinds of hormone receptors on a cell’s membrane—less active. The next step was to artificially create a population of DAF-2 mutants and see how long they live, statistically speaking, compared to normal C. elegans. It was found that the genetically ‘damaged’ worms, the ones for which they had turned down the expression of the DAF-2 gene, lived twice as long: starting with exactly the same number of worms, it took 70 days for the last one of the mutants to die compared to 30 days in the normal population.

But an additional observation was made: the curve tracing the fraction of worms remaining was, for the mutants, stretched by a factor of two from about the start of adulthood. They had the same relatively short childhood, but then, for the remainder of their lives, for every day in the life of a normal worm, the mutants would live two. Most impressive was that they really were like worms half the age of their chronologically matched cousins in all respects: external appearance, level of activity, and reproduction.
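To make the ‘stretched by a factor of two’ idea concrete, here is a little toy simulation, with made-up numbers rather than Kenyon’s actual data: halving a cohort’s constant daily risk of dying roughly doubles every point of the survival curve, including the day the last worm dies.

```python
import random

def day_of_last_death(n_worms: int, daily_risk: float, rng: random.Random) -> int:
    """Simulate a cohort with a constant per-day probability of dying,
    and return the day on which the last individual dies."""
    alive, day = n_worms, 0
    while alive > 0:
        day += 1
        alive = sum(1 for _ in range(alive) if rng.random() > daily_risk)
    return day

rng = random.Random(42)
# Made-up numbers, chosen so the normal cohort dies out in roughly 30 days:
print(day_of_last_death(100, 0.14, rng))  # 'wild type': last death around day 30
print(day_of_last_death(100, 0.07, rng))  # 'DAF-2 mutant': around day 60-70
```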

To appreciate this point as much as you should: this observation, with respect not just to the lifespan but notably to the healthspan of C. elegans, would translate in human terms into someone being 80 years old but looking and acting like a 40-year-old, in the sense that nobody could tell they were not 40, let alone 80 years old. Just like Aragorn in The Lord of the Rings. This person would be like a 40-year-old at 80, like a 60-year-old at 120, and like an 80-year-old coming to the end of their life by the time they were 160! Can you even imagine that? Hard, isn’t it? But this is exactly what Kenyon and her team were looking at in these experiments with these little worms.

Now they wanted to understand the effect of the DAF-2 gene, or rather, the effect of suppressing its expression in the DNA of each cell’s nucleus at different developmental stages. If it was turned off completely, the worms died: clearly, DAF-2 expression, at least in C. elegans, is essential for life. If it was suppressed immediately after birth (hatching), the little worms entered the Dauer state, in which they don’t eat, don’t grow, don’t reproduce, and basically don’t move either: they just sit and wait. Wait for what? For better times!

This Dauer state is a remarkable evolutionary adaptation seen in some species that allows the individual to survive periods of severe environmental stress, such as lack of food or water, but also high UV radiation or chemical exposure, for long periods of time relative to their normal lifespan, in a very efficient kind of metabolic, physiological and reproductive hibernation. What’s really cool is that when worms are induced out of the Dauer state, no matter how long they’ve been in it, they begin to live normally again: moving and eating, but also reproducing. So, in the Dauer state, C. elegans literally stops ageing altogether and waits, suspending metabolic activities and physiological functions until conditions for reproduction and life become adequate once again.


Image from ‘Worms live longer when they stop eating’ (http://www.bbc.co.uk/nature/2790633)

If DAF-2 expression was turned back up to normal, the worms moved out of Dauer and resumed their development through the stages equivalent to childhood, teenagehood and adulthood, but didn’t live any longer as adults. Finally, suppressing DAF-2 expression at the onset of adulthood resulted in the extended lifespan as originally observed. The conclusion was therefore clear: DAF-2 expression is essential for life and necessary for normal, healthy growth and development from birth until maturity, and suppressing it extends both lifespan and healthspan, but only in mature individuals.
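To summarise the timing logic, here is a schematic decision function; it is my own paraphrase of the results described above, not anything taken from the original papers.

```python
def daf2_phenotype(expression: str, stage: str = "adulthood") -> str:
    """Schematic summary of how the timing of DAF-2 suppression
    changes the outcome in C. elegans (a paraphrase of the text above)."""
    if expression == "off":
        return "death: some DAF-2 expression is essential for life"
    if expression == "suppressed" and stage == "hatching":
        return "Dauer state: ageing paused until conditions improve"
    if expression == "suppressed" and stage == "adulthood":
        return "normal-looking adult, roughly doubled lifespan and healthspan"
    return "normal growth, development and lifespan"

print(daf2_phenotype("suppressed", "adulthood"))
```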

Going further, they now wanted to understand how DAF-2 suppression actually worked to extend healthspan: what were the actual mechanisms that made the worms live longer when DAF-2 expression was turned down? For this, Kenyon’s team needed to look at all of C. elegans’s roughly 20,000 genes and figure out how they affect each other. (Note that this is also more or less how many genes we have, even though C. elegans has only six chromosomes and is also a hermaphrodite.) The sequencing of the worm’s genome was completed in 1998, and what was found after analysis was very interesting:

The DAF-2 gene activates a phosphorylation chain that attaches phosphate groups onto the DAF-16 transcription factor. In normal individuals, the DAF-2 gene is expressed normally, the phosphorylation chain works unimpeded, and the DAF-16 transcription factor is inactivated. In the mutants, DAF-2 gene expression is suppressed and, as a consequence, DAF-16 is not inactivated, and instead accumulates in the nucleus. There, DAF-16 (the worm’s version of the transcription factor known in humans as FOXO) switches on what Kenyon’s team showed to be the genetic key to health and longevity they had been looking for from the start of this now decade-long pursuit: a whole programme of protective genes.

What does DAF-16/FOXO do? It promotes the expression of at least four other kinds of genes: the first responsible for manufacturing antioxidants, which neutralise the free radicals produced in largest quantities by the mitochondria as they make energy for the cell; the second responsible for manufacturing ‘chaperones’, specialised proteins whose role is to transport other proteins, and in particular to bring damaged ones to the cell’s garbage-collection and recycling facilities so that they can be replaced by new, well-functioning ones; the third responsible for manufacturing antimicrobial molecules that increase the cell’s resistance to bacterial and viral invaders; and the fourth improving metabolic functions, in particular reducing fat transport and increasing fat utilisation.

It is these four genetically regulated cellular protection and repair mechanisms, the cumulative combined effects of all these increased expressions of antioxidants, chaperones, antimicrobials and metabolic efficiency—all of them at the cellular level—that allow the lucky DAF-2-suppressed mutants to live twice as long and twice as healthy. Remarkable!
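To pull the whole pathway together in one place, here is a schematic sketch in code; again, this is my own paraphrase of the logic just described, not a quantitative model from the research itself.

```python
def protective_genes(daf2_active: bool) -> dict:
    """Schematic of the DAF-2 -> DAF-16/FOXO logic described above."""
    # When DAF-2 signalling is active, the phosphorylation chain tags
    # DAF-16, keeping it inactive and out of the nucleus.
    daf16_in_nucleus = not daf2_active
    # In the nucleus, DAF-16/FOXO switches on the four protective programmes.
    return {
        "antioxidants (neutralise mitochondrial free radicals)": daf16_in_nucleus,
        "chaperones (haul damaged proteins off for recycling)": daf16_in_nucleus,
        "antimicrobials (resist bacterial and viral invaders)": daf16_in_nucleus,
        "metabolism (less fat transport, more fat utilisation)": daf16_in_nucleus,
    }

# The long-lived mutant: DAF-2 suppressed, so everything switches ON.
for programme, on in protective_genes(daf2_active=False).items():
    print(f"{programme}: {'ON' if on else 'off'}")
```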

Now that all the cards about how the long-lived mutants actually live twice as long as normally expected are laid on the table, and given that there is only one detail I have left out of the story up to this point, tell me: can you guess which two sister hormones act on the cell through the receptors whose activity is controlled by the DAF-2 gene? It’s a trick question, because I told you half the answer in the introduction: the DAF-2 gene encodes the hormone receptors for both insulin and the primary form of insulin-like growth factor, IGF-1. Surprised? It isn’t surprising, really. In fact, it all makes perfect sense:

Insulin and IGF-1 promote growth. Nutrient absorption and cellular growth and reproduction are essential for life and thus common to all living organisms, including the most primitive of them, like yeasts. Growth in immature individuals is fundamental for health and for ensuring they reach maturity. But growth in adults, in mature individuals, just means ageing: the more insulin and IGF-1 there is, the faster the rate of cellular damage and deterioration, the more genetic mutations from errors in transcription, the more pronounced the deterioration of the brain and the heart, of the arteries and the veins, of the muscles, the bones and the joints, and, obviously, the faster the rate of ageing. Because what is ageing, if not the word we use to describe the sum total, the multiple negative consequences, the end result of all of these deteriorations in these vital organs and systems, and everywhere else throughout the organism, all of it starting at the cellular level, in the nucleus of every cell?

As for the necessity of insulin for normal growth, you should definitely not think that these observations imply we should stimulate insulin secretion in the young in order to ensure proper growth. Absolutely not! The body knows exactly when and how much insulin is needed at any given time. In fact, any additional stimulation of insulin brought about by eating simple and starchy carbs actually deregulates the proper balance of hormones that the body is trying to maintain. This deregulation from a sugar-laden diet in children is the very reason for many widespread health problems in our youth, the most important of which is childhood obesity and the metabolic and physiological stresses it brings on. So, leave it to Mother Nature to regulate the concentration of insulin in the bloodstream, and do not disrupt this delicate biochemical balance by ingesting refined carbohydrates: it’s the last thing anyone needs for good health and long life.

The first results were so interesting that several other groups joined in this research into the genetics of ageing. Not as many as one would think, but at least a handful of other groups began to apply and expand these techniques to other species. Unsurprisingly, the same effects, though of different magnitudes, were seen in species that are, from an evolutionary standpoint, very different: fruit flies and mice. In addition, the connection was made with lifespan-extending experiments using calorie restriction, which have also been carried out on mice and other animals (we’ll look into this another time). And beyond the work around DAF-2, DAF-16 and FOXO, Kenyon’s group investigated other ways to influence lifespan, and found two more.

The first was disabling some of the little worm’s sensory neurones, of which there are very few, making it easy to test and determine the influence they have separately and in combination. They tested smell and taste neurones, and found that disabling some would extend lifespan while disabling others didn’t. They also found that disabling certain combinations of smell and taste neurones could have nulling effects. The second was modulating the expression of the TOR gene. For now, however, we will leave it at that.

Beyond the fact that it is rare to come across this work without actually looking for it, there is something else I find very hard to comprehend. In Kenyon’s various lectures on this work, there is usually a mention of the biotech company she co-founded, Elixir Pharmaceuticals, and of its aim to find one or more drugs that can suppress DAF-2 expression in humans without causing negative side effects, in order to extend lifespan and healthspan as was done in C. elegans through genetic manipulation. That’s fine, and it does make sense to a certain extent, especially if we can find not synthetic drugs but natural, plant-derived compounds that have this effect on us.

What doesn’t make sense, and what is hard to understand from the naive perspective of an honest scientist looking for the simplest possible solution to a problem, is that nobody seems to have spelled out the simplest way to make the best use of this information: to apply what we have learnt from these two and a half decades of research in a way that we know would promote a longer and healthier lifespan in humans, without the risks that come with introducing foreign substances into the body. Because they haven’t, here is my attempt to do just that.

We have, thanks to Kenyon and others, understood in great detail how lifespan in complex organisms can be, to a great extent, genetically regulated, and which genes, transcription factors and mechanisms are involved in regulating the rate of ageing and the propensity for developing age-related degenerative diseases. In the final analysis, the main players are the DAF-2 gene, which tunes up or down the sensitivity of the insulin and IGF-1 receptors; the DAF-16 transcription factor, which is made inactive by the expression of DAF-2; and the star FOXO longevity programme, through which DAF-16 promotes the expression of the genes responsible for stimulating the cell’s most powerful protection and repair mechanisms.

We have, from many decades of research on calorie restriction and fasting in animals, including humans (which we’ll explore elsewhere), understood that this is an extremely effective way to extend both lifespan and healthspan, and to basically eliminate the occurrence of age-related degenerative diseases by greatly increasing resistance to health disorders of all kinds. Key observations on calorie-restricted animals include their very low blood levels of sugar, insulin and IGF-1, their high metabolic efficiency and ability to utilise fat, demonstrated by low blood levels of triglycerides, and their remarkably younger appearance with increased energy and activity levels.

And finally, we have, from more than a century of observations and research, concluded that diabetics, whose condition is characterised by very high levels of blood glucose, insulin and triglycerides, are plagued by a several-fold increase in the rates of cancer, stroke, heart disease, kidney disease, arthritis, Alzheimer’s and dementia, basically all the age-related degenerative diseases known to us, and, in addition, by a several-fold increase in their rate of ageing, based on the spectrum of blood markers used for this purpose, on their appearance, and also on the length of their telomeres.

Is it not, therefore, obvious from these observations that high blood sugar, high insulin and high triglycerides are hallmarks of accelerated ageing and a propensity for degenerative diseases, while low blood sugar, low insulin and low triglycerides are instead necessarily related to extended lifespan, extended healthspan and increased resistance to all disease conditions including those categorised as degenerative, and this, independently of the actual mechanisms involved?

Is it not, therefore, plausible from these observations that the genetic mechanisms relating to the DAF-2 gene, the DAF-16 transcription factor and the FOXO programme, which confer on the DAF-2 mutants twice as long a life, can in fact be activated and enhanced epigenetically, by creating an environment in the organism that is conducive to it: simply by keeping blood sugar, insulin and triglycerides as low as possible? In other words, is it not plausible that manipulating our biochemistry to ensure that blood sugar, insulin and triglycerides stay, throughout the day and night, as low as the organism’s requirements allow, will actually translate into the activation of FOXO-like protective genes, enhancing protection and repair at the cellular level, and thus extending lifespan and healthspan?

And what is not only the easiest and simplest, but also the most effective way to do this? It is to eliminate insulin-stimulating carbohydrates—sugars and starches—from the diet completely. Within 24-48 hours, this will allow blood sugar levels to drop to a functional minimum. The low blood sugar will allow the pancreas to reduce insulin production, and insulin levels will drop bit by bit. Lowered insulin will eventually allow the cells to start using the fat circulating in the blood and, in time, to become increasingly efficient at it, thereby dropping triglyceride levels lower and lower.
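For the programmatically inclined, the ordering of these three drops can be sketched as a toy simulation; every number in it is an arbitrary illustrative value chosen to reproduce the sequence just described, not physiological data.

```python
# Toy model: glucose falls first once dietary carbs are removed,
# insulin tracks glucose with a lag, and triglycerides only start
# falling once insulin is low enough for fat-burning to switch on.
# All constants are arbitrary illustrative values, not physiology.
def simulate(days: float = 14.0, dt: float = 0.01):
    glucose, insulin, triglycerides = 1.0, 1.0, 1.0  # start 'high'
    t = 0.0
    while t < days:
        glucose += dt * (0.3 - glucose)            # falls toward a floor in ~1-2 days
        insulin += dt * (glucose - insulin) / 3.0  # follows glucose, more slowly
        fat_burning = max(0.0, 0.6 - insulin)      # kicks in once insulin is low
        triglycerides -= dt * fat_burning * triglycerides / 2.0
        t += dt
    return round(glucose, 2), round(insulin, 2), round(triglycerides, 2)

print(simulate())  # all three settle lower, in the order described above
```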

Why is it, do you think, that Kenyon never mentions this anywhere? Has it simply not occurred to her? I honestly don’t know. But if there is a single thing to remember, it is this: insulin is necessary for life; in the immature individual, insulin regulates growth; in the mature individual, insulin regulates the rate of ageing and the propensity for degenerative diseases. Hence, if you are a mature individual, and by this I mean fully grown, and if you want to live long and healthy, the very first thing you need to do is keep the concentration of insulin circulating in your blood as low as possible. Everything else we can do to extend healthspan and lifespan is secondary to this.

If you think this article could be useful to others, please ‘Like’ and ‘Share’ it.