The rise and fall of the quartz watch

To those of us old enough to remember it, the autumn of 1973 was not perhaps what Charles Dickens would have classified as “the best of times”. War had broken out in the Middle East, the Watergate scandal was making life difficult for the newly re-elected Richard Nixon and the late and thoroughly unlamented General Pinochet had just seized power in Chile. Britain had begun the year by joining the EEC (the forerunner of the EU) but was now in the grip of the Three-Day Week as the confrontation between the Tory Prime Minister Edward Heath and the miners showed no sign of abating. Inflation was spiralling out of control and recession seemed inevitable.

It would have been about that time that I saw in the window of a jeweller’s shop in Wendover in Buckinghamshire something that caught my imagination – a Seiko quartz watch. I knew from the encyclopaedia that we had had at home since my early childhood that a quartz clock was an extremely accurate timepiece, but it was complete news to me that somebody had managed to shrink the complex electronics to the size of something that could be fitted into a wristwatch. In fact the first quartz watches appeared in Japan in 1969, but it obviously took time for them to make their way to the Home Counties (it must also be remembered that widespread access to the internet was still a quarter of a century off).

The watch had a claimed accuracy of 1 minute per year, which was quite sensational because even a well-regulated mechanical watch could – and still can – be off by that amount in a few days. It cost £100 – a considerable sum of money at the time. Soon after, Seiko began marketing their watches very actively in the UK with the advertising tag “Some day all watches will be made this way”.

Rarely if ever has an advertising slogan proved more accurate; within a decade the mechanical wristwatch had all but disappeared from the windows of high street retailers. The first cheap quartz watches appeared around the second half of 1975. Unlike the analogue Seiko, these watches featured digital displays. The first models used light emitting diode (LED) displays of the type used by the electronic calculators of that time (calculators were also considered cool cutting-edge gadgets in the mid ‘70s) but had the major disadvantage that it was necessary to press a button in order to read off the time (I possessed one made by Samsung – a company virtually unknown in the West at the time). This type of display soon gave way to the now-familiar liquid crystal display (LCD) still found in brands like the ever-popular Casio G-Shock. A watch where one can read off the time as – say – 1:52 PM rather than “just after ten to two” might seem to have a major advantage, but here the quartz revolution stuttered slightly. Most people actually preferred the older analogue displays and these days the majority of wristwatches have this type of display.

For the Swiss watch industry, quartz represented a major challenge. What happened next is best considered through the very different directions taken by two of Switzerland’s most prominent watchmakers – Rolex and Omega. Omega embraced the new technology full on. In 1974 they launched the Megaquartz Marine Chronometer, which remains to this day the most accurate wristwatch ever made. But – not helped by the adverse economic conditions of the time – Omega struggled and only within the last decade has the brand begun to regain its former strength. Rolex for their part did absolutely nothing. They carried on making exactly the same models – and they kept on selling! This policy was successful – today Rolex is by far the world’s largest producer of luxury wristwatches. It was many years before they even bothered to produce a quartz watch – the Oysterquartz. But despite an accuracy of 5 seconds per year – not far off the Omega Megaquartz – it was not a success and was eventually discontinued.

Round about the end of the 1980s the tide turned as more and more purchasers of high-end watches began to reject quartz in favour of traditional mechanicals. Why, one might ask, when a quartz watch is so much more accurate? There are a number of possible reasons – one obvious advantage a mechanical watch has over its quartz counterpart is that it never needs a battery. But battery-less technologies such as Eco-Drive (solar) and Kinetic (rotor-driven dynamo) have largely failed to penetrate the high-end market. And in any case changing the battery every few years is far cheaper and less time-consuming than the regular servicing mechanical watches require to keep them in working order.

The answer is to some extent to be found with the so-called “display back”. Many mechanical watches now have a transparent back, so the movement can be viewed. Look at the intricate and exquisitely-finished movement in a Patek Philippe or a Lange and compare it with an electronic chip. No contest! Even the nicely-decorated Unitas hand-wound movements found in many mid-range watches such as the Stowa Marine Original beat a quartz movement hands down in the beauty stakes. To be blunt, one is a micro-machine, a marvel of precision engineering; the other is nothing more than an electrical appliance.

Today the vast majority of luxury watches are mechanical. Most of the high-end quartz watches, such as the Omega Megaquartz, the Rolex Oysterquartz and the Longines Conquest VHP, have long since ceased production. The Citizen Chronomaster, rated to within 5 seconds a year, remains a current model but it is not widely available outside Japan. The advent of radio control, whereby a watch can synchronize itself to the time signals from Rugby, Frankfurt, Colorado etc., has meant that super-accurate quartz movements are now largely redundant, virtually killing off innovation in the field. Most modern quartz watches, when not synchronized to a time signal, are actually far less accurate than the Seiko I saw in that jeweller’s shop window almost three and a half decades ago.

© Christopher Seddon 2008

Biological Classification and Systematics

The Linnaean classification

Scientific classification or biological classification is how species both living and extinct are grouped and categorized. Man’s desire to classify the natural world seems to be very deep-rooted, and the fact that many traditional societies have highly sophisticated taxonomies suggests the practice goes back to prehistoric times. However, the earliest system of which we have knowledge was that of Aristotle, who divided living organisms into two groups – animals and plants. Animals were further divided into three categories – those living on land, those living in the water and those living in the air, and were in addition categorised by whether or not they had blood (those “without blood” would now be classed as invertebrates). Plants were categorised by differences in their stems.

Aristotle’s system remained in use for hundreds of years but by the 16th Century, man’s knowledge of the natural world had reached a point where it was becoming inadequate. Many attempts were made to devise a better system, but the science of biological classification remained in a confused state until the time of Linnaeus, who published the first edition of his Systema Naturae in 1735. In this work, he re-introduced Gaspard Bauhin’s binomial nomenclature and grouped species according to shared physical characteristics for ease of identification. The scheme of ranks, as used today, differs very little from that originally proposed by Linnaeus. A taxon (plural taxa), or taxonomic unit, is a grouping of organisms. A taxon will usually have a rank and can be placed at a particular level in the hierarchy.

The ranks in general use, in hierarchical order, are as follows:

Kingdom
Phylum (animals or plants) or Division (plants only)
Class
Order
Family
Genus
Species

The prefix super- indicates a rank above; the prefix sub- indicates a rank below. The prefix infra- indicates a rank below sub-. For instance, a superclass ranks above a class, a subclass immediately below it, and an infraclass below that – as in Superclass Tetrapoda, Subclass Diapsida and Infraclass Archosauria.

Even higher resolution is sometimes required, and divisions below infra- are occasionally encountered, e.g. the parvorder. Domains are a relatively new grouping. The three-domain system (Archaea, Bacteria and Eukaryota) was first proposed in 1990 (Woese), but not generally accepted until later. Many biologists still use the older five-kingdom system (Whittaker). One main characteristic of the three-domain system is the separation of Archaea and Bacteria, previously grouped into the single prokaryote kingdom Bacteria (sometimes Monera). As a compromise, some authorities add Archaea as a sixth kingdom.

It should be noted that taxonomic rank is relative, and restricted to the particular scheme used. The idea is to group living organisms by degrees of relatedness, but it should be borne in mind that rankings above species level are a bookkeeping idea and not a fundamental truth. Groupings such as Reptilia are a convenience but are not proper taxonomic terms. One can become too obsessed with whether a thing belongs in one artificial category or another – e.g. is Pluto a planet, or (closer to home) does habilis belong in Homo or Australopithecus; does it really matter if we lump the robust australopithecines into Australopithecus or split them out into Paranthropus?


Systematics

Systematics is the study of the evolutionary relationships between organisms and the grouping of organisms. There are three principal schools of systematics – evolutionary taxonomy (Linnaean or “traditional” taxonomy), phenetics and cladistics. Although there are considerable differences between the three in terms of the methodologies used, all seek to determine taxonomic relationships or phylogenies between different species or between different higher-order groupings and should, in principle, all come to the same conclusions for the species or groups under consideration.

Some terminology and concepts

One of the most important concepts in systematics is that of monophyly. A monophyletic group is a group of species comprising an ancestral species and all of its descendants, and so forming one (and only one) evolutionary group. Such a group is said to be a natural group. A paraphyletic group also contains a common ancestor, but excludes some of the descendants that have undergone significant changes. For instance, the traditional class Reptilia excludes birds even though they evolved from an ancestral reptile. A polyphyletic group is one in which the defining trait evolved separately in different places on the phylogenetic tree, and hence the group does not include the common ancestor of all its members, e.g. warm-blooded vertebrates (birds and mammals, whose common ancestor was cold-blooded). Such groups are usually defined as a result of incomplete knowledge. Organisms forming a natural group are said to form a clade, e.g. the amniotes. If however the defining feature did not arise within a natural group, it is said to be a grade, e.g. flightless birds (flight has been given up by many unrelated groups of birds).
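The distinction can be made concrete with a short sketch. The following Python snippet (a toy illustration using a deliberately simplified, hypothetical tree topology, not a definitive phylogeny) tests whether a named group forms a clade – i.e. whether it is monophyletic:

```python
# A toy phylogeny: each internal node is a 2-tuple of subtrees, each leaf a name.
# The topology here is a simplified, illustrative assumption.
tree = ("amphibian", ("mammal", ("turtle", ("lizard", ("crocodile", "bird")))))

def leaves(node):
    """Return the set of leaf names under a node."""
    if isinstance(node, str):
        return {node}
    left, right = node
    return leaves(left) | leaves(right)

def clades(node):
    """Yield the leaf set of every clade (every node) in the tree."""
    yield leaves(node)
    if not isinstance(node, str):
        for child in node:
            yield from clades(child)

def is_monophyletic(node, group):
    """A group is monophyletic iff some clade contains exactly its members."""
    return any(group == clade for clade in clades(node))

amniotes = {"mammal", "turtle", "lizard", "crocodile", "bird"}
reptiles = {"turtle", "lizard", "crocodile"}  # traditional Reptilia, birds excluded
print(is_monophyletic(tree, amniotes))  # True: the amniotes form a clade
print(is_monophyletic(tree, reptiles))  # False: paraphyletic without birds
```

On this tree the amniotes pass the test, while the traditional “reptiles” fail it for exactly the reason given above: excluding the birds leaves no single clade containing just those members.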

Characters are attributes or features of organisms or groups of organisms (taxa) that biologists use to indicate relatedness, or lack of it, to other organisms or groups. A character can be almost anything that can be measured, from a morphological feature to a part of an organism’s genetic makeup. Characters that are similar due to descent from a common ancestor are known as homologues, and it is crucial to systematics to determine whether the characters under consideration are indeed homologous. Wings are homologous if we are comparing two birds; but if a bird is compared with, say, a bat, they are not, having arisen through convergent evolution, a process whereby structures similar in appearance and function appear in unrelated groups of organisms. Such characters are known as homoplasies. Convergences are not the same as parallelisms, which are similar structures that have arisen more than once in species or groups within a single extended lineage, and have followed a similar evolutionary trajectory over time.

Character states can be either primitive or derived. A primitive character state is one that has been retained from a remote ancestor; derived character states are those that originated more recently. For example, the backbone is a defining feature of the vertebrates and is a primitive state when considering mammals; but the mammalian ear is a derived state, not shared with other vertebrates. However these things are relative. If one considers Phylum Chordata as a whole, the backbone is a derived state of the vertebrates, not shared with the acrania or the tunicates. If a character state is primitive at the point of reference, it is known as a plesiomorphy; if it is derived it is known as an apomorphy (note that a “primitive” trait in this context does not mean it is less well adapted than one that is not primitive).

Current schools of thought in classification methodology

Biologists devote much effort to identifying and unambiguously defining monophyletic taxa. Relationships are generally presented in tree-diagrams or dendrograms known as phenograms, cladograms or evolutionary trees depending on the methodology used. In all cases they represent evolutionary hypotheses i.e. hypotheses of ancestor-descendant relationships.

Phenetics, also known as numerical taxonomy, was developed in the late 1950s. Pheneticists avoid all considerations of the evolution of taxa and seek instead to construct relationships based on overall phenetic similarity (which can be based on morphological features, or protein chemistry, or indeed anything that can be measured), which they take to be a reflection of genetic similarity. If a large number of randomly-chosen phenotypic characters are considered and each given equal weight, the sums of differences and similarities between taxa should serve as the best possible measure of genetic distance and hence degree of relatedness. The main problem with the approach is that it tends to group taxa by degrees of difference rather than by shared similarities. Phenetics won many converts in the 1960s and 1970s, as more and more “number crunching” computer techniques became available. Though it has since declined in popularity, some believe it may make a comeback (Dawkins, 1986).

By contrast, cladistics is based on the goal of producing testable hypotheses of genealogical relationships among monophyletic groups of organisms. Cladistics originated with Willi Hennig in 1950 and has grown in popularity since the mid-1960s. Cladists rely heavily on the concept of primitive versus derived character states, identifying homologies as plesiomorphies and apomorphies. Apomorphies restricted to a single species are referred to as autapomorphies, whereas those shared between two or more species or groups are known as synapomorphies.

A major task for cladists is identifying which is the plesiomorphic and which is the apomorphic form of two character states. A number of techniques are used; a common approach is outgroup analysis, in which clues to ancestral character states are sought in groups known to be more primitive than the group under consideration.

In constructing a cladogram, only genealogical (ancestor-descendant) relationships are considered; thus cladograms may be thought of as depicting synapomorphy patterns – the patterns of shared similarities hypothesised to be evolutionary novelties among taxa. In drawing up a cladogram based on significant numbers of traits and significant numbers of taxa, the consideration of every possibility is beyond even a computer; computer programs are therefore designed to reject unnecessarily complex hypotheses using the method of maximum parsimony, which is really an application of Occam’s Razor.
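The idea of parsimony scoring can be illustrated with a short sketch. The snippet below implements Fitch’s small-parsimony algorithm – a standard method, though not one named in the text – to count the minimum number of state changes a single character requires on a fixed tree; the tree topology and character states are hypothetical toy data:

```python
# Fitch's small-parsimony algorithm: given a fixed binary tree and the state
# of one character at each leaf, count the minimum number of changes needed.
def fitch(node, states):
    """Return (possible ancestral states, minimum changes) for a subtree.

    node: a leaf name (str) or a 2-tuple of subtrees.
    states: dict mapping each leaf name to its character state.
    """
    if isinstance(node, str):
        return {states[node]}, 0
    (s1, c1), (s2, c2) = fitch(node[0], states), fitch(node[1], states)
    if s1 & s2:                      # children agree on some state: no change
        return s1 & s2, c1 + c2
    return s1 | s2, c1 + c2 + 1     # disagreement costs one state change

# A hypothetical toy tree and a "warm-blooded?" character.
tree = (("bird", "crocodile"), ("mammal", "turtle"))
warm = {"bird": "yes", "crocodile": "no", "mammal": "yes", "turtle": "no"}
_, changes = fitch(tree, warm)
print(changes)  # 2
```

On this tree warm-bloodedness scores two changes, reflecting its separate origins in birds and mammals – exactly the kind of homoplasy discussed earlier. A parsimony program simply prefers, among all candidate trees, those with the lowest total score summed over all characters.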

The result will be a family tree – an evolutionary pattern of monophyletic lineages; one that can be tested and revised as necessary when new homologues and species are identified. Trees that consistently resist refutation in the face of such testing are said to be highly corroborated.

A cladogram will often be used to construct a classification scheme. Here cladistics differs from traditional Linnaean systematics. Phylogeny is treated as a genealogical branching pattern, with each split producing a pair of newly-derived taxa known as sister groups (or sister species). The classification is based solely on the cladogram, with no consideration to the degree of difference between taxa, or to rates of evolutionary change.

For example, consider these two classification schemes of the Phylum Chordata.

Classification Scheme A (Linnaean):

Phylum Chordata
  Subphylum Vertebrata (vertebrates)
    Superclass Pisces (fish)
    Class Amphibia (amphibians)
    Class Reptilia (turtles, crocodiles, snakes and lizards)
    Class Mammalia (mammals)
    Class Aves (birds)

Classification Scheme B (Cladistic):

Phylum Chordata
  Subphylum Vertebrata
    Superclass Tetrapoda
      Subclass Lissamphibia (recent amphibians)
      Superclass Amniota
        Class Mammalia (mammals)
        Class Reptilomorpha
          Subclass Anapsida (turtles)
          Subclass Diapsida
            Infraclass Lepidosaura (snakes, lizards, etc)
            Infraclass Archosauria
              Order Crocodilia (crocodiles, etc)
              Class Aves (birds)

In Scheme A, crocodiles are grouped with turtles, snakes and lizards as “reptiles” (Class Reptilia) and birds get their own separate grouping (Class Aves). This scheme considers physical similarities as well as genealogy, but the result is that the scheme contains paraphyletic taxa. Scheme B strictly reflects cladistic branching patterns; the reptiles are broken up, with birds and crocodiles as sister groups within the Archosauria (which also included the dinosaurs). All the groupings in this scheme are monophyletic. It will be noted that attempts to append traditional Linnaean rankings to each group run into difficulties – birds should have equal ranking with the Crocodilia and should therefore also be categorised as an order within the Archosauria, not given their own class, as is traditional.

Traditional Linnaean systematics, now referred to as evolutionary taxonomy, seeks to construct relationships on the basis of both genealogy and overall similarity/dissimilarity; rates of evolution are an important consideration (in the above example, birds have clearly evolved faster than crocodiles), and classification reflects both the branching pattern and the degree of difference between taxa. The approach lacks a clearly-defined methodology, tends to be based on intuition, and for this reason does not produce results amenable to testing and falsification.

© Christopher Seddon 2008

Linnaeus – Princeps Botanicorum

There are very few examples of scientific terminology that have become sufficiently well-known to form part of popular culture. The chemical formula for water – H2O – is certainly one; it is so familiar it has even featured in advertisements. Another is the equation E = mc² – while not everybody knows that it defines a relationship between mass and energy, most will have heard of it and will be aware it was formulated by Albert Einstein.

But the most familiar scientific term of all has to be Homo sapiens – Mankind’s scientific name for himself.

The term was originated by the 18th Century Swedish scientist Carl von Linné (1707-78), better known as Linnaeus, who first formally “described” the human species in 1758. It means (some would say ironically!) “wise man” or “man the thinker”. It is an example of what biologists call the binomial nomenclature, a system whereby all living things are assigned a double-barrelled name based on their genus and species. These latter terms are in turn part of a bigger scheme of classification known as the Linnaean taxonomy, which – as the name implies – was introduced by Linnaeus himself.

Man has been studying and classifying the natural world throughout recorded history and probably much longer. A key concept in classification of living organisms is that they all belong to various species, and this is a very old idea indeed, almost certainly prehistoric in origin. For example, it would have been obvious that sheep all look very much alike, but that they don’t look in the least bit like pigs, and that therefore all sheep belong to one species and all pigs belong to another. Today we refer to organisms so grouped as morphological species.

In addition, the early Neolithic farmers must soon have realised that while a ewe and a ram can reproduce, and likewise a sow and a boar; a ewe and a boar, or a sow and a ram cannot. Sheep and pigs are different biological species, though this definition of a species was not formalised until much later, by John Ray (1628-1705), an English naturalist who proclaimed that “one species could never spring from the seed of another”.

The first attempt at arranging the various species of living organisms into a systematic classification was made by the Greek philosopher Aristotle (384-322 BC), who divided them into two groups – animals and plants. Animals were further divided into three categories – those living on land, those living in the water and those living in the air, and were in addition categorised by whether or not they had blood (broadly speaking, those “without blood” would now be classed as invertebrates, or animals without a backbone). Plants were categorised by differences in their stems.

Aristotle’s system remained in use for hundreds of years but by the 16th Century, Man’s knowledge of the natural world had reached a point where it was becoming inadequate. Many attempts were made to devise a better system, with some notable works being published by Conrad Gessner (1516-65), Andrea Cesalpino (1524-1603) and John Ray (1628-1705).

In addition Gaspard Bauhin (1560-1624) introduced the binomial nomenclature that Linnaeus would later adopt. Under this system, a species is assigned a generic name and a specific name. The generic name refers to the genus, a group of species more closely related to one another than any other group of species. The specific name represents the species itself. For example lions and tigers are different species, but they are similar enough to both be assigned to the genus Panthera. The lion is Panthera leo and the tiger Panthera tigris.

Despite these advances, the science of biological classification at the beginning of the 18th Century remained in a confused state. There was little or no consensus in the scientific community on how things should be done and with new species being discovered all the time, the problem was getting steadily worse.

Step forward Carl Linné, who was born at Rashult, Sweden, in 1707, the son of a Lutheran curate. He is usually known by the Latinised version of his name, Carolus Linnaeus. It was expected that young Carl would follow his father into the Church, but he showed little enthusiasm for this proposed choice of career and it is said his despairing father apprenticed him to a local shoemaker before he was eventually sent to study medicine at the University of Lund in 1727. A year later, he transferred to Uppsala. However his real interest lay in Botany (the study of plants) and during the course of his studies he became convinced that flowering plants could be classified on the basis of their sexual organs – the male stamens (pollinating) and female pistils (pollen receptor).

In 1732 he led an expedition to Lapland, where he discovered around a hundred new plant species, before completing his medical studies in the Netherlands and Belgium. It was during this time that he published the first edition of Systema Naturae, the work for which he is largely remembered, in which he adopted Gaspard Bauhin’s binomial nomenclature, which to date had not gained popularity. Unwieldy names such as physalis amno ramosissime ramis angulosis glabris foliis dentoserratis were still the norm, but under Bauhin’s system this became the rather less wordy Physalis angulata.

This work also put forward Linnaeus’ taxonomic scheme for the natural world. The word taxonomy means “hierarchical classification” and it can be used as either a noun or an adjective. A taxonomy (noun) is a tree structure of classifications for any given set of objects with a single classification at the top, known as the root node, which applies to all objects. A taxon (plural taxa) is any item within such a scheme and all objects within a particular taxon will be united by one or more defining features.

For example, a taxonomic scheme for cars has “car” as the root node (all objects in the scheme are cars), followed by manufacturer, model, type, engine size and colour. Each of these sub-categories is known as a division. An example of a car classified in the scheme is Car>Ford>Mondeo>Estate>2.3 Litre>Metallic silver. An example of a taxon is “Ford”; all cars within it sharing the defining feature of having been manufactured by the Ford Motor Company.
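The scheme just described maps naturally onto a nested data structure. The following Python sketch (using the article’s own illustrative car example; the lookup function is a hypothetical illustration, not part of any real taxonomy software) represents the taxonomy as a tree and checks a classification path from the root node down through the divisions:

```python
# The car taxonomy as a nested dict: each key is a taxon, each value the
# divisions below it. Only the article's single example path is populated.
taxonomy = {
    "Car": {
        "Ford": {
            "Mondeo": {
                "Estate": {"2.3 Litre": {"Metallic silver": {}}},
            },
        },
    },
}

def classify(tree, path):
    """Return True if the given root-to-leaf path exists in the taxonomy."""
    node = tree
    for taxon in path:
        if taxon not in node:
            return False
        node = node[taxon]
    return True

path = ["Car", "Ford", "Mondeo", "Estate", "2.3 Litre", "Metallic silver"]
print(classify(taxonomy, path))  # True
```

The essential point the sketch captures is that a classification is a single path from the root node down through successive divisions, and a taxon is simply a subtree whose members all share its defining feature.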

The taxonomy devised by Linnaeus, which he refined and expanded over ten editions of Systema Naturae, had six divisions. At the top, as in the car example, is the root node, which Linnaeus designated Imperium (Empire), of which all the natural world is a part. The divisions below this were Regnum (Kingdom), Classis (Class), Ordo (Order), Genus and Species.

The use of Latin in this and other learned texts is worth a brief digression. At the time few scientists spoke any contemporary language beyond their own native tongue, but most had studied the classics and so nearly all scientific works were published in Latin, including Sir Isaac Newton’s landmark Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) and Linnaeus’ own Systema Naturae. One notable exception was Galileo’s Dialogue Concerning the Two Chief World Systems, aimed at a wider audience and thus an early example of “popular science” (though it certainly wasn’t very popular with the Inquisition!).

Linnaeus recognised three kingdoms in his system, the Animal Kingdom, the Plant Kingdom and the Mineral Kingdom. Each kingdom was subdivided by Class, of which the animal kingdom had six: Mammalia (mammals), Aves (birds), Amphibia (amphibians), Pisces (fish), Insecta (insects) and Vermes (worms). The Mammalia are those animals that suckle their young. It is said that Linnaeus adopted this as the defining feature of the group because of his strongly-held view that all mothers should breast-feed their own babies. He was strongly opposed to the then-common practice of “wet nursing”, and in this respect he was very much in tune with modern thinking.

Each class was further subdivided by Order, with the mammals comprising eight such orders, including the Primates. Orders were subdivided into Genera, with each Genus containing one or more Species. The Primates comprised the Simia (monkeys, apes, etc) and Homo (man), the latter containing a single species, sapiens (though Linnaeus initially also included chimpanzees and gibbons).

The Linnaean system did not accord equal status to apparently equal divisions; thus the Mineral Kingdom was ranked below the Plant Kingdom; which in turn sat below the Animal Kingdom. Similarly the classes were assigned ranks with the mammals ranking the highest and the worms the lowest. Within the mammals the Primates received top billing, with Homo sapiens assigned to pole position therein.

This hierarchy within a hierarchy reflected Linnaeus’ belief that the system reflected a Divine Order of Creation, with Mankind standing at the top of the pile and indeed the term “primate” survives to this day as a legacy of that view. It should be remembered that the prevalent belief at the time of Linnaeus was that the Earth and all living things had been produced by God in their present forms in a single act. This view, now known as Creationism, wasn’t seriously challenged until the 19th Century.

Linnaeus’ system was an example of natural theology, which is the study of nature with a view to achieving a better understanding of the works of God. It was heavily relied on by the deists of that time. Deists believe that knowledge of God can be deduced from nature rather than having to be revealed directly by supernatural means. Deism was very popular in the 18th Century and its adherents included Voltaire, Thomas Jefferson and Benjamin Franklin.

Though some were already beginning to question Creationism, Linnaeus was not among them and he proclaimed that “God creates, Linnaeus arranges”. It has to be said that modesty wasn’t Linnaeus’ strongest point and he proposed that Princeps Botanicorum (Prince of Botany) be engraved on his tombstone. He was no doubt delighted with his elevation to the nobility in 1761, when he took the name Carl von Linné.

Linnaeus did have his critics and some objected to the bizarre sexual imagery he used when categorising plants. For example, “The flowers’ leaves…serve as bridal beds which the Creator has so gloriously arranged, adorned with such noble bed curtains, and perfumed with so many soft scents that the bridegroom with his bride might there celebrate their nuptials with so much the greater solemnity…”. The botanist Johann Siegesbeck denounced this “loathsome harlotry” but Linnaeus had his revenge and named a small and completely useless weed Siegesbeckia! In the event Linnaeus’ preoccupation with the sexual characteristics of plants gave poor results and was soon abandoned.

Nevertheless, Linnaeus’ classification system, as set out in the 10th edition of Systema Naturae, published in 1758, is still considered the foundation of modern taxonomy and has been modified only slightly.

Linnaeus continued his work until the early 1770s, when his health began to decline. He was afflicted by strokes, memory loss and general ill-health until his death in 1778. In his publications, Linnaeus provided a concise, usable survey of all the world’s then-known plants and animals, comprising about 7,700 species of plants and 4,400 species of animals. These works helped to establish and standardize the consistent binomial nomenclature for species, including our own.

We have long ago discarded the “loathsome harlotry” and the rank of Empire. Two new ranks have been added; Phylum lies between Kingdom and Class; and Family lies between Order and Genus, giving seven hierarchical ranks in all. In addition, prefixes such as sub-, super-, etc. are sometimes used to expand the system. (The optional divisions of Cohort (between Order and Class) and Tribe (between Genus and Family) are also sometimes encountered, but will not be used here). The Mineral Kingdom was soon abandoned but other kingdoms were added later, such as Fungi, Monera (bacteria) and Protista (single-celled organisms including the well-known (but actually quite rare) Amoeba) and most systems today employ at least six kingdoms.

On this revised picture, Mankind is classified as follows:

Kingdom: Animalia (animals)
Phylum: Chordata (possessing a stiffening rod or notochord)
Subphylum: Vertebrata (more specifically possessing a backbone)
Class: Mammalia (suckling their young)
Order: Primates (tarsiers, lemurs, monkeys, apes and humans)
Family: Hominidae (the Hominids, i.e. modern and extinct humans, the extinct australopithecines and, in some recent schemes, the great apes)
Genus: Homo
Species: sapiens

It should be noted that while we now regard all equivalent-level taxa as being equal, the updated scheme would work perfectly well if we had continued with Linnaeus’ view that some taxa were rather more equal in the eyes of God than others, and it is in no way at odds with the tenets of Creationism. The Linnaean taxonomy shows us where Man fits into the grand scheme of things, but it has nothing to tell us about how we got there. It was left for Charles Darwin to point the way.

© Christopher Seddon 2008

A Brief Guide to Evolution

Before Darwin

We have seen how Linnaeus laid the foundations of modern taxonomy, but he did not himself believe that species changed and was an adherent of the then-prevalent view of creationism, claiming that “God creates and Linnaeus arranges” (it has to be said that the self-proclaimed “Prince of Botany” was not the most modest of men!). Linnaeus died in 1778. At that time it was widely believed that the Earth was less than 6,000 years old, having been created in 4004 BC according to Archbishop Ussher, who put forward this date in 1650.

But the existence of extinct organisms in the fossil record represented a serious problem for creationism (about which creationists are still in denial – get over it!). Fossils had been known for centuries, and it was becoming clear that in many cases they represented life forms that no longer existed. William Smith (1769-1839), a canal engineer, observed that rocks of different ages preserve different assemblages of fossils, and that these succeed each other in a regular and determinable order. Rocks from different locations could therefore be correlated on the basis of their fossil content, a principle now known as the law of faunal succession. Unfortunately Smith was plagued by financial worries, even spending time in a debtors’ prison. Only towards the end of his life were his achievements recognised.

Georges Cuvier (1769-1832) studied extinct animals and proposed catastrophism, a modified form of creationism. On this view, extinctions were caused by periodic catastrophes, after which new species, created ex nihilo by God, took their place; though Cuvier’s suggestion that more than one catastrophe might have occurred was contrary to Christian doctrine. All species, past and present, nevertheless remained immutable and created by God. Cuvier rejected evolution because the transition of one highly complex form into another struck him as unlikely. The main problem for evolution was that if the Earth was only 6,000 years old, there would not be enough time for evolutionary changes to occur.

The French nobleman the Comte de Buffon (1707-88) suggested that planets were formed by comets colliding with the Sun and that the Earth was much older than 6,000 years. He calculated a value of 75,000 years from the cooling rate of iron – much to the annoyance of the Catholic Church. Fortunately the days of the Inquisition had passed; only Buffon’s books were burned! Buffon rejected Noah’s Flood; he noted that animals retain non-functional vestigial parts (suggesting that they evolved rather than were created); and, most significantly, he noted the similarities between humans and apes and speculated on a common origin for the two. Although his views were decidedly at odds with the religious orthodoxy of the time, Buffon maintained that he did believe in God. In this respect he was no different from Galileo, who remained a faithful Catholic.

Catastrophism was first challenged by James Hutton (1726-97), the Scottish geologist who first formulated the principles of uniformitarianism. He argued that geological processes do not change with time and have remained the same throughout Earth’s history. Changes in Earth’s geology have occurred gradually, driven by the planet’s hot interior, creating new rock. The changes are plutonic (driven by heat and volcanic action) in nature rather than diluvian (caused by floods). It was clear that the Earth must be far older than 6,000 years for these changes to have occurred.

Hutton’s Investigation of the Principles of Knowledge was published in 1794 and The Theory of the Earth the following year. In the latter work he advocated evolution and natural selection: “…if an organised body is not in the situation and circumstances best adapted to its sustenance and propagation, then, in conceiving an indefinite variety among the individuals of that species, we must be assured, that, on the one hand, those which depart most from the best adapted constitution, will be the most liable to perish, while, on the other hand, those organised bodies, which most approach to the best constitution for the present circumstances, will be best adapted to continue, in preserving themselves and multiplying the individuals of their race.” Unfortunately this work was so poorly written that not only was it largely ignored; it even hindered acceptance of Hutton’s geological theories, which did not gain general acceptance until the 1830s, when they were popularised by his fellow Scot Sir Charles Lyell (1797-1875). (The word “uniformitarianism” itself was coined by William Whewell in a review of Lyell’s work.) However, it is now accepted that the catastrophists were not entirely wrong, and events such as meteorite impacts, along with processes such as plate tectonics, have also shaped Earth’s history.

The best-known pre-Darwinian theory of evolution is that of Jean-Baptiste de Lamarck (1744-1829). Lamarck proposed that individuals adapt during their lifetimes and transmit the acquired traits to their offspring, who carry on where their parents left off, enabling evolution to advance. The classic example is the giraffe stretching its neck to reach leaves on high branches and passing on a longer neck to its offspring. Some characteristics are advanced by use; others fall into disuse and are discarded. Lamarck’s two laws were:

1. In every animal which has not passed the limit of its development, a more frequent and continuous use of any organ gradually strengthens, develops and enlarges that organ, and gives it a power proportional to the length of time it has been so used; while the permanent disuse of any organ imperceptibly weakens and deteriorates it, and progressively diminishes its functional capacity, until it finally disappears.

2. All the acquisitions or losses wrought by nature on individuals, through the influence of the environment in which their race has long been placed, and hence through the influence of the predominant use or permanent disuse of any organ; all these are preserved by reproduction to the new individuals which arise, provided that the acquired modifications are common to both sexes, or at least to the individuals which produce the young.

Lamarck was not the only proponent of this point of view, but it is now known as Lamarckism. There is little doubt that, confronted with the huge body of evidence assembled by Darwin, Lamarck would have abandoned his theory. However, the theory remained popular with Marxists, and its advocates continued to seek proof until well into the 20th Century. Most notable among these were Paul Kammerer (1880-1926), who committed suicide in the wake of the notorious “Midwife Toad” scandal, and Trofim Lysenko (1898-1976). With Stalin’s backing, Lysenko spearheaded an evil campaign against geneticists, sending many to their deaths in the gulags for pursuing “bourgeois pseudoscience”. Lamarckism continued to enjoy official backing in the USSR until after the fall of Khrushchev in 1964, when Lysenko was finally exposed as a charlatan.

Natural selection

Not until the middle of the 19th Century did Charles Darwin (1809-1882) and Alfred Russel Wallace (1823-1913) put forward a coherent theory of how evolution could work.

Darwin was appointed naturalist and gentleman companion to Captain Robert Fitzroy of the barque HMS Beagle, joining the ship on her second voyage, initially against his father’s wishes. Fitzroy, serving as a lieutenant in Beagle, had succeeded to the captaincy when her original skipper, Captain Pringle Stokes, committed suicide on the first voyage. Fitzroy was a Creationist and would later object strongly to Darwin’s theories. Darwin sailed round the world in Beagle between 1831 and 1836. He studied finches and giant tortoises on the Galapagos Islands – the different tortoises had originated from one type, but had adapted to life on different islands in different ways. These changes and developments in species were in accord with Lyell’s Principles of Geology. Darwin was also influenced by the work of the economist Thomas Malthus (1766-1834), whose 1798 Essay on the Principle of Population argued that populations are limited by the availability of food resources.

Darwin developed the theory of natural selection between 1844 and 1858. The same theory was being independently developed by Alfred Russel Wallace, and in 1858 papers by Darwin and Wallace were presented jointly to the Linnean Society of London. Wallace’s independent endorsement of Darwin’s work lent much weight to it. Happily there were none of the unseemly squabbles over priority that have bedevilled so many joint discoveries down the centuries, of which Newton and Leibniz’s spat over the calculus, and the strain placed on Anglo-French relations in the 1840s by John Couch Adams and Urbain Le Verrier’s independent prediction of the planet Neptune, are but two examples.

Darwin’s pivotal On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (usually referred to simply as The Origin of Species) was published in 1859 and promptly sold out. The book caused uproar, and in 1860 a famous debate was held at Oxford in which Thomas Huxley (grandfather of Aldous Huxley, author of Brave New World) was opposed by Bishop Samuel Wilberforce (son of the anti-slavery campaigner William Wilberforce), the clergy and Darwin’s erstwhile captain, Fitzroy of the Beagle. Darwin himself, by now in poor health from an illness possibly contracted during the Beagle voyage, did not attend, but Huxley defended his theories vigorously.

The theory of natural selection states that evolutionary mechanisms are based on four conditions: 1) organisms reproduce; 2) there is a mode of inheritance whereby parents transmit characteristics to offspring; 3) there is variation in the population; and 4) there is competition for limited resources. Some organisms in a population will be able to compete more effectively than others; these are the ones more likely to go on to reproduce and transmit their advantageous traits to their offspring, which in turn are more likely to reproduce themselves.
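The interplay of these four conditions can be illustrated with a toy simulation. The sketch below (in Python, using the invented trait values “fast” and “slow” and an arbitrary fitness advantage) is an illustration of the principle, not a realistic population model:

```python
import random

def generation(population, advantage=0.1):
    """One round of selection. All four conditions are present:
    reproduction (a new generation is produced), inheritance (each
    offspring copies its parent's trait), variation (two trait values)
    and competition (a fixed number of places in the next generation,
    which 'fast' individuals are slightly more likely to win)."""
    weights = [1 + advantage if ind == "fast" else 1 for ind in population]
    # Offspring are drawn in proportion to their parents' fitness.
    return random.choices(population, weights=weights, k=len(population))

pop = ["fast"] * 50 + ["slow"] * 50
for _ in range(200):
    pop = generation(pop)

# After many generations the advantageous trait usually dominates
# the population, and often reaches fixation.
print(pop.count("fast"), "of", len(pop))
```

Even a small advantage per generation compounds rapidly, which is why selection can produce large changes given enough time.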

Evolution is the consequence of natural selection – the two are not the same thing, as evolution could in principle proceed by other means. Natural selection is a mechanism of change in species and takes various forms depending on conditions.

If, for example, existing forms are favoured, then stabilising selection will maintain the status quo; conversely, if a new form is favoured, then directional selection will lead to evolutionary change. Divergent (or disruptive) selection occurs when two extremes are favoured in a population.

Adaptation is a key concept in evolutionary theory: the “goodness of fit between an organism and its environment”. An adaptive trait is one that helps an individual survive – for example an elephant’s trunk, which enables it to forage in trees as well as eat grass, or colour vision, which helps animals to identify particular fruits (bright, distinctive colour schemes being plants’ corresponding adaptations for getting themselves located).

Sexual selection, proposed by Darwin in his second work The Descent of Man and Selection in Relation to Sex (1871), refers to adaptations in an organism specifically for the needs of obtaining a mate. In birds, this often leads to males having brightly coloured plumage, which they show off to prospective mates in spectacular displays. In many mammal species, males fight for access to females, leading to larger size (sexual dimorphism) and enhanced fighting equipment, e.g. large antlers.

The Descent of Man also put forward the theory that Man was descended from apes. Darwin was characterised as “the monkey man” and caricatured as having a monkey’s body. But after his death in 1882, he was given a state funeral and is buried in Westminster Abbey near Sir Isaac Newton. A dubious BBC poll ranked Charles Darwin as the 4th greatest Brit of all time, behind Churchill, I.K. Brunel and (inevitably) Princess Diana, but ahead of Shakespeare, Newton and (thankfully) David Beckham!

The main problem with Darwin’s theory was that, by itself, it failed to provide a mechanism by which changes were transmitted from one generation to the next. Most believed that traits were “blended” in offspring rather than being particulate – the latter view being the one now known to be correct.

Mendelian inheritance

Ironically, at the very time Darwin was achieving world fame, the missing link in his theory was being discovered by an Augustinian friar (later abbot) named Gregor Mendel (1822-1884), whose work was practically ignored in his own lifetime. Between 1856 and 1863 Mendel studied the inheritance of traits in pea plants and showed that these followed particular laws, and in 1865 he presented the paper “Experiments in Plant Hybridization”, which showed that organisms have physical traits that correspond to invisible elements within the cell. These invisible elements, now called genes, exist in pairs. Mendel showed that only one member of each genetic pair is passed on to a given progeny via the gametes (sperm, ova, etc).

The set of genes possessed by an organism is known as its genotype. By contrast, a phenotype is a measurable characteristic of an organism, such as eye or hair colour, blood group, etc. (It is sometimes used as a synonym for “trait”, but strictly the phenotype is the value of the trait: if the trait is “eye colour”, then possible phenotypes are “grey”, “blue”, etc.) Mendel investigated how various phenotypes of peas were transmitted from generation to generation, and whether these transmissions were unchanged or altered when passed on. His studies were based on traits such as the shape of the seed, the colour of the pea, etc, beginning with a set of pure-breeding pea plants, i.e. plants whose offspring consistently showed the same traits as their parents. He performed monohybrid crosses, i.e. crosses between two strains of plants that differed in only one characteristic. The parents were denoted by P, while the offspring – the filial generation – was denoted by F1, the next generation F2, etc. He found that in the first generation of these crosses, all of the F1s were identical to one of the parents. The trait expressed in the offspring he called dominant; the unexpressed trait he called recessive (the Law of Dominance). He also observed that the sex of the parent was irrelevant to the dominant or recessive trait exhibited in the offspring (the Law of Parental Equivalence).

Mendel found that the phenotypes absent in the F1 generation reappeared in approximately a quarter of the F2 offspring. He could not predict which traits would be present in any one individual, but he deduced that there was a 3:1 ratio of dominant to recessive phenotypes in the F2 generation. In describing his results, Mendel used the term elementen for what he postulated to be hereditary “particles” transmitted unchanged between generations. Even when a trait is not expressed, he surmised, it is still held intact and its elementen passed on. These “particles” are now known as alleles. An allele that can be suppressed in a generation is called a recessive allele, while one that is consistently expressed is a dominant allele. An organism in which both alleles for a particular trait are the same is said to be homozygous; where they differ, it is heterozygous.

For example, consider alleles X and y, where X is dominant. (Note that the recessive allele is written in lower case.) A homozygous organism of genotype XX will be of phenotype X, and a heterozygous organism of genotype Xy will still have phenotype X. The recessive trait will only be expressed when the genotype is yy, i.e. when the organism receives the y-allele from both parents. In a cross between two heterozygous parents there is a 50% chance of receiving the y-allele from each parent, and hence only a 25% chance of receiving it from both, explaining the 3:1 ratio observed. The Law of Segregation states that each member of a pair of alleles maintains its own integrity, regardless of which is dominant. At reproduction, only one allele of the pair is transmitted, chosen entirely at random.
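The 3:1 ratio follows directly from the Law of Segregation, and can be checked with a short simulation; a minimal sketch in Python, reusing the X/y allele notation above:

```python
import random

def monohybrid_cross(n=100_000):
    """Simulate n offspring of an Xy x Xy cross (X dominant).

    Each parent passes on one allele chosen at random (the Law of
    Segregation); only yy offspring show the recessive phenotype.
    """
    dominant = recessive = 0
    for _ in range(n):
        offspring = random.choice("Xy") + random.choice("Xy")
        if "X" in offspring:
            dominant += 1    # XX, Xy or yX -> dominant phenotype
        else:
            recessive += 1   # yy -> recessive phenotype
    return dominant / recessive

print(round(monohybrid_cross(), 2))  # close to 3.0
```

With a large number of offspring the simulated ratio converges on 3:1, just as Mendel's F2 counts did.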

Mendel next performed a series of dihybrid crosses, i.e. crosses between strains identical except for two characteristics. He observed that each of the traits he was following sorted itself independently of the other. Mendel’s Law of Independent Assortment states that characteristics controlled by different genes will assort independently of all others: whether an organism is Aa or AA has nothing to do with whether it is Xy or yy.
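Independent assortment can be demonstrated the same way. The sketch below (hypothetical gene pairs A/a and B/b, upper case dominant) reproduces the classic 9:3:3:1 phenotype ratio of a dihybrid cross:

```python
import random
from collections import Counter

def dihybrid_cross(n=160_000):
    """Simulate an AaBb x AaBb cross. Because the two genes are
    assumed to assort independently, each allele is chosen at
    random for each gene separately."""
    counts = Counter()
    for _ in range(n):
        gene1 = random.choice("Aa") + random.choice("Aa")
        gene2 = random.choice("Bb") + random.choice("Bb")
        # The dominant phenotype shows whenever at least one
        # dominant (upper-case) allele is present.
        phenotype = ("A" if "A" in gene1 else "a") + \
                    ("B" if "B" in gene2 else "b")
        counts[phenotype] += 1
    return counts

counts = dihybrid_cross()
print(counts)  # roughly 9:3:3:1 for AB : Ab : aB : ab
```

The 9:3:3:1 ratio is simply the product of two independent 3:1 ratios, which is exactly what "independent assortment" means.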

Mendel’s experimental results have been criticised for being “suspiciously good”, and he seems to have been fortunate in that he selected traits that were each affected by just one gene. Otherwise the outcome of his crossings would have been too complex to have been understood at the time.

Population genetics

Mendel’s work remained virtually unknown until 1900, when it was independently rediscovered by Hugo de Vries, Carl Correns and Erich von Tschermak, and vigorously promoted in Europe by William Bateson, who coined the terms “genetics” and “allele” (the word “gene” was coined soon afterwards by Wilhelm Johannsen). The theory was doubted by many because it suggested that heredity was discontinuous, in contrast to the continuous variation actually observed. R.A. Fisher and others used statistical methods to show that if multiple genes were involved in individual traits, they could account for the variety observed in nature. This work forms the basis of modern population genetics.

The discovery of DNA

By the 1930s, it was recognised that genetic variation in populations arises by chance through mutation, leading to species change. Chromosomes had been known since 1842, but their role in biological inheritance was not discovered until 1902, when Theodor Boveri (1862-1915) and Walter Sutton (1877-1916) independently showed a connection. The Boveri-Sutton Chromosome Theory, as it became known, remained controversial until 1915 when the initially sceptical Thomas Hunt Morgan (1866-1945) carried out studies on the eye colours of Drosophila melanogaster (the fruit fly) which confirmed the theory (and has since made these insects virtually synonymous with genetic studies).

The role of DNA as the agent of variation and heredity was not demonstrated until 1944, by Oswald Theodore Avery (1877-1955), Colin MacLeod (1909-1972) and Maclyn McCarty (1911-2005). The double-helix structure of DNA was elucidated in 1953 by Francis Crick (1916-2004) and James Watson (b 1928) at Cambridge, building on X-ray diffraction work by Maurice Wilkins (1916-2004) and Rosalind Franklin (1920-1958) at King’s College London. The DNA replication mechanism was confirmed in 1958. Crick, Watson and Wilkins received the Nobel Prize for Medicine in 1962. Franklin, who had died in 1958, missed out (Nobel Prizes are not normally awarded posthumously), but her substantial contribution to the discovery is commemorated by the Royal Society’s Rosalind Franklin Award, established in 2003.

How DNA works

With these discoveries, the picture was now complete, and it could now be seen how the genome is built up at a molecular level and how it is responsible for both variation and inheritance which are – as we have seen – fundamental to natural selection.

The genome of an organism contains the whole hereditary information of that organism and comprises the complete DNA sequence of one set of chromosomes. It is often thought of as a blueprint for the organism, but it is better thought of as a set of digital instructions that completely specify the organism.

The fundamental building blocks of life are a group of molecules known as the amino acids. An amino acid is any molecule containing both amino (-NH2) and carboxylic acid (-COOH) functional groups. In an alpha-amino acid, both groups are attached to the same carbon atom. Amino acids are the basic structural building blocks of proteins, the complex organic materials that are essential to the structure of all living organisms. Amino acids form small polymer chains called peptides or larger ones called polypeptides, from which proteins are formed; the distinction between peptides and proteins is simply that the former are short and the latter are long. Some twenty amino acids are proteinogenic, i.e. they occur in proteins and are coded for in the genetic code. They are given one- and three-letter abbreviations, e.g. A and Ala for Alanine. Not all amino acids can be synthesised by a particular organism; those that cannot, and must therefore be obtained from the diet, are known as essential amino acids. An amino acid residue is what is left of an amino acid once a molecule of water has been lost (an H+ from the nitrogenous side and an OH- from the carboxylic side) in the formation of a peptide bond.

Proteins are created by the polymerisation of amino acids via peptide bonds, in a complex process called translation that occurs in living cells. The blueprint – or, to take a better analogy, the recipe or computer program – for each protein used by an organism is held in its genome. The genome consists of nucleic acid, a complex macromolecule composed of nucleotide chains that convey genetic information. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). For nearly all organisms, the genome is made of DNA, which usually occurs as a double helix.

Nucleotides comprise a heterocyclic base (an aromatic ring containing at least one non-carbon atom, such as sulphur or nitrogen, in which the nitrogen atom’s lone pair is not part of the aromatic system); a sugar; and one or more phosphate groups. In the most common nucleotides the sugar is a pentose – either deoxyribose (in DNA) or ribose (in RNA) – and the base is a derivative of purine or pyrimidine. In nucleic acids the five most important bases are Adenine (A), Guanine (G), Thymine (T), Cytosine (C) and Uracil (U). A and G are purine derivatives and are large, double-ringed molecules; T, C and U are pyrimidine derivatives and are smaller, single-ringed molecules. T occurs only in DNA; U replaces T in RNA. These five bases are known as nucleobases.

In nucleic acids, nucleotides pair up by hydrogen bonding in particular combinations known as base pairs. Purines pair only with pyrimidines: purine–purine pairing does not occur because the two large molecules will not fit within the helix, while pyrimidine–pyrimidine pairing does not occur because the two small molecules are too far apart for hydrogen bonds to form. G pairs only with C, and A pairs only with T (in DNA) or U (in RNA). One might also expect GT and AC pairings, but these do not occur because the hydrogen donor and acceptor patterns do not match. Thus one can always construct a complementary strand for any strand of nucleotides.


Thus the strand ATCGAT, for example, pairs with the complementary strand TAGCTA; such a nucleotide sequence would normally be written simply as ATCGAT. Any succession of more than four nucleotides is liable to be called a sequence.
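Because each base has exactly one partner, constructing the complementary strand is purely mechanical; a minimal sketch in Python:

```python
# Watson-Crick pairing in DNA: A <-> T and C <-> G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary DNA strand, base by base."""
    return "".join(PAIRS[base] for base in strand)

print(complement("ATCGAT"))  # -> TAGCTA
```

Note that complementing twice returns the original strand, which is why either strand of the double helix carries the full information.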

DNA encodes the sequence of amino acid residues in proteins using the genetic code, a set of rules that maps DNA sequences to proteins. The genome is inscribed in one or more DNA molecules. Each functional portion is known as a gene, though there are a number of definitions of what constitutes a functional portion, of which the cistron is one of the most common. The gene sequence is composed of tri-nucleotide units called codons, each coding for a single amino acid. There are 4 × 4 × 4 = 64 possible codons but only 20 amino acids, so most amino acids are coded for by more than one codon. There are also “start” and “stop” codons to define the beginning and end points for translation of a protein sequence.
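The arithmetic of the code, its redundancy, and the role of the stop codons can be sketched in a few lines of Python. The codon table below is a deliberately tiny fragment of the standard genetic code (written in the DNA alphabet), not the full 64-entry table:

```python
from itertools import product

# All tri-nucleotide codons over the four DNA bases: 4 x 4 x 4 = 64.
codons = ["".join(p) for p in product("ACGT", repeat=3)]
print(len(codons))  # 64

# A small fragment of the standard genetic code, enough to show
# redundancy (several codons per amino acid) and the stop codons.
CODE = {
    "ATG": "Met",                  # methionine; also the usual "start" codon
    "TTT": "Phe", "TTC": "Phe",    # two codons -> one amino acid
    "AAA": "Lys", "AAG": "Lys",
    "TAA": None, "TAG": None, "TGA": None,   # stop codons
}

def translate(seq):
    """Read codons three bases at a time, halting at the first stop."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        residue = CODE[seq[i:i + 3]]
        if residue is None:        # stop codon: translation ends here
            break
        protein.append(residue)
    return protein

print(translate("ATGTTTAAGTAAATG"))  # ['Met', 'Phe', 'Lys']
```

The trailing ATG after the stop codon is never translated, illustrating how the stop codon defines the end point of the protein.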

In the first phase of protein synthesis, a gene is transcribed by an enzyme known as RNA polymerase into a complementary molecule of messenger RNA (mRNA). (Enzymes are proteins that catalyse chemical reactions.) In eukaryotic cells (nucleated cells – i.e. animals, plants, fungi and protists) the initially-transcribed mRNA is only a precursor, often referred to as pre-mRNA. The pre-mRNA is composed of coding sequences known as exons separated by non-coding sequences known as introns. The latter must be removed and the exons joined to produce mature mRNA (often simply referred to as mRNA), in a process known as splicing. Introns sometimes contain “old code” – sections of a gene that were probably once translated into protein but are now discarded. Not all intron sequences are junk DNA, however; some assist the splicing process. In prokaryotes (non-nucleated organisms – i.e. bacteria and archaea), this initial processing of the mRNA is not required.

The second phase of protein synthesis is known as translation. In eukaryotes, the mature mRNA is “read” by ribosomes, organelles containing ribosomal RNA (rRNA) and proteins; they are the “factories” where amino acids are assembled into proteins. Transfer RNAs (tRNAs) are small non-coding RNA chains that transport amino acids to the ribosome. Each tRNA has a site for amino acid attachment and a site called an anticodon, an RNA triplet complementary to the mRNA triplet that codes for its cargo amino acid. Aminoacyl tRNA synthetase (an enzyme) catalyses the bonding between specific tRNAs and the amino acids that their anticodon sequences call for; the product of this reaction is an aminoacyl-tRNA molecule. This aminoacyl-tRNA travels into the ribosome, where mRNA codons are matched through complementary base pairing to specific tRNA anticodons, and the amino acids that the tRNAs carry are used to assemble the protein. Its task completed, the mRNA is broken down into its component nucleotides.

Prokaryotes have no nucleus, so mRNA can be translated while it is still being transcribed. The translation is said to be polyribosomal when there is more than one active ribosome. In this case, the collection of ribosomes working at the same time is referred to as a polysome.

In many species, only a small fraction of the total sequence of the genome appears to encode protein. For example, only about 1.5% of the human genome consists of protein-coding exons. Some DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few (if any) protein-coding genes, but are important for the function and stability of chromosomes. Some genes are RNA genes, coding for rRNA and tRNA, etc. Junk DNA represents sequences that do not yet appear to contain genes or to have a function.

The DNA which carries genetic information in cells (as opposed to mitochondrial DNA, etc) is normally packaged in the form of one or more large macromolecules called chromosomes. A chromosome is a very long, continuous piece of DNA (a single DNA molecule), which contains many genes, regulatory elements and other intervening nucleotide sequences. In the chromosomes of eukaryotes, the uncondensed DNA exists in a quasi-ordered structure inside the nucleus, where it wraps around structural proteins called histones. This composite material is called chromatin.

Histones are the major constituent proteins of chromatin. They act as spools around which DNA winds and they play a role in gene regulation, which is the cellular control of the amount and timing of appearance of the functional product of a gene. Although a functional gene product may be an RNA or a protein, the majority of the known mechanisms regulate the expression of protein coding genes. Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. Gene regulation gives the cell control over structure and function, and is the basis for cellular differentiation – i.e. the large range of cell types found in complex organisms.

Ploidy indicates the number of copies of the basic number of chromosomes in a cell. The number of basic sets of chromosomes in an organism is called the monoploid number (x). The ploidy of cells can vary within an organism. In humans, most cells are diploid (containing one set of chromosomes from each parent), but sex cells (sperm and ova) are haploid. Some plant species are tetraploid (four sets of chromosomes). Any organism with more than two sets of chromosomes is said to be polyploid. A species’ normal number of chromosomes per cell is known as the euploid number, e.g. 46 for humans (2×23).

Haploid cells bear one copy of each chromosome. Most fungi, and a few algae are haploid organisms. Male bees, wasps and ants are also haploid. For organisms that only ever have one set of chromosomes, the term monoploid can be used interchangeably with haploid.

Plants and many algae switch between a haploid and a diploid or polyploid state, with one of the stages emphasised over the other; this is called alternation of generations. Most diploid organisms produce haploid sex cells that can combine to form a diploid zygote: animals, for example, are primarily diploid but produce haploid gametes. During meiosis, germ cell precursors have their number of chromosomes halved by randomly “choosing” one homologue (copy) of each chromosome, resulting in haploid germ cells (sperm and ova).

Diploid cells have two homologues of each chromosome (both sex- and non-sex-determining chromosomes), usually one from the mother and one from the father. Most somatic cells (body cells) of complex organisms are diploid.

A haplodiploid species is one in which one of the sexes has haploid cells and the other has diploid cells. Most commonly, the male is haploid and the female is diploid. In such species, the male develops from unfertilized eggs, while the female develops from fertilized eggs: the sperm provides a second set of chromosomes when it fertilizes the egg. Thus males have no father. Haplodiploidy is found in many species of insects from the order Hymenoptera, particularly ants, bees, and wasps.

Cell division is the process by which a cell divides into two daughter cells. Cell division allows an organism to grow, renew and repair itself. Cell division is of course also vital for reproduction. For simple unicellular organisms such as the Amoeba, one cell division reproduces an entire organism. Cell division can also create progeny from multicellular organisms, such as plants that grow from cuttings. Finally, cell division enables sexually reproducing organisms to develop from the one-celled zygote, which itself was produced by cell division from gametes.

Before division can occur, the genomic information stored in a cell’s chromosomes must be replicated, and the duplicated genome separated cleanly between cells. Division in prokaryotic cells involves cytokinesis only. As previously explained, prokaryotic cells are simple in structure: they contain non-membranous organelles, lack a cell nucleus, and have a simple genome of only one circular chromosome of limited size. Prokaryotic cell division, a process known as binary fission, is therefore straightforward. The chromosome is duplicated prior to division, the two copies attach to opposing sides of the cell membrane, and cytokinesis – the physical separation of the cell into two – follows immediately.

Division in somatic eukaryotic cells involves mitosis followed by cytokinesis. Eukaryotic cells are complex: they have many membrane-bound organelles devoted to specialised tasks, a well-defined nucleus with a selectively permeable membrane, and a large number of chromosomes. Cell division in somatic (i.e. non-germ) eukaryotic cells is therefore more complex than in prokaryotic cells. It is accomplished by a multi-step process: mitosis, the division of the nucleus, separating the duplicated genome into two sets identical to the parent’s; followed by cytokinesis, the division of the cytoplasm, separating the organelles and other cellular components.

Division in eukaryotic germ cells involves meiosis, the process that transforms one diploid cell into four haploid cells, redistributing the diploid cell’s genome. Meiosis forms the basis of sexual reproduction and can only occur in eukaryotes. In meiosis, the diploid cell’s genome is replicated once and separated twice, producing four haploid cells, each containing half of the original cell’s chromosomes. These haploid cells can then fuse with haploid cells of the opposite sex at fertilisation to form a diploid cell again. The cyclical process of separation by meiosis and genetic recombination through fertilisation is called the life cycle; the result is that offspring carry a slightly different genome from either parent. Meiosis uses many biochemical processes similar to those used in mitosis in order to distribute chromosomes among the resulting cells.

Genetic recombination is the process by which the combinations of alleles observed at different loci in two parental individuals become shuffled in offspring individuals. Such shuffling can be the result of inter-chromosomal recombination (independent assortment) and intra-chromosomal recombination (crossing over). Recombination only shuffles already existing genetic variation and does not create new variation at the involved loci. Since the chromosomes separate independently of each other, the gametes can end up with any combination of paternal or maternal chromosomes. In fact, any of the possible combinations of gametes formed from maternal and paternal chromosomes will occur with equal frequency. The number of possible combinations for human cells, with 23 chromosomes, is 2 to the power of 23, or approximately 8.4 million. The gametes will always end up with the standard 23 chromosomes (barring errors), but the origin of any particular one will be randomly selected from paternal or maternal chromosomes.
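The arithmetic of independent assortment is easy to check; a minimal sketch:

```python
# Independent assortment: each of the 23 human chromosome pairs contributes
# either its maternal or its paternal copy to a gamete, independently of the
# others, so there are 2^23 possible combinations.
chromosome_pairs = 23
combinations = 2 ** chromosome_pairs

print(combinations)  # 8388608, i.e. approximately 8.4 million
```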

The other mechanism for genetic recombination is crossing over. This occurs when two chromosomes, normally two homologous instances of the same chromosome, break and then reconnect, each to the other’s end piece. If they break at the same place or locus in the sequence of base pairs – which is the normal outcome – the result is an exchange of genes.

An allele is any one of a number of viable DNA codings of the same gene (sometimes the term refers to a non-gene sequence) occupying a given locus (position) on a chromosome. An individual’s genotype for that gene will be the set of alleles it happens to possess. For example, in a diploid organism, two alleles make up the individual’s genotype.

Organisms that are diploid such as humans have paired homologous chromosomes in their somatic cells, and these contain two copies of each gene. An organism in which the two copies of the gene are identical — that is, have the same allele — is said to be homozygous for that gene. An organism which has two different alleles of the gene is said to be heterozygous. Phenotypes associated with a certain allele can sometimes be dominant or recessive, but often they are neither. A dominant phenotype will be expressed when only one allele of its associated type is present, whereas a recessive phenotype will only be expressed when both alleles are of its associated type. This is Mendelian inheritance at a molecular level.
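Mendelian inheritance at a single locus can be illustrated by enumerating the offspring of a cross between two heterozygotes. The `punnett` helper below is a hypothetical illustration of this bookkeeping, not a standard library function:

```python
from itertools import product

def punnett(parent1, parent2):
    """Enumerate the four equally likely offspring genotypes of a cross."""
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

# Cross two heterozygotes (Aa x Aa); "A" is the dominant allele, "a" the recessive.
offspring = punnett("Aa", "Aa")
dominant = sum(1 for g in offspring if "A" in g)   # at least one dominant allele
recessive = sum(1 for g in offspring if g == "aa")  # both alleles recessive

print(offspring)            # ['AA', 'Aa', 'Aa', 'aa']
print(dominant, recessive)  # 3 1 -- the classic 3:1 phenotype ratio
```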

However, there are exceptions to the way heterozygotes express themselves in the phenotype. One exception is incomplete dominance (sometimes called blending inheritance) when alleles blend their traits in the phenotype. An example of this would be seen if, when crossing Antirrhinums — flowers with incompletely dominant “red” and “white” alleles for petal colour — the resulting offspring had pink petals. Another exception is co-dominance, where both alleles are active and both traits are expressed at the same time; for example, both red and white petals in the same bloom or red and white flowers on the same plant. Co-dominance is also apparent in human blood types. A person with one “A” blood type allele and one “B” blood type allele would have a blood type of “AB”.

Recombination shuffles existing variety, but does not add to it. New variety comes from genetic mutation. Mutations are changes to the genetic material of an organism. Mutations can be caused by copying errors in the genetic material during cell division and by exposure to radiation, chemicals, or viruses. In multicellular organisms, mutations can be subdivided into germline mutations, which can be passed on to descendants, and somatic mutations, which cannot be transmitted to descendants in animals (though plants can sometimes transmit somatic mutations to their descendants). Mutations are considered the driving force of evolution: less favourable or deleterious mutations are removed from the gene pool by natural selection, while more favourable ones tend to accumulate. Neutral mutations are defined as those that are neither favourable nor unfavourable.

It will be apparent from the above how both variety and inheritance of variety arise at the molecular level.

The so-called central dogma of molecular biology arises from Francis Crick’s statement in 1958 that “Genetic information flows in one direction only from DNA to RNA to protein, and never in reverse.” It follows from this that:

1. Genes determine characters in a straightforward, additive way: one gene-one protein, and by implication, one character. Environmental influence, if any, can be neatly separated from the genetic.

2. Genes and genomes are stable, and except for rare, random mutations, are passed on unchanged to the next generation.

3. Genes and genomes cannot be changed directly in response to the environment.

4. Acquired characters are not inherited.

These assumptions have been challenged and do not hold under all conditions; horizontal gene transfer (for example, of haemoglobin genes in leguminous plants) is one exception.

Modes of evolutionary change

Put together, natural selection, population genetics and molecular biology form the basis of neo-Darwinism, or the Modern Evolutionary Synthesis. The theory encompasses three main tenets:

1. Evolution proceeds in a gradual manner, with the accumulation of small changes in a population over long periods of time, due to changes in frequencies of particular alleles between one generation and another (microevolution).

2. These changes result from natural selection, with differential reproductive success founded on favourable traits.

3. These processes explain not only small-scale changes within species, but also larger-scale processes leading to new species (macroevolution).

On the neo-Darwinian picture, macroevolution is seen simply as the cumulative effects of microevolution.

However the extent and source of variation at the genetic level remained a bone of contention for evolutionary theorists until the mid-1960s. One school of thought favoured little variation, with most mutations being deleterious and selected against; the other school favoured extensive variation, with many mutations offering advantages for survival in different environmental circumstances. Techniques such as gel electrophoresis settled the argument in favour of the second school: genetic variation turned out to be extensive. By the 1970s the debate had shifted to selectionism versus neutralism. The selectionists view genetic variation as the product of natural selection, which selects favourable new variants. The neutralists, on the other hand, contend that the great majority of variants are selectively neutral and thus invisible to the forces of natural selection. It is now generally accepted that a significant proportion of variation at the genetic level is neutral.

Consequently certain traits may become common or may even come to predominate in a population by a process known as genetic drift. This is the random change in the frequencies of neutral alleles over many generations, which may lead to some becoming common and others dying out. Genetic drift, therefore, tends to reduce genetic diversity over time, though for the effect to be significant, a population must be small. To explain by analogy: a group of ten people could each throw a die and all fail to get a six with reasonable probability (5/6 to the tenth power, about 0.16), but the probability of one hundred people all failing to get a six is far smaller (5/6 to the hundredth power, about 1.2x10^-8). There are two ways in which small isolated populations may arise. One is the population bottleneck, in which the bulk of a population is killed off; the other is the founder effect, which occurs when a small number of individuals carrying a subset of the original population’s genetic diversity move into a new habitat and establish a new population there. Both these scenarios could lead to a trait that confers no selective advantage coming to predominate in a population. More controversially, they could lead to genetic drift outweighing natural selection as the engine of evolutionary change.
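The dice analogy can be made exact. Using the single-roll probability of 5/6 of not throwing a six:

```python
# Probability that every member of a group fails to throw a six, one roll each.
# The single-roll probability of "no six" is 5/6.
p_no_six = 5 / 6

p_ten = p_no_six ** 10        # ten people: roughly 0.16 -- quite plausible
p_hundred = p_no_six ** 100   # a hundred people: roughly 1.2e-08 -- vanishingly rare

print(round(p_ten, 3), p_hundred)
```

The same logic underlies drift: chance outcomes that are quite likely in a small population become vanishingly improbable in a large one.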

With the foregoing in mind, how do new species arise? There are two ways. Firstly a species changes with time until a point is reached where its members are sufficiently different from the ancestral population to be considered a new species. This form of speciation is known as anagenesis and the species within the lineage are known as chronospecies. Secondly a lineage can split into two different species. This is known as cladogenesis, and usually happens when a population of members of the species becomes isolated.

There are several such modes of speciation, mostly based on the degree of geographical isolation of the populations involved.

1. Allopatric speciation occurs when populations physically isolated by a barrier such as a mountain or river diverge to an extent such that if the barrier between the populations breaks down, individuals of the two populations can no longer interbreed.

2. Peripatric speciation occurs when a small population is isolated at the periphery of a species range. The difference between this and allopatric speciation is that the isolated population is small. Genetic drift comes into play, possibly outweighing natural selection (the founder effect).

3. Parapatric speciation occurs when a population expands its range into a new habitat where the environment favours a different form. The populations diverge as the descendants of those entering the new habitat adapt to the conditions there.

4. Sympatric speciation is where the speciating populations share the same territory. Sympatric speciation is controversial and has been widely rejected, but a number of models have been proposed to account for this mode of speciation. The most popular is disruptive speciation (Smith), which proposes that homozygous individuals may under certain conditions have a greater fitness than those with alleles heterozygous for a certain trait. Under the mechanism of natural selection, therefore, homozygosity would be favoured over heterozygosity, eventually leading to speciation. Rhagoletis pomonella (Apple maggot) may be currently undergoing sympatric speciation. The apple feeders seem to have emerged from hawthorn feeders, after apples were first introduced into North America. The apple feeders do not now normally feed on hawthorns, and the hawthorn feeders do not now normally feed on apples. This may be an early step towards the emergence of a new species.

5. Stasipatric speciation occurs in plants when they double or triple the number of chromosomes, resulting in polyploidy.

Rates of evolution

There are two opposing points of view regarding the rate at which evolutionary change proceeds. The traditional view, known as phyletic gradualism, holds that it occurs gradually, and that speciation is anagenetic. Niles Eldredge and Stephen Jay Gould (1972) criticized this viewpoint, arguing instead for stasis over long stretches of time, with speciation occurring only over relatively brief intervals, a model they called punctuated equilibrium. They pointed out that species arise by cladogenesis rather than by anagenesis. They also highlighted the absence of transitional forms in the fossil record (an old chestnut, often favoured by creationists).

Richard Dawkins has pointed out that no “gradualist” has ever argued for complete uniformity of rate of evolutionary change. Conversely, even if the “punctuation” events of Eldredge and Gould actually took 100,000 years, they would still show as discontinuities in the fossil record, even though on the scale of the lifetime of an organism, change would be immeasurably small, and invisible at any given time due to variation between individuals. If, for example, average height increased by 10 cm in 100,000 years, that would be 1/500th of a cm per generation – completely masked by the variation in height of individuals at any one time. It follows that the speciation event – be it anagenetic or cladogenetic – would be very slow in relation to the lifetime of individuals. Reproductive isolation would occur only over hundreds of generations.
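The per-generation figure is easy to verify, assuming (our assumption, not stated in the text) a human generation time of 20 years:

```python
# Dawkins-style rate arithmetic: 10 cm of average height change
# spread over 100,000 years.
total_change_cm = 10
years = 100_000
generation_years = 20  # assumed generation time

generations = years // generation_years            # 5,000 generations
per_generation_cm = total_change_cm / generations  # 0.002 cm = 1/500th of a cm

print(generations, per_generation_cm)  # 5000 0.002
```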

On the Dawkins view, then, there is no conflict between gradualism and punctuationism; the latter is no more than the former proceeding at varying tempo.

The Physical Context

Three factors are recognized as influencing the evolution of new species and the extinction of existing ones. The first is the existing properties of a lineage, which place constraints on how it can evolve. The second is the biotic context: how members of a particular species compete, in both inter-specific and intra-specific contexts, for food, space and other resources, and how they interact with other species in respect of predation, mutualist behaviours, etc. The third is the physical context, such as geography and climate, which determines the types of species that can thrive and the adaptations that are likely to be favoured.

The relative importance of the last two is a matter of ongoing debate. Darwin held that biotic factors predominate. He did not ignore environmental considerations, but he saw them as merely increasing competition. This view is central to the modern synthesis, which holds that natural selection is necessary and sufficient to drive evolutionary change. For example, adaptations by predators to better their chances of catching prey are the driving force for evolutionary change in the prey, in which adaptations for avoiding capture are selected for, thus maintaining the status quo in a kind of evolutionary “arms race”. This is sometimes referred to as the Red Queen effect (Van Valen, 1973), after the Red Queen in Through the Looking-Glass.

However in recent years, it has become clear that the history of life on Earth has been profoundly affected by geological change. The discovery of plate tectonics in the 1960s confirmed that continental landmasses are in a state of constant, albeit very slow, motion across the Earth’s surface. When continents meet, previously-isolated biota are brought together; conversely, as continents drift apart, previously united communities are separated. The first scenario introduces new elements of inter-specific competition; the second leads to the possible isolation of small groups. Both are likely to lead to evolutionary change.

There is also a school of thought that downplays natural selection and emphasises climate change as the primary cause of evolutionary change. Two ideas are associated with this view: firstly, the habitat theory, which states that species’ response to climate change represents the principal engine of evolutionary change, and that speciation and extinction events will be concentrated in times of climate change, as habitats change and fragment; secondly, the turnover-pulse hypothesis, which holds that this pattern of evolutionary change should be synchronous across all taxa, including hominins (Vrba 1996).

Units of selection

In the original theory of Charles Darwin, the unit of selection, i.e. the biological entity upon which the forces of natural selection acted, was the individual; for example, an animal that can run faster than others of its kind, and so avoid predators, will live longer and have more offspring. This simple picture does, however, fail to explain altruism, in which an individual acts in a manner that benefits others at its own expense.

One answer is that selection may operate at the level of the social group (group selection), as proposed by V.C. Wynne-Edwards (1962). On this picture, a group in which members behave altruistically towards one another might be more successful than one in which they do not. Kin selection, proposed by W.D. Hamilton (1964), instead frames reproductive success in terms of passing on one’s genes: by helping siblings and other relatives, one is doing this by proxy. This view largely superseded group selection. Robert Trivers (1971) extended the theory to non-kin in terms of doing a favour in the expectation of it being returned (“reciprocal altruism”), a behaviour common in species of large primates, including humans. Kin selection and reciprocal altruism act at the individual and not the group level, so group selection fell out of favour, though it has recently been revived.

By contrast, the gene-centric or “selfish gene” view popularised by Richard Dawkins states that selection acts at gene level, with genes that best promote the interests of their host organisms being selected for. On this view, adaptations are phenotypic effects that enable genes to be propagated. A “selfish” gene can be favoured for selection by favouring altruism among organisms containing it, even if individuals performing the altruism do so at the cost of their own chances of reproducing. To be successful, however, a gene must “recognise” kin and degrees of relatedness and favour greater altruism towards closer relatives (by, for example, favouring a sibling [where the chance of the same gene being present is 1/2] over a cousin [where it is only 1/8]). The green beard effect refers to such forms of genetic self-recognition, after Dawkins (1976) considered the possibility of a gene that promoted green beards and altruism to others possessing them.
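The relatedness arithmetic behind this kind of gene-level altruism is usually expressed as Hamilton’s rule, r × B > C (a standard formulation, though not spelled out in the text above); a minimal sketch:

```python
# Hamilton's rule: altruism is favoured when r * B > C, where r is the
# coefficient of relatedness, B the benefit to the recipient and C the cost
# to the altruist. The relatedness values are those quoted in the text.
RELATEDNESS = {"sibling": 1 / 2, "cousin": 1 / 8}

def altruism_favoured(relative, benefit, cost):
    """True if helping this relative propagates the gene on balance."""
    return RELATEDNESS[relative] * benefit > cost

# Paying a cost of 1 to confer a benefit of 4 on the recipient:
print(altruism_favoured("sibling", 4, 1))  # True  (0.5   * 4 > 1)
print(altruism_favoured("cousin", 4, 1))   # False (0.125 * 4 < 1)
```

This reproduces the text’s point that a gene should favour greater altruism towards closer relatives: the same act that pays off for a sibling fails to pay off for a cousin.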

© Christopher Seddon 2008

What is a Species?

The meaning of species

The concept of a species is of pivotal importance in biological science and in simple terms means “types of organisms”. Man has undoubtedly been familiar with the notion that there are “kinds” of animals since prehistoric times, and this view, known as the morphological species concept, dovetailed neatly with Platonic Essentialism, which states that everything existing in our world is derived from an “ideal” form or “essence”: “essence of cat”, “essence of dog”, etc, existing in a higher plane of reality than our imperfect world. For centuries it was a central dogma that species were God-given and immutable, a positive hindrance to understanding evolution, or even to accepting that it takes place. An advance on the morphological species concept was the biological species concept, formalised by John Ray (1628-1705), an English naturalist who proclaimed that “one species could never spring from the seed of another”. Again, though, this would certainly have been understood in prehistoric times, when agriculture was adopted in many parts of the world at the end of the last ice age, if not tens of millennia before.

The morphological and biological species concepts are today still the species concepts with which the general public are most familiar, but despite its central role, there is no consensus among biologists as to how “species” should be defined.

Species concepts

The following definitions are only some of those in current use, but they are probably the most widely-used by biologists:

Biological or Isolation species. Actually or potentially interbreeding populations which are reproductively isolated from other such populations (Mayr). Stated in another way, species are reproductively isolated groups of populations. The means by which they are isolated is known as a reproductive isolation mechanism or RIM. There are two types of RIM – pre-mating (which prevents the animals from mating) and post-mating (where the offspring are either not viable or infertile). Currently the most commonly-used concept, but it is only useful for sexually-reproducing organisms, and not useful when considering extinct organisms.

Specific mate recognition species. Members sharing a specific mate recognition system to ensure effective syngamy within populations (Patterson); focuses on pre-mating RIMs.

Phylogenetic species. The smallest diagnosable cluster of individual organisms (that is, a cluster of organisms identifiably distinct from other clusters) within which there is a parental pattern of ancestry and descent (Cracraft).

Evolutionary species. A lineage evolving separately from others and with its own unitary evolutionary role, tendencies and historical fate (Simpson).

All members [of a species] have the same number of chromosomes, and every location along the length of a chromosome has its exact opposite number in the same position along the length of the corresponding chromosome in all other members of the species (Dawkins, 1986). Only useful if genetic material is available and necessitates sequencing entire genomes to apply in practice!

Unfortunately none of the above concepts is wholly satisfactory. Dawkins’ is the most precise but has the problems noted above. Even the biological species definition has its envelope pushed when considering, for example, fertile hybrid big cats, bears and dolphins, all of which are very occasionally encountered in the wild. Possibly the definition of a RIM should be extended to split the post-mating RIM into a “strong” version (non-viable or infertile offspring) and a “weak” one (in which the offspring are fertile, but at a selective disadvantage due to hybrid behaviour, etc).

(The so-called “ring” species concept is now dubious with the classic circumpolar herring gull complex having recently been shown not to be a ring species (Liebers, de Knijff & Helbig, 2004)).


In scientific classification, a species is assigned a binomial or two-part name in Latin. The genus is listed first (with its leading letter capitalized), followed by a specific name (which should always be in lower case, even if the root is a proper noun, e.g. neanderthalensis). The binomial should be italicised. For example, humans belong to the genus Homo and to the species Homo sapiens. Closely-related species are grouped into the same genus, e.g. the (common) chimpanzee (Pan troglodytes) and the pygmy chimpanzee or bonobo (Pan paniscus). The genus may be abbreviated to its initial letter, e.g. E. coli for Escherichia coli.

When an unknown species of a known genus is being referred to, the abbreviation “sp.” in the singular or “spp.” in the plural may be used in place of the second part of the scientific name.

This binomial nomenclature, and most other purely formal aspects of the biological codes of nomenclature, were formalized in the 18th century by the Swedish naturalist Karl von Linné, usually known as Carolus Linnaeus, or simply Linnaeus (1707-1778), and as a result are called the “Linnaean system” (binomial nomenclature was actually introduced much earlier, by Gaspard Bauhin (1560-1624), but it failed to gain popularity).


Subspecies are segments of a species that differ morphologically to some degree from other such segments, but still meet other criteria of being a single species. The 75% rule may be used: 75% of individuals classified in one subspecies are distinguishable from all the members of other subspecies within the same species. Subspecies are geographic by nature and cannot by definition ever be sympatric, i.e. occupying the same geographical range.

In the past, numerous subspecies were recognised. Many have since been found to merely represent samples taken at different points on a cline (gradual change across a geographical range), and have largely been discarded. However some species can genuinely be divided into subspecies, such as the lion (Panthera leo), which shows considerable variation in mane appearance, size and distribution. Groves (1988) mentions four subspecies – the North African lion; the Asian lion; the Common African lion and the Cape lion.

If subspecies of a species are recognized, it is said to be polytypic; if there are none it is said to be monotypic. The scientific name of a subspecies is a trinomen, which is the binomen followed immediately by a subspecific name, e.g. Homo sapiens idaltu. If there is a need for subspecific taxa in animal nomenclature, a trinomen may be described for each subspecies. Note that if subspecies are recognised, there must be at least two. A species cannot have a single subspecies.

© Christopher Seddon 2008

Ice Age Star Maps?

In September 1940, four French teenagers made one of the greatest archaeological discoveries of the last century. Walking in woods in the Dordogne after a storm, they came across a small hole that had been made by the falling of a tree. Using their penknives to enlarge the hole, they cut away earth and undergrowth until at length they were able to slide feet first into the chamber below where, in the flickering glow of their kerosene lamp, they saw prehistoric paintings of horses, cattle and herds of deer.

Although the world at that time was rather preoccupied with other matters, the caves’ fame spread rapidly. Named the Lascaux Caves by the site’s landowners, they were opened as a tourist attraction and by the 1950s were attracting 1,200 visitors a day. Unfortunately by 1955 it became clear that the exhalations of all these visitors were promoting the growth of algae and causing significant damage to the paintings. After a number of unsuccessful attempts to ameliorate the problem, the caves were eventually taken over by the French Ministry of Cultural Affairs and closed to the public in 1963. Visitors today have to make do with Lascaux II, a facsimile of the original.

The paintings are now believed to be 16,500 years old – more than three times older than the Pyramids – and are associated with the Magdalenian culture of the Upper Palaeolithic. There are some 600 animals depicted, mainly horses, deer, aurochs (wild cattle) and bison – animals which at that time roamed wild on the fertile steppes of Ice Age Europe. These magnificent paintings are among the earliest examples of representational art and remain to this day among the finest. They have been compared to the works of Michelangelo and da Vinci and it is said that Pablo Picasso, on visiting the caves, proclaimed “We have discovered nothing [since]”. But what purpose did the unknown Palaeolithic genius or geniuses responsible for the caves have in mind? We don’t know and probably never will know for certain. Many believe the purpose was for magic rituals, possibly intended to bring good luck to hunters, but in 1990s a number of researchers suggested that the caves might contain some of the world’s oldest star maps…

The Great Hall of the Bulls is a vaulted rotunda containing a remarkable wrap-around mural, portraying aurochs, horses and stags. American astronomer F.L. Edge and Spanish researcher L. A. Congregado both noted that a pattern of six dots above the mural’s dominant animal, known to archaeologists as Great Bull No. 18, resembles the Pleiades, and also identified a V-shaped set of dots on its face with an open star cluster known as the Hyades and the bright star Aldebaran. These star formations are portions of the present-day constellation of Taurus (the Bull). If Edge and Congregado are correct, Man’s identification of this region of the sky is very old indeed.

In his 1995 pamphlet Aurochs in the Sky, Dancing with the Summer Moon, Edge, having identified Taurus in the Great Hall of the Bulls, goes on to consider the other animals portrayed in the mural. His scheme runs from west (sunset) to east (sunrise), which rather confusingly is contrary to the scheme adopted by archaeologists, who numbered the animals from left to right as seen on entering the rotunda.

Edge associates the constellations of Orion (the Hunter) and Gemini (the Twins) with a second aurochs, Bull No. 13, and that of Leo (the Lion) with a third, Bull No. 9. Uniquely for the mural, these two animals stand head-to-head. Other animals are associated with Canis Minor (the Lesser Dog), Virgo (the Virgin), Libra (the Scales), Scorpius (the Scorpion) and Sagittarius (the Archer), the last three being represented by the mural’s last animal (in Edge’s scheme), Figure No. 2, which is generally known as the “Unicorn”, despite having two horns. The Unicorn lies on the opposite side of the rotunda to the “Taurus” Bull; in the skies, Scorpius lies opposite Taurus.

If we accept Edge’s interpretations, then the mural as a whole seems to be a representation of just over half of the prominent star-patterns lying along or close to the ecliptic. Today the mural’s lead constellations, Taurus, Orion and Gemini, are prominent in the winter skies. However, due to the precession of the equinoxes (a “wobble” of the Earth’s axis which takes 25,700 years to complete a cycle), in the era of Lascaux these constellations would have been prominent in spring. As summer approached, they would have begun to move down towards the western horizon, and at the summer solstice, they would have stood low down on the horizon just after sunset.

But instead of the three present-day groupings of the Bull, the Hunter and the Twins, the Magdalenian people saw a pair of bulls, which at this time of the year, in Edge’s words, “seemed to walk on the horizon”. The other constellations depicted would have been visible between sunset and sunrise.

Edge believes that the stars portrayed in the mural were used in conjunction with the phases of the Moon to predict and keep track of the time of the summer solstice. He believes people in the Dordogne region were observing the phases of the Moon at least 15,000 years before the Lascaux Caves were painted, citing the work of archaeologist Alexander Marshack. In his 1972 book The Roots of Civilization, Marshack claims that bone tallies were being used to record moon-phases as long as 35,000 years ago. One such tally, 32,000 years old, was found in the Dordogne region and is engraved with marks that may show the Moon waxing and waning through two complete cycles (though as I have stated elsewhere this is not the only possible interpretation).

In the era of Lascaux, the Moon, during spring and summer, would have reached full when passing through the constellations portrayed in the mural. The full Moon occurring closest to the summer solstice would have occurred in the region of the sky represented by the space between the “Leo” Bull and the “Orion/Gemini” Bull which, recall, stand face-to-face; the full Moon would have appeared caught between the horns of the two animals.

The approach of the summer solstice would have been heralded by the waning crescent moon lining up with the horns of the Unicorn; this would occur twice – at seven weeks and three weeks prior to the solstice. Following the solstice, a waxing crescent would align with the horns of the “Taurus” Bull; again this would occur twice – at around three weeks and seven weeks after the solstice.

Notably, the horns of the animals on both sides of the mural face the same way as the crescent Moon would appear in the corresponding part of the sky. This is an early example of the symbolic association that has long existed between the crescent Moon and the horns of a bull, for example in ancient Egypt. The prediction method is not infallible, as in some years two full Moons could occur between the horns of the facing bulls. However, the “true” full Moon occurred when the “Taurus” Bull lay on the horizon at sunset and the Unicorn lay on the horizon just before sunrise. The “false” full Moon, occurring a month earlier, when the Unicorn was not yet in the pre-dawn skies, could thus be ignored.

Dr. Michael Rappenglueck, formerly of the University of Munich, believes the caves contain a second star map. His investigations focussed on the caves’ solitary representation of a human figure, which is located in a gallery known as the Shaft of the Dead Man. The highly stylised man has the head of a bird and a rather impressive phallus. He is apparently confronting a partially-eviscerated bison. Below him, a bird is perched atop a post. Rappenglueck believes that these paintings may be an accurate representation of what is now the “summer triangle”, whose stars lie in the constellations of Cygnus (the Swan), Aquila (the Eagle) and Lyra (the Lyre). The eyes of the bison, bird-man and bird represent the first magnitude stars Vega, Deneb and Altair.

Precession is once again the key to understanding the significance of this grouping. 16,500 years ago the Pole Star was not the familiar Polaris, but a moderately-conspicuous star known as Delta Cygni, which forms part of the Swan’s wing. Thus at that time, the grouping displayed would have circled the celestial North Pole, never setting, and it would have been particularly prominent in early spring.

Rappenglueck believes that the bird-man is a shaman. Shamanism refers to a variety of beliefs and practices involving the manipulation of invisible spirits and forces that are held to pervade the material world and affect the lives of the living. The spirit of a shaman is held to be able to ascend to the sky or descend into the underworld. Shamans enter altered states of consciousness, often using natural psychotropic drugs. Shamanism is very common in hunter-gatherer societies and has almost certainly been practiced since earliest times. The word “shaman” was borrowed by anthropologists from the Tungus tribesmen of Siberia and literally means “he who knows”, though shamans are not necessarily male. The bird-on-a-stick may be a spirit helper, guiding the shaman in his ascent to the sky. Rappenglueck believes that even weather vanes today may stem from this tradition.

Rappenglueck has also identified possible evidence of a lunar calendar in the Great Hall of the Bulls. Below one painting, of a deer, there is a row of 29 dots, one for each day of the lunar month. Elsewhere there is a row of thirteen dots to the right of an empty square. Does this represent the waxing of the Moon, with the square representing the New Moon, which cannot be seen? Rappenglueck believes so, though others argue that the dots are simply tallies of hunting kills.

We do not know how much, if indeed any, of the evidence so far presented for Palaeolithic astronomy is valid; some of the ideas that have been put forward are speculative to say the least, and I will admit to being sceptical. However, Edge, Rappenglueck and Congregado are highly respected by the scientific community; their respective methodologies are considered to be perfectly sound; and this intriguing theory is certainly worthy of serious consideration.

The last word must go to Michael Rappenglueck, quoted in October 2000: “They were aware of all the rhythms of nature. Their survival depended on them; they were a part of them.”

© Christopher Seddon 2008

Target Earth!

The name “Project Spaceguard” was deliberately borrowed from the 1973 science-fiction novel Rendezvous with Rama, by Sir Arthur C. Clarke, which describes a catastrophic meteorite impact in northern Italy in the year 2077, as a result of which a global early-warning system is set up to ensure that such a tragedy is never repeated.

Ironically, the real Project Spaceguard was given impetus by a cataclysmic event occurring not on Earth, but millions of miles away, on the planet Jupiter.

However, for the real beginnings of Project Spaceguard, we must go back just over a century, to 1905, when the mining engineer D. M. Barringer and physicist Benjamin C. Tilghman claimed that the Coon Butte Crater in Arizona was caused by a meteorite impact. The suggestion met with considerable scepticism, though it must be remembered that at the time, the idea that meteorites could cause cratering was highly controversial and the majority of scientists believed the craters on the Moon (the only example of large-scale cratering then known to science) had a volcanic origin, a view that did indeed have some support until quite recently. Not until the 1920s was Coon Butte Crater accepted as being meteoritic in origin, since when it has been known, not entirely correctly, as the Meteor Crater.

One of the reasons the impact theory of cratering was slow to win acceptance was that it was rather at odds with the then prevalent doctrine of Uniformitarianism, under which change, both geological and biological, occurs gradually over millions of years. The idea that large meteorites could wreak enormous changes was reminiscent of Catastrophism, which states that events on Earth such as geological change, the evolution of life and even human history have been shaped by upheavals of a violent or unusual nature.

Catastrophism was not a new idea, and until the early 19th Century it was generally accepted that stories like the Biblical Flood related to actual events. It should be remembered that at this time the Bible was interpreted more literally than is usual now, and it was widely believed that all life on this planet had been created exactly as described in the Book of Genesis. Such beliefs had largely died out after Charles Darwin put forward his Theory of Evolution and fossil evidence confirmed that life had evolved gradually over millions of years.

It was easy to be complacent at that time. The Arizona crater and others that came to light later were thousands and in some cases millions of years old; and the face of the Moon has not changed throughout recorded human history. Not for many decades would it be discovered that cratering exists on many other bodies in the Solar System. Yet the danger signs were there for those who cared to look.

On the morning of 30 June 1908, a fragment of cosmic debris entered Earth’s atmosphere somewhere over western China. Travelling eastwards on a shallow slanting trajectory, it trailed a series of loud explosions in its wake until, at 14 minutes and 28 seconds past seven o’clock local time, it exploded some 5-10 kilometres above the ground near the Tunguska River in Siberia, releasing energy that has been conservatively estimated at 10 megatons and could have been as much as 20 megatons. The airburst felled around 80 million trees over an area of 2,150 square kilometres and even now, a century after the blast, satellite imaging shows reduced tree cover in the area around ground zero. Curiously, no trace of the object has ever been found, which has led to speculation that it was a small comet rather than an asteroid (inevitably there have also been suggestions of a more exotic nature, such as black holes, antimatter and of course crashing UFOs). Fortunately, the region was largely uninhabited, and casualties were restricted to a dozen or so nomadic tribespeople who were slightly injured. But had the meteorite arrived 4 hours 52 minutes later, the city of St. Petersburg would have been totally destroyed, together with most of its inhabitants, who at that time included one Vladimir Ilyich Ulyanov, later known as Lenin. Just a few hours, and the history of the 20th Century might have been very different!
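The closing arithmetic here is just the Earth’s rotation: the strike line sweeps westward at 15 degrees of longitude per hour. A quick sketch (the coordinates are approximate values assumed for illustration, not taken from the text, and the simple longitude-only reckoning comes out a little under the quoted 4 hours 52 minutes, which also accounts for the trajectory’s slant):

```python
# Earth rotates 360 degrees in 24 hours = 15 degrees of longitude per hour,
# so the delay needed for St. Petersburg to rotate under the same trajectory
# is simply the longitude gap divided by 15.
TUNGUSKA_LON_E = 101.9       # approximate longitude of the Tunguska epicentre
ST_PETERSBURG_LON_E = 30.3   # approximate longitude of St. Petersburg

delay_hours = (TUNGUSKA_LON_E - ST_PETERSBURG_LON_E) / 15.0
print(f"{int(delay_hours)} h {round(delay_hours % 1 * 60)} min")  # 4 h 46 min
```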

In October 1937, Earth experienced an event we now know to be commonplace, but at the time caused some consternation when the news became public. The asteroid 1937 UB, later named Hermes, passed by at less than twice the distance of the Moon. Hermes was large enough to have caused a global disaster had it hit Earth. Hermes caused such a stir that it was given a name despite being lost after its close passage; it was not relocated until 2003 and is now known to comprise two 300-metre objects separated by just 1200 metres. It has also been calculated that it came even closer to Earth in 1942, but it was missed. At the time, of course, many astronomers would have been otherwise engaged.

In February 1947, a meteorite exploded just over two hundred miles from Vladivostok, detonating above the ground like the Tunguska object though in this case specimens from the fall were recovered. Although the explosive yield in this case was much less, it was still at least five times more powerful than the nuclear bombs dropped on Hiroshima and Nagasaki eighteen months earlier.

Both the Siberian events took place over sparsely populated regions, but then on 10 August 1972 came a decidedly close call, when an asteroid approximately 10 metres in diameter entered Earth’s atmosphere above southern Utah. Travelling due north, it passed over Salt Lake City, before making its closest approach to Earth at an altitude of around 53 kilometres above Montana, where sonic booms were heard. Still travelling above escape velocity, the body then drew away, exiting the atmosphere over Canada.

Had it come a tiny fraction closer, it would have impacted with multi-megaton effect in the densely populated region between Provo, Utah and Idaho Falls. In the prevalent tensions of that era, and before the threat of meteorite strikes was fully appreciated, such an explosion could easily have been mistaken for a nuclear attack and triggered World War III.

Scientists had in fact been warning of the danger since the war, but their warnings have been largely forgotten – few, for example, will have read the 1953 work Target Earth, by Allan Kelly and Frank Dachille, who advocated the use of rocket-powered “tug boats” to deflect incoming meteorites. However, in the 1980s the first concrete evidence of a meteoritic catastrophe on this planet began to emerge – albeit concerning events occurring millions of years before the dawn of mankind.

The sudden demise of the dinosaurs, 65 million years ago, had long been a mystery to science. Incidentally, it is worth pointing out a few common misconceptions about the dinosaurs. They were never contemporaries of man, despite suggestions to the contrary by one or two rather silly Hollywood motion pictures; they were not all large and ferocious, the majority being much smaller than a man; and finally, they certainly were not the stupid, blundering creatures of popular belief. They were the unchallenged rulers of the Earth for 150 million years. If mankind hopes to emulate that feat, there is a long way to go.

In June 1980, the physicist Luis Alvarez, his son Walter and a number of other collaborators published a paper which claimed that a large meteorite had struck the Earth 65 million years ago, resulting in the extinction of the dinosaurs. The evidence for this claim was based on studies of a layer of clay, half an inch thick, laid down between two layers of limestone that had been seen in rocks near the town of Gubbio, in northern Italy. The clay was clearly located at the so-called K-T Boundary that delimits the Cretaceous and Tertiary geological time periods. There was no element of doubt; the limestone below the clay contained Cretaceous fossils; that above contained fossils from the Tertiary. It was at this point in time that the dinosaurs had become extinct.

The Alvarez team had originally intended to find out how long it had taken the clay to be deposited, because sudden though the transition from Cretaceous to Tertiary was, nobody believed it could literally have happened overnight. The method chosen was to measure the amount of iridium in the clay.

Iridium is one of the so-called “Splendid Six” group of metals, which also includes platinum, and is considerably rarer than gold. It is very rare in the Earth’s crust, but relatively abundant in meteorites. There is nothing mysterious about this. Because of the great density of these metals, much of the terrestrial supply has sunk right down to the Earth’s core. However, meteorites are formed from much smaller parent bodies in which the iridium is more evenly distributed.

A constant trickle of iridium reaches Earth among the dust grains that rain down from outer space as micrometeorites enter the atmosphere. The rate of fall has been constant throughout geological time, the amount falling over, say, 100 years being the same now as it was 65 million years ago. This effect could be used to provide a “clock” to time how long it had taken the clay layer to form.

The Alvarez team had expected to find a small amount of iridium, consistent with, at most, a time scale of 10,000 years to deposit the clay, which is what one would normally expect for a layer of its thickness. Instead, they found iridium concentrations so high that, if the “iridium clock” model were correct, it would have taken four million years for the clay to form.
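The logic of the “iridium clock” is simple division: if iridium settles out of the sky at a constant rate, the time a layer took to form is its iridium content divided by that rate. The figures below are invented purely to illustrate the reasoning and are not the Alvarez team’s measurements:

```python
# Invented illustrative figures - not real measurements.
INFALL_RATE = 1.0               # iridium deposited per unit area per year (arbitrary units)
expected_iridium = 10_000.0     # what a ~10,000-year deposition time would predict
measured_iridium = 4_000_000.0  # the anomalously high level actually found

implied_years = measured_iridium / INFALL_RATE
enrichment = measured_iridium / expected_iridium

print(implied_years)  # 4000000.0 "years" - the nonsensical result
print(enrichment)     # 400.0 - the excess pointing to an external iridium source
```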

This result was clearly nonsensical. Something else must have produced the anomalously high iridium levels and the only possible causative agent was a large meteorite, which had smashed into the Earth, spreading billions of tons of fine dust around the world comprising pulverised rock and debris from the meteorite itself. This fine dust eventually settled out of the atmosphere to produce a uniform layer of iridium-enriched clay all over the world. The shroud of dust, encircling the globe, would have cut off the light of the Sun and plunged the Earth into darkness. Plant life would have died off, unable to photosynthesise, and temperatures would have fallen. It was this “cosmic winter” that had killed off the dinosaurs.

One consequence of this discovery was the realisation that a “nuclear winter” would follow even a limited nuclear exchange. This led to the signing of several major arms treaties by the end of the 1980s and for a time there were genuine hopes that the madness of nuclear weapons might finally be eliminated. Sadly, these hopes seem to have been misplaced, with many states scrambling to acquire weapons of mass destruction, including nuclear weapons.

Like all radical scientific theories, the impact theory of the dinosaur extinction was slow to gain acceptance, especially from the palaeontologists who felt their territory had been invaded by a bunch of physicists and geologists. Some geologists put forward a rival theory, claiming that a series of volcanic eruptions occurring in India at the same point in time had been responsible. They argued that the volcanoes, spewing forth material from the bowels of the Earth, could have produced the iridium anomaly.

Meanwhile, the search was on for the “smoking gun”, the crater left by the impact. In 1990, a huge buried crater at Chicxulub, in the Yucatan Peninsula in Mexico came to the attention of scientists. The crater had been formed in what had then been shallow water and consequently was covered in a limestone layer. It had been discovered by industrial geologists in the 1970s, but they had kept quiet about their discovery because of the possibility of oil in the region. Using radioactive argon dating techniques, the scientists determined that the crater had been formed 65 million years ago – the time at which the dinosaurs disappear from the fossil record.

In the light of this evidence, the impact theory has now become widely accepted. A giant meteorite impact killed the dinosaurs, and indeed 70 percent of all species then existing on Earth. But what was disaster for the 70 percent was good news for the rest, including the tiny shrew-like mammals, which were able to fill all the vacant niches and diversify into the vast range of modern mammals, including ourselves, that now inhabit the Earth.

Interesting though all this was, it had all happened rather too long ago for the threat of a recurrence to be taken seriously by the general public, and what finally moved the danger from cosmic impacts onto the public agenda was a timely demonstration of just what such an impact could do. Fortunately, Nature was kind enough to arrange for the demonstration to take place at a safe distance from the Earth!

In 1993, a team of comet watchers comprising Carolyn Shoemaker, her husband Eugene Shoemaker and David Levy, observing at the Mount Palomar Observatory, discovered a peculiar cometary object subsequently named Shoemaker-Levy 9 (it was in fact the team’s ninth discovery). Shoemaker-Levy 9 resembled a string of beads strung out along the same orbit and, once that orbit had been calculated, it was determined that the object was the remains of a single comet that had passed so close to Jupiter the previous year that it had not only been captured by the giant planet but torn apart by tidal forces during its close approach. Furthermore, it soon became clear that the cometary fragments were now on a collision course with Jupiter, with a series of impacts due to begin on 16 July 1994.

What followed is of course well-known. Although all the impacts occurred on the side of Jupiter not facing the Earth, the fireballs produced as the fragments rained down upon the giant planet billowed up to 2000 miles above the cloud-tops and were clearly observed by the Hubble Space Telescope. The impact sites, carried into view by Jupiter’s rapid rotation, showed that great dark scars had been produced. The dramatic photographs of the scars, stretched out across the face of the wounded planet, convinced even the politicians that the threat posed to Earth by such objects was very real, and even while the bombardment of Jupiter continued, the US House of Representatives passed a bill requiring NASA to submit to Congress a costed proposal to chart all objects in Earth-crossing orbits larger than one kilometre in diameter. There were similar political initiatives in Europe, Russia and Australia.

Meanwhile, Hollywood wasted no time in leaping on the bandwagon, and in the late 1990s “meteorite movies” almost became a genre in their own right, though for the most part they were little better than the dire (and misnamed) Meteor, released in 1979. A scene from one such movie shows the inundation of New York by an impact-induced tsunami; ironically (in the light of later events) only the twin towers of the World Trade Center survive.

To implement a scheme to monitor space for hazardous objects, it was proposed to set up a global network of eight purpose-built telescopes. Six would have an aperture of 100 inches, and would be used to search for near-Earth objects. The other two, one in each hemisphere, would have an aperture of 200 inches and would be used to search for faint comets beyond the orbit of Jupiter.

By the end of the century a hazard scale for potentially threatening near-Earth objects had been devised. Known as the Torino Scale (named for Torino [Turin], Italy, where it was first proposed), it assesses a threat from 0 (no possibility of collision) through to 10 (the end is nigh). The threat level is based on both the probability of a collision and the consequences of that collision; the latter is obviously a function of the size of the threatening object.
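As a toy illustration only (the real Torino Scale assigns its 0-10 values from a published probability-versus-energy chart, not from the invented formula below), a rating combining the two factors might look like this:

```python
import math

def toy_rating(probability: float, energy_megatons: float) -> int:
    """Toy 0-10 hazard rating: grows with both collision probability
    and impact energy. Thresholds are invented for illustration and
    do not reproduce the official Torino Scale boundaries."""
    if probability <= 0 or energy_megatons < 1:
        return 0  # no possibility of collision, or too small to matter
    p_score = max(0.0, 5 + math.log10(probability))  # 0 below p = 1e-5, 5 at p = 1
    if p_score == 0:
        return 0  # collision effectively impossible
    e_score = min(5.0, math.log10(energy_megatons))  # 1 Mt -> 0, 100,000 Mt -> 5
    return min(10, round(p_score + e_score))

print(toy_rating(0.0, 1_000_000))  # 0: no chance of collision
print(toy_rating(1.0, 100_000))    # 10: certain, globally catastrophic impact
```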

What action could be taken if an asteroid were determined to be on a collision course? Simply blowing it up with a nuclear device would do no good – the fragments would still hit Earth with disastrous consequences – but given sufficient lead time, the danger could still be averted by deflecting the threatening object. Two methods have been proposed. The first is to use the enormous velocity of such objects relative to the Earth to deflect them. By firing a large interceptor rocket into the path of one, the change of momentum so imparted would be sufficient to nudge it onto a new, harmless trajectory. With ten years’ warning, objects a mile in diameter could be so diverted. With the same warning time, much larger objects, up to 20 miles in diameter, could be diverted by exploding one or more nuclear devices a short distance away, so as to vaporise the surface of one hemisphere. In this case, the expanding gases would act as a rocket motor and push the object away from the direction of the blast. With a century to react, even objects the size of a small moon could be turned aside.
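The interceptor idea is simply conservation of momentum. The sketch below uses assumed figures throughout (interceptor mass, closing speed, asteroid density), and the straight-line drift it prints is a deliberately naive lower bound:

```python
import math

# Assumed illustrative figures.
ASTEROID_RADIUS_M = 800.0          # roughly a mile in diameter
DENSITY_KG_M3 = 3000.0             # typical rocky asteroid
INTERCEPTOR_MASS_KG = 10_000.0     # a large spacecraft
CLOSING_SPEED_M_S = 10_000.0       # 10 km/s relative velocity
LEAD_TIME_S = 10 * 365.25 * 86400  # ten years' warning

asteroid_mass = (4 / 3) * math.pi * ASTEROID_RADIUS_M**3 * DENSITY_KG_M3
delta_v = INTERCEPTOR_MASS_KG * CLOSING_SPEED_M_S / asteroid_mass  # momentum / mass
drift_km = delta_v * LEAD_TIME_S / 1000  # naive straight-line displacement

print(f"delta-v {delta_v:.1e} m/s, drift over ten years ~{drift_km:.0f} km")
```

A few kilometres of straight-line drift is tiny next to the Earth’s radius, which is why the real leverage comes from orbital mechanics: a change in speed alters the asteroid’s orbital period, so the miss distance compounds with every orbit completed before the predicted impact, and larger or multiple interceptors would be used in practice.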

Unfortunately, politicians being politicians, the initial enthusiasm for Project Spaceguard seems to have been lost: global warming seems to be the hot (pun intended) topic now. One sincerely hopes that the project does eventually come to fruition, because it is a matter of pure luck that no impact having global consequences has occurred during recorded history; and the respite can hardly be expected to continue forever. Inevitably, there has been speculation that such an impact did occur in late prehistoric time.

The idea isn’t new: suggestions that astronomical events have caused global catastrophe within the last ten thousand years have been the subject of speculation for decades, though most of it lies firmly in the realms of pseudo-science. The best known proponent of this view was the Belarus-born Immanuel Velikovsky, who had a number of very strange ideas, including the belief that Venus was a comet until just a few thousand years ago. In 1950, he published a book called Worlds in Collision in which he meticulously catalogued graphic accounts of global catastrophe as described in the Bible; the records of the Mayan, Aztec and Inca civilisations; and Greek and Nordic mythology. Velikovsky’s approach was to assume that these myths and legends were literally true, and he sought to interpret them as references to catastrophes caused by close encounters first with Venus and then Mars, occurring around 2000 BC and 800 BC. Worlds in Collision unleashed a storm of Salman Rushdie proportions in the scientific community, and there were even threats by some universities to boycott Velikovsky’s publishers unless the book was withdrawn. This ridiculous over-reaction only served to lend the book spurious credibility, with the result that it continues to find its way into pseudo-scientific speculations to the present day.

Velikovsky’s book is very readable and well-researched, and one wonders how many of his critics ever actually took the trouble to read it. That said, there is no doubt that the theory he puts forward is the purest nonsense and suggests a near-total disregard for the laws of physics. Had Venus genuinely been on a collision course for Earth, it would have taken rather more than a lightning bolt (as Velikovsky asserts) to avoid total annihilation; and had its influence halted the Earth’s spin, the effects would not have been confined to the walls of Jericho.

Worlds in Collision didn’t even represent the full extent of Velikovsky’s bizarre theories and among the speculations that never made it into print was the idea that Earth had once been a satellite of Saturn, but a nova-like disturbance there had shifted our planet into its current orbit, producing Noah’s flood in the process; Jupiter was to blame for the destruction of Sodom and Gomorrah; and Mercury was somehow mixed up in the Tower of Babel.

Stephen Jay Gould makes what is probably the fairest comment on Velikovsky. In an essay entitled Velikovsky in Collision, he stated that Velikovsky “is neither crank nor charlatan – although, to state my opinion and to quote one of my colleagues, he is at least gloriously wrong”. But is the idea of a global catastrophe in Neolithic or Bronze Age times something we can completely dismiss?

In 1982 the British astronomers Victor Clube and William Napier proposed that large comets can from time to time end up in short-period orbits and wreak havoc in the inner Solar System. Over time such a large object would break up, and Earth would experience not just impact events but global cooling arising from meteoroidal dust building up in the atmosphere. The theory met with considerable scepticism at first but was widely quoted in the spate of books on the “meteorite menace” that appeared in the second half of the 1990s. Speculations in these books included the suggestion that the collapse of Mycenaean civilisation towards the end of the Bronze Age had been caused by an impact, an idea that certainly does deserve to be taken very seriously. However, attempts to link more or less every myth and legend (including, of course, Atlantis) to meteorite impacts – and the suggestion by one author that Stonehenge was a Neolithic early-warning system, intended to look out for incoming meteorites – should serve as a reminder that there is a fine dividing line between informed speculation and pure hokum.

One possible reference to a prehistoric catastrophe is to be found in the Norse legend of Ragnarok, the end of the world, which tells of a world fire followed by the Fimbulvetr, a great winter lasting three years. This is startlingly suggestive of a meteoritic impact followed by a “cosmic winter” of the kind associated with the death of the dinosaurs.

But are the Norse legends, as compiled by Snorri Sturluson in the Prose Edda, based on original material? Many have commented on the similarity between the opening of the Sixth Seal in the Book of Revelation and the Norse account of Fenris-Wolf devouring the sun. It has been pointed out that the Prose Edda was compiled from a Christian standpoint. Could the Ragnarok legend not simply be a rehash of the Biblical account of St. John the Divine?

In the early Thirteenth Century, the ancient Norse legends were becoming rather frowned upon because of the rise of Christianity in the Scandinavian countries and Snorri Sturluson decided to record these wonderful and now all too often neglected tales for posterity. It is true that he was a Christian, but the considered opinion is that he did not embellish the Ragnarok legend with Biblical material and that it is largely unaltered from its original form, and of independent origin. So could the Ragnarok legend be an account of real events?

The last Ice Age ended around 10,000 years ago, but the thaw had set in some millennia previously. It was interrupted by an event known as the Younger Dryas, beginning 12,700 years ago and lasting for 1,300 years. It has recently been suggested that this might have been the result of a meteorite impact near the North American Great Lakes. The Norse legends are of course far more recent in origin – but it is possible that some elements have their roots in events millennia earlier, just as the universal Flood mythology probably has its origins in the rise in sea levels that accompanied the end of the last Ice Age. However, I have to say I am somewhat sceptical – I feel it is far more likely that the Younger Dryas was caused by freshwater running off from the melting North American ice sheets. This could have cut off the Gulf Stream, bringing a temporary halt to the warming.

If the Ragnarok legend does relate to actual events, possibly it describes a major volcanic eruption. This too could precipitate temporary global cooling, as happened after the eruption of Mount Tambora in 1815. One eruption that undoubtedly made its mark on prehistory is that of Thera in the Mediterranean around 1600 BC. This seems to have brought about the downfall of the original Minoan civilisation on Crete and its absorption by the mainland-based Mycenaeans.

While it is possible that a major meteorite impact gave rise to some of the Norse legends, the evidence is sketchy and the idea of a late prehistoric impact still largely speculative, unlike the K-T Boundary event, which few now dispute was caused by a meteorite. What is not speculation is that the next major impact will occur one day. It may be a hundred millennia away or just a few days. Let us hope that the latter is not the case, for it may be many years before a proper early warning system is in place.

It seems to be human nature to react to tragedy rather than try to prevent it from happening in the first place. Less than a century ago, major transatlantic liners sailed quite legally with sufficient lifeboats to save only a fraction of those aboard should disaster strike. Today it is quite unthinkable that a ship should sail without lifeboats for all. The story of the Titanic is indelibly etched on mankind’s collective psyche. Unfortunately, the needless loss of human life has continued unabated. For example, it took the deaths of nearly two hundred football fans in three separate disasters in the 1980s before something was done about the medieval conditions in which enthusiasts were expected to follow their teams.

Even a small meteorite strike on a major city would make all this pale into insignificance. The death toll would far exceed that of the terrorist attacks of 11 September 2001, possibly running to tens or even hundreds of thousands. Even if a warning were given and there was sufficient time for the affected region to be evacuated, it is worth considering the cultural heritage that would be lost should a meteorite land in the middle of Paris, Rome or any one of a dozen other European cities. The annihilation of the Louvre, the British Museum or the Hermitage would be a disaster comparable to the destruction of the Library of Alexandria in antiquity.

Let us hope it does not come to that….

© Christopher Seddon 2007

Everything you wanted to know about the Moon

Most people will think nothing of the Moon should they happen to see it in the sky. This is hardly surprising: the Moon is, after all, one of only two distinct, instantly recognisable objects (the other being the Sun) that we are guaranteed to see (even here in Britain!) during our lifetimes; there can be nobody alive who has not known of it from their earliest childhood. The Moon is a ubiquitous part of our culture and almost certainly has been since earliest times. Its beauty in the night skies has inspired writers, poets and artists for centuries. Reaching for the Moon was once synonymous with desiring the impossible – until man reached it. Even now, almost four decades on from Armstrong’s momentous giant leap for mankind, the Moon remains the only astronomical body other than Earth to be visited by humans.

What’s in a name?

Many people think the “official” name of the Moon is the Latin form Luna, but in common with Terra (Earth) and Sol (the Sun), the term Luna has no official standing and is rarely encountered outside of science-fiction novels, though the adjectival forms “lunar”, “terrestrial” and “solar” are in common usage. The “official” name for the Moon is – the Moon (capitalised)! The uncapitalised form – “moon” – is a generic term for any natural satellite of any planet, including our own. Some prefer this term over “satellite”, thinking the latter implies something manmade. Strictly speaking, a manmade satellite should be referred to as an “artificial satellite”, but this usage is now very rare.

The phases of the Moon

The most obvious thing about the Moon is that its appearance changes from night to night. The Moon is not the only body visible from Earth to exhibit phases – Venus and Mercury do also – but without a telescope those of Venus are very difficult to see and those of Mercury are well beyond human perception. The explanation for the phases is straightforward: only one hemisphere of the Moon is illuminated by the Sun at any one time (in common with all other non-luminous Solar System objects), and the portion of the illuminated hemisphere visible from Earth changes as the Moon travels round the Earth on its orbit. When the Moon and Sun are on opposite sides of the Earth a full moon is seen; when they are on the same side the Moon disappears altogether. When they are 90 degrees apart a half moon is seen.
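The geometry above reduces to a single standard formula: if θ is the Sun-Moon angle as seen from Earth (0° at new moon, 180° at full), the illuminated fraction of the disc is (1 − cos θ)/2. Checking the cases from the paragraph:

```python
import math

def illuminated_fraction(elongation_deg: float) -> float:
    """Fraction of the Moon's disc that appears lit, given the
    Sun-Moon angle seen from Earth (0 = new moon, 180 = full moon)."""
    return (1 - math.cos(math.radians(elongation_deg))) / 2

print(round(illuminated_fraction(0), 3))    # 0.0 - new moon, invisible
print(round(illuminated_fraction(90), 3))   # 0.5 - half moon
print(round(illuminated_fraction(180), 3))  # 1.0 - full moon
```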

The time taken for the Moon to cycle through its phases (the synodic month, defined as the time taken for the Moon to return to the same position relative to both Earth and Sun) is actually longer than the time taken for it to complete a single orbit (the sidereal month) – 29.53 days on average, as opposed to 27.32 days. The reason for this is while the Moon is completing an orbit of the Earth, the latter is moving on its own orbit around the Sun, and the Moon has to move slightly further before it can return to the same position relative to both Earth and Sun.
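The relationship between the two months is the standard rate subtraction: relative to the Sun, the Moon’s angular rate is its orbital rate minus the Earth’s. In code:

```python
SIDEREAL_MONTH_DAYS = 27.32  # one orbit measured against the stars
YEAR_DAYS = 365.25           # Earth's orbit around the Sun

# Rates subtract: 1/synodic = 1/sidereal - 1/year
synodic_month_days = 1 / (1 / SIDEREAL_MONTH_DAYS - 1 / YEAR_DAYS)
print(f"{synodic_month_days:.2f} days")  # 29.53 days
```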

The wrong time of the month

In 1972 the American researcher Alexander Marshack claimed that people were making records of the phases of the Moon 30,000 years ago. After extensive research that entailed examining just about every prehistoric artefact he could lay his hands on for calendrical notches, he published his findings in a book entitled The Roots of Civilization. Marshack claimed that the tallies corresponded to lunar months. On the face of it, this seems highly plausible. It is now generally accepted that the people of that era were every bit as mentally capable as we are today, and there is little doubt that they would have been aware that the phase of the Moon changes from night to night in a predictable manner. But there are two problems – firstly, it seems unnecessary to record, say, the days since the last full moon when one can simply look at the Moon, note the current phase, and work forward to when the next full moon will occur. The second problem is the tallies vary in numbers of days by more than can be explained by the small seasonal variations in the length of the lunar cycle, or by observational error. However there is another cycle with an average length almost identical to the lunar cycle that does show a certain amount of variation – the human menstrual cycle. It is my guess that this is what was being recorded, since the advantages of knowing when that time of the month is approaching are fairly obvious, and this was probably also the case 30,000 years ago!

That the menstrual cycle is almost exactly one lunar month in duration is now thought to be pure coincidence, but it is one that was noticed many thousands of years ago. The words “moon”, “month”, “menstruate” and “measure” (time) all have the same Proto-Indo-European root. Proto-Indo-European is the hypothetical common ancestor of the Indo-European languages, which include Latin, Greek, Sanskrit and the modern languages derived from them. According to one popular theory, the Proto-Indo-Europeans were warlike nomads who expanded from the Eurasian steppes around 4000 BC, taking their language with them. A rival theory, proposed in the mid-1980s, claims that Proto-Indo-European origins go even further back, and that its speakers were originally farmers living in Asia Minor shortly after the end of the last Ice Age. Regardless of which theory is correct (I personally favour the farming theory), the origin of the word “moon” is very ancient indeed.

The Moon from an astronomical viewpoint

The Moon is ranked as a satellite of the Earth. Most of us will be aware that the Earth is in astronomical terms quite undistinguished, and that the same goes for the Sun. Even though the Milky Way, of which the Sun is a part, is classed as a large galaxy, one doesn’t have to look far (in fact a mere two million light years) to find a larger galaxy (the Andromeda Galaxy). In a way this is exactly what we should expect from the Copernican Principle or Principle of Mediocrity, an important principle in the philosophy of science which states that Earth holds no special place in the universe and that humans are not privileged observers. Right, so this presumably means that the Moon is equally average? Well, actually, no.

The Moon is a remarkable object and as far as the Solar System is concerned, it is unique. The Moon is a fully paid-up member of the Solar System’s “Big Seven” group of satellites, all of which are larger than Pluto and Eris (the two smallest planets, or largest “dwarf planets” if you insist). The Moon is by no means the largest member of this group, but all the other six are satellites of giant planets: the Earth is at best only medium-sized. Indeed many astronomers take the view that the Moon is too large in relation to the Earth to be considered a mere satellite and elevate it to the rank of a sister world, classifying the Earth-Moon system as a binary planet. However this view is not really valid. Large though the Moon is, it is still only 1/81 the mass of the Earth; the centre of mass for the Earth/Moon system lies below the surface of the Earth and the Moon cannot be classed as anything other than a satellite of Earth.
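The barycentre claim is easy to verify with a few lines of Python. The 1/81 mass ratio is the figure given above; the mean Earth-Moon distance (384,400 km) and the Earth’s mean radius (6,371 km) are standard values not quoted in the text:

```python
# Locate the Earth-Moon barycentre using round published figures.
MASS_RATIO = 81.0          # Earth mass / Moon mass (as stated in the text)
SEPARATION_KM = 384_400    # mean Earth-Moon distance (assumed standard value)
EARTH_RADIUS_KM = 6_371    # mean radius of the Earth (assumed standard value)

# The barycentre lies at separation * m_moon / (m_earth + m_moon)
# from the centre of the Earth.
barycentre_km = SEPARATION_KM / (MASS_RATIO + 1)

print(f"Barycentre lies {barycentre_km:.0f} km from Earth's centre")
print("Inside the Earth?", barycentre_km < EARTH_RADIUS_KM)
```

The barycentre comes out at roughly 4,700 km from the Earth’s centre, comfortably inside the planet, which is why the Moon is classed as a satellite rather than a partner in a binary system.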

Lunar geography – or Selenography

Though most will know what it means, terms like “lunar geography”, “lunar geology”, etc., are oxymorons, as the prefix “geo-” means pertaining to the Earth. The correct terms are “selenography”, “selenology”, etc.; the prefix “seleno-” comes from Selene, the Greek goddess of the Moon.

It is not true, as is often believed, that Galileo was the first to map the Moon using a telescope. That distinction must go to Thomas Harriot in 1609, a year before Galileo. However, both men clearly observed mountains, valleys, craters and comparatively smooth areas known as maria or seas. It was at one time believed that these latter features actually were seas, or at least dried-up sea beds, but we now know from samples brought back from the Moon that they have never contained any water.

However, in 1998, it was widely reported that NASA’s Lunar Prospector probe had found water on the Moon, allegedly from comets that had landed in polar regions permanently hidden from the Sun and thus remained frozen. In fact, the probe had only detected evidence of hydrogen on the Moon’s surface. While this could be due to water, I have to say I am highly dubious. Any comet impacting the Moon would almost certainly do so at a relative velocity high enough to vaporise it instantly as its kinetic energy is transformed into heat.

Another theory, popular before the Space Age, was that the maria were great dust-bowls, and any spacecraft landing there would be swallowed up. The idea was featured in two vintage novels by Sir Arthur C. Clarke, Earthlight and A Fall of Moondust, the latter telling the story of a “dust cruiser” designed to “sail” the lunar “seas”.

Today we know that the maria are large dark plains of basalt, formed by volcanic activity billions of years ago.

The origin of the lunar craters has been the subject of considerable controversy over the years. It was once believed that they were volcanic in origin, similar to calderas, but it is now generally accepted that they are the result of meteoric impacts. It was however quite a long time before the volcanic theory was abandoned and a number of astronomers, including Sir Patrick Moore, continued to argue for it until as late as the 1990s.

A Canterbury Tale

Assuming that the impact theory is correct, could any new craters have appeared in historic times? In theory, there is no reason why not, though in practice it seems unlikely, since impacts large enough to form craters visible from Earth are fairly rare events. However in the 1970s an American astronomer named Jack Hartung claimed that a report made on 18 June 1178 by a Canterbury monk named Gervase could be interpreted as an eye-witness account of the formation of the crater Giordano Bruno.

… after sunset when the moon had first become visible a marvellous phenomenon was witnessed by some five or more men who were sitting there facing the moon. Now there was a bright new moon, and as usual in that phase its horns were tilted toward the east; and suddenly the upper horn split in two. From the midpoint of this division a flaming torch sprang up, spewing out, over a considerable distance, fire, hot coals, and sparks. Meanwhile the body of the moon which was below writhed, as it were, in anxiety, and, to put it in the words of those who reported it to me and saw it with their own eyes, the moon throbbed like a wounded snake. Afterwards, it returned to its proper state. This phenomenon was repeated a dozen times or more, the flame assuming various twisting shapes at random and then returning to normal. Then after these transformations the moon from horn to horn, that is along its whole length, took on a blackish appearance.

It has been suggested that Gervase saw a meteorite impact, and that the crater Giordano Bruno (named for the Italian philosopher who was burned at the stake for heresy in 1600) was formed as a result. Proponents of this idea point out that the time of the year is consistent with an impact from the so-called Taurid Complex, associated with Encke’s Comet, but the whole thing really has to be taken with a king-sized pinch of salt. Surely a small group of men in Canterbury would not have been the only people in the whole world to see and note such a major disturbance in the natural order of things? A more recent mathematical treatment of the theory showed that Earth would have been bombarded with ejecta from the impact. This would have resulted in spectacular meteor showers of roughly 50,000 meteors an hour being visible all over the world for a week – yet there is absolutely no record of anything of the sort being seen.

Crucially, the Moon was close to the horizon at the time, and what Gervase reported was almost certainly an unusual cloud phenomenon or atmospheric disturbance.

The Moon in fiction

What is arguably the world’s first ever work of science-fiction, entitled A True Story, was written by the Greek satirist Lucian of Samosata in the 2nd Century A.D. and dealt with imaginary voyages to the Moon, but the topic did not become popular until the invention of the telescope in the 17th Century. Authors who wrote about journeys to the Moon included Johannes Kepler, Francis Godwin and Cyrano de Bergerac though the heroes tended to travel by unlikely means such as harnessing a flock of geese.

About a hundred years before Project Apollo, Jules Verne described an American moon program in which a projectile is launched from a space gun in Florida and splashes down in the Pacific, just as Apollo would later do. Some 35 years later, H.G. Wells sent his characters to the Moon in a vehicle utilising anti-gravity – much to the disgust of the by then elderly Verne. This criticism evidently affected Wells, who much later used a space gun himself in the moon shot sequence at the end of the movie Things to Come.

The Moon featured in innumerable works by the 20th Century’s “Holy Trinity” of Sir Arthur C. Clarke, Isaac Asimov and Robert Heinlein.

Inevitably the Moon has featured in many science fiction movies and television series, with manned moonbases being a popular theme for the latter. Gerry Anderson, best known for his classic puppet shows such as Thunderbirds, made two live-action series featuring moonbases. In the first, UFO, interceptors were launched from a moonbase to destroy hostile alien spacecraft. The second, Space: 1999, was an altogether more ambitious affair. It was billed as a British answer to Star Trek, but despite a huge budget, excellent special effects and a star cast that included Martin Landau, Barbara Bain, Barry Morse, Catherine Schell, Joan Collins, Brian Blessed and Judy Geeson, the series was not a success and was cancelled mid-way through its second run. The main problem was an utterly implausible plot device in which a nuclear explosion sent the Moon careering off into outer space at what one must presume was many times the speed of light (a physical impossibility in itself), given that most weeks would find it hurtling towards a new planetary system. Hopes would rise among those marooned on Moonbase Alpha that the new system would contain an inhabitable world on which they could settle, but on the occasions that it did, something would always prevent colonisation, be it paranoid aliens fearing cultural contamination by “primitive” humans (this one cropped up on several occasions); an interplanetary battle of the sexes (a group of rather butch-looking women hijacked Alpha and used it as a platform to lob nuclear missiles at the men, who had already been banished to another planet for being “unreasonable”); a time-warp that reverted the crew to cavemen (this provided an excuse to put the lovely Zienia Merton in a leopard-skin), or the putative new home turning out to be rather inconveniently composed of antimatter.
Even when the Moon was in interstellar space things were rarely quiet: black holes and time warps were as frequent as tailbacks on the M25; other menaces included a space brain, a monster dwelling in a Sargasso Sea of abandoned spaceships and miscellaneous aliens in suspended animation, who invariably turned out to be bad guys sent into exile by their peace-loving compatriots.

Is the Moon Earth’s only natural satellite?

Could the Earth have a second, undetected satellite? On the face of it, there is absolutely no reason why not. Jupiter is now known to have at least 63 satellites; Saturn has about the same number; and even Pluto has three. However if a second Earth satellite were to exist, it would have to be very small indeed to avoid detection. It is not often appreciated that were the Moon only two miles in diameter, it would still be visible to the naked eye.

Nevertheless, the idea that the Moon might not be our planet’s sole attendant has intrigued astronomers for the better part of two hundred years. In 1846 Frederic Petit, Director of the Toulouse Observatory, claimed that a second Earth satellite had indeed been discovered. Petit’s claim was soon refuted, but he became obsessed with the idea of a second satellite. Fifteen years later, he published an abstract in which he proposed the existence of a second satellite to account for then-unexplained anomalies in the Moon’s orbit. The theory attracted little interest among astronomers, and doubtless would have been entirely forgotten by now had a young French writer by the name of Jules Verne not read the abstract and immortalised Petit and his satellite in the novel From the Earth to the Moon, in which the Petit object passes close to the space travellers’ projectile, pulling it off course and swinging it into an orbit around the Moon.

The idea of a second moon was revived several times during the last century, and shortly after the end of the Second World War, Clyde Tombaugh, discoverer of Pluto, carried out a most comprehensive search. He used equipment so sensitive that it would have shown a lump of coal the size of a football a thousand miles away. He failed to find anything.

It is now believed that the combined gravitational effects of the Earth, Moon and Sun would rapidly eject any small satellite from Earth’s orbit, ruling out the existence of a second moon. Nevertheless in 2002 an object known as J002E3 was discovered in Earth orbit – but it was soon identified as almost certainly the discarded third-stage booster from the Apollo XII mission of November 1969. It is believed that the object left orbit in June 2003 and may return around 2032.

It is sometimes claimed that the asteroid 3753 Cruithne ranks as a second Earth satellite. Discovered in 1986, Cruithne has an unusual orbit, known as a “horse-shoe” orbit, due to the influence of Earth. However it is in orbit around the Sun, not the Earth and therefore it is not an Earth satellite.

A Cosmic Coincidence

One of the most singular features of the Moon is the fact that it appears almost exactly the same size as the Sun in the sky. The reason for this is that while the Sun is 400 times the diameter of the Moon, it is also 400 times further away, so both objects appear the same size when viewed from Earth. This is a pure co-incidence, but it is responsible for what is surely one of the most spectacular phenomena to be seen anywhere in the Solar System – a total eclipse of the Sun. A solar eclipse is, of course, due to the Moon passing directly between the Sun and the Earth, casting its shadow upon the latter (strictly speaking, the phenomenon is an occultation, not an eclipse). Because the Moon’s disc is just sufficient to hide that of the Sun, the latter’s atmosphere, the so-called corona, can be seen in all its splendour. In fact it is a close call and for a total eclipse to occur, the Moon must be close to perigee (i.e. its minimum distance from Earth). Otherwise, a thin ring of the Sun’s disc is left showing, quite enough to drown out the glorious corona, and the eclipse is said to be annular. Because the Moon’s orbit is inclined at five degrees to the ecliptic, an eclipse does not occur every month, though at least two must occur in a given year. However this figure includes partial eclipses, when the Moon does not pass directly in front of the Sun. Even when a total eclipse does occur, the area experiencing totality is only a small corridor, though it may extend for thousands of miles as the Moon’s shadow races across the Earth’s surface.
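The “400 times” coincidence can be checked directly. The diameters and distances below are standard mean figures, not taken from the text, so treat them as assumptions; both discs come out at about half a degree across:

```python
import math

# Mean figures (assumed, not from the essay):
# Sun: 1,392,000 km diameter at 149,600,000 km
# Moon: 3,474 km diameter at 384,400 km

def angular_diameter_deg(diameter_km, distance_km):
    """Full angular diameter in degrees as seen from Earth."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

sun = angular_diameter_deg(1_392_000, 149_600_000)
moon = angular_diameter_deg(3_474, 384_400)

print(f"Sun:  {sun:.3f} degrees")   # about half a degree
print(f"Moon: {moon:.3f} degrees")  # about half a degree
```

The two results differ by only a few per cent, which is exactly why the Moon can just cover the Sun’s disc near perigee but leaves an annular ring showing near apogee.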

I have only witnessed one total eclipse of the Sun, that being the one in Cornwall in August 1999. Although cloudy skies prevented me from seeing totality, it was still an awesome experience as day became night in a matter of seconds. Sea birds, believing night really had fallen, hooted in great excitement. On the horizon was seen a band of orange light, marking the limits of totality. The scene was one of great beauty and although it was disappointing to have missed something I had been waiting to see since my childhood, it was still a worthwhile experience.

We will not always be able to enjoy the spectacle of a solar eclipse, because tidal effects are causing the Moon to recede from the Earth by 3.8 centimetres per year. That might not seem like a lot, but it adds up. When the first maps of the Moon were being drawn up, three centuries ago, the Moon was 11.4 metres (just under 40 feet) closer to the Earth. When modern humans first reached Australia, 50,000 years ago, the Moon was 1900 metres (rather more than a mile) closer; when the dinosaurs became extinct 65 million years ago, it was 2470 kilometres closer.
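The figures in the paragraph above are simple multiplication, and a short Python sketch confirms them (the 3.8 cm/year rate is the one quoted in the text):

```python
# Cumulative lunar recession at 3.8 cm per year over the
# timescales quoted in the text.
RECESSION_M_PER_YEAR = 0.038

epochs = [
    ("300 years (first lunar maps)", 300),
    ("50,000 years (humans reach Australia)", 50_000),
    ("65 million years (end of the dinosaurs)", 65_000_000),
]

for label, years in epochs:
    metres = years * RECESSION_M_PER_YEAR
    if metres >= 1000:
        print(f"{label}: {metres / 1000:.0f} km closer")
    else:
        print(f"{label}: {metres:.1f} m closer")
```

The three results (11.4 metres, 1,900 metres and 2,470 kilometres) match the distances given in the text.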

Eventually it will be too far away for its disc to fully block out the Sun, even at perigee. These effects are also causing the Earth’s spin to slow and the day is gradually lengthening. Again, these effects are small but they add up over time and account for discrepancies amounting to several hours in the timing of eclipses observed in antiquity.

The Dark Side of the Moon

As is correctly pointed out in the eponymous Pink Floyd album, there is no “dark” side of the Moon: each part of the Moon experiences as much daylight as it does night time. So where does the idea that the Moon has a “dark” side come from? In common with almost all bodies circling a larger primary, the Moon exhibits so-called “captured rotation”, meaning that it turns on its axis exactly once in each circuit of its primary. In other words, a lunar day is exactly a month long. It is often said that this results in half the Moon’s surface being permanently hidden from view on Earth, leading to the misconception that the hidden side is in permanent darkness. If this were true, we’d see a full moon all month round! The phase is of course due to part of the Earth-facing side being in darkness. In fact it is not strictly true that only half of the Moon’s surface can be seen from Earth. Because the Moon (in common with all other objects in the Solar System) does not move in a perfectly circular orbit, its orbital velocity varies slightly during the course of a month in accordance with Kepler’s Laws of Planetary Motion. This means that the orbital motion and axial spin are at times slightly out of step, and in consequence we can see portions of the “hidden” side. Because the Moon’s orbit is inclined at five degrees to that of Earth, we can also see alternately beyond the north and south lunar poles. Finally, parallax effects result in observers being presented with slightly different portions of the Moon’s surface at different times of the day and in total, about 59 percent of the Moon’s surface may be observed from Earth at various times.

Origin of the Moon

As one might expect, the origin of the Moon has been the subject of many theories over the years. The first theory to gain widespread acceptance was put forward by Sir George Darwin (son of Charles). Darwin suggested that the Earth and Moon had originally formed a single rapidly rotating, molten mass. The tidal forces raised by the Sun and the centrifugal effect of its own rapid rotation caused it to become pear-shaped and eventually split into two objects of unequal size. A strong supporter of the fission theory was the American astronomer W.H. Pickering, who suggested that the scar left by the Moon’s breakaway was now the basin of the Pacific Ocean.
Unfortunately the theory was pear-shaped in more ways than one. A mathematical treatment of the dynamics involved showed that it was unsound and it had to be abandoned. This did not prevent it from being used as the basis of an ingenious science fiction movie, Crack in the World, in which an attempt to tap energy from the Earth’s molten core goes disastrously awry and triggers a series of earthquakes. A growing rupture in the Earth’s crust threatens to tear the planet apart and rival scientists Stephen Sorenson (Dana Andrews) and Ted Rampion (Kieron Moore) are forced to put aside their differences and try to come up with a solution. An attempt to avert disaster by exploding a hydrogen bomb in the shaft of an active volcano is only partially successful, and a whole portion of the Earth is blasted away into space, where it forms a new satellite. The movie’s closing reel shows the Moon and its new sibling in the sky together, the whole process having been observed from no more than a few hundred yards by Rampion – accompanied, of course, by the movie’s love-interest (Janette Scott).

The next theory to be put forward suggested that the Moon was originally an independent body, but it wandered too close to the Earth and was captured. There is little doubt that this has happened elsewhere in the Solar System: Mars’s dwarf attendants and several satellites of the giant outer worlds, including Neptune’s major satellite Triton – only slightly smaller than the Moon – were almost certainly captured from independent orbits. The theory was popular for a time and in the middle part of the last century an Austrian researcher named H.S. Bellamy even suggested that it might have happened fairly recently (needless to say, this accounts for the destruction of Atlantis). But the captures that are believed to have occurred all involve objects that are very small in relation to their captors, and as we have observed, the Moon is fairly large in relation to the Earth.

Another theory states that the Moon simply formed in Earth’s orbit from the same primordial material, but this model fails to explain why the Moon is less dense and deficient in iron in comparison to Earth.

The currently popular theory, put forward by American scientists W.K. Hartmann and D.R. Davis in 1974, proposes that an object about the size of Mars collided with Earth, and while the bulk of its mass including its iron core merged with the Earth, enough debris was ejected into space from Earth’s mantle to form the Moon. The theory explains why the Moon is rather less dense than the Earth, as denser materials were not blasted into space by the impact. The theory is not without its problems, but seems to be the most plausible explanation put forward to date.

From the Earth to the Moon

As we have seen, some 59 percent of the Moon’s surface can be seen under various conditions from Earth. Not until the dawn of the space age was anything definite learned about the remaining 41 percent. In October 1959, the Soviet probe Lunik III made a fly-by of the far side of the Moon. Because the probe was out of radio contact with Earth as it passed behind the Moon’s far side, the pictures it took could not be simply beamed back to Earth. Accordingly, film was automatically exposed and developed. As the probe emerged from behind the Moon, the developed film was imaged by a TV camera and the first blurry images of the Moon’s hidden side were transmitted back to Earth. It sounds crude, and by today’s standards it was, but it was a tremendous technical feat for the time.

As the Cold War ratcheted up tensions between East and West, so the Soviets continued to score an impressive succession of “firsts” in space, but the US was galvanised into a response and on 25 May 1961 President John F. Kennedy threw down his historic challenge:

I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.

In May 1961, Alan Shepard had only just become the first American to fly in space, yet little over nine years after the flight of Lunik III, men saw the Moon’s hidden side with their own eyes as Apollo VIII made its historic circumnavigation of the Moon at Christmas 1968. Seven months later Armstrong and Aldrin became the first men to actually land there, realising Kennedy’s goal with less than six months to spare. The technological leap that made this possible might sound incredible, but it must be remembered that even the technology of Project Apollo was quite primitive by today’s standards. It is a fact that the Eagle’s on-board computer was actually far less powerful than that of a modern-day mobile phone! (I refuse to comment on conspiracy theories that the Moon landings were faked because it is patently obvious that the idea is absurd.)

At all events, the US won the race to the Moon. Not until much later did it emerge that early Soviet successes owed more to the genius of Chief Designer Sergei Korolev than to any superiority of communism over capitalism. But Korolev’s health had been ruined by a spell in the gulag during Stalin’s reign of terror and he died in 1966 during a botched operation to remove a tumour. With his death ended any hopes of perfecting the N1 booster with which he had hoped to put a man on the Moon. The race to the Moon lost, the Soviets turned their attention to establishing a near-permanent human presence in Earth orbit – which in the long run was of far more benefit than simply duplicating the efforts of the US.

When will people go back to the Moon? In 1972, when Cernan and Schmitt blasted off from the Moon’s surface, it was said that nobody would be going back in the 20th Century. I did not believe this (I assumed that men would be on Mars before the century was out), but the public’s attention-span is short and after the Moon landing had been made, only the astonishing drama of Apollo XIII made the headlines (and, a quarter of a century later, an excellent if not entirely accurate Hollywood movie). NASA turned its attention to the Space Shuttle, setting back the manned exploration of space by decades. As an experimental proof-of-concept spaceship, there is no doubt that the Shuttle was a technological triumph. As a practical manned reusable heavy-lift system, however, it has been an unmitigated disaster that cost the lives of the crews of Challenger and Columbia. It was the latter tragedy that prompted President George W. Bush, in one of the very few highlights of his presidency, to announce what has since become known as Project Constellation, which will return humans to the Moon, and on to Mars – using designs that draw heavily from Project Apollo, albeit using hardware developed originally for the Shuttle.

A permanently inhabited base on the Moon should be established no later than the middle part of this century. When it is, one of science fiction’s oldest and most central themes will be a reality at last.

© Christopher Seddon 2007