Plato’s Theory of Forms

Plato (circa 427-347 BC) made contributions to practically every field of human interest and is undoubtedly one of the greatest thinkers of all time. However, it is perhaps just as well that his political ideas didn’t catch on (except possibly in North Korea); and Platonic Realism bogged down biological science until the time of Darwin and Wallace.

Plato was influenced by Pythagoras, Parmenides, Heraclitus and Socrates (Russell, 1946). From Pythagoras he derived the Orphic elements in his philosophy: religion, belief in immortality, other-worldliness, the priestly tone, and all that is involved in the allegory of the cave; also his respect for mathematics and his intermingling of intellect and mysticism. From Parmenides he derived the view that reality is eternal and timeless and that, on logical grounds, all change must be an illusion. From Heraclitus he derived the view that there is nothing permanent in the world of our senses. Combining this with the doctrine of Parmenides led to the conclusion that knowledge is not to be derived from the senses but achieved by intellect – which ties in with Pythagoras. Finally, from Socrates came his preoccupation with ethics and his tendency to seek teleological rather than mechanical explanations.

Realism, as opposed to nominalism, refers to the idea that general properties or universals have a mode of existence or form of reality that is independent of the objects that possess them. A universal can be a type, a property or a relation. Types are categories of being, or types of things – e.g. a dog is a type of thing. A specific instance of a type is known as a token, e.g. Rover is a token of a dog. Properties are qualities that describe an object – size, colour, weight, etc, e.g. Rover is a black Labrador. Relations exist between pairs of objects, e.g. if Rover is larger than Gus then there is a relation of is-larger-than between the two dogs. In Platonic Realism universals exist, but only in a broad abstract sense that we cannot come into contact with. The Form is one type of universal.

The Theory of Forms (or Ideas) is referred to in Plato’s Republic and other Socratic Dialogues and follows on from the work of Parmenides and his arguments about the distinction between reality and appearance. The theory states that everything existing in our world is an imperfect copy of a Form (or Idea), which is a perfect object, timeless and unchanging, existing in a higher state of reality. For example, there are many types of bed – double, single, four-poster, etc. – but they are only imperfect copies of the Form of the bed, which is the only real bed. Plato frowned upon the idea of painting a bed because the painting would merely be a copy of a copy, and hence even more flawed. The world of Forms contains not only the bed Form but a form for everything else – tables, wristwatches, dogs, horses, etc. Forms are related to particulars (instances of objects and properties) in that a particular is regarded as a copy of its form. For example, a particular apple is said to be a copy of the form of Applehood and the apple’s redness is a copy of the form of Redness. Participation is another relationship between forms and particulars. Particulars are said to participate in the forms, and the forms are said to inhere in the particulars, e.g. redness inheres in an apple. Not all forms are instantiated, but all could be. Forms are capable of being instantiated by many different particulars, which would result in the form having many copies, or inhering in many particulars.

Needless to say, the world of the Forms was only accessible to philosophers, a view which justified the Philosopher Kings of the Republic, and casts philosophers in the same role as shamans and priests as people with exclusive access to worlds better than our own, and hence the basis of a ruling elite. That animals have ideal Forms is a view that bogged down biological science for centuries, as it rules out any notion of evolution. (The Republic also advocated such unsavoury practices as eugenics (dressed up as a rigged mating lottery); abolition of the family; censorship of art; and a caste-system based on a “noble lie” of the “myth of metals” (which I suppose is better than a war based on the ignoble lie of the myth of weapons of mass destruction). The Republic seems to have influenced Huxley’s Brave New World, Orwell’s 1984 and the Federation of Heinlein’s Starship Troopers).

The inherence criticism questions what it means to say that the form of something inheres in a particular or that the particular is a copy of the form. If the form is not spatial, it cannot have a shape, so the particular cannot be the same shape as the form.

Arguments against the inherence criticism claim that a form of something spatial can lack a concrete location and yet have abstract spatial qualities. An apple, for example, can have the same shape as its form. Such arguments typically claim that the relationship between a particular and its form is perfectly intelligible and that people apply Platonic theory in everyday life: for example, “car”, “aeroplane”, “cat” etc. don’t have to refer to specific vehicles, aircraft or cats.

Another criticism of forms relates to the origin of concepts without the benefit of sense-perception. For example, to think of redness-in-general is to think of the form of redness. But how can one have the concept of a form existing in a special realm of the universe, separate from space and time, since such a concept cannot come from sense-perception? Although one can see an apple and its redness, those things merely participate in, or are copies of, the forms. Thus to conceive of a particular apple and its redness is not to conceive of applehood or redness-in-general.

Platonic epistemology, however, addresses such criticism by saying that knowledge is innate and that souls are born with the concepts of the forms. They just have to be reminded of those concepts from back before birth, when they were in close contact with the forms in the Platonic heaven. Plato believed that each soul existed before birth with “The Form of the Good” and a perfect knowledge of everything. Thus, when something is “learned” it is actually just “recalled.”

Plato stated that knowledge is justified true belief, i.e. if we believe something, have a good reason for doing so, and it is in fact true, then the belief is knowledge. For example, if I believe that the King’s Head sells London Pride (because I looked it up in the Good Beer Guide), I get a bus to the pub and see a Fullers sign outside, then I have knowledge that it sells London Pride. This view has been central to epistemological debate ever since Plato’s time.

Plato drew a sharp distinction between knowledge which is certain, and mere opinion which is not certain. Opinions derive from the shifting world of sensation; knowledge derives from the world of timeless forms, or essences. In the Republic, these concepts were illustrated using the metaphor of the sun, the divided line and the allegory of the cave.

Firstly, the metaphor of the sun is used for the source of “intellectual illumination”, which Plato held to be The Form of the Good. The metaphor is about the nature of ultimate reality and how we come to know it. It starts with the eye, which is unusual among the sense organs in that it needs a medium, namely light, in order to operate. The strongest source of light is the sun; with it, we can discern objects clearly. By analogy, we cannot understand why intelligible objects are as they are, or what general categories can be used to understand the various particulars around us, without reference to forms. “The domain where truth and reality shine resplendent” is Plato’s world of forms, illuminated by the highest of all the forms – the Form of the Good. Since true being resides in the world of the forms, we must direct our intellects there to have knowledge. Otherwise we have mere opinion, i.e. that which is not certain.

Secondly, the divided line has two parts that represent the intelligible world and the smaller visible world. Each of those two parts is divided, the segments within the intelligible world represent higher and lower forms and the segments within the visible world represent ordinary visible objects and their shadows, reflections, and other representations. The line segments are unequal and their lengths represent “their comparative clearness and obscurity” and their comparative “reality and truth,” as well as whether we have knowledge or instead mere opinion of the objects. Hence, we are said to have relatively clear knowledge of something that is more real and “true” when we attend to ordinary perceptual objects like rocks and trees; by comparison, if we merely attend to their shadows and reflections, we have relatively obscure opinion of something not quite real.

Finally Plato drew an analogy between human sensation and the shadows that pass along the wall of a cave – the allegory of the cave. Prisoners inside a cave see only the shadows of puppets in front of a fire behind them. If a prisoner is freed, he learns that his previous perception of reality was merely a shadow and that the puppets are more real. If the learner moves outside of the cave, they learn that there are real things of which the puppets are themselves mere imitations, again achieving a greater perception of reality. Thus the mere opinion of viewing only shadows is steadily replaced with knowledge by escape from the cave, into the world of the sun and real objects. Eventually, through intellectualisation, the learner reaches the forms of the objects – i.e. their true reality.

© Christopher Seddon 2008

Radiometric dating techniques

A major problem for archaeologists and palaeontologists is the reliable determination of the ages of artefacts and fossils.

As far back as the 17th Century the Danish geologist Nicolas Steno proposed the Law of Superposition for sedimentary rocks, noting that sedimentary layers are deposited in a time sequence, with the oldest at the bottom. Over a hundred years later, the British geologist William Smith noticed that sedimentary rock strata contain fossilised flora and fauna, and that these fossils succeed each other from top to bottom in a consistent order that can be identified over long distances. Thus strata can be identified and dated by their fossil content. This is known as the Principle of Faunal Succession. Archaeologists apply a similar principle: artefacts and remains that are buried deeper are usually older.

Such techniques can provide reliable relative dating along the lines of “x is older than y”, but providing reliable absolute values for the ages of x and y is harder. Before the introduction of radiometric dating in the 1950s, dating was a rather haphazard affair involving assumptions about the diffusion of ideas and artefacts from centres of civilization where written records were kept and reasonably accurate dates were known. For example, it was assumed – quite incorrectly as it later turned out – that Stonehenge was more recent than the great civilization of Mycenaean Greece.

The idea behind radiometric dating is fairly straightforward. The atoms of which ordinary matter is composed each comprise a positively charged nucleus surrounded by a cloud of negatively charged electrons. The nucleus itself is made up of a mixture of positively charged protons and neutral neutrons. The mass number (often loosely called the atomic weight) is the total number of protons plus neutrons in the nucleus, and the atomic number is the number of protons only. The atom as a whole has the same number of electrons as it does protons, and is thus electrically neutral. It is the number of electrons (and hence the atomic number) that dictates the chemical properties of an atom, and all atoms of a particular chemical element have the same atomic number; thus, for example, all carbon atoms have an atomic number of six. However the mass number is not fixed for atoms of a particular element, i.e. the number of neutrons they have can vary. For example, carbon can have 6, 7 or 8 neutrons, so carbon atoms with mass numbers of 12, 13 and 14 can exist. Such “varieties” are known as isotopes.

The physical and chemical properties of various isotopes of a given element vary only very slightly, but the nuclear properties can vary dramatically. For example, naturally-occurring uranium is composed largely of U-238 with only a very small proportion of U-235. It is only the latter that can be used as a nuclear fuel – or to make bombs. Many elements have some unstable or radioactive isotopes. Atoms of an unstable isotope will over time decay into “daughter products” by internal nuclear change, usually involving the emission of charged particles. For a given radioisotope, this decay takes place at a consistent rate, which means that the time taken for half the atoms in a sample to decay – the so-called half-life – is fixed for that radioisotope. If an initial sample is 100 grams, then after one half-life there will only be 50 grams left, after two half-lives have elapsed only 25 grams will remain, and so on.
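The short sketch below (my illustration, not part of the original discussion) turns the 100-gram example above into code; the half-life figure is a placeholder for whichever radioisotope is of interest.

    def remaining(initial_grams, half_life_years, elapsed_years):
        """Mass of a radioisotope left after a given period of decay."""
        return initial_grams * 0.5 ** (elapsed_years / half_life_years)

    half_life = 1000                    # hypothetical radioisotope, half-life in years
    for n in range(4):                  # 0, 1, 2 and 3 half-lives
        print(n, "half-lives:", remaining(100, half_life, n * half_life), "grams")
    # prints 100.0, 50.0, 25.0 and 12.5 grams, as described above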

It is upon this principle that radiometric dating is based. Suppose a particular mineral contains an element x which has a number of isotopes, one of which is radioactive and decays to element y with a half-life of t. The mineral when formed does not contain any element y, but as time goes by more and more y will be formed by decay of the radioisotope of x. Analysis of a sample of the mineral for the amounts of x and y it contains will enable its age to be determined, provided the half-life t and the isotopic abundance of the radioisotope are known.
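To make the arithmetic explicit, here is a minimal sketch (mine, with invented numbers) of the calculation just described. If the mineral started with no y, the ratio of daughter to surviving parent atoms fixes the number of elapsed half-lives, and hence the age.

    import math

    def age_from_daughter(parent_atoms, daughter_atoms, half_life):
        """Age of a sample, assuming every daughter atom came from decay of the parent."""
        # parent = original * 2^(-t/T) and daughter = original - parent,
        # so daughter/parent = 2^(t/T) - 1 and t = T * log2(1 + daughter/parent).
        return half_life * math.log2(1 + daughter_atoms / parent_atoms)

    # Equal numbers of parent and daughter atoms means exactly one half-life has passed.
    print(age_from_daughter(1.0e6, 1.0e6, half_life=1.25e9))   # 1.25 billion years (a K-40-like half-life)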

The best-known form of radiometric dating is that involving radiocarbon, or C-14. Carbon – as noted above – has three isotopes. C-12 (the most common form) and C-13 are stable, but C-14 is radioactive, with a half-life of 5730 years, decaying to N-14 (an isotope of nitrogen) and releasing an electron in the process (a process known as beta decay). This is an infinitesimal length of time in comparison to the age of the Earth, and one might have expected all the C-14 to have long since decayed. In fact the terrestrial supply is constantly being replenished by the action of cosmic rays upon the upper atmosphere, where moderately energetic neutrons interact with atmospheric nitrogen to produce C-14 and hydrogen. Consequently all atmospheric carbon dioxide (CO2) contains a very small but measurable percentage of C-14 atoms.

The significance of this is that all living organisms absorb this carbon, either directly (plants, through photosynthesis) or indirectly (animals, by feeding on the plants). The percentage of C-14 out of all the carbon atoms in a living organism will therefore be the same as that in the Earth’s atmosphere. The C-14 atoms it contains are decaying all the time, but these are replenished for as long as the organism lives and continues to absorb carbon. But when it dies it stops absorbing carbon, the replenishment ceases and the percentage of C-14 it contains begins to fall. By determining the percentage of C-14 in human or animal remains – or indeed anything containing once-living material, such as wood – and comparing this to the atmospheric percentage, the time since death occurred can be established.
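As an illustration (again my own sketch, not part of the original text), the age follows from the surviving fraction of C-14 and the 5730-year half-life mentioned above:

    import math

    def radiocarbon_age(fraction_remaining, half_life=5730):
        """Years since death, given the fraction of the original C-14 still present."""
        return -half_life * math.log2(fraction_remaining)

    print(radiocarbon_age(0.5))     # one half-life: 5730 years
    print(radiocarbon_age(0.25))    # two half-lives: 11460 years
    print(radiocarbon_age(0.001))   # roughly ten half-lives: about 57,000 years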

This technique was developed by Willard Libby in 1949 and revolutionised archaeology, earning Libby the Nobel Prize for Chemistry in 1960. The technique does however have its limitations. Firstly, it can only be used for human, animal or plant remains – the ages of tools and other artefacts can only be inferred from datable remains, if any, in the same context. Secondly, it has only a limited “range”. Beyond 60,000 years (10 half-lives) the percentage of C-14 remaining is too small to be measured, so the technique cannot be used much further back than the late Middle Palaeolithic. Another problem is that the cosmic ray flux that produces C-14 in the upper atmosphere is not constant, as was once believed. Variations have to be compensated for by calibration curves, based on samples whose age can be attested by independent means such as dendrochronology (counting tree-rings). Finally, great care must be taken to avoid any contamination of the sample in question with later material, as this will introduce errors.

The conventions for quoting dates obtained by radiocarbon dating are a source of considerable confusion. They are generally quoted as Before Present (BP), but “present” in this case is taken to be 1950. Calibrated dates can be quoted, but quite often a quoted date will be left uncalibrated. Uncalibrated dates are given in “radiocarbon years” BP. Calibrated dates are usually suffixed (cal), but “present” is still taken to be 1950. To add to the confusion, Libby’s original value for the half-life of C-14 was later found to be out by 162 years. Libby’s value of 5568 years, now known as the “Libby half-life”, is rather lower than the currently-accepted value of 5730 years, which is known as the Cambridge half-life. Laboratories, however, continue to use the Libby half-life! In fact this does make sense: quoting all raw uncalibrated data to a consistent standard means that any uncalibrated radiocarbon date in the literature can be converted to a calibrated date by applying the same set of calculations. Furthermore, the quoted dates are “future-proofed” against any further revision of the C-14 half-life or refinement of the calibration curves.
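Because the age formula is linear in the half-life, an uncalibrated age computed with the Libby half-life can be rescaled to the Cambridge value by simple multiplication, as in the hedged sketch below (mine; full calibration against tree-ring curves is a separate step and is not shown).

    LIBBY_HALF_LIFE = 5568       # years, used by convention for quoted radiocarbon ages
    CAMBRIDGE_HALF_LIFE = 5730   # years, the currently accepted value

    def libby_to_cambridge(age_radiocarbon_years_bp):
        """Rescale an uncalibrated radiocarbon age to the Cambridge half-life."""
        return age_radiocarbon_years_bp * CAMBRIDGE_HALF_LIFE / LIBBY_HALF_LIFE

    print(round(libby_to_cambridge(10000)))   # about 10,291 radiocarbon years BP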

If one needs to go back further than 60,000 years, other techniques must be used. One is Potassium-Argon dating, which relies on the decay of radioactive potassium (K-40) to Ar-40. Due to the long half-life of K-40, the technique is only useful for dating minerals and rocks that are more than 100,000 years old. It has been used to bracket the age of archaeological deposits at Olduvai Gorge and other east African sites with a history of volcanic activity by dating lava flows above and below the deposits.

© Christopher Seddon 2008

The Day of the Triffids, by John Wyndham (1951)

Science Fiction does not often make an appearance on the school curriculum, but The Day of the Triffids is one work that has been required reading for generations of pupils. I first encountered the book nearly forty years ago, in fact just months after the death of its author at the comparatively early age of 65. At school, I must confess, my enthusiasm for Wuthering Heights, Return of the Native and I Claudius was (shamefully!) less than these great works warranted. But The Day of the Triffids was unputdownable. Instead of reading the two chapters set for homework that evening, I read the entire book!

It is reasonable to say that I could have been presented with many other works of science fiction and devoured them with equal gusto. Few of these would be regarded as great works of SF, let alone English Literature. But no other book has ever appealed to two more differing arbiters of what constitutes a good read, myself at the age of fourteen and those seemingly determined to stuff down pupils’ throats the dullest books imaginable.

So why is a somewhat dated science fiction novel, written from a seemingly rather prim post-war middle-class perspective, still popular now – more than half a century after it was written?

Read the first few pages and you will see why. There is something for everybody, from the most inattentive schoolboy to the stodgiest academic. The first line is one of the finest opening sentences to any book ever written, SF or otherwise….

When a day you happen to know is Wednesday starts off by sounding like Sunday, there is something seriously wrong somewhere.

Tension mounts immediately as we sense that the hospitalised narrator, not named until the tenth page as Bill Masen, is helpless. Realisation is slow to come that he is blind – at least temporarily so. His eyes are bandaged following emergency treatment to save his sight. And his plight is nightmarish. Not just the hospital, but the world outside, has apparently ceased to function. Nothing can be heard – not a car, not even a distant tugboat. Nothing but church clocks, with varying degrees of accuracy, announcing first eight o’clock, then quarter-past, then nine…

We learn that the previous night, the whole Earth had been treated to a magnificent display of green meteors, believed at first to be comet debris. Masen is bitterly disappointed at being one of the few people to miss the display. He wonders whether the whole hospital, the whole of London, has made such a night of it that nobody has yet pulled round. Eventually, he takes off the bandages, which were in any case due to come off, by himself. He is greatly relieved to find that he can see – but he soon finds out that he is one of the few people left who can.

The hospital has been transformed into a Doréan nightmare of blinded patients milling helplessly around. The only doctor Masen encounters hurls himself from a fifth floor window after finding his telephone is dead. After giving only cursory consideration to trying to help the blinded, he flees the hospital. What, he rationalises, would he do if he did succeed in leading them outside? It is already becoming apparent that the scale of the disaster extends way beyond the hospital. He makes for the nearest pub, desperately in need of a drink. But this is a nightmare from which there is no escape. The pub landlord is also blind, to say nothing of blind drunk. He blames the meteor shower for his condition. He says that having discovered their children were also blinded, his wife gassed them and herself, and he intends to join them once he is drunk enough.

Anybody who describes this as “cosy catastrophism” really needs to re-read just this first chapter to be firmly disabused of the notion.

At a single stroke, mankind’s complex civilisation has been brought down, all but a tiny handful of the world’s population blinded. Nor is this the extent of humanity’s troubles. Within hours, triffids have broken out of captivity and are running amok, and within a week London is smitten by plague. Only near the end of the book do we learn that mankind, in all probability, brought this triple-whammy down upon himself.

The Day of the Triffids is set in the near future, although no date is given. Masen, who is apparently an only child, is in his late twenties when the story begins, and his father had reached adulthood before the war. The catastrophe, which turns out to have been caused by a satellite weapon accidentally set off in space rather than close to the ground, probably occurs around 1980.

Masen lives in a world in which food shortages are the biggest challenge to mankind. The triffid, a mobile carnivorous plant equipped with a lethal sting, is being farmed world-wide as a source of vegetable oil and cattle-food. Originally bred in secret in the Soviet Union, they are distributed world-wide when an attempt to steal a case of fertile triffid seeds backfires. Masen himself is making a successful career in the triffid business and is hospitalised when one stings him in the eyes – thus it is the triffids who are responsible for his escaping the almost universal blindness.

The story follows the adventures of Masen and fellow-survivor Josella Playton and explores the differing attempts of various groups to deal with the catastrophe. Some want to somehow cling on to a vestige of the social and moral status quo; others see the situation as an opportunity for personal advancement. The well-meaning but ultimately hopeless attempts of Wilfred Coker to keep as many blind people alive for as long as possible end in failure within a week when the plague strikes. Miss Durrant’s attempt to build a Christian community fares little better, and it too succumbs to the plague. The dictator Torrence tries to set up a feudal state, using the blind as slave labour, fed upon mashed triffid.

From the start, though, Masen and Ms. Playton take the same view as Michael Beadley, the avuncular leader of a group of survivors holed up in Senate House. Nothing can be done for the vast majority of the blind – mankind’s best hope for the future is to set up a community of largely sighted survivors, in a place of comparative safety.

Thus Wyndham explores from different angles the question of how ordinary people face up to the task of trying to run a small community, something that is quite challenging under even normal circumstances, with everybody seemingly having different views on how things should be done.

Coker’s shenanigans see to it that many adventures must pass before Masen and Ms. Playton eventually link up with Beadley’s group, by now ensconced on a triffid-free Isle of Wight.

The Day of the Triffids has been likened to Orwell’s Nineteen Eighty-four for both its cold-war extrapolations and its gloomy perspective of misery for evermore. But this view is wrong on both counts. Wyndham’s remarks about the Soviet Union could have been written by almost any author between the end of the war and the rise of Mikhail Gorbachev. And despite the magnitude of the disaster to have overtaken mankind, the tone of The Day of the Triffids is an optimistic one. Its recurring message is that a portion of mankind has been spared to begin again, and the human race has in fact escaped the even worse fate that was becoming increasingly inevitable in a world threatened by both global nuclear war and mass starvation. The triffids’ possession of the world will be a temporary thing, and in the last paragraph of the book, Wyndham suggests that research into ways to destroy them is well underway. Within two or three generations at most, mankind will be in a position to strike back and reclaim all he has lost.

It is perhaps the upbeat endings and veneer of British middle-class values, a constant feature of Wyndham’s work, which fools people into labelling him with the “cosy catastrophe” tag. In fact, there is much more to his work than met even my enthusiastic eye when, in the Autumn of 1969, I first encountered an author I still count as one of my great favourites.

The Day of the Triffids was made into a truly appalling Hollywood movie, starring country and western singer Howard Keel (1963), and a superior BBC television series (1981). Simon Clark wrote a sequel, The Night of the Triffids, in 2001. My personal feeling is that another movie version is long overdue.

© Christopher Seddon 2008

Nightfall, by Isaac Asimov

If the stars should appear one night in a thousand years, how would men believe and adore, and preserve for many generations the remembrance of the city of God?

In 1941, this quote by American poet Ralph Waldo Emerson inspired a young and then little-known science fiction writer to produce what is arguably the greatest science fiction story of all time.

On the planet Lagash, a group of astronomers try to warn a disbelieving public that a doomsday cult is correct and the end of the world is indeed nigh. Lagash is one of the most remarkable planets in the galaxy – it is part of a system comprising six suns, of which at least one is always in the sky. Night is unknown – or almost unknown.

The astronomers, investigating anomalies in Lagash’s orbit, which threaten to overturn the recently established Law of Universal Gravitation, have made an alarming discovery. The problem with the orbit can be resolved by postulating that Lagash has a hitherto undiscovered moon, invisible in the glare of the eternal day. When the moon’s orbit is calculated, the astronomers learn that it can cause an eclipse of one of the suns, the red dwarf Beta. The phenomenon can only occur with Beta alone in its hemisphere, at maximum distance from Lagash, with the moon at minimum distance – a configuration that only occurs every 2049 years. The eclipse covers the entire planet and lasts well over half a day, so that no spot on Lagash escapes being plunged into darkness.

The psychological effects on a population unused to darkness will be catastrophic – and an eclipse is imminent….

WARNING: SPOILER ALERT!

One of the reasons Nightfall is such a powerful tale is the mounting sense of terror Asimov manages to convey to his readers in his description of what is after all an everyday occurrence here on earth – the fall of dusk. He does this by the clever choice of a red dwarf as the sun that is eclipsed. He describes Beta as “glowering redly at zenith, dwarfed and evil” and makes frequent comparisons between its red light and blood. As the eclipse proceeds, the sky is described as turning “a horrible deep purple-red”. It is powerful, almost apocalyptic stuff.

No less intense is the description of the claustrophobia experienced by the group of astronomers as the gloom deepens. Outside, even the insects are frightened into silence.

Few short stories manage to draw together as many diverse, thought-provoking ideas as Nightfall. Archaeological records that tell of a series of earlier civilisations, all destroyed by fire at the height of their culture; a doomsday cult that claims Lagash enters a cave every 2050 years, plunging it into darkness; and a fairground ride that has caused people to go mad and even die of fright – all this inexorably heightens the sense of impending doom.

In 1990, almost half a century after Nightfall first appeared, Asimov collaborated with Robert Silverberg to produce a novel based on the original short story.

When two of the world’s greatest SF writers team up on such a project, expectations are bound to be very high and this was possibly why Nightfall the novel met with a mixed reception. Some loved it, but many hated it, going as far as to describe it as the weakest offering from either author in a decade. IMHO, the truth lies somewhere between the two extremes.

The first two-thirds of the novel expands on the events and ideas described in the short story. The two versions are very consistent, even featuring the same characters, though with the addition of an archaeologist, who makes the crucial discoveries about the planet’s past history. There are some trifling name changes – the six suns are given proper names rather than Greek letters, and for some reason the planet itself is renamed Kalgash. (We will conjecture that Kalgash is a more accurate English rendering of the planet’s name, just as Peking is now usually referred to as Beijing. For simplicity, though, I will continue to use the original names.)

The last third of the novel follows events after the eclipse, as survivors who have retained their faculties try to regroup in a world rapidly reverting to feudalism. I have to agree with those who say that the ending is weak. It is true that the idea of using religious superstition to hold together a disintegrating society also appears in Asimov’s Foundation Trilogy, but an open ending with the feudal leaders, cultists and scientists battling for control of Lagash would have been better.

The novel’s strong point is that it paints a picture of day-to-day life on a world very different to Earth in some ways, yet very similar in others. It develops and draws together the same diverse ideas as the original, with a scientific community and general public reacting to events in a manner that is completely believable.

We learn that Lagash is centuries behind Earth in the sciences of astronomy, cosmology and physics, but at a similar level in terms of engineering and technology. Presumably, though, the Lagashans have not yet managed to send even an unmanned vehicle beyond their atmosphere, or they would have learned of the existence of the Stars. With gravitation such a recent discovery, though, this is hardly surprising.

We also learn something about the system to which Lagash belongs. The planet orbits a yellow sun at a distance of ten light minutes (slightly further than Earth is from the Sun); there is a binary pair of blue suns one hundred and ten light minutes away (somewhat closer than Uranus is from Earth); and the system also comprises a red dwarf and a binary pair of white suns.

The problem with the novel is that it exposes the intriguing and unusual elements that make up the story to a scrutiny under which they cannot entirely hold up.

Just how valid is the story’s central premise, that Darkness combined with the Stars will cause universal madness among a people utterly unused to such things? Is something going to cause madness simply because it has not been previously experienced and is unnatural? For example, for 99.9 percent of his history, mankind was utterly unused to flying. To man, a primate, flying is completely unnatural. Yet millions now do so every year without going mad. Even those with a fear of flying can generally tolerate it (exceptions include the former Arsenal and Netherlands footballer Dennis Bergkamp, and (allegedly) The Good Doctor himself).

We must also question whether an advanced technological society could evolve given the handicap of a pathological fear of darkness. On Earth, after all, dependency on artificial lighting, even during daytime, has always been perfectly normal. Underground mines have existed since prehistoric times. But would Neolithic and Bronze Age man have constructed them faced with a deep-rooted phobia of entering such places and knowing that they risked instant madness were their crude illumination to fail? Without the Bronze Age, the science of metallurgy and all subsequent human advances would never have happened.

Crucial to the plot is the fact that Lagash’s moon cannot be seen in the eternal daylight due to its being composed of bluish rock. Would this be the case? Earth’s moon, composed of greyish rock (which will have a lower albedo), is easy to see by day. Possibly the Lagashan eye is less sensitive to relatively faint objects than the human eye (but it is curious that their eyes can dark-adapt like ours. How did this ability evolve on Lagash?).

Even if the moon cannot normally be seen, what happens during the total eclipse of Beta? Surely the moon, illuminated by the light of the other suns, would become visible. With these suns shining on it from various angles it would appear full – and at minimum distance, seven times the apparent diameter of Beta, almost certainly bright enough to drown out all but the brightest Stars. (We can rule out the possibility of Lagash itself eclipsing its moon, since one of the other suns set only four hours prior to totality.)

If only in comparison to the stunning original version, Nightfall doesn’t entirely succeed as a novel and for this reason, the short story remains the definitive version.

© Christopher Seddon 2008

The rise and fall of the quartz watch

To those of us old enough to remember it, the autumn of 1973 was not perhaps what Charles Dickens would have classified as “the best of times”. War had broken out in the Middle East, the Watergate scandal was making life difficult for the newly re-elected Richard Nixon and the late and thoroughly unlamented General Pinochet had just seized power in Chile. Britain had begun the year by joining the EEC (the forerunner of the EU) but was now heading for the Three-Day Week as the confrontation between the Tory Prime Minister Edward Heath and the miners showed no sign of abating. Inflation was spiralling out of control and recession seemed inevitable.

It would have been about that time that I saw in the window of a jeweller’s shop in Wendover in Buckinghamshire something that caught my imagination – a Seiko quartz watch. I knew from the encyclopaedia that we had had at home since my early childhood that a quartz clock was an extremely accurate timepiece, but it was complete news to me that somebody had managed to shrink the complex electronics to the size of something that could be fitted into a wristwatch. In fact the first quartz watches appeared in Japan in 1969, but it obviously took time for them to make their way to the Home Counties (it must also be remembered that widespread access to the internet was still a quarter of a century off).

The watch had a claimed accuracy of 1 minute per year, which was quite sensational because even a well-regulated mechanical watch could – and still can – be off by that amount in a few days. It cost £100 – a considerable sum of money for the time. Soon after, Seiko began marketing their watches very actively in the UK with the advertising tag “Some day all watches will be made this way”.

Rarely if ever has an advertising slogan proved more accurate; within a decade the mechanical wristwatch had all but disappeared from the windows of high street retailers. The first cheap quartz watches appeared around the second half of 1975. Unlike the analogue Seiko, these watches featured digital displays. The first models used light emitting diode (LED) displays of the type used by the electronic calculators of that time (calculators were also considered cool cutting-edge gadgets in the mid ‘70s), but had the major disadvantage that it was necessary to press a button in order to read off the time (I possessed one made by Samsung – a company virtually unknown in the West at the time). This type of display soon gave way to the now-familiar liquid crystal display (LCD), still found in brands like the ever-popular Casio G-Shock. A watch where one can read off the time as – say – 1:52 PM rather than “just after ten to two” might seem to be at a major advantage, but here the quartz revolution stuttered slightly. Most people actually preferred the older analogue displays and these days the majority of wristwatches have this type of display.

For the Swiss watch industry, quartz represented a major challenge. What happened next is best considered through the very different directions taken by two of Switzerland’s most prominent watchmakers – Rolex and Omega. Omega embraced the new technology full on. In 1974 they launched the Megaquartz Marine Chronometer, which remains to this day the most accurate wristwatch ever made. But – not helped by the adverse economic conditions of the time – Omega struggled and only within the last decade has the brand begun to regain its former strength. Rolex for their part did absolutely nothing. They carried on making exactly the same models – and they kept on selling! This policy was successful – today Rolex is by far the world’s largest producer of luxury wristwatches. It was many years before they even bothered to produce a quartz watch – the Oysterquartz. But despite an accuracy of 5 seconds per year – not far off the Omega Megaquartz – it was not a success and was eventually discontinued.

Round about the end of the 1980s the tide turned as more and more purchasers of high-end watches began to reject quartz in favour of traditional mechanicals. Why, one might ask, when a quartz watch is so much more accurate? There are a number of possible reasons – one obvious advantage a mechanical watch has over its quartz counterpart is that it never needs a battery. But battery-less technologies such as eco-drive (solar) and kinetic (rotor-driven dynamo) have largely failed to penetrate the high-end market. And in any case changing the battery every few years is far cheaper and less time-consuming than the regular servicing mechanical watches require to keep them in working order.

The answer is to some extent to be found with the so-called “display back”. Many mechanical watches now have a transparent back, so the movement can be viewed. Look at the intricate and exquisitely-finished movement in a Patek Philippe or a Lange and compare it with an electronic chip. No contest! Even the nicely-decorated UNITAS hand-wound movements found in many mid-range watches such as the Stowa Marine Original beat a quartz movement hands down in the beauty stakes. To be blunt, one is a micro-machine, a marvel of precision engineering; the other is nothing more than an electrical appliance.

Today the vast majority of luxury watches are mechanical. Most of the high-end quartz watches, such as the Omega Megaquartz, the Rolex Oysterquartz and the Longines Conquest VHP, have long since ceased production. The Citizen Chronomaster, rated to within 5 seconds a year, remains a current model but it is not widely available outside of Japan. The advent of radio control, whereby a watch can synchronize itself to the time signals from Rugby, Frankfurt, Colorado etc has meant that super-accurate quartz movements are now largely redundant, virtually killing off innovation in the field. Most modern quartz watches, when not synchronized to a time signal, are actually far less accurate than the Seiko I saw in that jeweller’s shop window almost three and a half decades ago.

© Christopher Seddon 2008

Biological Classification and Systematics

The Linnaean classification

Scientific classification or biological classification is how species, both living and extinct, are grouped and categorised. Man’s desire to classify the natural world seems to be very deep rooted and the fact that many traditional societies have highly sophisticated taxonomies suggests the practice goes back to prehistoric times. However the earliest system of which we have knowledge was that of Aristotle, who divided living organisms into two groups – animals and plants. Animals were further divided into three categories – those living on land, those living in the water and those living in the air – and were in addition categorised by whether or not they had blood (those “without blood” would now be classed as invertebrates). Plants were categorised by differences in their stems.

Aristotle’s system remained in use for hundreds of years but by the 16th Century, man’s knowledge of the natural world had reached a point where it was becoming inadequate. Many attempts were made to devise a better system, but the science of biological classification remained in a confused state until the time of Linnaeus, who published the first edition of his Systema Naturae in 1735. In this work, he re-introduced Gaspard Bauhin’s binomial nomenclature and grouped species according to shared physical characteristics for ease of identification. The scheme of ranks, as used today, differs very little from that originally proposed by Linnaeus. A taxon (plural taxa), or taxonomic unit, is a grouping of organisms. A taxon will usually have a rank and can be placed at a particular level in the hierarchy.

The ranks in general use, in hierarchical order, are as follows:

Domain
Kingdom
Phylum (animals or plants) or Division (plants only)
Class
Order
Cohort
Family
Tribe
Genus
Species

The prefix super- indicates a rank above; the prefix sub- indicates a rank below. The prefix infra- indicates a rank below sub-. For instance:

Superclass
Class
Subclass
Infraclass

Even higher resolution is sometimes required and divisions below infra- are sometimes encountered, e.g. parvorder. Domains are a relatively new grouping. The three-domain system (Archaea, Bacteria and Eukaryota) was first proposed in 1990 (Woese), but not generally accepted until later. Many biologists to this day still use the older five-kingdom system (Whittaker). One main characteristic of the three-domain system is the separation of Archaea and Bacteria, previously grouped into the single prokaryote kingdom Bacteria (sometimes Monera). As a compromise, some authorities add Archaea as a sixth kingdom.
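As a concrete illustration of the ranks listed above, the short sketch below (my own, not part of the original text) records the familiar classification of our own species as an ordered data structure, running from domain down to species.

    from collections import OrderedDict

    # The Linnaean ranks for Homo sapiens, from the most inclusive downwards.
    homo_sapiens = OrderedDict([
        ("Domain",  "Eukaryota"),
        ("Kingdom", "Animalia"),
        ("Phylum",  "Chordata"),
        ("Class",   "Mammalia"),
        ("Order",   "Primates"),
        ("Family",  "Hominidae"),
        ("Genus",   "Homo"),
        ("Species", "Homo sapiens"),
    ])

    for rank, taxon in homo_sapiens.items():
        print(rank.ljust(8), taxon)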

It should be noted that taxonomic rank is relative, and restricted to the particular scheme used. The idea is to group living organisms by degrees of relatedness, but it should be borne in mind that rankings above species level are a bookkeeping idea and not a fundamental truth. Groupings such as Reptilia are a convenience but are not proper taxonomic terms. One can become too obsessed with whether a thing belongs in one artificial category or another – e.g. is Pluto a planet, or (closer to home) does habilis belong in Homo or Australopithecus; does it really matter if we lump the robust australopithecines into Australopithecus or split them out into Paranthropus?

Systematics

Systematics is the study of the evolutionary relationships between organisms and the grouping of organisms. There are three principal schools of systematics – evolutionary taxonomy (Linnaean or “traditional” taxonomy), phenetics and cladistics. Although there are considerable differences between the three in terms of the methodologies used, all seek to determine taxonomic relationships or phylogenies between different species or between different higher-order groupings, and should, in principle, all come to the same conclusions for the species or groups under consideration.

Some terminology and concepts

One of the most important concepts in systematics is that of monophyly. A monophyletic group is a group of species comprising an ancestral species and all of its descendants, and so forming one (and only one) evolutionary group. Such a group is said to be a natural group. A paraphyletic group also contains a common ancestor, but excludes some of the descendants that have undergone significant changes. For instance, the traditional class Reptilia excludes birds even though they evolved from an ancestral reptile. A polyphyletic group is one in which the defining trait evolved separately in different places on the phylogenetic tree and hence does not contain all the common ancestors, e.g. warm-blooded vertebrates (birds and mammals, whose common ancestor was cold-blooded). Such groups are usually defined as a result of incomplete knowledge. Organisms forming a natural group are said to form a clade, e.g. the amniotes. If however the defining feature has not arisen within a natural group, it is said to be a grade, e.g. flightless birds (flight has been given up by many unrelated groups of birds).

Characters are attributes or features of organisms or groups of organisms (taxa) that biologists use to indicate relatedness or lack of relatedness to other organisms or groups of organisms. A character can be just about anything that can be measured, from a morphological feature to a part of an organism’s genetic makeup. Characters in organisms that are similar due to descent from a common ancestor are known as homologues, and it is crucial to systematics to determine whether the characters under consideration are indeed homologous; e.g. wings are homologous if we are comparing two birds, but if a bird is compared with, say, a bat, they are not, having arisen through convergent evolution, a process whereby structures similar in appearance and function appear in unrelated groups of organisms. Such characters are known as homoplasies. Convergences are not the same as parallelisms, which are similar structures that have arisen more than once in species or groups within a single extended lineage, and have followed a similar evolutionary trajectory over time.

Character states can be either primitive or derived. A primitive character state is one that has been retained from a remote ancestor; derived character states are those that originated more recently. For example, the backbone is a defining feature of the vertebrates and is a primitive state when considering mammals; but the mammalian ear is a derived state, not shared with other vertebrates. However these things are relative. If one considers Phylum Chordata as a whole, the backbone is a derived state of the vertebrates, not shared with the acrania or the tunicates. If a character state is primitive at the point of reference, it is known as a plesiomorphy; if it is derived, it is known as an apomorphy (note that a “primitive” trait in this context does not mean it is less well adapted than one that is not primitive).

Current schools of thought in classification methodology

Biologists devote much effort to identifying and unambiguously defining monophyletic taxa. Relationships are generally presented in tree-diagrams or dendrograms known as phenograms, cladograms or evolutionary trees depending on the methodology used. In all cases they represent evolutionary hypotheses i.e. hypotheses of ancestor-descendant relationships.

Phenetics, also known as numerical taxonomy, was developed in the late 1950s. Pheneticists avoid all considerations of the evolution of taxa and seek instead to construct relationships based on overall phenetic similarity (which can be based on morphological features, or protein chemistry, or indeed anything that can be measured), which they take to be a reflection of genetic similarity. By considering a large number of randomly-chosen phenotypic characters and giving each equal weight, the sums of differences and similarities between taxa should serve as the best possible measure of genetic distance and hence degree of relatedness. The main problem with the approach is that it tends to group taxa by degrees of difference rather than by shared similarities. Phenetics won many converts in the 1960s and 1970s, as more and more “number crunching” computer techniques became available. Though it has since declined in popularity, some believe it may make a comeback (Dawkins, 1986).

By contrast, cladistics is based on the goal of producing testable hypotheses of genealogical relationships among monophyletic groups of organisms. Cladistics originated with Willi Hennig in 1950 and has grown in popularity since the mid-1960s. Cladists rely heavily on the concept of primitive versus derived character states, identifying homologies as plesiomorphies and apomorphies. Apomorphies restricted to a single species are referred to as autapomorphies, whereas those shared between two or more species or groups are known as synapomorphies.

A major task for cladists is identifying which is the plesiomorphic and which is the apomorphic form of two character states. A number of techniques are used; a common approach is outgroup analysis, where clues are sought to ancestral character states in groups known to be more primitive than the group under consideration.

In constructing a cladogram, only genealogical (ancestor-descendant) relationships are considered; thus cladograms may be thought of as depicting synapomorphy patterns, i.e. the patterns of shared similarities hypothesised to be evolutionary novelties among taxa. In drawing up a cladogram based on significant numbers of traits and significant numbers of taxa, the consideration of every possibility is beyond even a computer; computer programs are therefore designed to reject unnecessarily complex hypotheses using the method of maximum parsimony, which is really an application of Occam’s Razor.
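To make the idea of parsimony concrete, here is a minimal sketch (my own, not taken from any particular phylogenetics package) that scores two rival cladograms for a single invented binary character using Fitch’s counting method; the tree requiring fewer character-state changes is the more parsimonious hypothesis.

    def fitch(tree, states):
        """Return (possible ancestral states, number of changes) for a rooted binary tree.

        `tree` is either a leaf name (string) or a (left, right) tuple;
        `states` maps each leaf name to a set of observed character states.
        """
        if isinstance(tree, str):                    # a leaf: no changes below it
            return states[tree], 0
        left_set, left_cost = fitch(tree[0], states)
        right_set, right_cost = fitch(tree[1], states)
        common = left_set & right_set
        if common:                                   # intersection non-empty: no extra change needed
            return common, left_cost + right_cost
        return left_set | right_set, left_cost + right_cost + 1   # union: one extra change

    # Hypothetical data: presence (1) or absence (0) of a trait in four taxa.
    character = {"A": {1}, "B": {1}, "C": {0}, "D": {0}}

    trees = {"((A,B),(C,D))": (("A", "B"), ("C", "D")),   # groups the trait-bearers together
             "((A,C),(B,D))": (("A", "C"), ("B", "D"))}   # splits them up

    for name, tree in trees.items():
        print(name, "requires", fitch(tree, character)[1], "change(s)")
    # the first tree needs one change, the second two, so the first is preferred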

The result will be a family tree – an evolutionary pattern of monophyletic lineages; one that can be tested and revised as necessary when new homologues and species are identified. Trees that consistently resist refutation in the face of such testing are said to be highly corroborated.

A cladogram will often be used to construct a classification scheme. Here cladistics differs from traditional Linnaean systematics. Phylogeny is treated as a genealogical branching pattern, with each split producing a pair of newly-derived taxa known as sister groups (or sister species). The classification is based solely on the cladogram, with no consideration to the degree of difference between taxa, or to rates of evolutionary change.

For example, consider these two classification schemes of the Phylum Chordata.

Classification Scheme A (Linnaean):

Phylum Chordata
    Subphylum Vertebrata (vertebrates)
        Superclass Pisces (fish)
        Class Amphibia (amphibians)
        Class Reptilia (turtles, crocodiles, snakes and lizards)
        Class Mammalia (mammals)
        Class Aves (birds)

Classification Scheme B (Cladistic):

Phylum Chordata
    Subphylum Vertebrata
        Superclass Tetrapoda
            Subclass Lissamphibia (recent amphibians)
            Superclass Amniota
                Class Mammalia (mammals)
                Class Reptilomorpha
                    Subclass Anapsida (turtles)
                    Subclass Diapsida
                        Infraclass Lepidosauria (snakes, lizards, etc)
                        Infraclass Archosauria
                            Order Crocodilia (crocodiles, etc)
                            Class Aves (birds)

In Scheme A, crocodiles are grouped with turtles, snakes and lizards as “reptiles” (Class Reptilia) and birds get their own separate grouping (Class Aves). This scheme considers physical similarities as well as genealogy, but the result is that it contains paraphyletic taxa. Scheme B strictly reflects cladistic branching patterns; the reptiles are broken up, with birds and crocodiles grouped together as sister groups within the Archosauria (which also included the dinosaurs). All the groupings in this scheme are monophyletic. It will be noted that attempts to append traditional Linnaean rankings to each group run into difficulties – birds should have equal ranking with the Crocodilia and should therefore also be categorised as an order within the Archosauria, not as their own class, as is traditional.

Traditional Linnaean systematics, now referred to as evolutionary taxonomy, seeks to construct relationships on the basis of both genealogy and overall similarity/dissimilarity; rates of evolution are an important consideration (in the above example, birds have clearly evolved faster than crocodiles); classification reflects both the branching pattern and the degree of difference of taxa. The approach lacks a clearly-defined methodology, tends to be based on intuition, and for this reason does not produce results amenable to testing and falsification.

© Christopher Seddon 2008

Linnaeus – Princeps Botanicorum

There are very few examples of scientific terminology that have become sufficiently well known to have entered popular culture. The chemical formula for water – H2O – is certainly one; it is so familiar it has even featured in advertisements. Another is the equation E = mc² – while not everybody knows that it defines a relationship between mass and energy, most will have heard of it and will be aware it was formulated by Albert Einstein.

But the most familiar scientific term of all has to be Homo sapiens – Mankind’s scientific name for himself.

The term was originated by the 18th Century Swedish scientist Carl von Linné (1707-78), better known as Linnaeus, who first formally “described” the human species in 1758. It means (some would say ironically!) “wise man” or “man the thinker”. It is an example of what biologists call the binomial nomenclature, a system whereby all living things are assigned a double-barrelled name based on their genus and species. These latter terms are in turn part of a bigger scheme of classification known as the Linnaean taxonomy, which – as the name implies – was introduced by Linnaeus himself.

Man has been studying and classifying the natural world throughout recorded history and probably much longer. A key concept in the classification of living organisms is that they all belong to various species, and this is a very old idea indeed, almost certainly prehistoric in origin. For example, it would have been obvious that sheep all look very much alike, but that they don’t look in the least bit like pigs, and that therefore all sheep belong to one species and all pigs belong to another. Today we refer to organisms so grouped as morphological species.

In addition, the early Neolithic farmers must soon have realised that while a ewe and a ram can reproduce, and likewise a sow and a boar; a ewe and a boar, or a sow and a ram cannot. Sheep and pigs are different biological species, though this definition of a species was not formalised until much later, by John Ray (1628-1705), an English naturalist who proclaimed that “one species could never spring from the seed of another”.

The first attempt at arranging the various species of living organisms into a systematic classification was made by the Greek philosopher Aristotle (384-322 BC), who divided them into two groups – animals and plants. Animals were further divided into three categories – those living on land, those living in the water and those living in the air, and were in addition categorised by whether or not they had blood (broadly speaking, those “without blood” would now be classed as invertebrates, or animals without a backbone). Plants were categorised by differences in their stems.

Aristotle’s system remained in use for hundreds of years but by the 16th Century, Man’s knowledge of the natural world had reached a point where it was becoming inadequate. Many attempts were made to devise a better system, with some notable works being published by Conrad Gessner (1516-65), Andrea Cesalpino (1524-1603) and John Ray (1628-1705).

In addition Gaspard Bauhin (1560-1624) introduced the binomial nomenclature that Linnaeus would later adopt. Under this system, a species is assigned a generic name and a specific name. The generic name refers to the genus, a group of species more closely related to one another than any other group of species. The specific name represents the species itself. For example lions and tigers are different species, but they are similar enough to both be assigned to the genus Panthera. The lion is Panthera leo and the tiger Panthera tigris.

Despite these advances, the science of biological classification at the beginning of the 18th Century remained in a confused state. There was little or no consensus in the scientific community on how things should be done and with new species being discovered all the time, the problem was getting steadily worse.

Step forward Carl Linné, who was born at Rashult, Sweden, in 1707, the son of a Lutheran curate. He is usually known by the Latinised version of his name, Carolus Linnaeus. It was expected that young Carl would follow his father into the Church, but he showed little enthusiasm for this proposed career, and it is said that his despairing father apprenticed him to a local shoemaker before he was eventually sent to study medicine at the University of Lund in 1727. A year later, he transferred to Uppsala. However, his real interest lay in botany (the study of plants), and during the course of his studies he became convinced that flowering plants could be classified on the basis of their sexual organs – the male stamens (which produce pollen) and the female pistils (which receive it).

In 1732 he led an expedition to Lapland, where he discovered around a hundred new plant species, before completing his medical studies in the Netherlands and Belgium. It was during this time that he published the first edition of Systema Naturae, the work for which he is largely remembered, in which he adopted Gaspard Bauhin’s binomial nomenclature, which until then had gained little popularity. Unwieldy names such as physalis amno ramosissime ramis angulosis glabris foliis dentoserratis were still the norm, but under Bauhin’s system this became the rather less wordy Physalis angulata.

This work also put forward Linnaeus’ taxonomic scheme for the natural world. The word taxonomy means “hierarchical classification”, and it can refer both to the practice of classifying and to a particular classification scheme. A taxonomy is a tree structure of classifications for any given set of objects, with a single classification at the top, known as the root node, which applies to all objects. A taxon (plural taxa) is any item within such a scheme, and all objects within a particular taxon are united by one or more defining features.

For example, a taxonomic scheme for cars has “car” as the root node (all objects in the scheme are cars), followed by manufacturer, model, type, engine size and colour. Each of these sub-categories is known as a division. An example of a car classified in the scheme is Car>Ford>Mondeo>Estate>2.3 Litre>Metallic silver. An example of a taxon is “Ford”: all cars within it share the defining feature of having been manufactured by the Ford Motor Company.
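
The tree structure is easy to picture in code. The following Python sketch is purely illustrative – the Taxon class and the sample data are inventions for this example, not any established taxonomic software – but it captures the idea of a root node, divisions and defining features.

    # Illustrative sketch only: a taxonomy as a simple tree of taxa.
    class Taxon:
        def __init__(self, name, defining_feature="", parent=None):
            self.name = name                          # e.g. "Ford"
            self.defining_feature = defining_feature  # what unites everything in this taxon
            self.parent = parent
            self.children = []                        # taxa in the division immediately below
            if parent is not None:
                parent.children.append(self)

        def path(self):
            # The classification from the root node down to this taxon.
            return (self.parent.path() + [self.name]) if self.parent else [self.name]

    car = Taxon("Car", "all objects in the scheme are cars")
    ford = Taxon("Ford", "manufactured by the Ford Motor Company", parent=car)
    mondeo = Taxon("Mondeo", parent=ford)
    estate = Taxon("Estate", parent=mondeo)
    engine = Taxon("2.3 Litre", parent=estate)
    colour = Taxon("Metallic silver", parent=engine)

    print(" > ".join(colour.path()))
    # Car > Ford > Mondeo > Estate > 2.3 Litre > Metallic silver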

The taxonomy devised by Linnaeus, which he refined and expanded over ten editions of Systema Naturae, had six divisions. At the top, as in the car example, is the root node, which Linnaeus designated Imperium (Empire), of which all the natural world is a part. The divisions below this were Regnum (Kingdom), Classis (Class), Ordo (Order), Genus and Species.

The use of Latin in this and other learned texts is worth a brief digression. At the time few scientists spoke any contemporary language beyond their own native tongue, but most had studied the classics, and so nearly all scientific works were published in Latin, including Sir Isaac Newton’s landmark Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) and Linnaeus’ own Systema Naturae. One notable exception was Galileo’s Dialogue Concerning the Two Chief World Systems, written in Italian for a wider audience and thus an early example of “popular science” (though it certainly wasn’t very popular with the Inquisition!).

Linnaeus recognised three kingdoms in his system: the Animal Kingdom, the Plant Kingdom and the Mineral Kingdom. Each kingdom was subdivided by Class, of which the animal kingdom had six: Mammalia (mammals), Aves (birds), Amphibia (amphibians), Pisces (fish), Insecta (insects) and Vermes (worms). The Mammalia are those animals that suckle their young. It is said that Linnaeus adopted this as the defining feature of the group because of his strongly-held view that all mothers should breastfeed their own babies. He was strongly opposed to the then-common practice of “wet nursing”, and in this respect he was very much in tune with modern thinking.

Each class was further subdivided by Order, with the mammals comprising eight such orders, including the Primates. Orders were subdivided into Genera, with each Genus containing one or more Species. The Primates comprised the Simia (monkeys, apes, etc) and Homo (man), the latter containing a single species, sapiens (though Linnaeus initially also included chimpanzees and gibbons).

The Linnaean system did not accord equal status to apparently equal divisions; thus the Mineral Kingdom was ranked below the Plant Kingdom, which in turn sat below the Animal Kingdom. Similarly, the classes were assigned ranks, with the mammals ranking highest and the worms lowest. Within the mammals the Primates received top billing, with Homo sapiens assigned to pole position therein.

This hierarchy within a hierarchy reflected Linnaeus’ belief that the system reflected a Divine Order of Creation, with Mankind standing at the top of the pile and indeed the term “primate” survives to this day as a legacy of that view. It should be remembered that the prevalent belief at the time of Linnaeus was that the Earth and all living things had been produced by God in their present forms in a single act. This view, now known as Creationism, wasn’t seriously challenged until the 19th Century.

Linnaeus’ system was an example of natural theology, which is the study of nature with a view to achieving a better understanding of the works of God. It was heavily relied on by the deists of that time. Deists believe that knowledge of God can be deduced from nature rather than having to be revealed directly by supernatural means. Deism was very popular in the 18th Century and its adherents included Voltaire, Thomas Jefferson and Benjamin Franklin.

Though some were already beginning to question Creationism, Linnaeus was not among them, and he proclaimed that “God creates, Linnaeus arranges”. It has to be said that modesty wasn’t Linnaeus’ strongest point, and he proposed that Princeps Botanicorum (Prince of Botanists) be engraved on his tombstone. He was no doubt delighted with his elevation to the nobility in 1761, when he took the name Carl von Linné.

Linnaeus did have his critics and some objected to the bizarre sexual imagery he used when categorising plants. For example, “The flowers’ leaves…serve as bridal beds which the Creator has so gloriously arranged, adorned with such noble bed curtains, and perfumed with so many soft scents that the bridegroom with his bride might there celebrate their nuptials with so much the greater solemnity…”. The botanist Johann Siegesbeck denounced this “loathsome harlotry” but Linnaeus had his revenge and named a small and completely useless weed Siegesbeckia! In the event Linnaeus’ preoccupation with the sexual characteristics of plants gave poor results and was soon abandoned.

Nevertheless, Linnaeus’ classification system, as set out in the 10th edition of Systema Naturae, published in 1758, is still considered the foundation of modern taxonomy, and it has been modified only slightly.

Linnaeus continued his work until the early 1770s, when his health began to decline. He was afflicted by strokes, memory loss and general ill-health until his death in 1778. In his publications, Linnaeus provided a concise, usable survey of all the world’s then-known plants and animals, comprising about 7,700 species of plants and 4,400 species of animals. These works helped to establish and standardise a consistent binomial nomenclature for species, including our own.

We long ago discarded the “loathsome harlotry” and the rank of Empire. Two new ranks have been added: Phylum lies between Kingdom and Class, and Family lies between Order and Genus, giving seven hierarchical ranks in all. In addition, prefixes such as sub-, super-, etc. are sometimes used to expand the system. (The optional divisions of Cohort (between Class and Order) and Tribe (between Family and Genus) are also sometimes encountered, but will not be used here.) The Mineral Kingdom was soon abandoned, but other kingdoms were added later, such as Fungi, Monera (bacteria) and Protista (single-celled organisms including the well-known (but actually quite rare) Amoeba), and most systems today employ at least six kingdoms.

Under this revised scheme, Mankind is classified as follows:

Kingdom: Animalia (animals)
Phylum: Chordata (possessing a stiffening rod or notochord)
Sub-phylum: Vertebrata (more specifically possessing a backbone)
Class: Mammalia (suckling their young)
Order: Primates (tarsiers, lemurs, monkeys, apes and humans)
Family: Hominidae (the Hominids, i.e. modern and extinct humans, the extinct australopithecines and, in some recent schemes, the great apes)
Genus: Homo
Species: sapiens
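
For completeness, the same classification can be written down as a simple ordered mapping from rank to taxon. The short Python sketch below is purely illustrative; the variable names and helper lines are my own, not any recognised taxonomic software.

    # Illustrative only: Mankind's classification as an ordered mapping of rank to taxon.
    HOMO_SAPIENS = {
        "Kingdom": "Animalia",
        "Phylum": "Chordata",
        "Sub-phylum": "Vertebrata",
        "Class": "Mammalia",
        "Order": "Primates",
        "Family": "Hominidae",
        "Genus": "Homo",
        "Species": "sapiens",
    }

    # Each rank simply nests inside the one above it; printing them in order
    # reproduces the list given in the text.
    for rank, taxon in HOMO_SAPIENS.items():
        print(f"{rank}: {taxon}")

    # The binomial name is recovered from the last two ranks.
    print(f"{HOMO_SAPIENS['Genus']} {HOMO_SAPIENS['Species']}")  # Homo sapiens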

It should be noted that while we now regard all equivalent-level taxa as being equal, the updated scheme would work perfectly well if we had continued with Linnaeus’ view that some taxa were rather more equal in the eyes of God than others, and it is in no way at odds with the tenets of Creationism. The Linnaean taxonomy shows us where Man fits into the grand scheme of things, but it has nothing to tell us about how we got there. It was left to Charles Darwin to point the way.

© Christopher Seddon 2008