The Prehistory of the Mind: A Search for the Origins of Art, Science and Religion (1996), by Steven Mithen

Steven Mithen is Professor of Archaeology at the University of Reading and is a pioneer in the field of cognitive archaeology, which is the branch of archaeology that investigates the development of human cognition.

Mithen’s 1996 work The Prehistory of the Mind: A Search for the Origins of Art, Science and Religion is an ambitious attempt to bring to bear an interdisciplinary approach to the evolutionary origins of the human mind. The book is aimed at the non-specialist and was generally well-received upon its publication.

The following is an extended summary of Mithen’s book:

Chapter 1: “Why ask an archaeologist about the human mind?” Mithen touts his book as an archaeologist’s approach to a problem normally tackled by psychologists and neurologists.

There have been two major spurts of brain enlargement in humans and proto-humans – one between 2.0 and 1.5 million years ago (Homo habilis) and a lesser one between 500,000 and 200,000 years ago. The first is linked to the development of tool-making, but there was no great behavioural advance at that time. Brains had already reached present-day size when two dramatic transformations occurred. One was a “cultural explosion” at 60,000 – 30,000 years ago (art, complex technology and religion) and the other was at 10,000 years ago (agriculture). Brains are expensive to run in terms of the body’s energy-budget, so what were they used for before the cultural explosion? What was going on between the major enlargement spurts? What caused the cultural explosion? How did language and consciousness arise? When did modern intelligence arise – indeed what is modern consciousness?

Chapter 2: “The Drama of our Past”. Mithen presents prehistory from 6 million years ago to the present day as a play in four acts.

Act 1 6-4.5 million years ago features the common ancestral ape (“the missing link”). Nothing much happens during this act.

Act 2 4.5-1.8 million years ago. Starts in Africa – initially Chad, Kenya, Ethiopia and Tanzania for Scene 1; enlarges to include South Africa for Scene 2. Act opens with Australopithecus ramidus, the first of the australopithecines who is joined by A. anamensis 300,000 years later. Both live in wooded environments and are principally vegetarian. At 3.5 million years ago, they are replaced by Lucy (A. afarensis) who can both walk upright and climb trees. She is on-stage for 0.5 million years, then leaves and nothing happens until Scene 2 opens at 2.5 million years ago. Right at the end of Scene 1 we see primitive stone tools (Omo industrial complex), but we cannot see who made them. A rush of actors appear with Scene 2 – gracile australopithecines (A. africanus) in the South and robusts in both East and South. The first humans (Homo habilis, etc) are seen at 2.0 million years. They carry tools, stone artefacts known as the Oldowan industry. Habilis butchers animals but we cannot see if they have been hunted or merely scavenged. The remaining australopithecines become more and more robust.

Act 3 1.8 million – 100,000 years ago. The act begins with a grand announcement: “The Pleistocene begins”. Ice sheets form in high latitudes. Scene 1 – the Lower Palaeolithic. The Homo habilis actors exit and are replaced by Homo erectus who is taller and bigger-brained. The robust australopithecines skulk around in the background until 1 million years ago. H. erectus appears simultaneously in East Africa, China and Java. Erectus persists in East Asia until 300,000 years ago but elsewhere we see actors with more rounded skulls referred to as archaic Homo sapiens. By 500,000 years ago the stage expands to include Europe and a new actor, the large Homo heidelbergensis. New tools join the Oldowan tools in this act, pear-shaped hand-axes that are first seen in East Africa at 1.4 million years ago but soon spread to all parts of the stage except south-east Asia where no tools are seen (possibly they used bamboo). Scene 2 – the Middle Palaeolithic. This scene opens 200,000 years ago though the distinction is blurred and is gradually being phased out. However new tools are replacing the hand-axes and include those made by the Levallois method, which show regional variation. At 150,000 years ago Homo neanderthalensis appears in Europe and the Near East. Like other actors he has to deal with frequent and dramatic changes to the scenery as Ice Ages come and go and vegetation changes from tundra to forest. For all this, tool kits change very little for a million years. Brain size is modern but there is still no art, religion or science.

Act 4 100,000 years ago to present day. Scene 1 covers 100,000 to 60,000 years ago. Homo sapiens sapiens [at the time of writing it was still believed that modern humans and Neanderthals were subspecies rather than separate species] joins a cast that includes archaics and Neanderthals. In the Near East, Homo sapiens sapiens bury their dead (as indeed do the Neanderthals) but also place parts of animal carcasses on the bodies as grave goods. In South Africa ochre is used and bone is used to make harpoons (first use of anything other than stone or wood). In Scene 2 (beginning 60,000 years ago) Homo sapiens sapiens builds boats and reaches Australia. Blade technology appears where flakes are removed from prepared prismatic cores. 40,000 years ago we enter the Upper Palaeolithic in Europe and the Late Stone Age in Africa. Diverse props are made from new materials including bone and ivory: beads, necklaces, animal and human carvings, cave paintings; bone needles are used to sew clothes. For about 10,000 years the Neanderthals may be trying to mimic Homo sapiens sapiens but then they fade out, leaving Homo sapiens sapiens alone on the world stage. Cave art flourishes in Europe as the Ice Age is at its height 30,000 – 12,000 years ago. As the ice retreats the scenery fluctuates between cold/dry and warm/wet and the stage expands to take in the Americas. Scene 3 begins with the Holocene. Agriculture appears in the Near East, towns and cities appear, empires rise and fall; carts become cars and tablets become word processors as the final curtain falls.

Chapter 3. “The Architecture of the Modern Mind”. By exposing the architecture of the modern mind and taking it apart we can learn much about how it evolved.

Can the mind of a young child be regarded as a sponge, soaking up knowledge, or is it better thought of as a computer? A computer can take in data, run a program to process it and output the result. Could a young child’s mind be thought of as running a general-purpose learning program to process the knowledge they are soaking up? In fact this analogy is not a good one. Children do more than process data – they think, create and imagine.

In 1979 the US archaeologist Thomas Wynn published an article claiming the modern mind existed 300,000 years ago, before anatomically modern man. He suggested that the phases of mental development of a child reflect the phases of the cognitive evolution of mankind (the idea that “ontogeny recapitulates phylogeny”). Wynn drew on the work of child psychologist Jean Piaget, who believed that the mind is like a computer, running a small set of general-purpose programs that control the entry of new information and restructure the mind so that it passes through a series of developmental phases. According to Piaget, the last phase is reached at age 12, when the child acquires “formal operational intelligence” and can think about hypothetical objects and events. As such intelligence is required to make a hand-axe – the maker needs to be able to visualise the finished product – Wynn concluded that the makers of such hand-axes, who lived 300,000 years ago, must have possessed modern minds.

However, Mithen is dubious and believes that the events in Act 4 must have required further cognitive developments. Since Wynn’s reasoning is sound, Piaget must be wrong. In fact Piaget’s ideas of general-purpose learning programs are now widely disputed by psychologists, who have instead begun to liken the mind to a Swiss Army knife, with specialised devices to perform different tasks.

The philosopher and cognitive scientist Jerry Fodor published “The Modularity of Mind” in 1983. According to Fodor, the mind has two parts – input systems and cognition or central systems. The input systems are a series of discrete modules with dedicated architectures that govern sight, hearing, touch, etc. Language is also regarded as an input system. However the cognitive or central system has no architecture at all – this is where “thought”, “imagination” and “problem solving” happen and “intelligence” resides.

Each input system is based on independent brain processes and they are quite different from each other, reflecting their different purposes. These systems are localized in specific areas of the brain. The input systems are mandatory: if, for example, somebody sits behind you on the bus and spends the entire journey gassing away on their mobile, you cannot switch off the hearing module. However this has the advantage of saving time that would otherwise be spent on decision-making.

Fodor believes that the input systems are “encapsulated”, i.e. they do not have direct access to the information being acquired by other input systems. What one is experiencing at a given time in one sensory modality does not affect any of the others.

A second feature of the input modules is that they only have limited information from the central systems. Fodor cites a number of optical illusions such as the Müller-Lyer arrows, which continue to appear to differ in length even when one is fully aware that they do not. The input modules are essentially “dumb” systems that act independently of the cognitive system and each other. To sum up, they are encapsulated, mandatory, fast-operating and hard-wired. Perception is innate, i.e. hard-wired into the mind at birth.

The central cognitive systems are very different to the “dumb” input systems. According to Fodor, they are “smart”, they operate slowly, are unencapsulated and domain-neutral, i.e. they cannot be related to specific areas of the brain.

The Fodorian view is that evolution has given the modern human mind the best of both worlds: input modules that can enable swift, unthinking reactions in situations of danger (predators, etc) or opportunity (prey, etc) on one hand; and a slower central cognitive system, to be used when there is time for quiet contemplation, integrating information of many types and from many sources.

Also published in 1983 was Howard Gardner’s “Frames of Mind: The Theory of Multiple Intelligences”. Gardner was as much concerned with practical issues such as devising educational policies for schools as with the philosophy of the mind, and he put forward a very different architecture to Fodor’s. The entire mind is a Swiss army knife, with seven “blades”: linguistic, musical, logical-mathematical, spatial, bodily-kinaesthetic and two forms of personal intelligence, one for looking into one’s own mind and one for looking outwards into the minds of others. Gardner’s modules are smart, interact with each other, and can be used for problem solving. The smartest people are those who can use the modular domains synergistically, e.g. by use of metaphor and analogy.

Mithen speculates that the two approaches are closer than might at first appear to be the case. Fodor’s non-modular central system might appear that way because its modules function so smoothly that the modularity within simply cannot be discerned.

In 1992 the evolutionary psychologists Leda Cosmides and John Tooby (abbreviated to “C&T” by the author) entered the fray with an essay published in “The Adapted Mind”, co-edited with Jerome Barkow. On their view, the human mind evolved under selective pressures during the Pleistocene. We remain adapted to that environment, as it ended so recently. The mind is like a Swiss army knife with many blades, each designed by natural selection to deal with problems faced by hunter-gatherers. The modules are “Fodor type”, i.e. hard-wired, but are “content rich” and possess not just algorithms for solving problems but a built-in knowledge-base. Some are activated at birth – e.g. those for making eye-contact with the mother; others need a little time, such as language-recognition.

Hunter-gatherers would have needed specific modules for specific tasks, as a more general-purpose reasoning system would have been prone to errors – e.g. committing incest or failing to share food with kin – if indeed it could reach a decision about anything at all; thus the Swiss army knife model was selected for.

C&T believe children could not learn complex subjects rapidly without content-rich mental modules pre-programmed to do so (cf. Noam Chomsky’s theory about built-in grammar systems).

The dedicated systems are also used to make rapid decisions – e.g. running if faced with a lion rather than weighing up the pros and cons of the situation and getting eaten.

Looking at the lifestyle, C&T predict the modules that would be needed: face-recognition, spatial relations, rigid object mechanics, fear, social exchange, emotion, kin-orientated motivation, effort allocation and recalibration, childcare, social awareness, friendship, grammar, communications, theory-of-mind etc. These modules are grouped together in domains called “faculties”.

But does this model explain the existence of a genius like Einstein? Could a mind purpose-built for life as a hunter-gatherer expand the boundaries of human knowledge? Mithen considers present-day hunter-gatherers and notes that all think of the natural world in social terms – anthropomorphic (animals with human-like characteristics) and totemic (kinship with animals and plants) thinking. This contradicts C&T’s model, which assumes that different “blades” would be used for the natural and social worlds, and that people would not think about the natural world as if it were a social being. Children will anthropomorphise a cat and interact with a doll as if it were a living person. How can this be squared with content-rich modules? The human passion for analogy and metaphor is a problem for C&T. How can this be resolved?

Mithen then considers child development and four domains of instinctive intelligence – language, psychology, physics and biology.

1) Language has already been considered.

2) Psychology – children possess a “theory of mind”, i.e. they can predict what others are thinking (autism is the absence of this ability). Alan Leslie and others developed this idea, originally put forward by Nicholas Humphrey in “The Social Function of Intellect” – such an ability will be selected for when individuals live in a group. Such behaviour could not be learned by young children from experience alone and thus must be innate (i.e. the content must be there already). Humphrey believes that the biological purpose of reflexive consciousness is to model the mind of another individual – in other words, to reflect on how we ourselves would feel in a given situation.

3) Biology. Children realise that animals have an immutable “essence” – a three-legged dog that can’t bark is still a dog; putting striped pyjamas on a horse doesn’t transform it into a zebra, etc. Scott Atran (1990) notes that among all known cultures certain concepts are universal: vertebrates, flowering plants, sequential naming conventions (e.g. spotted shingle oak); taxa that are morphologically similar; higher taxa such as birds and fish; trees and grass. Such “natural history intelligence” would be vital to hunter-gatherers.

4) Physics. Young children instinctively understand solidity, gravity and inertia. Children understand the difference between living and inanimate things. There are obvious advantages to having this knowledge from Day 1.

So how do we resolve the paradox of children applying inappropriate rules of psychology, biology and language when playing with inanimate objects?

Mithen goes back to the notion of “ontogeny recapitulates phylogeny”. The evidence for content-rich modules comes from studies of children aged 2-3. Developmental psychologist Patricia Greenfield suggests that before that age the Swiss army knife modules aren’t there – instead there is only a simple general-purpose learning program. The language explosion begins at 2, suggesting the content-rich modules only cut in then. Prior to that the child’s mind is like that of a chimpanzee. The child’s mind has metamorphosed from a computer running a general-purpose program into a Swiss Army knife.

However, according to Annette Karmiloff-Smith, the final stage of mental development has yet to come. Her 1992 work “Beyond Modularity” attempts a synthesis of Piaget’s and Fodor’s work. Her view is that the dedicated content-rich modules kick-start the development of cognitive domains. Rather than having mental modules grouped in faculties, Karmiloff-Smith has domains composed of micro-domains. Cultural development shapes these domains, and while hunter-gatherers didn’t need a maths domain, one can develop in a modern child under appropriate cultural conditions.

After modularisation, the modules begin working together. Karmiloff-Smith describes this as representational redescription (RR). RR results in multiple representations of similar knowledge and consequently knowledge becomes applicable beyond the special purpose goals for which it is normally used and perceptual links across domains can be forged. Thoughts can arise that had previously been trapped in one domain.

Developmental psychologists Susan Carey and Elizabeth Spelke have proposed a similar idea. “Mapping across domains”, or duplicating the same data in different domains, is a fundamental feature of cognitive development and one that might account for cultural diversity. This can be compared to Gardner’s view of smartest people using the different domains synergistically, as exemplified by use of analogy and metaphor.

Margaret Boden’s 1990 work The Creative Mind explores how we can account for creative thought and concludes this arises from what she describes as the transformation of conceptual spaces. Boden’s conceptual spaces are similar to cognitive domains. Transformation of one of these involves the introduction of new knowledge, or new ways of processing the knowledge that is already contained within the domains.

The evidence for thought requiring knowledge from multiple cognitive domains is overwhelming and it is clearly a critical feature of mental architecture. To account for it, Paul Rozin argued that the processes of evolution should result in a host of modules within the mind. But rather than add more modules as C&T suggested, Rozin believed that accessibility between the modules is a critical feature in both child development and evolution.

Basically partial de-modularisation appears to be essential for creative thought and a fully-developed modern mind.

But the French anthropologist and cognitive scientist Dan Sperber believes we can have it both ways – full modularity and creativity. He believes we have a module of meta-representation, or MMR. This holds metadata – representations of representations – which can be updated as new data becomes available. For example, new data about cats is matched against a “meta-cat” and used to update it. The MMR thus acts as a clearing-house for new ideas. Ideas that cannot find a home stay in the clearing-house.

“Mischief can occur in the clearing-house” – ideas about dogs can get mixed up with ideas about inanimate objects, thus a stuffed toy can be made to represent a real dog. But this crossover shouldn’t happen according to C&T – it could lead to errors like eating a plastic banana. In fact this doesn’t happen: however fevered our imagination, we can (on the whole) distinguish it from reality. Can C&T’s ideas (full modularity) be reconciled with Karmiloff-Smith, Carey, Spelke (partial modularity) and Sperber (clearing-house)? Mithen believes they can in an evolutionary context.

Chapter 4. “A new proposal for the mind’s evolution”. The mind is likened to a cathedral built in several architectural phases. Mithen adopts the premise that ontogeny recapitulates phylogeny, due to E. Conklin in 1928, documented by Stephen Jay Gould in 1977 and drawn on by Thomas Wynn. Mithen also considers and rejects neoteny, however useful this is for explaining the morphological development of modern humans.

Phase 1. Minds dominated by a general intelligence. Such minds have a single “nave” in which all the services take place; these are the thought processes. Fodor’s input modules are present for delivering information to the nave, but it lacks the complex cognitive systems Fodor sees in the modern mind. This is a nave of simple general intelligence, similar to that of young (under 2 years old) children. The behaviour is simple, the rate of learning slow, errors frequent, and complex behaviour patterns cannot be acquired.

Phase 2. To the central nave are added chapels of specialised intelligence or cognitive domains. Just as the addition of side chapels to Romanesque cathedrals in the 12th Century reflects the increasing complexity of church ritual, so these chapels reflect the increasing complexity of mental activity. Some of the modules found in the chapels were present in the original nave, but they have now been grouped together in the appropriate chapel.

There are four chapels of specialised intelligences – social (group behaviour, intentionality (“mind-reading”)), technical (tool-making, etc), natural history (weather, geography, animal behaviour, etc) and language. The first three are totally separate from the nave and each other, divided by thick walls. Thought, when it occurs, is confined to that domain. If a thought requires more than one domain – e.g. a tool for hunting a particular animal – it happens in the general intelligence. Accordingly, thought and behaviour at such “domain interfaces” is far simpler than within a single domain.

The relationship of the language chapel (which Fodor saw as an input module) to everything else is unknown at this stage.

Such minds are similar to those of children aged 2-3 as described by Karmiloff-Smith.

Phase 3. Direct access is possible between the chapels and indeed a new “superchapel” corresponding to Sperber’s MMR may also be present. Experience gained in one domain can now influence that in another. This is the synergistic interaction of Gardner’s theory – metaphor and analogy become possible. The nave acquires a greater complexity in its services: this central service is Fodor’s central cognitive system of the mind. It’s like a Gothic cathedral rather than a Romanesque one, where there are no thick walls between the chapels and sound, space and light interact. The mind now possesses “cognitive fluidity.”

How did all this come about? Mithen turns to the chimpanzee.

Chapter 5. “Apes, monkeys and the mind of the missing link”. The mind of the common ancestor of humans and modern chimpanzees (the so-called “missing link”) is assumed to be equivalent to that of the latter. Mithen believes the case for chimp intelligence has been overstated. He concedes that they can make and use rudimentary tools (termite sticks, loo paper, etc) but offspring are slow to pick ideas up. This indicates the absence of a “technical intelligence”. There is no chimp “culture” as such. While different groups of chimps use different tools (e.g. some use termite sticks, some don’t) this is purely because nobody in a particular group ever discovered the technique – it is the absence of a problem-solving approach rather than the presence of culture.

Chimps have only basic natural history intelligence. They are good at making foraging decisions based on a continually-updated mental map of known resources. They will sometimes move hammer-stones and nuts considerable distances to anvil stones. But they lack the ability to make creative and flexible use of their knowledge – for example one group, while playing with a juvenile duiker (a type of small antelope) killed it but failed to eat it because they normally hunt the more abundant colobus monkeys.

On the other hand the social intelligence of chimps and their Machiavellian scheming is well-documented. They clearly have a theory of mind and practice deception (though it only appears to work for other chimps and not humans). Examples of deception include a subordinate male placing his hand over his erect penis so it remains visible to the target female, but concealed from a nearby dominant male.

Chimpanzees appear to have a conscious awareness of their own minds but this only extends to social interactions, not to tool-making or foraging.

Chimp linguistic skills are rudimentary. They can create sentences and use grammar, but only in the most limited way. Indeed bird song is more analogous to human speech, and convergent evolution has probably given birds a dedicated speech module. Song plays a major role in the social life of birds, much more so than vocalisation in non-human primates.

Chimps have a moderately good general intelligence, a specialised social intelligence domain and a domain for mapping resource distribution – a very basic natural history module. Tool-making and foraging use the same mental processes – general intelligence – and seem well integrated, albeit limited. However the integration between social and tool-making skills is poor – for example, adults rarely tutor offspring about making and using tools, despite the obvious benefits of doing so.

The chimp’s mind is mid-way between Phase 1 and Phase 2. There is a general intelligence and the first “chapel”, one for social intelligence.

Monkeys also have a complex social life, but it is simpler than that of chimps. They get confused by their own reflections. They have no concept of self. They have no theory of mind.

The common ancestor of monkeys, apes and lemurs – Notharctus – lived 55 million years ago and was probably even more primitive, possessing general intelligence only. This could handle simple learning rules for reducing food acquisition costs and facilitating kin recognition. There was as yet no social module, and Notharctus’ interaction with the social world was probably no more complex than their interaction with the non-social world (similar to present-day lemurs). Their minds were Phase 1.

Chapter 6. “The mind of the first stone toolmaker”. Mithen now moves on to consider the australopithecines and Homo habilis [which he uses as a convenient catch-all for the various human species then existing]. He looks at the tools – the Omo tools (australopithecines) are little more than smashed stone nodules, possibly even within the range of modern chimps. The Oldowan-type flakes used by H. habilis are more advanced and include sharp flakes for butchery and nodules that can be used for breaking open bones for marrow. They are beyond anything a chimp could produce as they require some knowledge of fracture dynamics, but they are still very simple and show no attempt at the imposition of a preconceived form, the finished product reflecting the character of the original nodule, the number of flakes, and the order in which they were detached. The materials worked were mainly quartzite and basalt – more demanding than stripping leaves off twigs to make termite sticks, but less so than working material such as cherts (flints). So it would appear that H. habilis had only rudimentary technical intelligence.

Mithen now considers natural history intelligence. It is accepted that H. habilis consumed more meat than does a chimp. There are many sites aged 2.0 – 1.5 million years with a mixture of animal bones and stone artefacts. Where these animal bones can be identified (which isn’t often), it appears H. habilis’ diet included zebra, antelope and wildebeest. The bones show butchery cut marks. Additionally, the relatively large brain implies a high-quality diet in terms of calories, given brains are very expensive to run in terms of energy requirements.

In the 1980s there was much, often acrimonious, debate about how the bone fragments should be interpreted. Glynn Isaac (late 1970s) proposed that the sites represented home bases where food was brought from various locations and shared, and where infants were cared for. This implies prolonged infant dependency and linguistic communication. But Lewis Binford published the seminal Bones: Ancient Men and Modern Myths in 1981. In this he argued there was no evidence for the transporting and consumption of large quantities of meat. Instead early humans were marginal scavengers, taking leftovers at the bottom of the hierarchy of meat eaters on the African savannah. Other models were proposed but no firm conclusions reached, partly because of the poor archaeological record but also because H. habilis probably had a diverse lifestyle with flexibility between hunting and scavenging as circumstances dictated.

What would be the cognitive implications of flexible meat eating? To the chimp’s ability to build mental resource maps and move tools around would need to be added the ability to form hypotheses about carcass/animal location. Also, habilis took the food to the tools (rather than just the other way round). For all this, the range of environments exploited was very narrow compared to later humans and was probably tied to the edges of permanent water sources. The behavioural flexibility implying full-blown natural history intelligence was absent.

Mithen goes on to consider social intelligence in Homo habilis. Among living primates there is a relationship between brain size and group size. This was pointed out by Robin Dunbar, who believed it is also a measure of social intelligence. Dick Byrne found that deception occurs more frequently with larger brains – the larger and more complex the social scene, the more devious one must be to win friends and influence people. But does this rule apply to extinct primates like australopithecines and H. habilis? The latter was more advanced than living primates with tool-making and rudimentary natural history modules. However these were such recent developments that Mithen believes the rule probably holds good. Dunbar plugged estimated brain sizes for australopithecines and H. habilis into a formula derived from living primates to obtain group sizes of 67 for australopithecines and 82 for H. habilis, compared with 60 for chimps. These group sizes are what Dunbar refers to as “cognitive groups” whom one “knows socially”, as opposed to the total group size.
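Dunbar's formula is a simple log-log regression of group size on relative neocortex size fitted across living primates. A minimal sketch of the calculation, using coefficients from Dunbar's published fit quoted from memory (treat them, and the chimpanzee neocortex ratio used here, as illustrative rather than definitive):

```python
import math

# Dunbar's regression predicts mean group size N from the neocortex
# ratio CR (neocortex volume / volume of the rest of the brain):
#   log10(N) = a + b * log10(CR)
# Coefficients below follow Dunbar's published fit for living primates,
# quoted from memory -- illustrative only.
A, B = 0.093, 3.389

def predicted_group_size(neocortex_ratio: float) -> float:
    """Predicted mean group size for a given neocortex ratio."""
    return 10 ** (A + B * math.log10(neocortex_ratio))

# A chimpanzee-like neocortex ratio of about 3.2 predicts a group size
# in the region of 60, consistent with the figure quoted above.
print(round(predicted_group_size(3.2)))
```

For fossil species the neocortex ratio itself has to be estimated from cranial capacity, which is how figures such as 67 for australopithecines and 82 for H. habilis were derived.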

Circumstantial evidence supports the larger group sizes. Two factors make primates live in larger groups. Firstly there is a better chance of beating off predators, or at least somebody else getting eaten. Skulls pierced by leopards prove that H. habilis did get eaten by predators. The other advantage is when food comes in large, unevenly distributed parcels. These are difficult to locate and too much for small groups to eat. A bigger group increases the probability of locating such a parcel, and there is enough to go round, so it can be shared with the rest of the group. So it would appear that there were selective pressures acting in favour of an enhanced social intelligence for H. habilis.

H. habilis could probably handle higher levels of intentionality than their predecessors. Modern humans can track five or six levels of intentionality. Chimps can manage two; the increased social intelligence of H. habilis could probably manage three or four.

Did H. habilis have language capability? Fully-developed language requires fully-developed mental modules for language, as we know from the development of children’s linguistic abilities. In modern humans Broca’s area is associated with grammar and Wernicke’s area with comprehension. The 2 million year old H. habilis specimen from Koobi Fora (KNM-ER 1470) is well-preserved and has been examined by Phillip Tobias who believes that Broca’s area is present and Dean Falk confirms this. By contrast there is no evidence for Broca’s area in australopithecines. Terrence Deacon has argued that the pre-frontal cortex in early humans has been disproportionately enlarged and that this would lead to a re-organisation of connections within the brain favouring the development of linguistic capacity – but it’s not clear how far this process had developed 2 million years ago.

Robin Dunbar found that as primate group size increases so does the percentage of time spent grooming. If grooming time goes above 30% there isn’t enough time to look for food. Vocalising means more than one group member can be “groomed” simultaneously and thus grooming time can be reduced. From the inferred group size for H. habilis, grooming time is 23% and there would likely have been selective pressure to reduce grooming time by use of vocalisation. This may have been no more sophisticated than the chattering of baboons or purring of cats.
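The grooming-time relationship can be illustrated the same way. This is a minimal sketch assuming Aiello and Dunbar's linear regression of grooming time on group size (roughly G% ≈ 0.287 × N − 0.78; treat the coefficients as approximate):

```python
def grooming_time_pct(group_size: float) -> float:
    """Approximate % of the day spent grooming, from Aiello & Dunbar's
    linear regression on living primates (coefficients approximate)."""
    return 0.287 * group_size - 0.78

# H. habilis, with Dunbar's inferred group size of 82:
print(f"H. habilis: ~{grooming_time_pct(82):.0f}% grooming time")  # ~23%
# The ~30% figure is the ceiling beyond which grooming crowds out foraging.
```

The same regression gives around 40% for the larger group sizes inferred for archaic Homo sapiens, which is the figure Mithen cites later when discussing the pressure toward a "social language".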

To sum up, though H. habilis is clearly an advance on the common ancestor, the basic design is the same and the “chapels” are as yet incomplete.

Chapter 7. “The multiple intelligences of the Early Human mind”. Act 3, from 1.8 million to 100,000 years ago, features a better archaeological record. Detailed and accurate reconstructions of past behaviour can often be made, but the behaviour revealed seems almost bizarre in nature. “Early Human” lumps together Homo erectus, H. heidelbergensis and H. neanderthalensis.

Once again, Mithen considers technical, natural history, social and linguistic intelligence. Technical intelligence increased with the development of the hand-axe, which often shows 3D symmetry, indicating the “knapper” was intent on imposing form on the artefact rather than just creating a sharp edge as in the Oldowan tradition. This is very difficult and requires forward planning. The Levallois method (typically used by the Neanderthals) requires even more technical skill. In neither case can the procedure be reduced to a series of fixed rules that can be followed by rote; both visual and tactile cues must be used to monitor the artefact’s constantly changing shape and adjust plans for how it should develop. Tougher-to-work materials such as quartzite and chert are now used, and different types of artefact are made from different materials.

Natural history intelligence improved – H. erectus was able to cope with conditions outside the African savannahs where there is more seasonality. From Wales to South Africa we find Early Humans – but very dry and very cold environments were too much for them and they didn’t reach Australia or the Americas. Early humans gathered, scavenged and hunted in a flexible manner.

In the glaciated landscapes of Europe, the Neanderthals flourished for over 200,000 years – an impressive achievement. The frequent environmental changes as glaciers advanced and retreated meant that available foods changed, making life even harder; 70–80% died before the age of 40. Given that their technology was primitive compared with that of modern humans, they must have had natural history intelligence at least as good as that of modern humans in order to survive.

Mithen then poses four questions:

1) Why were bone, antler and ivory not used for tool-making? Because Early Humans could not think of animal parts (catered for under natural history) in the tool-making technical domain.

2) Why were tools for specific tasks not made, e.g. spear points show little variation in the Old World although a considerable variety of animals were hunted? Mithen claims this was because Early Humans couldn’t cross-link technical intelligence with animal behaviour (natural history intelligence).

3) Why were no multi-component tools ever made, e.g. erectus never made hafted tools though the Neanderthals and others did occasionally? Such tools were usually made for specific types of prey – same explanation as 2) above.

4) Why was there so little variation in both time and space? Over a million years, and across Africa, Western Europe, the Near East and India, toolkits show little variation. Again, this is because there was no integration between tool-making and prey availability.

The “social intelligence” of Early Humans must have been at least as good as that of chimps. Leslie Aiello and Robin Dunbar predict “social knowledge” group sizes of 111 (erectus), 131 (archaic sapiens) and 144 (Neanderthals), cf. 150 for modern humans. Mithen believes the true figures were slightly lower, as some brain power must have been needed for other domains. As with H. habilis, “large packet” foods of limited availability would have favoured group living. However, tensions could have arisen in larger groups, and during milder interglacial times much smaller groups were favoured. Such flexibility in social relationships is at the heart of social intelligence. The proven care of the disabled and elderly is further proof.

Four more questions.
1) Why do the settlements of Early Humans imply universally small social groups, contrary to Dunbar’s theories? The false assumption here is that the mind of Early Humans was like that of modern humans, so that small settlements imply a lack of social cohesion. But if technical and social intelligences were not integrated, tool-making and social activities might not have taken place at the same sites; the interpretation of the archaeological record could be in error because the “social networking” sites are now invisible in it.

2) Why do distributions of artefacts on sites suggest limited social interaction? (Knapping and butchery debris are strewn around all over the place suggesting no dedicated food-processing and tool-making areas.) There is no integration between social and technical, or social and natural history. Eating is not a social activity and food-distribution is handled by general intelligence.

3) Why is there an absence of items of personal decoration? (No beads, pendants, necklaces or cave art have been found. There is possible body painting with ochre in South Africa.) No integration between social and technical intelligence.

4) Why is there no evidence of ritual burial among Early Humans? Neanderthals buried their dead, but there is no unequivocal evidence of grave goods (the supposed pollen evidence of burial flowers is flawed – the pollen was probably blown into the cave) and burial was possibly no more than hygienic disposal. Possibly they could recognise ancestors, but if so the absence of grave goods is even more puzzling. Presumably the explanation is the same as for 3) above, i.e. no social purpose for artefacts.

Mithen considers language. Language can be inferred from brain size, brain shape and the character of the vocal tract. The brain size of Early Humans mostly falls within the range of modern humans. Recall Dunbar’s correlation between brain size, social group size and required grooming time. Dunbar and Leslie Aiello predict a grooming time of 40% by the time of archaic Homo sapiens. This is too high and would have required a language with significant social content to alleviate the problem – a “social language”. A general-purpose language emerged later, but how much later Dunbar and Aiello do not make clear.

The pre-frontal cortex is responsible for language and for reflecting on one’s own and others’ mental states – central to social intelligence. The Broca’s and Wernicke’s areas of Neanderthals are identical to those of modern humans. A 63,000-year-old Neanderthal hyoid bone is identical to the modern human one in shape, muscular attachments and apparent positioning, suggesting Neanderthals possessed the anatomical capability for speech. Given the risk of choking this implies, the mental ability would presumably also have been present.

But was language used for anything other than social purposes, e.g. for teaching the Levallois method? It sounds reasonable, but on the other hand H. erectus was a proficient tool-maker and forager despite being, in all probability, linguistically limited. Additionally, such usage would have implied a greater integration between the social, technical and natural history domains. Mithen concludes that language was purely social in function, supporting Dunbar.

Brain size increased from 750–1250 cc for early H. erectus to 1200–1750 cc for the Neanderthals. The increase was not gradual – there was a plateau between 1.8 million and 500,000 years ago followed by a rapid increase with the appearance of archaic Homo sapiens and the Neanderthals. H. erectus could probably make a wide range of sounds, but these were probably too simple to be described as a proper language. Aiello notes that the most complete H. erectus skeleton, KNM-WT 15000, suggests the muscle control necessary for the regulation of respiration was not present.

The Levallois method appears at end of brain expansion period (250,000 years ago) implying better technical intelligence but probably not a reflection of more intense social interactions.

The higher latitudes of Europe were occupied only a million years after the first H. erectus migration out of Africa, by which time early humans may have developed the cognitive ability to cope with the harsh Pleistocene European environment.

There was some improvement in natural history intelligence going from H. erectus to Neanderthals, but the biggest difference was increase in linguistic intelligence.

A Phase 2 mind has been achieved in the course of Act 3. The chapels are complete, but isolated and services going on in them can barely be heard elsewhere. There is still no cognitive fluidity.

Chapter 8. “Trying to think like a Neanderthal”. Mithen assumes Humphrey is correct and consciousness evolved as a mechanism to allow an individual to predict social behaviour of other group members. At some stage we became able to interrogate our own thoughts and feelings about how we would behave in certain situations. In other words consciousness arose as a part of social intelligence.

In a Phase 2 mind like that of a Neanderthal, therefore, consciousness existed only in the social domain and there was no conscious awareness of the thought processes involved in the technical and natural history domains. Nobody really understands consciousness. There appear to be two types: “sensation” – awareness of sights, sounds, itches, etc – which Humphrey regards as “lower order”, and that relating to reasoning and reflection on one’s own mental state. Mithen believes the Neanderthals possessed this latter “reflexive consciousness” only in relation to the social world.

When making a stone tool they experienced what we experience when driving a car on autopilot while engaged in conversation, thought, etc. We negotiate roundabouts and traffic-lights etc successfully but have no memory of having done so at the end of the journey. This is what is described by Daniel Dennett as “rolling consciousness with swift memory loss”. We find it hard to imagine what it would have been like to have such a Swiss army knife mind but we should remember that we are only aware of a fraction of what is going on in our own minds – for example the complex processes required to generate grammatically-correct speech and the evasive action when knocking over a cup of coffee. Another example is that an epileptic can continue to play the piano during a petit mal seizure despite higher brain stem function being temporarily lost. If modern people can drive cars and play pianos without involving conscious awareness then Neanderthals making stone tools and foraging becomes more plausible.

We must accept that the monotony of industrial traditions such as hand axes and the absence of bone and ivory tools and of art is only explicable in terms of mentalities fundamentally different to our own.

Chapter 9. “The big bang of human culture: the origins of art and religion”. Act 4 sees the entry of fully-modern Homo sapiens sapiens at 100,000 years ago. In Scene 2, 60,000 – 30,000 years ago there is a cultural explosion. But early modern humans, Homo sapiens sapiens, had already been in existence for at least 40,000 years [in fact around 150,000 years following the redating of the Omo remains]. The start of Scene 2 rather than Scene 1 is taken as the Middle/Upper Palaeolithic transition. Archaeologists believe this is a cultural revolution – restructuring of social relations, the appearance of economic specialisation, a technical invention similar to that which caused the adoption of agriculture, and the origin of language – but Mithen rejects this and believes it marks the point at which “doors and windows are inserted into the chapel walls” – the development of the Phase 3 mind. But it doesn’t happen everywhere at once.

Colonization of Australia occurred 60,000 – 50,000 years ago; Blade core technology replaced Levallois technology in Near East 50,000 – 45,000 years ago; appearance of art in Europe dates to 40,000 – 30,000 years ago.

What is Art? The Palaeolithic concept of art was different from ours; art is culturally specific [art is in the eye of the beholder] and many cultures creating fine rock paintings do not have a word for “art”. Art from Europe in this era includes an ivory statuette from Hohlenstein-Stadel in southern Germany – a man with a lion’s head (totemism); animal figures in ivory, including cats, mammoths, bison and horses, also from southern Germany; and v-shaped signs engraved on limestone blocks in caves in the Dordogne – once thought to be vulvas, but not now thought to have any simple representational status. Items of personal adornment such as beads, pendants and perforated animal teeth are widely known. At La Souquette in south-west France, ivory beads were carved to mimic sea shells. A tradition of painting caves with animals, signs and anthropomorphic figures culminated with Lascaux 17,000 years ago. Chauvet, 30,000 years ago, contains 300 or more paintings of naturalistic and anatomically correct animals including rhinos, lions, reindeer, horses and an owl. After 30,000 years ago, art is found in Africa, and it is generally a world-wide phenomenon by 20,000 years ago.

The European art appeared when the last ice age was at its peak and cannot be viewed as a product of favourable circumstances. Yet under similar conditions the Neanderthals [apparently] produced no art.

Visual symbols –
1) Can be arbitrary, e.g. “2” doesn’t look like two of anything.
2) The purpose is communication.
3) A symbol can refer to things distant in space and time.
4) The same symbol can mean different things to different people; e.g. a swastika wasn’t always associated with the Nazis and long predates Hitler.
5) Variability is permitted, e.g. variable handwriting.

Consider the Australian Aborigines. A circle can mean campsite, fire, mountain, waterhole, women’s breasts, eggs, fruit, etc. As an Aborigine child grows up, they change from face-value interpretations such as fish = fishing to images in the Dreamtime tradition, which must of course be learned – a greater metaphorical sense, often relating to Ancestral Beings. Some knowledge may require being in the know. A fish starts out being good to eat; later it is good to think – a potent symbol of the spiritual transformation of both birth and death. The two types of fish image are complementary. Archaeologists can reconstruct the face-value “outside” meaning; but the “inner” meaning requires access to the lost mythological world of the prehistoric mind – the origin, Mithen believes, of religion.

There are three requirements for art –
1) Making a visual image requires a mental template, e.g. I think I will draw a Boeing 747.
2) Intentional communication with something displaced in space or time, e.g. a mammoth that was killed last Tuesday in Ipswich.
3) Attributing meaning to a visual image not associated with its referent – e.g. the hoof-print of a pregnant female deer that passed 2 hours ago.

Art is only possible with cognitive fluidity. 1) is found in the technical intelligence domain (making objects of preconceived form such as hand-axes) and could have been used for making art, but wasn’t prior to modern humans. 2) is established as a feature of social intelligence, since intentional communication was as vital to Early Humans as to Modern Humans – this feature is common, indeed, to monkeys and apes. Finally, 3) is a feature of natural history intelligence: a rock painting can be compared with a hoof-print in that it is removed from the animal that has been painted. But this was not done before the appearance of modern humans.

The cultural explosion 40,000 years ago in Europe is explained by new connections between technical, social and natural history intelligence, which create a new synergistic cognitive process Mithen calls visual symbolism – or simply art. Evidence – art apparently did not evolve (like a child’s artistic skills) but emerged fully-fledged, the very first images possessing technical skill and emotional power. (Although there are images drawn by children and apprentices – the artists clearly had to learn.)

What of earlier art? 100,000 year old fossil nummulite from Tata in Hungary appears to have an incised line perpendicular to natural crack to make a cross. There is the incised red ochre in South Africa, etc. Mithen believes these relied on general intelligence only; that while the specialist domains had the necessary capability, they could not be brought into play. Thus art was very limited.

Art is not only made possible by cognitive fluidity; its content is influenced by it. We see humans with animal heads. In southern Germany we see a human with a lion’s head – a being in the mythology of the people of that time. It could be an animal with human attributes (anthropomorphism) or a human descended from a lion (totemism). Anthropomorphism is common in human society – cats, dogs, Mickey Mouse, etc. Totemism is “the other side of the coin” and was at the core of social anthropology from the 19th century onwards; major works were produced between 1910 and 1950 by pioneer social anthropologists such as Frazer, Durkheim, Pitt-Rivers, Radcliffe-Brown and Malinowski. This was the foundation for Levi-Strauss’s The Savage Mind, followed by a surge of renewed interest from 1970. Levi-Strauss defined animals as “good to think” as well as good to eat. Totemism is viewed as humanity brooding on itself and its place in nature. The study of natural species “provided non-literate and pre-scientific groups with a ready-to-hand means of conceptualizing relationships between human groups”.

Totemism is universal among (modern) hunter-gatherers; it requires cognitive fluidity (animals and people); based on the evidence it has been present since the start of the Upper Palaeolithic.

Landscapes also appear to be socially constructed and full of meaning. The Aborigines are again a good example – wells are where ancestral beings dug in the ground, trees are where they placed their digging sticks, and red ochre is where they bled. In south-west France in Upper Palaeolithic times we find a range of topographic features of the kind that modern hunter-gatherers universally associate with social and symbolic meanings, so this mindset certainly isn’t new. The social and natural worlds are one and the same for both modern and Upper Palaeolithic hunter-gatherers. One consequence is that they expressed this view in art, producing some of the finest art ever made.

In the Upper Palaeolithic, people hunted the same animals as their predecessors, but did so more efficiently. Early humans were predominantly opportunistic, hunting whatever was available, but modern humans concentrated on specific animals at specific sites, e.g. reindeer. Some sites were selected for ambush hunting, suggesting animal movements were better predicted, e.g. attacking at critical points along annual migration routes such as narrow valleys or river crossings. This is evidence of anthropomorphic thinking – though a reindeer doesn’t think like a human, imagining that it does can still act as an excellent predictor of its behaviour. This has been tested in several studies of modern hunter-gatherers.

Modern humans also produced, for the first time, bespoke hunting weapons for different types of animal – something impossible for a Swiss-army-knife-minded human. Examples include weapons made from bone and antler, harpoons, spear-throwers, etc. A variety of projectile points is seen, with specific types being associated with particular prey at particular sites. Key to all this is blade technology, whereby specialized multi-component tools can be made from standardised blanks. We also see a rapid evolution in technology as environmental conditions changed, building on the experience of earlier generations: large points used for big game on the tundra at the height of the ice age 18,000 years ago; a shift to multi-component tools and a greater diversity of tools as conditions ameliorated and a wider range of game became available. This is in stark contrast to the monotony of earlier technology and could only have happened with a connection between the natural history and technical intelligences.

Art is used to store information, e.g. bones incised with parallel lines. Alexander Marshack’s microscopic studies suggest regular patterns that appear to be a system of notation. This was probably a visual recording device, probably of environmental events such as moon-phases [my personal view is that these tallies recorded the female menstrual cycle]. It is similar to the notched and engraved artefacts made by modern hunter-gatherers, which are known to be mnemonic aids and recording devices, such as the calendar sticks made by the Yakut people of Siberia. John Pfeiffer describes cave paintings as a tribal encyclopaedia. Mithen has himself suggested that the way in which animals were painted relates to information acquired about movements and behaviour. In some cases animals are painted in profile but their hoofs in plan – was this to facilitate identifying hoof-prints, or for teaching children? Bird imagery is dominated by migratory birds like ducks and geese; again, modern hunter-gatherers in glacial environments use the arrival and departure of these birds as harbingers of winter or spring. Paintings could also have fulfilled the same function as the “trophy arrays” of the Wopkaimin of New Guinea, which are carefully arranged to act as a mental map of the local environment – an aide-memoire for recalling information about the environment and animal behaviour. Michael and Anne Eastham have suggested paintings and engravings in the Ardeche, France, served as a model or map of the terrain around the caves.

Objects of adornment appear for the first time, requiring social and technical intelligence integration.

The anthropomorphic images seen earlier and grave goods suggest the Upper Palaeolithic people had beliefs in supernatural beings and possibly an afterlife – in other words, the first religion. What is religion? In his 1994 work “The naturalness of religious ideas”, Pascal Boyer notes that the most common and indeed possibly universal feature is a belief in non-physical beings. He also notes three other features common in religious ideologies:

1. Many societies believe in an afterlife.
2. Certain people within a society (e.g. shamans, priests) are especially likely to receive direct communication from supernatural agencies such as gods and spirits.
3. Performing certain rituals in an exact way can effect change in the physical world.

All these features seem to have been present in Upper Palaeolithic times. It is likely that the anthropomorphic images seen in French caves are either supernatural beings or the shamans who communicated with them. French pre-historian Andre Leroi-Gourhan believes the painted caves are likely to reflect a mythological world as complex as the Australian Dreamtime. Boyer believes supernatural beings typically have characteristics that violate instinctive knowledge of psychology, biology and physics (see Chapter 3) – for example, bodies that don’t age, that can pass through solid objects (ghosts) or that are invisible. They nevertheless conform to some instinctive knowledge in that they have desires and beliefs like normal beings. The Ancestral Beings of the Australian Aborigines have weird characteristics like existing in both past and present, but they play tricks and practise deceptions. The various shenanigans of the Greek gods provide a more familiar example. Boyer argues that this combination of violation and conformity characterises supernatural beings in religious ideologies. Some conformity is necessary for people to get their heads round things.

The mixing-up of knowledge of different types of entities in the real world – knowledge which would once have been trapped in separate domains – is the essence of supernatural beings. It could only happen in a cognitively fluid mind. The notion that some individuals in a group can communicate with supernatural beings is a consequence of the belief that some people have a different “essence” than others. Essence is a “natural history” feature (instinctive biology) (see Chapter 3). Boyer explains the differentiation of people into different social roles, exemplified by the shaman, as the introduction of “essence” into the social world – again, a consequence of cognitive fluidity.

Religious ideologies as complex as those in modern hunter-gatherer societies must have come into being at about the time of the Middle/Upper Palaeolithic transition and have remained with us ever since.

It is not surprising that with these new abilities humans rapidly colonised the world. An expansion began 60,000 years ago with Australia being colonised by extensive sea voyages (Clive Gamble); then came the North European plain, the arid regions of Africa, and the tundra and forests of the far north after 40,000 years ago. Early humans had entered these regions but did not stay; modern humans colonized them and used them as stepping-stones to the Americas and the Pacific islands. This is all down to cognitive fluidity, but it does not happen until well after the emergence of modern Homo sapiens sapiens.

Early modern humans had a degree of cognitive fluidity, but they hadn’t achieved full integration and still had partially Swiss-army-knife-type minds. In the Near East, remains of early modern humans in caves at Skhul and Qafzeh dating to 100,000 – 80,000 years ago are accompanied by stone tools almost identical to those of the Neanderthals at Tabun (180,000 – 90,000 years ago, before the early moderns arrived) and at Kebara (63,000 – 48,000 years ago, after they left). But animal parts in human graves suggest religious/ritual activity (unlike Neanderthal burials) and a people/animal association, probably totemic. The early moderns also hunted gazelles more efficiently – though using the same spear types, they hunted on a seasonal basis and thus expended less energy; their spears also needed to be repaired less often. This suggests enhanced prediction of prey behaviour, achievable only by anthropomorphic thinking. All this implies integration of natural history and social intelligence; but technical integration had not yet been achieved.

A similar conclusion is reached when evidence from South Africa is considered. Fossils in the caves of Klasies River Mouth and Border Cave are less well preserved but date to the same time period of 100,000 years ago. They have some archaic features and are likely to be close to the original source of H. sapiens sapiens [this was before the discovery of the Herto remains (155ky) and the redating of the Omo remains (200ky)]. The Klasies River sequence runs from 100,000 to 20,000 years ago. At 40,000 years ago, flake technology changes to blade technology at the Middle/Upper Palaeolithic transition (or Middle to Later Stone Age on the African scheme). Prior to this, tools are similar to those made by early humans elsewhere in Africa during Act 3, even though made by early modern humans after 100,000 years ago. The layers corresponding to the appearance of early modern humans contain significantly increased (though still quite rare) amounts of ochre, often used as crayons. Red ochre is entirely unknown prior to 250,000 years ago and extremely rare (a few dozen pieces) prior to 100,000 years ago. Chris Knight and Camilla Power believe it was used for body-painting, since no other art is known prior to 30,000 years ago in South Africa [again, this predates the discovery of 70ky-old incised ochre pieces]. In Border Cave an infant is buried with a perforated Conus shell originating 80 km away; the grave dates to 80,000 – 70,000 years ago. There are small blades of high-quality stone, apparently designed for use in multi-component tools. Finally there is the working of bone, with multi-barbed harpoons found at Katanda, Zaire. These are 90,000 years old – 60,000 years older than known comparable examples.

Mithen suggests that if one assumes that there is only one type of human in southern Africa after 100,000 years ago, then the mentality of these early modern humans is drifting in and out of cognitive fluidity. The benefits of partial cognitive fluidity are insufficient for the change to become “fixed” within the population. A degree of cognitive fluidity exists, but much less than that which arose at the start of the Upper Palaeolithic. It was however sufficient to give early modern humans the edge as they moved out of Africa and spread throughout the world.

The strongest evidence for the replacement, “Out of Africa”, theory is the limited genetic diversity of present-day humans, suggesting a recent and severe bottleneck, and the greater genetic diversity within Africa than elsewhere. One estimate suggests six breeding individuals for 70 years, or 50 individuals in all; or 500 if the bottleneck lasted 200 years.

If the exodus comprised humans who were only partially cognitively fluid, they would have taken this condition – genetically encoded – with them as they expanded across the world. The integrated natural history and social intelligence gave them the competitive edge over earlier-type humans, pushing them into extinction.

The final step to full cognitive fluidity – integration of technical intelligence – was taken at different times in different parts of the world. This arose from parallel evolution and was perhaps inevitable as there was an evolutionary momentum towards full cognitive fluidity. As soon as adaptive pressures arose in each area, technical intelligence became part of the cognitive fluid mind and the final step to modernity was taken. [It is doubtful that cognitive fluidity could have arisen by parallel evolution as this would almost certainly have resulted in a degree of cognitive differences between different present-day populations. However in his more recent work, Mithen now suggests that full cognitive fluidity arose prior to Homo sapiens migration from Africa.]

Chapter 10. “So how did it happen?” Mithen believes language gradually broke down the barriers between the domains. He goes along with Robin Dunbar’s proposal that early humans’ language was used to send and receive social info only, unlike modern general-purpose language. But “snippets” of non-social content e.g. tool-making and animal behaviour crept in from two sources. The first source was from general intelligence at domain interfaces. Though limited, this could have permitted some vocalisation about the non-social world. Probably a small range of words, used predominantly as demands, with no more than two or three words strung together, in contrast to the grammatically-complex social language. The second source may have arisen from the specialized intelligences not being entirely isolated and some non-social thoughts about tool-making and foraging etc might have leaked through into the social domain.

As humans used non-social words, they would have entered the minds of other humans and invaded their social domains. There would have been a selective pressure to utilise this non-social information, as better hunting and tool-making decisions could then be made. There would have been a further selective pressure to add to one’s non-social vocabulary in order to question others about animal behaviour, tool-making, etc, and thus add to one’s knowledge. Possibly some individuals happened to have particularly permeable walls between their specialised domains, and this physical trait would have been heavily selected for. A general-purpose language would have evolved between 150,000 and 50,000 years ago.

Evidence survives in our conversation today, which Robin Dunbar notes is still predominantly about social matters. We also ascribe “minds” to inanimate objects, as implied by sentences like “the ball flew through the air” and “the book toppled off the shelf”, which the linguist Leonard Talmy argues imply that the objects move under their own volition because the sentences are so like “the man entered the room”. Utterances use the same range of concepts and structures regardless of whether they refer to mental states, social beings or inanimate objects. Linguists believe that language was originally used for inanimate objects and that, by “metaphoric extension”, its structures were transferred to utterances about the social world. Mithen believes it was the other way around.

The social “chapel” was turned by the invasion of non-social material into one of Dan Sperber’s “superchapels” (chapter 3, MMR). The superchapel allows world-knowledge to be represented in two locations – its “home” domain and the social domain, which now additionally contains non-social information. This multiple representation of data is a crucial feature of Annette Karmiloff-Smith’s ideas about how cognitive fluidity arises during development.

This helps us to understand the apparently contradictory views held by hunter-gatherers – and indeed any modern humans – about the world. For example, the Inuit on the one hand think of the polar bear as a fellow kinsman, yet they kill and eat it. Deep respect for hunted animals, often expressed in terms of social relationships, combined with no qualms about actually killing them, appears universal among hunter-gatherers. This reflects the same piece of information being present in more than one domain. Another example is the Australian Aborigines and their attitude to landscapes. They exploit these with a profound knowledge of ecology (natural history), yet they view this same landscape as having been created by Ancestral Beings, who do not follow the laws of ecology. Again the same information is represented in two different domains.

Sperber suggested that non-social info invading the social domain would trigger a cultural explosion, which Mithen claims occurred at the outset of the Upper Palaeolithic.

Mithen has followed Nicholas Humphrey’s argument that reflexive consciousness evolved as a critical feature of social intelligence and he believes a change in the nature of consciousness was a critical feature of the change to cognitive fluidity.

In Early Humans, consciousness could not access thought in the other domains. Early humans were not aware of their own knowledge of the non-social world other than via the ephemeral rolling consciousness (see chapter 8). Language was the means by which social intelligence was invaded by non-social material, and this made the non-social world available for reflexive consciousness to explore. This is the essence of the argument made by Paul Rozin in 1976 – his notion of accessibility was the “bringing to consciousness” of knowledge already in the human mind but located within the “cognitive unconscious”. Much mental activity remains closed to us even now; e.g. a potter will often be unable to explain how they throw a pot and can only demonstrate the process.

The new role for consciousness in the human mind is likely to be the one identified by Daniel Schacter in 1989 when he argued that consciousness is a global database integrating the output of modular processes. Such a mechanism is crucial in a modular system where different types of info are handled by different modules. Early humans had only general intelligence to perform this role. But because language acted as the means of delivering non-social thoughts into the social domain, consciousness could start to play the integrating role. Individuals could become introspective about their non-social thought-processes and knowledge, leading to the flexibility and creativity that characterises modern human behaviour.

Sexually-mature females were under particular selective pressure to achieve cognitive fluidity. Humans can only give birth to small-brained infants (typically 350 cc, no larger than an infant chimp). This is because of a design constraint of the pelvis – the birth canal versus walking upright. There is huge brain growth after birth, and hence massive post-natal dependency. This would have become pronounced during the second phase of brain expansion 500,000 years ago. According to Chris Knight, mothers needed to provide good-quality food, and Early Modern Human females solved the problem by extracting “unprecedented levels of male energetic investment” from the men, probably co-ordinating their behaviour to that end. A key element was a “sex strike” and the use of red ochre as “sham menstruation”. This was the first use of symbolism, and an increase in ochre use after 100,000 years ago is cited as evidence. Mithen is sceptical about co-ordinated female action, but believes this was a social context in which food became critical in negotiating social relationships between the sexes, and in this context snippets of language about food and hunting may have been particularly valuable in the social language between males and females. Females may have needed to exploit this information in their dealings with men, and this might be why the first step towards cognitive fluidity was an integration of natural history and social intelligences, as seen in Early Modern Humans in the Near East.

The increased time between birth and maturity that arose as brain size increased also provided the time needed for connections between the specialised intelligence domains to be formed in the mind. In chapter 3 Annette Karmiloff-Smith argued that a modern child passes through a domain-specific cognition phase, and in chapter 7 Mithen argued that cognitive development in young Early Humans ceased after the domains of thought had arisen but before any connections could be built. In developmental terms, then, the source of cognitive fluidity must lie in a further extension of the period of cognitive development. The fossil record provides some evidence that child development in modern humans is much longer than that of early humans: Neanderthal children grew up quickly, developing robust limbs and a large brain at an early age compared with modern humans. 50,000 year old fragments found at Devil’s Tower on Gibraltar are of a 3 to 4 year old child; dental eruption had occurred earlier than in a modern child, and the skull is 1,400 cc – nearly adult size. A two-year-old Neanderthal found in Dederiyeh Cave in Syria possessed a brain the size of a six-year-old modern human’s. Unfortunately we lack the skulls of children from the Near East 100,000 years ago and from the Upper Palaeolithic. Mithen believes they would show a trend towards increasingly-prolonged infant care over the period 100,000 – 30,000 years ago.

Chapter 11. “The evolution of the mind”. There has been an oscillation over 65 million years between specialized and generalized ways of thinking.

Mithen believes that the earliest proto-primates, the plesiadapiforms (Purgatorius, etc), had no general intelligence and that their minds were of the “Swiss army knife” type, made up of hard-wired specialised behavioural modules that kicked in in response to specific stimuli and were little modified by experience. This inability to learn is shared with cats, rats, etc, but not with modern primates, who can identify general rules that apply in a set of experiments and use the general rule to solve a new problem. The plesiadapiforms declined because of competition from rodents about 50 million years ago.

The first modern primates included the lemur-like Notharctus, which possessed larger brains (greater encephalization) and appeared around 56 million years ago. Notharctus probably possessed a general intelligence to supplement the specialist modules, but no dedicated social intelligence. The general intelligence could handle simple learning rules for reducing food-acquisition costs and facilitating kin recognition. There was as yet no theory of mind, and Notharctus interactions with the social world were no more complex than interactions with the non-social world (similar to present-day lemurs). This represents an about-turn from hard-wired behavioural responses to stimuli towards a generalised mentality with cognitive mechanisms allowing learning from experience. However, a larger brain had higher energy costs, and these primates needed to exploit high-quality plant foods such as new leaves, ripe fruit and flowers.

About 35 million years ago more advanced primates such as Aegyptopithecus appeared. They were fruit-eating quadrupeds and lived in the tall trees of monsoonal rain forests. Their general intelligence was superior to that of Notharctus, and a specialised social intelligence domain now appears. Social behaviour was more complex than non-social behaviour (see ch5), and there was a selective pressure to predict and influence the behaviour of other group members. As argued by C&T, those individuals with specialised mental modules for social intelligence would be able to solve social problems better. By 35 million years ago, general intelligence had reached its limits and the trend to ever-increasing specialisation of mental faculties had begun, which was to continue almost to the present day.

Andrew Whiten describes brain evolution as deriving from “spiralling pressure as clever individuals selected for yet more cleverness in their companions”. Nicholas Humphrey notes that intellectual prowess correlates with social success, and if this success means high biological fitness, then any inheritable trait that enables an individual to outwit their fellows will soon spread through the gene pool. The spiralling pressure probably continued through the period 15 to 4.5 million years ago, for which the fossil record is poor and during which the common ancestor of humans and the African apes probably lived, about 6 million years ago.

The fossil record improves at 4.5 million years ago. As seen in ch2, the best-preserved australopithecine, A. afarensis, was adapted to a joint arboreal and terrestrial lifestyle. The fossil record between 3.5 – 2.5 million years ago suggests brain size remained constant. Why did the spiralling pressure come to this hiatus? Probably two constraints applied: bigger brains require more fuel and need to be kept cool. Brain tissue needs 22 times more energy than muscle, and overheating by even 2 Celsius can impair its functioning. The australopithecines were probably mainly vegetarian and lived in equatorial wooded savannah: they could neither get enough to eat nor keep cool.

But at 2 million years ago brain size begins to increase again: meat-eating and bipedalism had provided the solutions to these problems. Bipedalism began to evolve 3.5 million years ago, possibly in response to selective pressure to reduce thermal stress. By walking upright, exposure to solar radiation from the tropical sun can be reduced by 60%. The australopithecines’ tree-climbing, tree-swinging ancestry pre-adapted them to bipedalism. Bipedalism is also more energy-efficient, so australopithecines could forage for longer periods without food and water and in areas with less natural shade, and thus exploit foraging niches closed to predators more dependent on shade and water. The environmental changes occurring in Africa 2.8 million years ago led to more arid and open environments, in which bipedalism would have been advantageous.

Bipedalism itself demanded a bigger brain for the more complex muscle control it required. Dean Falk discusses how a network of veins covering the head was also selected for, providing a cooling system or radiator; thus the overheating constraint was relaxed. Falk also suggests that once the feet ceased to be used for manipulation, areas of cortex used for foot control were freed up and utilised to improve manual dexterity.

The new scavenging niche made it possible to obtain animal carcases and with more meat in the diet the gut size could be reduced releasing more energy to run the brain without changing the basal metabolic rate. Thus the second constraint on brain-size was relaxed.

Meanwhile the larger social groups needed to survive in open terrestrial habitats (partly as a defence against predators) meant better social intelligence was required, and this drove brain size up. In ch6 we saw Oldowan tools required more brainpower to make than those used by chimps. However the knowledge probably arose due to enhanced learning opportunities in larger group sizes rather than enhanced technical intelligence.

The distinct domains appear at 1.8 – 1.4 million years ago in response to continuing competition between individuals – removing the constraints on brain size triggered a cognitive arms race. However, they may also reflect the appearance of a constraint on further growth of social intelligence. Nicholas Humphrey notes that a point comes when it is pointless devoting any more time to a social argument; so by 2 million years ago the possibilities of enhancing reproductive success through social intelligence were played out, and new cognitive domains evolved: natural history and technical intelligence. These would have permitted more efficient location of carcasses and other food, and better butchery techniques. Individuals with these faculties got a better diet and needed to spend less time exposed to predators on the savannah.

With the new domains, humans spread through much of the Old World, reaching Wales, South Africa and south-east Asia. The Swiss army knife mind was so successful there was no further brain enlargement between 1.8 million and 500,000 years ago. It is probable, though, that the minds of different types of human varied subtly in the nature of their multiple intelligences reflecting the diverse environments in which they lived.

The language domain probably began to evolve as far back as 2 million years ago. Mithen follows Robin Dunbar and Leslie Aiello’s argument that language evolved for social purposes only, under selective pressure to reduce grooming time. Anatomical changes required for speech, such as the descent of the larynx, were made possible by bipedalism (Aiello); a spin-off was the ability to form the sounds of vowels and consonants. Changes in breathing patterns and reduced tooth size due to meat-eating also helped, as sound quality improved along with fine control of the tongue.

While H. erectus had better vocal ability than modern apes, it was very limited compared with modern speech. A large lexicon and grammar rules did not appear until the next phase of brain enlargement, 500,000 – 200,000 years ago, though this remained a social language. Why did this second phase happen?

One possibility was a further expansion of social group size, though it is not clear why this occurred. Aiello and Dunbar believe that as the global human population increased, so did the need for defence against other groups of humans. The opportunity so created was, however, exploited: as described in ch10, the scope of language spread to the non-social world, leading to cognitive fluidity. Possibly this evolved because, by 100,000 years ago, specialisation of the mind had reached its limits. Although cognitive fluidity only fully developed in modern humans, it may have begun with the last of the Neanderthals; but before it could fully develop they were pushed into extinction by modern, fully cognitively-fluid humans.

The minds of today’s humans may have acquired other new domains in response to cultural pressures that began with the adoption of agriculture, e.g. a maths domain.

We can see an alternation over 65 million years between specialized and generalized intelligences. If we started and finished with general-purpose minds, why was there a phase of specialist domains with limited integration? Quite simply, the mind developed in a piecemeal fashion, with a general-purpose intelligence to keep it running and specialist modules added on later. Once the latter were working properly they were integrated into the whole, using language and consciousness as the glue. The whole project was far too complex to complete in one hit, cf. writing a complex multi-modular computer program.
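Mithen’s programming analogy can be made concrete with a toy sketch. The snippet below is purely illustrative – all class and variable names are my own invention, not from the book – and contrasts isolated “Swiss army knife” modules with a later integration layer that pools their outputs, loosely standing in for the role Mithen assigns to language and consciousness:

```python
class SpecialisedModule:
    """A domain-specific intelligence: answers only queries within its own domain."""
    def __init__(self, domain, knowledge):
        self.domain = domain
        self.knowledge = knowledge  # facts encapsulated within this domain

    def query(self, topic):
        # Encapsulation: the module knows nothing outside its own domain.
        return self.knowledge.get(topic)


class FluidMind:
    """Integration layer: pools every module's output into one workspace,
    so facts from different domains can be combined ('cognitive fluidity')."""
    def __init__(self, modules):
        self.modules = modules

    def query(self, topic):
        # Consult all domains and merge whatever answers come back.
        answers = [m.query(topic) for m in self.modules]
        return [a for a in answers if a is not None]


technical = SpecialisedModule("technical", {"antler": "hard, workable"})
natural_history = SpecialisedModule("natural history", {"antler": "part of a deer"})

# Isolated modules: technical intelligence alone cannot "see" that antler
# is an animal part, and natural history alone cannot see it as raw material.
print(technical.query("antler"))

# Integrated mind: both views of 'antler' are available at once -- on
# Mithen's account, the precondition for making antler tools.
mind = FluidMind([technical, natural_history])
print(mind.query("antler"))
```

The design choice mirrors the argument of the chapter: each module works correctly in isolation first, and integration is a thin layer bolted on afterwards rather than a rewrite of the modules themselves.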

Art, religion and science are unique achievements of the modern mind. Science has three critical properties. The first is generating and testing hypotheses: chimps can do this in their social interactions when practising deception, using social intelligence, and Early Humans did so for resource distribution, using natural history intelligence. The second is using tools for problem-solving – telescopes, microscopes, pencils and paper for making records; integrated natural history and technical intelligence was used for tool-making in the Upper Palaeolithic, and cave paintings were the DVDs of their day. The third is the use of metaphor and analogy. Some metaphors require only one domain, but the most powerful cross boundaries and require cognitive fluidity. Examples include the heart as a pump, atoms as solar systems, clouds of electrons, wormholes, “selfish” genes, “well-behaved” equations – or minds as Swiss army knives or cathedrals.

Epilogue: “The origins of agriculture”. Agriculture arose independently in many parts of the world around 10,000 years ago. How animals reproduce has been known for as long as natural history intelligence has existed – 1.8 million years. We also know a good deal about the animals hunter-gatherers hunted, but it is only recently that we have learned about their exploitation of plant foods. Charred plant remains found at 18,000-year-old sites in Wadi Kubbaniya, to the west of the Nile Valley, indicate that finely ground plant mush was used, probably to wean infants. Roots and tubers were exploited, possibly all year round, from permanent settlements. At Tell Abu Hureyra, Syria, occupied by hunter-gatherers between 20,000 – 10,000 years ago, 150 edible plant species have been identified. At both sites technology for grinding and pounding plant material has been found. To sum up, both the botanical knowledge and the technology to support agriculture were in place at these sites well before agriculture itself was practised. Why, then, was agriculture eventually adopted? A degree of compulsion must have been involved.

Agriculture has many disadvantages – being tied to a particular place leads to sanitation problems, the rise of disease, social tensions, and depletion of resources such as firewood. Bones and teeth indicate that the health of early farmers was poorer than that of hunter-gatherers. Yet 10,000 years ago agriculture was widely adopted, with a wide range of crops being brought under cultivation – wheat and barley in south-west Asia, yams in West Africa, taro and coconuts in south-east Asia.

There are two conventional explanations for the near-simultaneous switch to agriculture. One is that by 10,000 years ago population levels had grown too large to be sustained by hunter-gathering. This theory is neither plausible nor supported by the evidence: hunter-gatherers can limit population by infanticide, and population in a mobile society is in any case limited by the difficulty of carrying young children around.

The other possibility was the rise in temperature at the end of the last ice age, which was preceded by 5,000 years of global climatic fluctuations between warm/wet and cold/dry conditions. In south-west Asia, the first farming communities are seen at Jericho and Gilgal, with wheat, barley, sheep and goats. But the wild ancestors of these cereals had grown, and been exploited by hunter-gatherers, in the same places (e.g. Abu Hureyra). The stratified sequence of plant remains there has been studied by the archaeobotanist Gordon Hillman and is very informative about the changeover from hunter-gathering to agriculture. Between 19,000 and 11,000 years ago, the climate in south-west Asia improved as the European ice sheets retreated, leading to warm/wet conditions, especially during the growing season. During this time hunter-gatherer populations increased, exploiting more productive food plants and predictably-moving gazelle herds; the evidence suggests a wide range of plants was exploited. But at 11,000 – 10,000 years ago drier and even drought conditions returned. Fruit trees could not survive drought, and wild cereals could not survive dry, cold conditions. Small-seeded legumes are hardier but require detoxification to make them edible. At 10,500 years ago Abu Hureyra was abandoned; when people returned 500 years later, they adopted agriculture.

Similarly, in the Levant at around 13,000 – 12,000 years ago, hunter-gatherers switched from a mobile to a sedentary lifestyle, probably in response to a short, abrupt period of aridity which resulted in dwindling, less predictable food supplies. This period of building settlements while retaining the hunter-gatherer lifestyle is known as the Natufian and lasted until 10,500 years ago, when true farming settlements appear. The settlements were often extensive and included underground storage pits and terraced huts. There was an expanded range of bone tools, art objects, jewellery and ground stone tools, and stands of wild barley were intensively exploited.

There was then a point of no return, as described by Ofer Bar-Yosef and Anna Belfer-Cohen: once the sedentary lifestyle had been adopted it was necessary to increase food production, because the constraint on having children had been relaxed. Hence agriculture.

But the last ice age wasn’t the only one experienced by humans. Earlier human types didn’t adopt agriculture – but they lacked the mental means.

1) Tools for harvesting and processing plants required an integration of technical and natural history intelligence.
2) Animals and plants were used as a medium for acquiring prestige and power. This required integration of social and natural history intelligence. E.g. meat and bones from bison, reindeer and horses, hunted in the tundra-like environment of the Central Russian plain 20,000 – 12,000 years ago, were stored; access to these resources came increasingly under the control of particular dwellings, and hence individuals – who thus used them as a source of power. Similarly in southern Scandinavia 7,500 – 5,000 years ago, where people hunted red deer, pigs and roe deer. They appear to have focused on red deer – these were scarce, but the carcases were larger: there was more meat to give away, providing prestige and power. When red deer were not available, day-to-day needs could be met by exploiting plants and fish. Red deer antlers and tooth necklaces are prominent grave goods, suggesting these animals were important to these people. It would seem that agriculture was a means for some individuals to gain and maintain power. Brian Hayden favours this explanation and argued that competition between individuals using food resources to wage their competitive battles provided both the motive and the means for the development of food production. Hayden felt that the evidence in the Natufian culture of long-distance trade in prestige items, and the abundance of jewellery, stone figurines and architecture, were evidence of social inequality, reflecting the emergence of powerful individuals. To maintain their power base these individuals had to continually introduce new prestige items and generate the economic surplus needed to maintain power. Many of the first domesticates were prestige items like dogs, gourds, chilli peppers and avocados rather than resources for feeding a large population.
3) Social relationships with plants and animals, i.e. social and natural history integration. Evidence includes injured reindeer kept alive until injuries healed and the loving care a gardener gives to his plants.
4) Manipulating plants and animals – technical and natural history. This is basically treating them as artefacts to be manipulated. E.g. burning parts of forest as environmental management to encourage plant growth and attract game; leaving a bit of yam in the ground to allow the yam to grow again.


Steven Mithen’s basic premise is that in order to fully understand the mental architecture of a modern human, we need to look at the evolutionary history of Homo sapiens. As an archaeologist, he has used archaeological evidence as the main pillar of his work. He has supported this with evidence from anthropology, linguistics, evolutionary psychology, developmental psychology and primatology.

Mithen believes that there are two basic types of intelligence. The first, which appeared early in primate history, is “general intelligence”, a capacity for non-specific learning which can be applied to a wide range of problems. Later there appeared more specialised intelligences for particular tasks – a social intelligence for dealing with social interactions; a linguistic intelligence; a technical intelligence for tool-making; and finally a “natural history” intelligence for dealing with the natural world.

In early humans, such as the Neanderthals, these various intelligences were isolated from one another, which restricted the range of thoughts available to the early human mind. Modern Human Behaviour is a “package” of behaviours generally accepted by anthropologists to include the use of abstract thought, symbolic behaviour (such as art, creative expression and religion), use of syntactically-complex language and the ability to plan ahead. This, according to Mithen, did not emerge until humans attained “cognitive fluidity” which enabled the various intelligences or cognitive domains to interact synergistically. Finally, cognitive fluidity permitted reflexive consciousness of the modern type. Language acted as the means of delivering non-social thoughts into the social domain and consciousness could start to play the integrating role. Individuals could become introspective about their non-social thought-processes and knowledge, leading to the flexibility and creativity that characterises modern human behaviour.

That language is intimately linked to modern thought processes has also been suggested by Derek Bickerton, though Bickerton’s theory of how modern human behaviour arose differs in a number of details from Mithen’s; in particular, Bickerton sees the proto-language of early humans as being entirely distinct from modern language, which evolved primarily as a means of representing concepts rather than a means of communication (Bickerton, 1990). In his later work The Singing Neanderthals, Mithen rejects Bickerton’s compositional (word-based) proto-language and claims that the utterances of early humans were holistic (Mithen, 2005).

The notion of humans initially possessing compartmentalised minds is to some extent reminiscent of Julian Jaynes’ theory of “bicameral minds” proposed two decades before Mithen’s theory (Jaynes, 1976).

In his book, Mithen claimed modern human behaviour did not arise until after anatomically modern humans left Africa, based on the then-prevalent view that the earliest evidence for it is seen in Europe around 40,000 years ago (e.g. Diamond, 1991; Klein, 1999; Klein & Edgar, 2002). However this view has been challenged (McBrearty & Brooks, 2000; McBrearty, 2007; Henshilwood et al, 2004; Lewin & Foley, 2004; Oppenheimer, 2002), with clear evidence for a far earlier emergence in Africa. Mithen now accepts an earlier date, predating the migrations from Africa (Mithen, 2007). This revised timescale doesn’t really affect the validity or otherwise of Mithen’s theory.

However, Mithen’s theory is not without its problems, and some of the evidence he puts forward in its support is unconvincing. In particular, he suggests that the reason Early Humans (Homo erectus, H. heidelbergensis and the Neanderthals) did not use bone, antler and ivory for tool-making was that they could not think of animal parts (catered for under natural history) in the tool-making technical domain. In other words, Mithen is saying that Early Humans were aware that bone, antler and ivory were organic materials, but some kind of cognitive demarcation dispute prevented them from utilising such materials for tool-making. But even chimpanzees will use organic material such as sticks for tools (e.g. termite sticks). Mithen seems to be implying that once modularity developed, the ability to use such materials was lost, which strikes me as implausible. It seems more likely that Early Humans did make tools with organic materials on occasion, but these have failed to survive in the archaeological record.

Mithen then asks why tools were not made for hunting specific types of prey. He attributes this to a lack of integration between the technical intelligence (tools) and natural history intelligence (prey). This question could really be restated as “why are Mode 2 (Acheulian) and to some extent the Mode 3 (Mousterian) technologies of Early Humans primitive in comparison to the Mode 4 (blade) and Mode 5 (microlith) technologies of Modern Humans?”

Clearly this suggests that Early Humans were less cognitively advanced than later humans, but was this solely due to domain isolation?

To conclude, Mithen’s theory is well-argued and the existence of multiple intelligences in early humans is a strong possibility. Personally, though, I am somewhat sceptical as to whether anatomically-modern humans ever had this type of brain. It seems far more likely that “cognitive fluidity”, assuming it did not exist in earlier humans, is a characteristic of Homo sapiens and emerged with it. Derek Bickerton believes that the ability to use syntax in speech and thought is characteristic of our species (Bickerton, 2007): possibly it is this that provided Mithen’s “cognitive fluidity” (though as noted above, Mithen has criticized Bickerton’s theory). The braincase of modern humans is globular, in contrast to the long, low braincases of other human species. This change in shape may have arisen from the need to accommodate the neural anatomy that was responsible for the change in human mental organization to that of the modern type.


Bickerton D (1990): “Language and Species”, University of Chicago Press, USA.

Bickerton D (2007): “Did Syntax Trigger the Human Revolution?” in Rethinking the human revolution, McDonald Institute Monographs, University of Cambridge.

Boden M (1990): “The Creative Mind: Myths and Mechanisms”, Weidenfeld & Nicolson, London.

Dennett D (1991): “Consciousness Explained”, New York: Little, Brown & Company.

Diamond J (1991): “The Third Chimpanzee”, Radius, London.

Dunbar R (1996): “Grooming, Gossip and the Evolution of Language”, Faber and Faber, London Boston.

Fodor J (1983): “The Modularity of Mind”, MIT Press, Cambridge, MA.

Gardner H (1983): “Frames of Mind”, Basic Books.

Gardner H (1999): “Intelligence Reframed”, Basic Books.

Henshilwood C & Marean C (2003): “The Origin of Modern Human Behavior: Critique of the Models and Their Test Implications”, Current Anthropology, Volume 44, Number 5, December 2003.

Jaynes J (1976): “The Origin of Consciousness in the Breakdown of the Bicameral Mind”, Mariner Books, USA.

Karmiloff-Smith A (1992): “Beyond Modularity”, MIT Press, Cambridge, MA.

Klein, R. (1999): “The Human Career” (2nd Edition), University of Chicago Press.

Klein R & Edgar B (2002): “The Dawn of Human Culture”, John Wiley & Sons Inc., New York.

Lewin, R and Foley, R (2004): “Principles of Human Evolution” (2nd edition), Blackwell Science Ltd.

McBrearty S (2007): “Down with the Revolution”, in Rethinking the human revolution, McDonald Institute Monographs, University of Cambridge.

McBrearty S & Brooks A (2000): “The revolution that wasn’t: a new interpretation of the origin of modern human behaviour”, Journal of Human Evolution 39, 453–563.

Mithen S (1996): “The Prehistory of the Mind”, Thames & Hudson.

Mithen S (2005): “The Singing Neanderthals”, Weidenfeld & Nicolson.

Mithen S (2007): “Music and the Origin of Modern Humans”, in Rethinking the human revolution, McDonald Institute Monographs, University of Cambridge.

Oppenheimer S (2002): “Out of Eden”, Constable.

Tomasello M (1999): “The Cultural Origins of Human Cognition”, Harvard University Press, Cambridge, MA & London.

© Christopher Seddon 2009

The Modularity of Mind (1983), by Jerry Fodor

Theories about the functional architecture of the human mind have fallen into a number of types.

Cartesian dualism derives from the thinking of René Descartes, who believed that the brain is merely the seat of the mind. The latter was seen as a disembodied, non-material entity, interacting with the former via the pineal gland, now known to be a small endocrine gland linked to sexual development. However, most current theories seek to explain the mind in purely material terms. Historically these theories divided into two types, horizontal and vertical.

Horizontal theories refer to mental processes as if they are interactions between non-domain specific faculties such as memory, imagination, judgement, and perception. By contrast, vertical theories assert mental faculties are differentiated on the basis of domain specificity, are genetically determined, are associated with distinct neurological structures, and are computationally autonomous. This view has its origins in the 19th century phrenology movement founded by Franz Joseph Gall, who claimed that the individual mental faculties were associated with specific physical areas of the brain.

While this early view of modularity has long since been discarded, the theory was revived by Jerry Fodor in his 1983 work The Modularity of Mind. This theory abandons the notion of the “modular hardware” physically located in particular areas of the brain and draws on the work of Noam Chomsky in linguistics.

According to Fodor, the mind has two parts – input systems and cognition or central systems. The input systems are a series of discrete modules with dedicated architectures that govern sight, hearing, touch, etc. Language is also regarded as an input system. However, the cognitive or central system has no architecture at all – this is where “thought”, “imagination” and “problem solving” happen and “intelligence” resides.

Each input system is based on independent brain processes, and the systems are quite different from each other, reflecting their different purposes. They are localized in specific areas of the brain. The input systems are mandatory: if, for example, somebody sits behind you on the bus and spends the entire journey gassing away on their mobile, you cannot switch off the hearing module. However, this has the advantage of saving time that would otherwise be spent on decision-making.

As per the “vertical” view of mental architecture, Fodor believes that the input systems are “encapsulated”, i.e. they do not have direct access to the information being acquired by other input systems. What one is experiencing at a given time in one sensory modality does not affect any of the others – you cannot, for example, “see” sounds. [One problem for this view is the condition known as synaesthesia, where sensory modalities apparently do interact and people can indeed see or taste sounds, etc. Well-known synaesthetes include David Hockney, Wassily Kandinsky and Vladimir Nabokov. The condition was evidently known to the French poets Arthur Rimbaud and Charles Baudelaire, who both described it in their work.]

A second feature of the input modules is that they only have limited information from the central systems. Fodor cites a number of optical illusions, such as the Müller-Lyer arrows, which continue to appear to differ in length even when one is fully aware that this is not the case. The input modules are essentially “dumb” systems that act independently of the cognitive system and each other. To sum up, they are encapsulated, mandatory, fast-operating and hard-wired. Perception is innate, i.e. hard-wired into the mind at birth.

The central cognitive systems are very different to the “dumb” input systems. According to Fodor, they are “smart”, they operate slowly, are unencapsulated and domain-neutral, i.e. they cannot be related to specific areas of the brain.

The Fodorian view is that evolution has given the modern human mind the best of both worlds: input modules that can enable swift, unthinking reactions in situations of danger (predators, etc) or opportunity (prey, etc) on one hand; and a slower central cognitive system, to be used when there is time for quiet contemplation, integrating information of many types and from many sources.

© Christopher Seddon 2009

Language & Species (1990), by Derek Bickerton

Derek Bickerton (b.1926) is professor emeritus of linguistics at the University of Hawaii and believes that creole languages provide a powerful insight into both the acquisition of language by infants and the origins of language in humans. A creole is a stable fully-functional language apparently arising from a pidgin, which is a stripped-down lingua franca arising when people sharing no common tongue have to live and/or work together. Examples include merchant seamen in distant ports and, historically, slaves in the West Indies.

Bickerton is the main proponent of the Language Bioprogram Hypothesis (LBH). This theory states that the structural similarity between many creole languages must arise from an innate capacity in the brain.

The following is a summary of Bickerton’s 1990 work Language and Species:

Chapter 1 The Continuity Paradox
Human and animal behaviour are separated by one major distinction that is not often appreciated – language. Animal communications are holistic and limited; e.g. vervet monkeys have warnings for various types of predators. By contrast, human communications are complex and unlimited. How did one evolve from the other? The theory of evolution states that features do not arise de novo but must be built incrementally upon something already in existence, but how can something infinite arise from something finite? This is known as the Continuity Paradox.

Bickerton resolves this paradox with the bold assertion that language in humans did not arise from the vocalizations of other animals and that its primary function is not in fact communication but representation. Communication is no more than a handy spinoff.

Nouns do not correspond to real objects, only representations of them. If this were not the case we could not have words for things like unicorns and golden mountains, which do not exist in the real world. Our view of the world is always representational and not absolute – what we see is a representation built up by sensory data; through a glass, darkly as St. Paul might have put it.

Chapter 2 Language as Representation: the Atlas
Language can be regarded as a means of mapping reality in a style analogous to both an atlas and an itinerary book. It is important to realise that the atlas and the itinerary book are both representations of reality and that therefore they cannot represent it with absolute verisimilitude. This limitation also applies to language – it does not directly map the experiential world. Language is a mediated mapping, a mapping that derives from the processing of sensory inputs.

In this chapter, Bickerton considers the atlas-like properties of language and states that a word can have three levels of meaning. Our knowledge of the world, in common with that of other animals, is derived from a series of mapping operations. The first of these – shared with other animals – is from existential objects to neural cells and networks in the brain. The first level of meaning is simple perception of, say, a leopard (non-italicised and not in quotes). We can only perceive a leopard when one is actually present, but we can think about leopards in their absence. This second level of meaning is the concept of something, for example “leopard” – the concept of leopards (in quotes). Some animals, such as frogs, almost certainly don’t have concepts. A frog reacts quickly to snap up passing insects, but this is simply a hard-wired reaction to small rapidly moving objects (the frog ignores stationary insects and reacts to pellets flicked across its line of vision, but the reaction works more often than not). Humans on the other hand do have concepts: for example an unidentified sound at night will be matched against possible explanations. Vervets probably fall somewhere in between and can equate the smell, sound and sight of a leopard with the same thing. Finally there is leopard (italicised), which refers to the word itself – a label – without any clear meaning being necessarily attached to it.

“Leopard” and leopard are defined in terms of (the perception of a) leopard, but this isn’t necessarily always the case; (the perception of a) burglar can only be expressed in terms of the concept of a “burglar” and the word burglar; we can define “paranoia” and label paranoia, but we cannot perceive paranoia.

Units relating to entities are insufficient to describe the world, because pretty well everything we see is doing something: for example walking, running, swimming, flying, etc. The subject/predicate distinction in language is so fundamental that it tends to be taken for granted, but it corresponds to nothing in nature. You cannot see an animal without perceiving at the same time what it is doing, e.g. a cow grazing. There is no word for cow-grazing, but we would expect there to be if language exactly mirrored reality. One possibility is that this is for reasons of economy, because we’d need words for cow-running, cow-mooing, etc. But Bickerton believes that the explanation is that the concept of entities preceded the concept of behaviours. Behaviours are more abstract than entities; a cow cannot be anything other than a cow, but many types of animal can graze or run.

Behaviours are of course not the only things that can be predicated of entities. Properties such as size, colour, temperature, etc may also be attributed to entities. Typically these adjectives are paired, large/small, hot/cold, fast/slow, etc. While we can have words such as fast, faster and fastest, there is no language that represents a continuum of, say, speeds or temperatures.

The level of representation given by the lexicon abstracts away from and interprets the flux of experience. It derives a wide range of entities, together with behaviours and attributes that can be predicated of these entities. These form an inventory of everything that we see; however the lexicon is not unstructured.

Words are hierarchical e.g. animal -> mammal -> dog -> Spaniel. The word “anger” includes a range of words from irritation and annoyance through to rage and fury. Anger in turn falls in the category of emotion. Words can not only be converted to strings of other words, but fall into place within a universal filing system that permits any concept to be retrieved and comprehended.

Words are also constrained by contiguity. For example there is no word for “left leg and left arm”, or “every other Friday” or “red and green”. The referent must be an uninterrupted piece of matter or time or space. This even applies to abstract properties like ownership, location, possession, existence. Some languages, such as English, use one verb (is) for existence, location, ownership (e.g. there IS a book, the pub IS across the road, the book IS yours) and another (have) for possession (I have a book); but no language groups together existence/ownership and location/possession (the equivalent of the pub HAVE across the road). This suggests that contiguity constraints exist even in highly abstract domains. Semantic space may well be an intrinsic property of the brain; the lexicon is carved up into convenient chunks.

Chapter 3 Language as Representation: the Itineraries
While a map can tell you what the terrain is like, an itinerary is required to tell you what journeys may be taken. Similarly there are rules governing a journey through semantic space. Sentences are underlain by three types of structural consistency: predicability, grammaticisation and syntax.

Predicability imposes constraints between entities and predication – e.g. “the story is true” or “the cow is brown” are permissible, but not “the story is brown” or “the cow is true”. Only abstract qualities can be predicated of abstract nouns; and concrete qualities of concrete nouns. What can and cannot be predicated can be drawn up on a tree diagram. A quality at the top of the tree can be predicated of any class below it, but of no class above it. A quality on a side branch can only be predicated of a class on the branch below it.

For example, trees, pigs and men can all be dead; but only pigs and humans can be hungry; and only humans can be honest. All of these things plus thunderstorms can be nearby, but only thunderstorms could have happened yesterday; and so on.

Three observations may be made about the tree. Firstly, it has binary branching. There is no obvious reason for this – why only two, and not three or more branches at each node? Secondly, there is a contiguity constraint – for example anything applying to both humans and plants must also apply to animals. Thirdly, the tree does not seem to be derived from experience of the world, as children as young as three or four used only slightly truncated trees. This does suggest that language as a classification mechanism is constrained by a human-specific conceptual analysis of the natural world.
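A minimal sketch of such a predicability tree (the class and quality names follow the examples above, but the data structure and code are my own illustration, not Bickerton’s):

```python
# Predicability tree sketch: a quality attached at a node can be predicated
# of that class and of every class below it, but not of classes above it
# or on other branches. Class and quality names are illustrative only.

TREE = {
    "thing":  {"parent": None,     "qualities": {"nearby"}},
    "event":  {"parent": "thing",  "qualities": {"happened yesterday"}},
    "living": {"parent": "thing",  "qualities": {"dead"}},
    "plant":  {"parent": "living", "qualities": set()},
    "animal": {"parent": "living", "qualities": {"hungry"}},
    "human":  {"parent": "animal", "qualities": {"honest"}},
}

def predicable(quality, cls):
    """True if the quality is attached to cls or to one of its ancestors."""
    while cls is not None:
        if quality in TREE[cls]["qualities"]:
            return True
        cls = TREE[cls]["parent"]
    return False

print(predicable("dead", "plant"))     # True – trees can be dead
print(predicable("honest", "animal"))  # False – only humans can be honest
```

The contiguity constraint falls out naturally: any quality predicable of both humans and plants must be attached at “living” or above, and is therefore predicable of animals too.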

Grammatical items are structural pieces that hold the meaningful parts of the sentence together – either inflections (-ing, -ed, etc), or words like “of” as in “the handle of the door”, or above, below, on, in, at, by, before, after, while, etc. Some languages do not express all these relations; others express relations not found in English. For example, Hopi and Turkish both have inflections that differentiate between information gained through personal experience and information obtained second hand. But grammaticization is only applied to a few relations – those pertaining to singular/plural, and past/present/future (tense).

No language grammaticizes more than a fraction of the possible relations and while tenses and singular/plural seems to be a universal feature of language there is no language with grammatical constructs for edible/inedible, friendly/hostile, etc, even though these things would be useful. It seems that we are obliged to grammaticize some things, yet other things cannot be grammaticized. While one might dismiss this as a mere convention of languages, conventions can be broken and these never are. We can expand lexicon but not grammar. The latter appears to be a black box; we can neither alter it nor explain it.

Syntax is highly complex, yet we can all master its subtleties. A sentence is constructed of phrases; each phrase is a hierarchical not linear entity. A sentence of 10 words can be re-arranged over 3 million ways, only one of which is correct – yet we can do it effortlessly. Without syntax, complex ideas could not be communicated.
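The “over 3 million” figure is simply 10 factorial – the number of possible orderings of ten distinct words:

```python
import math

# A 10-word sentence has 10! possible word orders, of which
# (in Bickerton's idealisation) only one is grammatically correct.
orderings = math.factorial(10)
print(orderings)  # 3628800
```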

Chapter 4 The Origins of Representational Systems
Language must have evolved as a representational system, not for communications. How did this happen? Our senses give us a species-specific view of reality, only a subset of the data potentially available (e.g. little smell data, unlike dogs). This is the primary representation system, or PRS. All such systems arose from cells that could differentiate between two states, a distinction between sensory cells and motor cells, and motor cells capable of more than one behaviour type in response to a given stimulus. Humans alone have a secondary representation system (SRS) – language.

At the lowest level there are organisms like sea anemones that can identify the chemical signature of a hostile starfish and execute an escape manoeuvre. Next is conditional response, such as a crayfish that becomes habituated to being touched and eventually does not waste energy on an escape manoeuvre, or a grub that only moves if light levels increase above a certain threshold. The ability to evaluate data is more complex in – say – a lizard stalking a fly, where there is actual data processing by the brain leading to a choice of behaviours.

Vervet monkeys are genetically-programmed to respond to snakes. Similarly, if you touch something hot you move your hand away without thinking too hard. But such an approach has its limitations. Wildebeest do not always flee when they see a lion. If they did, they’d have less time to feed and they’d either exhaust themselves or starve. So they become alert – indeed they experience fear. But they don’t flee until threat assessment becomes critical. But fear – an emotion – is crucial to making a decision to flee.

Representations are either innate (metabolizing food, growing hair, producing sentences, etc) or learned (writing, sewing, swimming, etc). We are conscious of learned representations, but cannot access innate representations. All representations lead to category formation; to form a category three things are needed: an object in the external world; patterns of cell activity in the observer’s brain directly or indirectly triggered by the object’s presence; and the observer’s response, both internal and external, to these patterns. Categories are species-specific.

For humans, categories are basically concepts – “concept” is simply the name we give to a category. In non-human animals, categories might be referred to as proto-concepts. Which came first – language or concepts? Probably language originally labelled proto-concepts derived from pre-linguistic experience; it was later expanded to be capable of deriving concepts not present in the PRS, e.g. absence, golden mountains, etc. While the SRS can divide the universe exhaustively, the PRS must do the same: e.g. for frogs, everything is either a frog, a pond, a large looming object or something else not relevant to frogs.

Pigeons can develop quite sophisticated categories – they can be trained to peck at certain classes of object, e.g. pictures of trees. Such behaviour cannot be entirely innate, as they can be trained to respond to objects they could have no knowledge of. Some categories – trees, humans, etc – probably are innate; it is likely that categories of things significant to a particular species are innate, and that the ability to analyse novel objects, utilizing the same processing power, subsequently evolved. Provided the referents of particular proto-concepts remained relevant, these would be retained and new ones would be added over evolutionary time.

Categories/proto-concepts such as “tree”, “human” etc may be precursors of nouns. Some monkeys have temporal cortex cells that respond to movement of a primate-like figure – could these be proto-verbs? But these are agent-plus-actions rather than actions – human language does not conflate an entity and its behaviour into single words; subject-predicate distinction is fundamental as seen earlier.

Tiger running, tiger walking, tiger attacking could be broken down into “tiger” + action; however tiger running, dog running, insect running cannot so easily be broken down into X + “running” as the types of “running” differ, as opposed to only one type of tiger. This is why verbs are more abstract than nouns and are harder to represent. However if only a subset of a particular behaviour is considered, it can be restricted to species likely to perform it – for example only primates can “grab with hand”.

Proto-nouns might have represented species interacting with hominids. Proto-verbs might have been actions only hominids could perform. This implies awareness of conspecifics with which the creature interacts – in turn implying a social species. Awareness of self is a cornerstone of language and consciousness.

Chapter 5 The Fossils of Language
Ape language is basically very limited. Does it represent an earlier form of human language? It is comparable to that of a 2-yr-old human. Bickerton then considers the possibility that “ontogeny recapitulates phylogeny”. “Genie” was a 13-year-old girl who had been imprisoned from early childhood and not exposed to language. After her rescue, she learned only ape/2-yr-old-type language and could not be taught full language. Genie failed to acquire human language but acquired something else. Language therefore cannot be a unitary system requiring input during a critical period, or Genie would not have acquired any language at all. Genie acquired proto-language (a robust “mature technology”) but could not go further and acquire full language. The means of acquisition are not the same for both.

Pidgins are proto-languages. Numerous examples are known: slaves in the West Indies, immigrants to Hawaii from 1880-1930, Russian and Scandinavian sailors. Their speakers nevertheless have normal linguistic skills.

Thus there are four classes of proto-language speakers: apes, under-2-yr-olds, adults deprived of language and pidgin-speakers. Language differs from proto-language in five ways:
1. Language has word order constrained by general rules and formal structure; proto-language does not.
2. Language uses null elements in a consistent manner; proto-language does not (not well explained).
3. In language, verbs have one, two or three arguments (like subprograms): sleep, 1 – “Fred sleeps”; go, 2 – “Fred goes to bed”; give, 3 – “Fred gives Bert five pounds”.
4. Proto-language cannot expand phrases (the man -> the tall man -> the tall bald man -> the tall bald fat man, etc) or concatenate phrases (John wants books -> John wants books to study).
5. Proto-languages do not inflect.
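Bickerton’s comparison of verb arguments to subprograms can be made literal in a short sketch (the function names and string outputs are my own illustration, not from the book):

```python
# Each verb, like a subprogram, demands a fixed number of arguments
# (thematic roles); supplying too few or too many makes the "call"
# ill-formed, just as it makes the sentence ill-formed.

def sleep(agent):                # 1 argument:  "Fred sleeps"
    return f"{agent} sleeps"

def go(agent, goal):             # 2 arguments: "Fred goes to bed"
    return f"{agent} goes to {goal}"

def give(agent, patient, goal):  # 3 arguments: "Fred gives Bert five pounds"
    return f"{agent} gives {goal} {patient}"

print(give("Fred", "five pounds", "Bert"))  # Fred gives Bert five pounds
```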

Proto-language is not a blanket term for ungrammatical language (e.g. people with aphasia due to damaged Broca’s area). It does appear to be a distinct thing in itself. But how did we get from proto-language to full language?

Chapter 6 The World of Protolanguage
Relative brain size jumps at the Homo habilis–ergaster/erectus boundary. H. habilis is claimed to show enlarged Broca’s and Wernicke’s areas and was right-handed – the first signs of lateralization. Proto-language could have begun with H. habilis (Bickerton’s hominid evolution diagrams represent the state of knowledge current in 1990). Tools and language are unlikely to have co-evolved: if H. habilis had proto-language then H. ergaster/erectus possibly had language – in which case why did the Acheulian tool tradition not evolve? It is likely that H. habilis did not have language and that H. ergaster/erectus had only proto-language. Possibly this aided them in the use of fire, which seems well-attested.

Bickerton rejects the theory of “gesture language”. If this were correct, infants would use sign-language (assuming ontogeny recapitulates phylogeny); language must have been vocal from the beginning. Sufficient cortical control was probably achieved by the time of Australopithecus afarensis – involuntary calling would have been maladaptive to a species that operated on the savannahs of east Africa. The vocal tract developed as the larynx lowered, increasing the risk of choking. Proto-language probably didn’t require the perfected human vocal tract, which would have been maladaptive if it had developed first; however, changes to the vocal tract would have been favoured after proto-language developed. The original vocabulary was probably small – phonology may have developed in conjunction with syntax. A pre-phonological stage may exist in pre-syntactic children.

Reliance on sight in primates increased the area of the brain needed to process data; increased PRS categories drove things on in the direction of language. Chimps have few enemies, but savannah-dwelling hominids had many. Curiosity about the surroundings, and recognising and categorizing things, were adaptive and selected for.

Few animals face the same set of problems as early humans, as few are both social and omnivorous with such varied feeding habits. Social herbivores move in herds; social carnivores hunt in packs and kill much larger animals; humans could not do this 2 million years ago. A band of foraging humans could split into small groups; a group able to use proto-language could get the attention of others and lead them to finds too large for it to exploit alone.

There are three types of learning: Experiential learning (e.g. I am trying to escape from a tiger, I jump into a river and swim across and the tiger fails to follow. Next time I am chased by a tiger I’ll head for the river); Observational learning (I see a man escape from a tiger by swimming across a river and conclude this is the thing to do if I’m in the same position); and finally Constructive learning (I note that tigers will go round a body of water rather than swim across it. I conclude that tigers avoid water and if attacked by one this might offer an escape route).

Pretty well all animals are capable of learning from experience, and many can learn by observation such as the blue tits that began pecking their way into milk bottles in the UK in the 1970s. The majority of these birds undoubtedly learned the trick from watching their conspecifics.

But is any animal lacking language capable of constructive learning? For apes, it appears to be possible only in a limited fashion and all the elements involved need to be physically present. Anything involving absent individuals or classes of entity requires a form of representation beyond those available to non-humans.

There is nothing in the fossil record to suggest that H. ergaster/erectus had cognitive capabilities comparable to ours. It therefore seems likely that H. ergaster/erectus lacked syntax, hence had only proto-language and could not think as we do.

Chapter 7 From Protolanguage to Language
Proto-language can evolve to true language without an intermediate. There is no plausible intermediate between the two. A child moves rapidly from proto-language to full language, falling back on the former only when their lexicon does not have the necessary grammatical words. Pidgins develop into creoles – i.e. a true language arises from a proto-language. Creoles tend to have the same grammar regardless of the constituent languages suggesting a biological basis for it. In neither of these proto-to-full language transitions is an intermediate involved.

Modern human behaviour (whenever this did emerge) was probably linked to emergence of true language. Bickerton seems to support early emergence (i.e. AMH were behaviourally modern from Day 1) contra Klein (1999 etc), Mithen (1996), etc. The gap in the fossil record he attributes either to use of perishable materials while H. sapiens remained confined to Africa, or that it took time to develop the artefacts of modern humans despite always having the capacity to do so. This view, quite radical for 1990, is basically the position taken by McBrearty and Brooks (2000).

Cases for an intermediate language are not plausible because the intermediate would be as complex as the full-blown language. Nor could features of a true language be acquired piecemeal as they are all too interlinked. The only way a gradual process could have happened would have been if the structural principles were at hand, but the lexicon was still limited.

Proto-language probably acquired grammatical items: a negator; wh-questions; auxiliaries (can, must, etc); time words (earlier/later); location particles (often with only one meaning covering on/in/at/to/from); possibly even pronouns.

The verb arguments are of three types (thematic roles) – Agent, Patient and Goal: e.g. Bert (Agent) gave five pounds (Patient) to Fred (Goal). These roles are not given by nature but are high-level abstractions. They probably originated through millennia of day-to-day hominid routine, with Agent as the most important. These roles were probably not systematically expressed.

These two developments in proto-language could have facilitated the emergence of true language. Verbally expressing emotion may have come next, followed by use of proto-language to model internal states of others. But how did we get to language? Could one mutation have done it? Could one mutation have generated a) syntax, b) skull features and dimensions and c) the larynx positioning?

The author believes a possible explanation for a) is that visual-processing areas of the brain could have been pressed into service to process syntax, rather than some central repository such as Broca’s area. This would explain why aphasia affects only grammaticization, not syntax (allegedly) [the role of the FOXP2 gene wasn’t discovered until 1998].

Chapter 8 Mind, Consciousness and Knowledge
Einstein’s claims notwithstanding, language is required for thinking. Much goes on beneath the level of conscious thought that the thinker is unaware of. Mind, consciousness and the search for knowledge may all arise from having a language-based SRS with a syntax processor.

Even if the human mind does derive from language, this does not tell us about the precise relationship between language and mind. It was once believed that a full understanding of language would serve as a “window on the mind”, but this implied that language permeated the mind at every level. This in turn implied that the mind was a single problem-solving mechanism, as has often been assumed by empiricists.

This view is seemingly at odds with the “modular mind” theory of Jerry Fodor, Howard Gardner, Annette Karmiloff-Smith etc. Despite the success of modularity theories, there is a problem. If modularity emerged after language there would not have been enough time for other modules, each with their own unique mechanism, to have evolved subsequently. [If Steven Mithen is correct, modularity considerably predates the emergence of fully-modern Homo sapiens (Mithen, 1996)].

Conversely if modularity emerged first and remained largely uninfluenced by the development of language it would only work if these were independent of language [which I believe is the accepted view] and language was not a representational system but merely a code for expressing the output [why?]. It would also predict human intellectual capabilities largely pre-existed language, which is clearly not the case [Mithen’s “cognitive fluidity” seems to be the answer here (Mithen, 1996)].

Bickerton’s resolution of this modularity versus window-on-the-mind problem is to suppose that syntax processing is not an isolated module but a particular type of nervous organization that permeates and interconnects those areas of the brain devoted to higher reasoning processes, concepts and the lexicon – a type of organization that automatically sorts material into binary-branching tree structures. Other modules will then receive and output material that has been pre-processed to conform to syntactic principles [this suggests a mechanism by which Mithen’s “cognitive fluidity” might work, though in fact Mithen is critical of Bickerton’s proto-language and believes the utterances of early humans were holistic (Mithen, 2005)].

What is “I”? Am I the whole body, or just the mind, or a homunculus? Human language divides entity from behaviour, so “I am hungry” suggests that “I” is divided from being hungry. Is the central directing “homunculus” a product of language – nothing more than an illusion – or is it something more? The latter suggests the human organism is indeed divided in some way, and not necessarily the way language suggests it is. In other words, the brain is modular.

Experiments with left/right hemispheres have suggested that the right hemisphere has only a PRS: it lacks syntax capability but can do inference.

“I” cannot control the entire organism – it cannot control bodily functions, which carry on if I’m asleep or unconscious. There is an accessible I, linked to language, and an inaccessible I, not linked to language. This is a better model than the mind-body one. The talking I is a module that forms a part of the accessible I, though sometimes other modules grab the microphone. “I forced myself to do X” means “information in the SRS indicated that doing X would bring long-term benefit, despite the short-term appeal of doing Y”.

Chapter 9 The Nature of the Species
We are living in the fourth age of man [taken to be H. sapiens only]. In the first phase, from 200K years ago to 40K years ago, humans were hunter-gatherers confined to Africa. In the second phase, humans left Africa and “beat the Neanderthals”. The third phase, which began with the coming of agriculture at the end of the last ice age, introduced territorialism and inequality. [In fact early Neolithic societies, such as that at Çatalhöyük in Anatolia, seem to have been fairly egalitarian, though there is no doubt that sedentism, which made it possible to accumulate possessions for the first time, led to the beginnings of social inequality. The notion that pre-agricultural man was not territorial seems highly dubious to me, considering that chimps are territorial.]

The fourth age began 400 years ago, when inequality between state-level societies emerged.

“Did Syntax Trigger the Human Revolution?” (Bickerton, 2007) is a paper submitted as Bickerton’s contribution to a series of papers published after the 2005 Cambridge conference entitled “Rethinking the Human Revolution”, part of the ongoing debate about the mode, tempo and timing of the emergence of modern human behaviour.

Bickerton rejects the notion of a “Great Leap Forward” 50-30 kya in Europe; the evidence now suggests that features thought to be novel to Europe emerged in Africa earlier. Did characteristically human cognitive capacities (CHCCs) emerge gradually over 200-300 ky? It is more plausible that the change occurred with the emergence of our own species. Bickerton considers and rejects the notion of studying tool sophistication because of the difficulty of agreeing what constitutes sophistication. There are also the assumptions that a gradual increase in tool complexity implies an increase in CHCCs, and that the moment a CHCC emerges it must result in artefact change. It is more plausible that when modern humans evolved, CHCCs emerged with them, but the novel artefacts only appeared later in response to selective pressure or cultural development. However, it is equally implausible that these new CHCCs lay dormant for extended periods of time. An in-between position seems the most likely.

It is possible to assume that the Acheulian hand-axes required the maker to conceptualise the finished article [the accepted position], but makers could simply have copied, and possibly modified and improved upon, existing axes. Bickerton believes the second possibility is more parsimonious, as there are objects intermediate between Oldowan and Acheulian, and between Acheulian and subsequent industries. On this picture, all human (inc. pre-sapiens) artefacts fall into two classes – those that are modifications of earlier artefacts and those that are completely new and would have to be imagined first (e.g. fish hooks, “Venus” figurines, etc).

What is required to create novel artefacts? Not necessarily bigger [or even more encephalised?] brains. Bickerton returns to his thesis of a proto-language developing around 2 mya. It is not, by itself, enough to produce novelties, though it would have increased social and foraging capacities. To sustain the trains of thought needed to produce novelties, something else is required. Bickerton distinguishes between “thought 1” (pre-linguistic thinking), “thought 2” (thinking with proto-language) and “thought 3” (thinking with full language). Only “thought 3” would permit a sustained train of thought.

Thought 1 could permit such thoughts as “that is a lion” (reacting to the sight of a lion) or “I am hungry” (feeling peckish) but not “hungry lions are dangerous” which would require the ability to instantiate the abstract class of “lion” at will rather than in response to actually seeing a lion. Australopithecines and present-day apes were/are probably restricted to thought 1.

Proto-language could have been holistic, like Steven Mithen’s “hmmmm” (Mithen, 2005), or – as per Bickerton’s position – comprise short, unstructured strings of single units (either oral or gestural) roughly corresponding to the individual words of present-day languages. Such a proto-language would have had a term for a lion corresponding to the abstract class “lion”. It would have enabled its possessors to think about things in their absence without triggering the responses (fight, flight, wait, etc) that a pre-linguistic signal might have occasioned.

Bickerton dismisses the long-running debate on the possibility of thought without language. If thought is mental computation, then anything with a brain is capable of thought. The issue is how to think at a level that creates novel behaviours or artefacts. Thinking is not conducted in words or images, as the brain does not contain words and images, only neural pathways. Proto-language units enable the creation of neural representations that allow thinking to take place without direct reference to external objects. These units could have been used for both language and thought. The former would have required an additional layer of mapping to a phonological representation for utterance, and also a relationship between units – words and/or signs.

Language and proto-language both concatenate the units of which they are composed, but language does so in a highly structured manner, with embedded phrases and clauses, whereas proto-language simply assembles words like beads on a string. Complex thought would have been impossible with proto-language alone, and its speakers would have remained confined to thought 2.

The crucial difference is syntax. Bickerton believes that the same mechanism required to produce full language also enables the brain to marshal the complex trains of thought needed for innovation. Since the functionality is similar, it is more parsimonious to assume the existence of just one rather than two distinct mechanisms. Bickerton speculates that the need for more complex utterances might have led to the evolution of a syntax system, which served without further modification as an organiser of thought, and that its possessors were capable of thought 3. It is therefore likely that the development of the syntax of language was a necessary, and possibly sufficient, pre-requisite for the emergence of modern human behaviour.

Bickerton D (1990): “Language and Species”, University of Chicago Press, USA.

Bickerton D (2007): “Did Syntax Trigger the Human Revolution?” in Rethinking the human revolution, McDonald Institute Monographs, University of Cambridge.

Fodor J (1983): “The Modularity of Mind”, MIT Press, Cambridge, MA.

Gardner H (1983): “Frames of Mind”, Basic Books.

Gardner H (1999): “Intelligence Reframed”, Basic Books.

Karmiloff-Smith A (1992): “Beyond Modularity”, MIT Press, Cambridge, MA.

Klein, R. (1999): “The Human Career” (2nd Edition), University of Chicago Press.

Mithen S (1996): “The Prehistory of the Mind”, Thames & Hudson.

Mithen S (2005): “The Singing Neanderthals”, Weidenfeld & Nicolson.

London Metropolitan University Graduate Centre

Holloway in North London is not normally noted for its cutting-edge architecture. However, the Graduate Centre on London Metropolitan University’s London North Campus was designed by the internationally famous architect Daniel Libeskind, whose portfolio includes the Jewish Museum in Berlin and the Imperial War Museum North in Manchester. Opened in 2004, it is only Libeskind’s second building in the United Kingdom.

© Christopher Seddon 2009

Omega Constellation (1968)

In the 1950s and 1960s, an Omega wristwatch was regarded as being at least as prestigious as a Rolex and many consider the automatic movements developed by Omega during this period to be superior to their Rolex counterparts.

Omega’s flagship model was the Constellation, introduced in 1952 and still in production. The original model became known as the “pie-pan” because its dial resembled an inverted pie dish. The “pie-pan” remained in production until the late 1960s and is still regarded by many as the classic “Connie”. However, the “pie-pan” gained a stable-mate in 1964, the tonneau-shaped “C-type”, named for the resemblance of the case and lugs to a pair of “C”s facing one another.

The watch above dates to 1968 and houses Omega’s Calibre 564, the final iteration of the 5xx series of chronometer-certified in-house movements which Omega perfected over the course of a decade, beginning with Cal. 501 in 1955 (earlier “Connies” used Omega’s so-called “bumper automatic” movements, Cal. 354). Cal. 564 featured a quickset date and is widely accepted to be one of the finest automatic movements ever made. This particular watch is still keeping time to within chronometer limits after more than forty years.

The “C-type” is not as popular with collectors as the “pie-pan”, and examples tend to command far lower prices. This is good news for anybody wishing to acquire a classic example of 1960s design for a fraction of the price of a present-day Rolex!

© Christopher Seddon 2009

Blue pillar box, Manchester

I have been unable to ascertain why this pillar box, located outside Manchester’s Museum of Science and Industry, is sky blue rather than the customary red. A possible explanation is that it’s for the use of long-suffering Manchester City fans who not only have to endure the continuing success of arch-rivals Manchester United, but also would otherwise have to post their letters in Man U-coloured pillar boxes!

© Christopher Seddon 2009

Gettier’s Pint

Edmund and Bertrand had arranged to meet for a few beers after work at the “Philosopher’s Head”. Both were avid drinkers of Fuller’s London Pride. Edmund was 99% sure he’d had a pint of London Pride in the “Philosopher’s Head”, but just to make sure he looked it up on the internet, and sure enough it was listed as a Fuller’s pub.

Bertrand, who worked near the pub, decided to check it out himself in his lunch hour. He saw beer pumps labelled “Fuller’s London Pride”, but he also saw a sign over the bar saying “Under New Management. We are now a Free House”.

The pub sold London Pride, as Edmund believed. But was he right to say he knew, i.e. did his belief count as knowledge [justified true belief]?

Edmund believed the pub sold London Pride;
He had evidence that this was so (his belief was justified);
It was true that the pub sold London Pride.

But as the pub was now a free house, it could have been selling practically anything. Edmund’s belief was true only by luck, so – as in Gettier’s original cases – his justified true belief arguably did not amount to knowledge.

Regardless of the epistemological implications of all this, the moral of the story is obvious: IT’S ALL TOO EASY TO BE COMPLETELY STITCHED UP BY AN OUT-OF-DATE WEBSITE!

© Christopher Seddon 2009

Bussard Ramjet, by John Timberlake

John Timberlake (born 1967) is a London-based artist and writer whose work frequently explores realities that never came to pass. Blurring the boundaries between literature, painting and photography, this slim retro-styled volume ponders a glittering possible future for mankind that, fifty years ago, seemed to be there for the taking. Proposed by the US physicist Dr. Robert W. Bussard in 1960, the Bussard Ramjet (BR-J) was an ambitious proposal for an interstellar drive that would have required leaps in technology and, Timberlake argues, radical changes in global economics and the possible remodelling of humanity itself. His images, some apocalyptic, others featuring futuristic water conduits superimposed onto bleak, contemporary settings, are interposed between dream-like narratives referencing alternate pasts and possible futures. There are constant references to Poul Anderson’s 1970 science-fiction novel Tau Zero, which describes an optimistic future in which the BR-J has made interstellar travel a reality; the contrast with the bleakness of Timberlake’s narratives could not be greater and suggests that Bussard’s bold vision will remain forever a dream.

(This book review appeared in Art World Magazine Issue 10 April/May 2009.)

© Christopher Seddon 2009