CHROMOSOME 6

Intelligence
The hereditarian fallacy is not the simple claim that IQ is to some degree 'heritable' [but] the equation of 'heritable' with 'inevitable'.

Stephen Jay Gould

I have been misleading you, and breaking my own rule into the bargain. I ought to write it out a hundred times as punishment: GENES ARE NOT THERE TO CAUSE DISEASES.
Even if a gene causes a disease by being 'broken', most genes are not 'broken' in any of us; they just come in different flavours. The blue-eyed gene is not a broken version of the brown-eyed gene, or the red-haired gene a broken version of the brown-haired gene.
They are, in the jargon, different alleles - alternative versions of the same genetic 'paragraph', all equally fit, valid and legitimate. They are all normal; there is no single definition of normality.
Time to stop beating about the bush. Time to plunge headlong into the most tangled briar of the lot, the roughest, scratchiest, most impenetrable and least easy of all the brambles in the genetic forest: the inheritance of intelligence.
Chromosome 6 is the best place to find such a thicket. It was on
chromosome 6, towards the end of 1997, that a brave or perhaps foolhardy scientist first announced to the world that he had found a gene 'for intelligence'. Brave, indeed, for however good his evidence, there are plenty of people out there who refuse to admit that such things could exist, let alone do. Their grounds for scepticism are not only a weary suspicion, bred by politically tainted research over many decades, of anybody who even touches the subject of hereditary intelligence, but also a hefty dose of common sense.
Mother Nature has plainly not entrusted the determination of our intellectual capacities to the blind fate of a gene or genes; she gave us parents, learning, language, culture and education to program ourselves with.
Yet this is what Robert Plomin announced that he and his colleagues had discovered. A group of especially gifted teenage children, chosen from all over America because they are close to genius in their capacity for schoolwork, are brought together every summer in Iowa. They are twelve- to fourteen-year-olds who have taken exams five years early and come in the top one per cent. They have an IQ of about 160. Plomin's team, reasoning that such children must have the best versions of just about every gene that might influence intelligence, took a blood sample from each of them and went fishing in their blood with little bits of DNA from human chromosome 6. (He chose chromosome 6 because he had a hunch based on some earlier work.) By and by, he found a bit on the long arm of chromosome 6 of the brainboxes which was frequently different from the sequence in other people. Other people had a certain sequence just there, but the clever kids had a slightly different one: not always, but often enough to catch the eye. The sequence lies in the middle of the gene called IGF2R.1
The history of IQ is not uplifting. Few debates in the history of science have been conducted with such stupidity as the one about intelligence. Many of us, myself included, come to the subject with a mistrustful bias. I do not know what my IQ is. I took a test at school, but was never told the result. Because I did not realise the test was against the clock, I finished little of it and presumably
scored low. But then not realising that the test is against the clock does not especially suggest brilliance in itself. The experience left me with little respect for the crudity of measuring people's intelligence with a single number. To be able to measure such a slippery thing in half an hour seems absurd.
Indeed, the early measurement of intelligence was crudely prejudiced in motivation. Francis Galton, who pioneered the study of twins to tease apart innate and acquired talents, made no bones about why he did so:2
My general object has been to take note of the varied hereditary faculties of different men, and of the great differences in different families and races, to learn how far history may have shown the practicability of supplanting inefficient human stock by better strains, and to consider whether it might not be our duty to do so by such efforts as may be reasonable, thus exerting ourselves to further the ends of evolution more rapidly and with less distress than if events were left to their own course.
In other words he wanted to selectively cull and breed people as if they were cattle.
But it was in America that intelligence testing turned really nasty.
H. H. Goddard took an intelligence test invented by the Frenchman Alfred Binet and applied it to Americans and would-be Americans, concluding with absurd ease that not only were many immigrants to America 'morons', but that they could be identified as such at a glance by trained observers. His IQ tests were ridiculously subjective and biased towards middle-class or western cultural values. How many Polish Jews knew that tennis courts had nets in the middle?
He was in no doubt that intelligence was innate:3 'the consequent grade of intellectual or mental level for each individual is determined by the kind of chromosomes that come together with the union of the germ cells: that it is but little affected by any later influences except such serious accidents as may destroy part of the mechanism.'
With views like these, Goddard was plainly a crank. Yet he prevailed upon national policy sufficiently to be allowed to test
immigrants as they arrived at Ellis Island and was followed by others with even more extreme views. Robert Yerkes persuaded the United States army to let him administer intelligence tests to millions of recruits in the First World War, and although the army largely ignored the results, the experience provided Yerkes and others with the platform and the data to support their claim that intelligence testing could be of commercial and national use in sorting people quickly and easily into different streams. The army tests had great influence in the debate leading to the passage in 1924 by Congress of an Immigration Restriction Act setting strict quotas for southern and eastern Europeans on the grounds that they were stupider than the 'Nordic' types that had dominated the American population prior to 1890. The Act's aims had little to do with science. It was more an expression of racial prejudice and union protectionism. But it found its excuses in the pseudoscience of intelligence testing.
The story of eugenics will be left for a later chapter, but it is little wonder that this history of intelligence testing has left most academics, especially those in the social sciences, with a profound distrust of anything to do with IQ tests. When the pendulum swung away from racism and eugenics just before the Second World War, the very notion of hereditarian intelligence became almost a taboo.
People like Yerkes and Goddard had ignored environmental influences on ability so completely that they had tested non-English speakers with English tests and illiterate people with tests requiring them to wield a pencil for the first time. Their belief in heredity was so wishful that later critics generally assumed they had no case at all. Human beings are capable of learning, after all, and their IQ can be influenced by their education; so psychology came to start from the assumption that there was no hereditary element at all in intelligence: it was all a matter of training.
Science is supposed to advance by erecting hypotheses and testing them by seeking to falsify them. But it does not. Just as the genetic determinists of the 1920s looked always for confirmation of their ideas and never for falsification, so the environmental determinists of the 1960s looked always for supporting evidence and averted
their eyes from contrary evidence, when they should have been actively seeking it. Paradoxically, this is a corner of science where the 'expert' has usually been more wrong than the layman. Ordinary people have always known that education matters, but equally they have always believed in some innate ability. It is the experts who have taken extreme and absurd positions at either end of the spectrum.
There is no accepted definition of intelligence. Is it thinking speed, reasoning ability, memory, vocabulary, mental arithmetic, mental energy or simply the appetite of somebody for intellectual pursuits that marks them out as intelligent? Clever people can be amazingly dense about some things — general knowledge, cunning, avoiding lamp-posts or whatever. A soccer player with a poor school record may be able to size up in a split second the opportunity and way to make a telling pass. Music, fluency with language and even the ability to understand other people's minds are capacities and talents that frequently do not seem necessarily to go together. Howard Gardner has argued forcefully for a theory of multiple intelligence that recognises each talent as a separate ability. Robert Sternberg has suggested instead that there are essentially three separate kinds of intelligence - analytic, creative and practical. Analytic problems are ones formulated by other people, clearly defined, that come accompanied by all the information required to solve them, have only one right answer, are disembedded from ordinary experience and have no intrinsic interest: a school exam, in short. Practical problems require you to recognise and formulate the problem itself, are poorly defined, lacking in some relevant information, may or may not have a single answer but spring directly out of everyday life. Brazilian street children who have failed badly at mathematics in school are none the less sophisticated at the kind of mathematics they need in their ordinary lives. IQ is a singularly poor predictor of the ability of professional horse-race handicappers. And some Zambian children are as good at IQ tests that use wire models as they are bad at ones requiring pencil and paper - English children the reverse.
Almost by definition, school concentrates on analytic problems and so do IQ tests. However varied they may be in form and
content, IQ tests are inherently biased towards certain kinds of minds. And yet they plainly measure something. If you compare people's performance on different kinds of IQ tests, there is a tendency for them to co-vary. The statistician Charles Spearman first noticed this in 1904 - that a child who does well in one subject tends to do well in others and that, far from being independent, different intelligences do seem well correlated. Spearman called this general intelligence, or, with admirable brevity, 'g'. Some statisticians argue that 'g' is just a statistical quirk - one possible solution among many to the problem of measuring different performances. Others think it is a direct measurement of a piece of folklore: the fact that most people can agree on who is 'clever' and who is not. Yet there is no doubt that 'g' works. It is a better predictor of a child's later performance in school than almost any other measure. There is also some genuinely objective evidence for 'g': the speed with which people perform tasks involving the scanning and retrieval of information correlates with their IQ. And general IQ remains surprisingly constant at different ages: between six and eighteen, your intelligence increases rapidly, of course, but your IQ relative to your peers changes very little. Indeed, the speed with which an infant habituates to a new stimulus correlates quite strongly with later IQ, as if it were almost possible to predict the adult IQ of a baby when only a few months old, assuming certain things about its education. IQ scores correlate strongly with school test results.
High-IQ children seem to absorb more of the kind of things that are taught in school.4
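Spearman's 'g' is, at bottom, a statistical object: the single common factor that best summarises a table of positively correlated test scores. Here is a toy sketch in Python of what extracting that factor involves. The correlation matrix, the test names and the numbers are all invented for illustration; nothing in it comes from real data.

```python
# Toy sketch of Spearman's observation: when every test correlates
# positively with every other, one common factor ('g') captures much of
# the variation. The correlations below are invented for illustration.
import numpy as np

# Hypothetical correlations between four kinds of test:
# vocabulary, arithmetic, spatial puzzles, memory.
R = np.array([
    [1.00, 0.55, 0.45, 0.40],
    [0.55, 1.00, 0.50, 0.35],
    [0.45, 0.50, 1.00, 0.30],
    [0.40, 0.35, 0.30, 1.00],
])

eigenvalues, eigenvectors = np.linalg.eigh(R)   # eigenvalues in ascending order
g_share = eigenvalues[-1] / eigenvalues.sum()   # variance soaked up by the largest factor
g_loadings = eigenvectors[:, -1]
g_loadings *= np.sign(g_loadings.sum())         # fix the arbitrary sign of the eigenvector

print(f"share of variance captured by one general factor: {g_share:.0%}")
print("loading of each test on that factor:", np.round(g_loadings, 2))
```

On made-up numbers like these, a single factor accounts for a bit over half of the variation across the four tests, which is the sort of pattern that led Spearman to posit 'g' in the first place.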
Not that this justifies fatalism about education: the enormous inter-school and international differences in average achievement at mathematics or other subjects show how much can still be achieved by teaching. 'Intelligence genes' cannot work in a vacuum; they need environmental stimulation to develop.
So let us accept the plainly foolish definition of intelligence as the thing that is measured by the average of several intelligence tests
- 'g' - and see where it gets us. The fact that IQ tests were so crude and bad in the past and are still far from perfect at pinning
down something truly objective makes it more remarkable, not less, that they are so consistent. If a correlation between IQ and certain genes shows through what Mark Philpott has called 'the fog of imperfect tests',5 that makes it all the more likely that there is a strongly heritable element to intelligence. Besides, modern tests have been vastly improved in their objectivity and their insensitivity to cultural background or specific knowledge.
In the heyday of eugenic IQ testing in the 1920s, there was no evidence for heritability of IQ. It was just an assumption of the practitioners. Today, that is no longer the case. The heritability of IQ (whatever IQ is) is a hypothesis that has been tested on two sets of people: twins and adoptees. The results, however you look at them, are startling. No study of the causes of intelligence has failed to find a substantial heritability.
There was a fashion in the 1960s for separating twins at birth, especially when putting them up for adoption. In many cases this was done with no particular thought, but in others it was deliberately done with concealed scientific motives: to test and (it was hoped) demonstrate the prevailing orthodoxy — that upbringing and environment shaped personality and genes did not. The most famous case was that of two New York girls named Beth and Amy, separated at birth by an inquisitive Freudian psychologist. Amy was placed in the family of a poor, overweight, insecure and unloving mother; sure enough, Amy grew up neurotic and introverted, just as Freudian theory would predict. But so - down to the last details - did Beth, whose adoptive mother was rich, relaxed, loving and cheerful. The differences between Amy's and Beth's personalities were almost undetectable when they rediscovered each other twenty years later.
Far from demonstrating the power of upbringing to shape our minds, the study proved the very opposite: the power of instinct.6
Started by environmental determinists, the study of twins reared apart was later taken up by those on the other side of the argument, in particular Thomas Bouchard of the University of Minnesota.
Beginning in 1979, he collected pairs of separated twins from all over the world and reunited them while testing their personalities
and IQs. Other studies, meanwhile, concentrated on comparing the IQs of adopted people with those of their adoptive parents and their biological parents or their siblings. Put all such studies together, totting up the IQ tests of tens of thousands of individuals, and the table looks like this. In each case the number is a percentage correlation, one hundred per cent correlation being perfect identity and zero per cent being random difference.
The same person tested twice 87
Identical twins reared together 86
Identical twins reared apart 76
Fraternal twins reared together 55
Biological siblings 47
Parents and children living together 40
Parents and children living apart 31
Adopted children living together 0
Unrelated people living apart 0
Not surprisingly, the highest correlation is between identical twins reared together. Sharing the same genes, the same womb and the same family, they are indistinguishable from the same person taking the test twice. Fraternal twins, who share a womb but are genetically no more similar than two siblings, are much less similar, but they are more similar than ordinary brothers, implying that things experienced in the womb or early family life can matter a little. But the astonishing result is the correlation between the scores of adopted children reared together: zero. Being in the same family has no discernible effect on IQ at all.7
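For readers who want to see how numbers like these turn into a heritability figure, here is a rough back-of-the-envelope sketch using Falconer's classic twin-comparison formula, not the fuller statistical models the published studies actually fit. The correlations are taken from the table above; everything else is simplifying assumption.

```python
# Back-of-the-envelope heritability arithmetic from the correlations in the
# table above, using Falconer's classic approximation rather than the fuller
# models of the published twin studies.

r_mz_together = 0.86   # identical twins reared together
r_mz_apart    = 0.76   # identical twins reared apart
r_dz_together = 0.55   # fraternal twins reared together

# Identical twins share all their genes, fraternal twins about half of the
# variable ones, so twice the gap between them estimates heritability.
h2_falconer = 2 * (r_mz_together - r_dz_together)       # 0.62

# Identical twins reared apart share genes but not a family, so their
# correlation is itself a direct estimate of heritability.
h2_direct = r_mz_apart                                   # 0.76

# Whatever identical twins reared together share beyond their genes is an
# estimate of the shared family environment.
c2_shared = r_mz_together - h2_falconer                  # 0.24

print(f"heritability (Falconer):        {h2_falconer:.2f}")
print(f"heritability (MZ reared apart): {h2_direct:.2f}")
print(f"shared family environment:      {c2_shared:.2f}")
```

The different rows give somewhat different answers (the zero correlation for adopted children reared together points to an even smaller family effect than the Falconer arithmetic suggests), which is one reason the chapter settles for a rough 'about half' rather than a precise figure.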
The importance of the womb has only recently been appreciated.
According to one study, twenty per cent of the similarity in intelligence of a pair of twins can be accounted for by events in the womb, while only five per cent of the similarity in intelligence of a pair of siblings can be accounted for by events in the womb. The difference is that twins share the same womb at the same time, whereas siblings do not. The influence upon our intelligence of events that happened
in the womb is three times as great as anything our parents did to us after our birth. Thus even that proportion of our intelligence that can be attributed to 'nurture' rather than nature is actually determined by a form of nurture that is immutable and firmly in the past. Nature, on the other hand, continues to express genes throughout youth. It is nature, not nurture, that demands we do not make fatalistic decisions about children's intelligence too young.8
This is positively bizarre. It flies in the face of common sense: surely our intelligence is influenced by the books and conversations found in our childhood homes? Yes, but that is not the question.
After all, heredity could conceivably account for the fact that both parents and children from the same home like intellectual pursuits.
No studies have been done - except for twin and adoption studies
- that discriminate between the hereditary and parental-home explanation. The twin and adoption studies are unambiguous at present in favouring the hereditary explanation for the coincidence of parents' and children's IQs. It remains possible that the twin and adoption studies are misleading because they come from too narrow a range of families. These are mostly white, middle-class families, and very few poor or black families are included in the samples.
Perhaps it is no surprise that the range of books and conversations found in all middle-class, American, white families is roughly the same. When a study of trans-racial adoptees was done, a small correlation was found between the children's IQ and that of their adoptive parents (nineteen per cent).
But it is still a small effect. The conclusion that all these studies converge upon is that about half of your IQ was inherited, and less than a fifth was due to the environment you shared with your siblings - the family. The rest came from the womb, the school and outside influences such as peer groups. But even this is misleading.
Not only does your IQ change with age, but so does its heritability.
As you grow up and accumulate experiences, the influence of your genes increases. What? Surely, it falls off? No: the heritability of childhood IQ is about forty-five per cent, whereas in late adolescence it rises to seventy-five per cent. As you grow up, you gradually express
your own innate intelligence and leave behind the influences stamped on you by others. You select the environments that suit your innate tendencies, rather than adjusting your innate tendencies to the environments you find yourself in. This proves two vital things: that genetic influences are not frozen at conception and that environmental influences are not inexorably cumulative. Heritability does not mean immutability.
Francis Galton, right at the start of this long debate, used an analogy that may be fairly apt. 'Many a person has amused himself,' he wrote, 'with throwing bits of stick into a tiny brook and watching their progress; how they are arrested, first by one chance obstacle, then by another; and again, how their onward course is facilitated by a combination of circumstances. He might ascribe much importance to each of these events, and think how largely the destiny of the stick had been governed by a series of trifling accidents.
Nevertheless, all the sticks succeed in passing down the current, and in the long run, they travel at nearly the same rate.' So the evidence suggests that intensively exposing children to better tuition has a dramatic effect on their IQ scores, but only temporarily. By the end of elementary school, children who have been in Head Start programmes are no further ahead than children who have not.
If you accept the criticism that these studies mildly exaggerate heritability because they are of families from a single social class, then it follows that heritability will be greater in an egalitarian society than an unequal one. Indeed, the definition of the perfect meritocracy, ironically, is a society in which people's achievements depend on their genes because their environments are equal. We are fast approaching such a state with respect to height: in the past, poor nutrition resulted in many children not reaching their 'genetic' height as adults. Today, with generally better childhood nutrition, more of the differences in height between individuals are due to genes: the heritability of height is, therefore, I suspect, rising. The same cannot yet be said of intelligence with certainty, because environmental variables - such as school quality, family habits, or wealth - may be growing more unequal in some societies, rather than more equal. But
it is none the less a paradox: in egalitarian societies, genes matter more.
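The paradox follows directly from what heritability measures: the share of the total variation between people that is attributable to genetic variation. A minimal numerical sketch, with variance figures invented purely for illustration, shows the arithmetic.

```python
# Why equalising environments raises heritability. Heritability is the
# genetic share of the total variation between people:
#     h2 = V_genetic / (V_genetic + V_environment)
# The variance figures below are invented; only the arithmetic matters.

def heritability(v_genetic: float, v_environment: float) -> float:
    """Fraction of total variance attributable to genetic differences."""
    return v_genetic / (v_genetic + v_environment)

v_genetic = 10.0   # genetic variation, held constant in both societies

# An unequal society: large differences in schooling, nutrition and wealth.
print(f"unequal environments:   h2 = {heritability(v_genetic, 10.0):.2f}")   # 0.50

# A more egalitarian society: the same genes, a smaller environmental spread.
print(f"equalised environments: h2 = {heritability(v_genetic, 2.0):.2f}")    # 0.83
```

Nothing about anyone's genes changes between the two lines; only the spread of environments does, yet the measured heritability rises.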
These heritability estimates apply to the differences between individuals, not those between groups. IQ heritability does seem to be about the same in different populations or races, which might not have been the case. But it is logically false to conclude that, because the difference between the IQ of one person and another is approximately fifty per cent heritable, the difference between the average IQs of blacks and whites or between whites and Asians is due to genes. Indeed, the implication is not only logically false, it so far looks empirically wrong, too. Thus does a large pillar of support for part of the thesis of the recent book The Bell Curve9
crumble. There are differences between the average IQ scores of blacks and whites, but there is no evidence that these differences are themselves heritable. Indeed, the evidence from cases of cross-racial adoption suggests that the average IQ of blacks reared by and among whites is no different from that of whites.
If IQ is fifty per cent heritable individually, then some genes must influence it. But it is impossible to tell how many. The only thing one can say with certainty is that some of the genes that influence it are variable, that is to say they exist in different versions in different people. Heritability and determinism are very different things. It is entirely possible that the most important genes affecting intelligence are actually non-varying, in which case there would be no heritability for differences caused by those genes, because there would be no such differences. For instance, I have five fingers on each hand and so do most people. The reason is that I inherited a genetic recipe that specified five fingers. Yet if I went around the world looking for people with four fingers, about ninety-five per cent of the people I found, possibly more, would be people who had lost fingers in accidents. I would find that having four fingers is something with very low heritability: it is nearly always caused by the environment. But that does not imply that genes had nothing to do with determining finger number. A gene can determine a feature of our bodies that is the same in different people just as surely as it can determine features that are different in different
people. Robert Plomin's gene-fishing expeditions for IQ genes will only find genes that come in different varieties, not genes that show no variation. They might therefore miss some important genes.
Plomin's first gene, the IGF2R gene on the long arm of chromosome 6, is at first sight an unlikely candidate for an 'intelligence gene'. Its main claim to fame before Plomin linked it with intelligence was that it was associated with liver cancer. It might have been called a 'liver-cancer gene', thus neatly demonstrating the foolishness of identifying genes by the diseases they cause. At some point we may have to decide if its cancer-suppressing function is its main task and its ability to influence intelligence a side-effect, or vice versa. In fact, they could both be side-effects. The function of the protein it encodes is mystifyingly dull: 'the intracellular trafficking of phosphorylated lysosomal enzymes from the Golgi complex and the cell surface to the lysosomes'. It is a molecular delivery van.
Not a word about speeding up brain waves.
IGF2R is an enormous gene, with 7,473 letters in all, but the sense-containing message is spread out over a 98,000-letter stretch of the genome, interrupted forty-eight times by nonsense sequences called introns (rather like one of those irritating magazine articles interrupted by forty-eight advertisements). There are repetitive stretches in the middle of the gene that are inclined to vary in length, perhaps affecting the difference between one person's intelligence and another's. Since it seems to be a gene vaguely connected with insulin-like proteins and the burning of sugar, it is perhaps relevant that another study has found that people with high IQs are more
'efficient' at using glucose in their brains. While learning to play the computer game called Tetris, high-IQ people show a greater fall in their glucose consumption as they get more practised than do low-IQ people. But this is to clutch at straws. Plomin's gene, if it proves real at all, will be one of many that can influence intelligence in many different ways.10
The chief value of Plomin's discovery lies in the fact that, while people may still dismiss the studies of twins and adoptees as too indirect to prove the existence of genetic influences on intelligence,
they cannot argue with a direct study of a gene that co-varies with intelligence. One form of the gene is about twice as common in the superintelligent Iowan children as in the rest of the population, a result extremely unlikely to be accidental. But its effect must be small: this version of the gene can only add four points to your IQ, on average. It is emphatically not a 'genius gene'. Plomin hints at up to ten more 'intelligence genes' to come from his Iowa brainboxes. Yet the return of heritable IQ to scientific respectability is greeted with dismay in many quarters. It raises the spectre of eugenic abuse that so disfigured science in the 1920s and 1930s. As Stephen Jay Gould, a severe critic of excessive hereditarianism, has put it: 'A partially inherited low IQ might be subject to extensive improvement through proper education. And it might not. The mere fact of its heritability permits no conclusion.' Indeed. But that is exactly the trouble. It is by no means inevitable that people will react to genetic evidence with fatalism. The discovery of genetic mutations behind conditions like dyslexia has not led teachers to abandon children with such conditions as incurable - quite the reverse; it has encouraged them to single out dyslexic children for special teaching.11
Indeed, the most famous pioneer of intelligence testing, the Frenchman Alfred Binet, argued fervently that its purpose was not to reward gifted children but to give special attention to less gifted ones. Plomin cites himself as a perfect example of the system at work. As the only one of thirty-two cousins from a large family in Chicago to go to college, he credits his fortune to good results on an intelligence test, which persuaded his parents to send him to a more academic school. America's fondness for such tests is in remarkable contrast to Britain's horror of them. The short-lived and notorious eleven-plus exam, predicated on probably faked data produced by Cyril Burt, was Britain's only mandatory intelligence test. Whereas in Britain the eleven-plus is remembered as a disastrous device that condemned perfectly intelligent children to second-rate schools, in meritocratic America similar tests are the passports to academic success for the gifted but impoverished.
Perhaps the heritability of IQ implies something entirely different,
something that once and for all proves that Galton's attempt to discriminate between nature and nurture is misconceived. Consider this apparently fatuous fact. People with high IQs, on average, have more symmetrical ears than people with low IQs. Their whole bodies seem to be more symmetrical: foot breadth, ankle breadth, finger length, wrist breadth and elbow breadth each correlates with IQ.
In the early 1990s an old interest in bodily symmetry was revived, because of what it can reveal about the body's development during early life. Some asymmetries in the body are consistent: the heart is on the left side of the chest, for example, in most people.
But other, smaller asymmetries can go randomly in either direction.
In some people the left ear is larger than the right; in others, vice versa. The magnitude of this so-called fluctuating asymmetry is a sensitive measure of how much stress the body was under when developing, stress from infections, toxins or poor nutrition. The fact that people with high IQs have more symmetrical bodies suggests that they were subject to fewer developmental stresses in the womb or in childhood. Or rather, that they were more resistant to such stresses. And the resistance may well be heritable. So the heritability of IQ might not be caused by direct 'genes for intelligence' at all, but by indirect genes for resistance to toxins or infections - genes, in other words, that work by interacting with the environment. You inherit not your IQ but your ability to develop a high IQ under certain environmental circumstances. How does one parcel that one into nature and nurture? It is frankly impossible.12
Support for this idea comes from the so-called Flynn effect. A New Zealand-based political scientist, James Flynn, noticed in the 1980s that IQ is increasing in all countries all the time, at an average rate of about three IQ points per decade. Quite why is hard to determine. It might be for the same reason that height is increasing: improved childhood nutrition. When two Guatemalan villages were given ad-lib protein supplements for several years, the IQ of children, measured ten years later, had risen markedly: a Flynn effect in miniature. But IQ scores are still rising just as rapidly in well-nourished western countries. Nor can school have much to do with
it, because interruptions to schooling have demonstrably temporary effects on IQ and because the tests that show the most rapid rises are the ones that have least to do with what is taught in school. It is the ones that test abstract reasoning ability that show the steepest improvements. One scientist, Ulric Neisser, believes that the cause of the Flynn effect is the intense modern saturation of everyday life with sophisticated visual images — cartoons, advertisements, films, posters, graphics and other optical displays — often at the expense of written messages. Children experience a much richer visual environment than once they did, which helps develop their skills in visual puzzles of the kind that dominate IQ tests.13
But this environmental effect is, at first sight, hard to square with the twin studies suggesting such a high heritability for IQ. As Flynn himself notes, an increase of fifteen IQ points in five decades implies either that the world was full of dunces in 1950 or that it is full of geniuses today. Since we are not experiencing a cultural renaissance, he concludes that IQ measures nothing innate. But if Neisser is right, then the modern world is an environment that encourages the development of one form of intelligence - facility with visual symbols. This is a blow to 'g', but it does not negate the idea that these different kinds of intelligence are at least partly heritable. After two million years of culture, in which our ancestors passed on learnt local traditions, human brains may have acquired (through natural selection) the ability to find and specialise in those particular skills that the local culture teaches, and that the individual excels in. The environment that a child experiences is as much a consequence of the child's genes as it is of external factors: the child seeks out and creates his or her own environment. If she is of a mechanical bent, she practises mechanical skills; if a bookworm, she seeks out books. The genes may create an appetite, not an aptitude.
After all, the high heritability of short-sightedness is accounted for not just by the heritability of eye shape, but by the heritability of literate habits. The heritability of intelligence may therefore be about the genetics of nurture, just as much as the genetics of nature. What a richly satisfying end to the century of argument inaugurated by Galton.