[NB: Here is a Christmas meditation from founding editor David F Horrobin on the role and functioning of Medical Hypotheses, as it has existed for the past 34 years - but probably not for very much longer. I doubt whether the real Medical Hypotheses, run on Horrobin's principles, will survive 2010. As Horrobin's successor, I strive to maintain the ideals described in his inaugural editorial. As he instructed me, this type of journal can only in practice be run by the editor choosing papers himself (not by delegating decisions), and by his taking responsibility for these choices. This is the traditional scientific method of editorial review, and it entails eschewing the bureaucratic timidities and inefficiencies of mainstream peer review. Horrobin's conviction was that there were two linchpins of Medical Hypotheses as he founded and maintained it: 1. aiming to publish revolutionary ideas, even when they 'seem improbable and perhaps even faintly ridiculous'; and 2. using editorial review in pursuit of this aim.]
***
Ideas in biomedical science: reasons for the foundation of Medical Hypotheses
David F. Horrobin (1939–2003), Founder and first editor of Medical Hypotheses
Adapted from the inaugural editorial which appeared in the first issue of Medical Hypotheses (1975) 1(1), 1–2. Re-published in the first issue edited by Bruce G. Charlton: Medical Hypotheses, Volume 62, Issue 1, January 2004, Pages 3–4
It is at least arguable that too much biomedical research is published rather than too little. Why then start a new journal?
Scientific progress depends on the existence of creative tension between ideas and observations. An observation is made which cries out for explanation. A hypothesis is proposed to account for this observation. The hypothesis is tested by making more observations which almost invariably require the abandonment of some part of the hypothesis. A new hypothesis is proposed. And so on…
The physical and chemical sciences long ago recognized that observations are not superior to hypotheses in generating scientific progress nor are hypotheses superior to observations. Both are necessary. While the ideal research worker may be one who is equally able to generate hypotheses and to test them experimentally, these sciences also recognized that such paragons are very rare indeed. Most scientists are much better at either one or the other activity. In physico-chemical fields this is fully accepted and the contributions of both theoretical and experimental scientists are recognized.
In contrast, in the biomedical sciences there seems to me much ignorance of the way in which scientific advance actually occurs. Physical scientists often dismiss biology and medicine as backward and the biologists quite legitimately react by pointing out that they are usually dealing with much more complex phenomena. But I have a suspicion that there is some truth in what the physical scientists say and that biology and medicine are backward because they have relied almost exclusively on observation. They have failed to recognize adequately that observation is always more effective when disciplined and channelled by hypothesis.
Many journals will occasionally publish a theoretical paper from a scientist with an outstanding reputation but will not consider similar papers from relative unknowns. The rule that is almost universally applied in biology and medicine is that ideas can be presented or criticized only by those with a record of experimental work in a field. Even then they must be kept strictly to the discussion sections of papers and their presentation must usually be rigorously curtailed because ignorant and pedantic referees and editors object to ‘unjustified speculation’ and complain that the discussion ‘goes beyond the observed facts’. It is hardly surprising that the best physicists and chemists find medicine and biology primitive and unsophisticated.
The situation seems to me to be a tragedy. It leads to a total misrepresentation of the way in which science actually takes place. In 10,000 years' time, a historian who had access only to the journals would be unable to build up anything like an accurate picture of what biomedical scientists actually do. This would not matter unduly, but what does matter is that many scientists and most students also fail to understand how scientific advance actually occurs. As a result of this antipathy to theory the rate of progress is slowed because there is neither free presentation of new ideas nor open criticism of old ones. Outdated concepts can persist for prolonged periods because the evidence against them is scattered through hundreds of papers and no one is allowed to gather it together in one article to mount a sustained attack.
The refusal to accept the equal importance of ideas and observations leads to inefficiency. Most scientists now active in the biomedical field are competent observers whose ability to generate ideas is either naturally absent or has been stultified by the prevailing philosophy. As a result they spend their time in making more and more detailed observations of the same sorts of phenomena. In contrast, a few have far more ideas than they could possibly investigate but their potential contribution is largely nullified because they are allowed to publish only in those areas where they have done experimental or clinical work. If only these differences of ability and emphasis could be accepted and recognized then the people with ideas (who may often be inept experimenters) could generate a steady flow of new concepts which might rejuvenate the work of those whose primary skill is in observation.
Medical Hypotheses will be primarily devoted to publishing ideas and criticisms of ideas in the biomedical area. It will publish papers from anyone regardless of whether they have done experimental work in the field or not and regardless of the reputation of the authors or the institution from which they come. I shall be biased in favour of articles which clearly have some bearing on medicine. I shall try to ensure that all articles are written in a way which enables any intelligent medical scientist to obtain something useful from them. While specialization in research is essential, obscurity in the presentation of specialized ideas is not. I believe strongly in the ability of apparently unrelated fields to cross-fertilize one another and I hope that as a side result of editorial policy Medical Hypotheses may become an important medium for the continuing education of those medical scientists who have open and lively minds. There will be a correspondence column which I should like to make a major debating area as well as fun to read.
What sorts of papers will be published in Medical Hypotheses? Some will describe theories, ideas which already have a great deal of observational support; and some hypotheses, ideas where the experimental support is as yet fragmentary. Some will criticize theories and hypotheses without necessarily having anything to say in replacement. Some will discuss more philosophical aspects of the logical bases of science and of how science functions in practice. In the biomedical field I believe that ignorance of such philosophical considerations remains a serious impediment to progress.
I shall willingly and proudly plead guilty to the charge that I shall publish some ideas which seem improbable and perhaps even faintly ridiculous. Most scientists seem to be under the impression that the best hypotheses are those which seem most likely to be true. I follow Karl Popper in seeing the virtues of improbability. If a hypothesis which most people think is probably true does turn out to be true (or rather is not falsified by crucial and valid experimental tests) then little progress has been made. If a hypothesis which most think is improbable turns out to be true then a scientific revolution occurs and progress is dramatic. Many and probably most of the hypotheses published in the journal will turn out in some way to be wrong. But if they stimulate determined experimental testing, progress is inevitable whether they are wrong or right.
The history of science has repeatedly shown that when hypotheses are proposed it is impossible to predict which will turn out to be revolutionary and which ridiculous. The only safe approach is to let all see the light and to let all be discussed, experimented upon, vindicated or destroyed. I hope the journal will provide a new battlefield open to all on which ideas can be tested and put through the fire.
Monday, 30 November 2009
Conscience in science
First and second things, and the operations of conscience in science
Bruce G. Charlton
Medical Hypotheses. 2010; Volume 74: Pages 1-3
***
Summary
Why is modern science less efficient than it used to be, why has revolutionary science declined, and why has science become so dishonest? One plausible explanation behind these observations comes from an essay, First and second things, published by CS Lewis. First Things are the goals that are given priority as the primary and ultimate aim in life. Second Things are subordinate goals or aims – which are justified in terms of the extent to which they assist in pursuing First Things. The classic First Thing in human society is some kind of religious or philosophical world view. Lewis regarded it as a ‘universal law’ that the pursuit of a Second Thing as if it was a First Thing led inevitably to the loss of that Second Thing: ‘You can’t get second things by putting them first; you can get second things only by putting first things first’. I would argue that the pursuit of science as a primary value will lead to the loss of science, because science is properly a Second Thing. When science is conceptualized as a First Thing, the bottom-line or operational definition of ‘correct behaviour’ becomes approval and high status within the scientific community. However, this does nothing whatsoever to prevent science drifting-away from its proper function; and once science has drifted then the prevailing peer consensus will tend to maintain this state of corruption. I am saying that science is a Second Thing, and ought to be subordinate to the First Thing of transcendental truth. Truth impinges on scientific practice in the form of individual conscience (noting that, of course, the strength and validity of conscience varies between scientists). When the senior scientists, whose role is to uphold standards, fail to possess or respond to informed conscience, science will inevitably go rotten from the head downwards. What, then, motivates a scientist to act upon conscience? I believe it requires a fundamental conviction of the reality and importance of truth as an essential part of the basic purpose and meaning of life. Without some such bedrock moral underpinning, there is little possibility that individual scientific conscience would ever have a chance of holding-out against an insidious drift toward corruption enforced by peer consensus.
***
"You can’t get second things by putting them first; you can get second things only by putting first things first." C.S. Lewis. First and second things.
Why is modern science less efficient than it used to be [1], why has revolutionary science declined [2], and why has science become so dishonest? [3] One plausible explanation behind these observations comes from an essay published by CS Lewis in 1942: First and second things [4].
First Things are the goals that are given priority, by a person or a group, as the primary and ultimate aim in life. They are the bottom line, in terms of which other things are justified. Second Things are subordinate goals or aims – which are justified in terms of the extent to which they assist in pursuing First Things.
The classic First Thing in human society is some kind of transcendental world view – whether religious (such as Judaism or Christianity) or philosophical (such as Platonism or Stoicism). As examples of First Things, Lewis states about earlier societies ‘they cared at different times for all sorts of things, the will of God, for glory, for personal honour, for doctrinal purity, for justice.’
If these are examples of ‘First Things’ then in such a society science would be regarded as a Second Thing: science would ultimately be justified in terms of its assisting in the pursuit of the First Thing. So in a society where the will of God was primary for almost everyone, science would be pursued insofar as it was seen (overall and on average) to further the will of God. But in a society where science was the First Thing, then science would be pursued without further justification, and other societal pursuits would need to justify themselves in terms of enhancing the goals of science.
Superficially, it sounds as though science would work better if it was a First Thing – freed from external constraints such as religion – and that such a ‘science first’ world would be a marvellous place to work as a scientist! And, for a while, it was...
But the essential nature of the most developed societies (e.g. of the USA, UK, Europe, East Asia) is that there is no single First Thing: not science, and not anything else [5]. Instead there are now only Second Things, pursued independently of each other. So the social systems – such as science, the arts, politics, public administration, law, the military, education, the mass media – are substantially independent and lack a common language. This is the idea of modernity, of a society based upon increasing autonomy of increasingly-specialized social systems.
The driving force behind modernity is increasing efficiency by means of functional differentiation, a general version of the principle that complexity is necessary to increased efficiency [5]. In its first formulation, in economics, it was noticed by Adam Smith that division of labour can lead to greater productivity [6]. At a societal level, Niklas Luhmann conceptualized modern societies as undergoing continual functional differentiation, which enables all social systems to grow by increasing productivity, but with no social system having priority over the others [7]. In Lewis’s terms there is no overall First Thing, but instead each social system is a First Thing for itself.
The problem with this conceptualization is to understand how all these First Things are integrated and coordinated. My earlier answer to this was to assume a kind of mutual regulation of a mosaic type, so that each social system is regulated by some others but none has overall control [5]. For instance, science depends on the educational system for expert manpower, the political system for peace, the economic system for resources etc. Then, in turn, the economic system depends on science for technological innovations and on the political system for international treaties and so on – with each system acting as a pressure group for some of the others, exerting influence to ensure that the other systems hold to their necessary functions and do not become too inefficient; thereby (I hoped!) this mosaic of mutual power and response acts to hold the whole society together [5].
I now find this proposed mechanism insufficient to ensure a stable society. It seems more likely that self-beneficial, even parasitic, change and growth within specific social systems has the potential to be much more rapid than the evolution of mutual accommodation and symbiosis between autonomous systems that might ensure overall societal growth. Therefore, I would now expect a society of highly autonomous and rapidly-growing social systems that is lacking an overall First Thing to be much less cohesive and much more prone to collapse than I used to believe.
Another way of framing this issue is to ask what maintains the integrity of science. In other words, what keeps science pointing in the right direction, pursuing properly scientific goals as its main aim; and within the pursuit of these goals what keeps science honest in its internal dealings? In short, we need to understand the mechanism(s) that prevent science from becoming corrupt in the face of a continual tendency for short-termist and selfish behaviour to undermine cooperation and functionality. (This is the core problem which must be solved to enable the evolution of complex systems [5].)
My old idea [5] was that science would be kept honest and efficient by pressure from the main users of science – for example, engineers would keep physicists honest, agriculturalists would keep botanists honest, doctors would keep medical scientists honest, and so on. Yet science has, I now realize, become corrupted anyway [2], [3], [8] and [9] – which means that either these corrective mechanisms are non-existent, too slow or too weak.
In general, when people notice corruption of science they appeal to the idea that science ought to be a First Thing – the ‘Humboldtian’ ideal of disinterested pursuit of knowledge ‘for its own sake’. However, this is a mistake if in reality science ought to be a Second Thing, and not a First Thing; if it is only by remaining a Second Thing that science can avoid being corrupted. Indeed, Lewis regarded it as a ‘universal law’ that the pursuit of a Second Thing as if it was a First Thing led inevitably to the loss of that Second Thing: ‘You can’t get second things by putting them first; you can get second things only by putting first things first’ [4].
Lewis’s example (writing during World War Two) was that Western civilization had been putting the value of ‘civilization’ first for the previous thirty years. He regarded civilization as including ‘Peace, a high standard of life, hygiene, transport, science and amusement’ – and as a result Western civilization had come very close to losing all these things. So, pursuit of civilization as a First Thing very nearly led to the loss of civilization. In particular Lewis focused on how pacifism, or pursuit of peace as a First Thing, had been a major contributor to the occurrence and destructiveness of World War Two: ‘I think many would now agree that a foreign policy dominated by desire for peace is one of the many roads that lead to war’ [4].
The idea, then, is that the pursuit of science as a primary value will lead to the loss of science, because science is properly a Second Thing.
This may happen because when science is conceptualized as a First Thing the bottom-line or operational definition of ‘correct behaviour’ is achieving approval and high status within the scientific community. Science as a First Thing is judged by scientists only – so success is winning the esteem of colleagues. And this amounts to the scientist seeking conformity with the prevailing peer consensus – either immediately, or over the longer time span of their career.
However, by this First Thing conceptualization of science, there is nothing whatsoever to prevent science drifting-away from its original function, from its proper mission. Real science is replaced by the infinite varieties of self-seeking among scientists. And once science has drifted, and a sufficient proportion of scientists are no longer seeking-truth nor speaking-truth, then the prevailing peer consensus will tend to maintain this corrupt situation. So a scientist seeking the esteem of his colleagues will himself need to abandon truth-seeking and truth-speaking.
The alternative conceptualization is for each scientist to regard science as a Second Thing, and for the individual scientist to evaluate his work and its context in terms of their contribution to transcendental truth [3]. When he regards the prevailing peer consensus as having diverged from truth in this transcendental sense, then the scientist may feel duty-bound to seek his own personal understanding of truth, and to communicate what he personally regards as the truth, even when this conflicts with prevailing consensus and leads to lowered esteem among scientific colleagues and harms his career.
So, I am saying that science is a Second Thing, and the First Thing ought to be transcendental truth. A formulation of transcendental truth would be an attempted description of the nature of ultimate reality. But it may seem unclear how such a remote and abstract concept could affect scientific practice in real life situations. One answer would be that transcendental truth impinges on scientific practice in the form of conscience. In other words, transcendental truth in science could be ‘operationally-defined’ as the subjective workings of conscience in a scientist.
Conscience seems to be indicative of the First Thing as understood and appreciated in practice by an individual – and when science is a Second Thing, conscience is located outside of science, and science is judged by standards outside of science [3]. Conscience about First Things makes itself felt in science as an inner sense of the nature of reality; such as the nagging doubts and persistent suspicions which afflict a scrupulous scientist when he feels that the consensus of his scientific colleagues is wrong. He has reservations about the validity of the prevailing idea of truth, and hunches that he personally has a better idea of the truth than the majority of his powerful scientific peers.
In saying that conscience is the operational definition of transcendental truth, it is important to note that the strength and validity of conscience varies between scientists. Some scientists are unscrupulous and have no conscience to speak of; while other scientists are so inexperienced – or lacking in requisite knowledge or skill – that their conscience with regard to truth is unreliable. And while all scientists need to listen to their conscience, the main consciences of relevance to science are those of the scientific leadership; those whose own behaviour serves as a model for more junior scientists; and those scientists who are themselves responsible for choosing, educating, employing and promoting scientific personnel. It is primarily the senior scientists whose job is to uphold ethical standards, to enforce incentives and sanctions. If (and when) scientific leaders are lacking in informed conscience, or ignore the promptings of conscience, then science will inevitably go rotten from the head downwards [3,10].
Having a conscience about truth is the first step – but what motivates a scientist to listen to his conscience and act upon it? Firstly, if science is regarded as being in service to truth, then ideals of truth might enforce conscience. But then, what is so important about ‘truth’? And the final answer to all that would be a fundamental conviction that truth is an essential part of what we conceive to be ‘the good’ – in other words, the basic purpose and meaning of life. This has the corollary that if a person does not actually have a concept of the basic purpose and meaning of life, then their world view will intrinsically lack any firm ground on which they can stand in a situation where the pursuit of truth causes here-and-now disadvantage.
In this respect, science is paradoxically stronger as a Second Thing than as a First Thing: science is stronger when embedded in the larger value of truth, and when truth is embedded in the still-larger value of a concept of the good life. Of course, not all concepts of the good life will be equally supportive of good science; indeed some transcendental concepts are anti-scientific.
However, without an ultimate, bedrock moral underpinning of some kind, there seems no possibility that individual scientific conscience would ever have a chance of holding-out against the insidious drift toward corruption enforced by peer consensus.
References
[1] J. Ziman, Real science, Cambridge University Press, Cambridge, UK (2000).
[2] L. Smolin, The trouble with physics, Allen Lane Penguin, London (2006).
[3] B.G. Charlton, The vital role of transcendental truth in science, Med Hypotheses 72 (2009), pp. 373–376.
[4] C.S. Lewis, First and second things. In: W. Hooper, Editor, First and second things: essays on theology and ethics, Collins Fount, London (1985), pp. 19–24.
[5] B.G. Charlton and P. Andras, The modernization imperative, Imprint Academic, Exeter (2003).
[6] A. Smith, The wealth of nations, Dent, London (1910) [originally published 1776–7].
[7] N. Luhmann, Social systems, Harvard University Press, Cambridge, MA, USA (1995).
[8] B.G. Charlton and A. Miles, The rise and fall of EBM, QJM 91 (1998), pp. 371–374.
[9] D. Healy, Let them eat Prozac, New York University Press, New York (2004).
[10] B.G. Charlton, Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing, Med Hypotheses 71 (2008), pp. 475–480.
Thursday, 26 November 2009
Clever Sillies - Why the high IQ lack common sense
Clever sillies: Why high IQ people tend to be deficient in common sense
Bruce G. Charlton
Medical Hypotheses. 2009;73: 867-870.
Summary
In previous editorials I have written about the absent-minded and socially-inept ‘nutty professor’ stereotype in science, and the phenomenon of ‘psychological neoteny’ whereby intelligent modern people (including scientists) decline to grow-up and instead remain in a state of perpetual novelty-seeking adolescence. These can be seen as specific examples of the general phenomenon of ‘clever sillies’, whereby intelligent people with high levels of technical ability are seen (by the majority of the rest of the population) as having foolish ideas and behaviours outside the realm of their professional expertise. In short, it has often been observed that high IQ types are lacking in ‘common sense’ – and especially when it comes to dealing with other human beings. General intelligence is not just a cognitive ability; it is also a cognitive disposition. So, the greater cognitive abilities of higher IQ tend also to be accompanied by a distinctive high IQ personality type including the trait of ‘Openness to experience’, ‘enlightened’ or progressive left-wing political values, and atheism. Drawing on the ideas of Kanazawa, my suggested explanation for this association between intelligence and personality is that an increasing relative level of IQ brings with it a tendency differentially to over-use general intelligence in problem-solving, and to over-ride those instinctive and spontaneous forms of evolved behaviour which could be termed common sense. Preferential use of abstract analysis is often useful when dealing with the many evolutionary novelties to be found in modernizing societies; but is not usually useful for dealing with social and psychological problems for which humans have evolved ‘domain-specific’ adaptive behaviours. And since evolved common sense usually produces the right answers in the social domain, this implies that, when it comes to solving social problems, the most intelligent people are more likely than those of average intelligence to have novel but silly ideas, and therefore to believe and behave maladaptively. I further suggest that this random silliness of the most intelligent people may be amplified to generate systematic wrongness when intellectuals are in addition ‘advertising’ their own high intelligence in the evolutionarily novel context of a modern IQ meritocracy. The cognitively-stratified context of communicating almost-exclusively with others of similar intelligence generates opinions and behaviours among the highest IQ people which are not just lacking in common sense but perversely wrong. Hence the phenomenon of ‘political correctness’ (PC), whereby false and foolish ideas have come to dominate, and to be moralistically enforced upon, the ruling elites of whole nations.
***
IQ and evolved problem-solving
On the whole, and all else being equal, in modern societies the higher a person’s general intelligence (as measured by the intelligence quotient or IQ), the better will be life for that person; since higher intelligence leads (among other benefits) to higher social status and salary, longer life expectancy and better health [1], [2], [3], [4] and [5]. However, at the same time, it has been recognized for more than a century that increasing IQ is biologically-maladaptive because there is an inverse relationship between IQ and fertility [6], [7] and [8]. Under modern conditions, therefore, high intelligence is fitness-reducing.
In the course of exploring this modern divergence between social-adaptation and biological-adaptation, Satoshi Kanazawa has made the insightful observation that a high level of general intelligence is mainly useful in dealing with life problems which are an evolutionary novelty. By contrast, performance in solving problems which were a normal part of human life in the ancestral hunter–gatherer era may not be helped (or may indeed be hindered) by higher IQ [9] and [10].
(This statement requires a qualification. When a person has suffered some form of brain damage, or a pathology affecting brain function, then this might well produce generalized impairment of cognition: reducing both general intelligence and other forms of evolved cognitive functioning, depending on the site and extent of the brain pathology. Since a population with low IQ would include some whose IQ had been lowered by brain pathology, the average level of social intelligence or common sense would probably also be lower in this population. This confounding effect of brain pathology would be expected to create a weak and non-causal statistical correlation between IQ and social intelligence/common sense, a correlation that would mainly be apparent at low levels of IQ.)
As examples of how IQ may help with evolutionary novelties, it has been abundantly-demonstrated that increasing measures of IQ are strongly and positively correlated with a wide range of abilities which require abstract reasoning and rapid learning of new knowledge and skills; such as educational outcomes, and abilities at most complex modern jobs [1], [2], [3], [4], [5] and [11]. Science and mathematics are classic examples of problem-solving activities that arose only recently in human evolutionary history and in which differential ability is very strongly predicted by relative general intelligence [12].
However, there are also many human tasks which our human ancestors did encounter repeatedly and over manifold generations, and natural selection has often produced ‘instinctive’, spontaneous ways of dealing with these. Since humans are social primates, one major such category is social problems, which have to do with understanding, predicting and manipulating the behaviours of other human beings [13], [14], [15] and [16]. Being able to behave adaptively in dealing with these basic human situations is what I will term having ‘common sense’.
Kanazawa’s idea is that there is therefore a contrast between recurring, mainly social problems which affected fitness for our ancestors and for which all normal humans have evolved behavioural responses; and problems which are an evolutionary novelty but which have a major impact on individual functioning in the context of modern societies [9] and [10]. When a problem is an evolutionary novelty, individual differences in general intelligence make a big difference to each individual’s abilities to analyze the problem, and learn how to solve it. So, the idea is that having a high IQ would predict a better ability in understanding and dealing with new problems; but higher IQ would not increase the level of a person’s common sense ability to deal with social situations.
IQ not just an ability, but also a disposition
Although general intelligence is usually conceptualized as differences in cognitive ability, IQ is not just about ability but also has personality implications [17].
For example, in some populations there is a positive correlation between IQ and the personality trait of Openness to experience (‘Openness’) [18] and [19]; a positive correlation with ‘enlightened’ or progressive values of a broadly socialist and libertarian type [20]; and a negative correlation with religiousness [21].
So, the greater cognitive ability of higher IQ is also accompanied by a somewhat distinctive high IQ personality type. My suggested explanation for this association is that an increasing level of IQ brings with it an increased tendency to use general intelligence in problem-solving; i.e. to over-ride those instinctive and spontaneous forms of evolved behaviour which could be termed common sense.
The over-use of abstract reasoning may be most obvious in the social domain, where normal humans are richly equipped with evolved psychological mechanisms both for here-and-now interactions (e.g. rapidly reading emotions from facial expression, gesture and posture, and speech intonation) and for ‘strategic’ modelling of social interactions to understand, predict and manipulate the behaviour of others [16]. Social strategies deploy inferred knowledge about the dispositions, motivations and intentions of others. When the most intelligent people over-ride the social intelligence systems and apply generic, abstract and systematic reasoning of the kind which is enhanced among higher IQ people, they are ignoring an ‘expert system’ in favour of a non-expert system.
In suggesting that the most intelligent people tend to use IQ to over-ride common sense, I am unsure of the extent to which this is due to a deficit in social reasoning ability, perhaps due to a trade-off between cognitive abilities – as suggested by Baron-Cohen’s conceptualization of Asperger’s syndrome, including the male- versus female-type of systematizing/empathizing brain [22]. Or alternatively it could be more of an habitual tendency to over-use abstract analysis, one that might (in principle) be overcome by effort or with training. Observing the apparent universality of ‘clever sillies’ in modernizing societies, I suspect that a higher IQ bias towards over-utilizing abstract reasoning would probably turn-out to be innate and relatively stable.
Indeed, I suggest that higher levels of the personality trait of Openness in higher IQ people may be the flip-side of this over-use of abstraction. I regard Openness as the result of deploying abstract analysis for social problems to yield unstable and unpredictable results, when innate social intelligence would tend to yield predictable and stable results. This might plausibly underlie the tendency of the most intelligent people in modernizing societies to hold ‘left-wing’ political views [10] and [20].
I would argue that neophilia (or novelty-seeking) is a driving attribute of the personality trait of Openness; and a disposition common in adolescents and immature adults who display what I have termed ‘psychological neoteny’ [23] and [24]. When problems are analyzed using common sense ‘instincts’ the evaluative process would be expected to lead to the same answers in all normal humans, and these answers are likely to be stable over time. But when higher IQ people ignore or over-ride common sense, they generate a variety of uncommon ideas. Since these ideas are only feebly-, or wholly un-, supported by emotions, they are held more weakly than common sense ideas, and so are more likely to change over time.
For instance, a group of less intelligent people using instinctive social intelligence to analyze a social situation will presumably reach the same traditional conclusion as everyone else and this conclusion will not change with time; while a more intelligent group might by contrast use abstract analysis and generate a wider range of novel and less-compelling solutions. This behaviour appears as if motivated by novelty-seeking.
Applying abstract analysis to social situations might be seen as ‘creative’, and indeed Openness has been put forward as the major personality trait which supports creativity [19] and [25]. This is reasonable in the sense that an intellectual high in Openness would be likely to disregard common sense, and to generate multiple, unpredictable and unfamiliar answers to evolutionarily-familiar problems which would only yield a single ‘obvious’ solution to those who deployed evolved modes of intelligence. However, I would instead argue that a high IQ person applying abstract systemizing intelligence to activities which are more usually done by instinctive intelligence is not a truly ‘creative’ process.
Instead, following Eysenck, I would regard true psychological creativity as primarily an associative activity which Eysenck includes as part of the trait Psychoticism; cognitively akin to the ‘primary process’ thinking of sleep, delirium and psychotic illness [26] and [27]. A major difference between these two concepts of creativity is that while ‘Openness creativity’ is abstract, coolly-impartial and as if driven by novelty-seeking (neophilia); ‘Psychoticism creativity’ is validated by emotions: such that the high-Psychoticism creative person is guided by their emotional responses to their own creative production.
Clever sillies in the IQ meritocracy
It therefore seems plausible that the folklore or stereotypical idea of the eccentric, unworldly, absent-minded or obtuse scientist – who is brilliant at their job while being fatuous and incompetent in terms of their everyday life [28] – might be the result of this psychological tendency to over-use abstract intelligence and to use it in inappropriate situations.
However, there is a further aspect of this phenomenon. Modern societies are characterized by large population, extensive division of labour, and a ‘meritocratic’ form of social organization in which social roles (jobs, occupations) tend to be filled on the basis of educational credentials and job performance rather than on an hereditary basis (as was the case in most societies of the past). This means that in modern societies there is an unprecedented degree of cognitive stratification [29]. Cognitive stratification is the layering of social organization by IQ; such that residence, schooling and occupations are characterized by narrow bands of intelligence. Large modern countries are therefore ruled by concentrations of highly intelligent people in the major social systems such as politics, civil administration, law, science and technology, the mass media and education. Communication in these elites is almost-exclusively among the highly intelligent.
In such an evolutionarily-unprecedented, artificial ‘hothouse’ environment, it is plausible that any IQ-related behaviours are amplified: partly because there is little counter-pressure from the less intelligent people with less neophiliac personalities, and perhaps mainly because there is a great deal of IQ-advertisement. Indeed, it looks very much as if the elites of modern societies are characterized by considerable IQ-signalling [19]. Sometimes this is direct advertisement (e.g. when boasting about intellectual attainments or attendance at highly-selective colleges) and more often the signalling is subtly-indirect when people display the attitudes, beliefs, fashions, manners and hobbies associated with high intelligence. This advertising is probably based on sexual selection [30], if IQ has been a measure of general fitness during human evolutionary history, and was associated with a wide range of adaptive traits [31].
My hunch is that it is this kind of IQ-advertisement which has led to the most intelligent people in modern societies having ideas about social phenomena that are not just randomly incorrect (due to inappropriately misapplying abstract analysis) but systematically wrong. I am talking of the phenomenon known as political correctness (PC), in which foolish and false ideas have become moralistically-enforced among the ruling intellectual elite. And these ideas have invaded academic, political and social discourse. For while the stereotypical nutty professor in the hard sciences is a brilliant scientist but silly about everything else; the stereotypical nutty professor in the social sciences or humanities is not just silly about ‘everything else’, but also silly in their professional work.
Getting answers to problems relating to hard science is extremely intellectually-difficult and (because the subject is an evolutionary novelty) necessarily requires abstract reasoning [12] and [26]. Therefore the hard scientist is invariably vastly more competent at their science than the average member of the public, and he has no need to be novelty-seeking in order to advertise his intelligence.
But getting answers to problems in science involving human social behaviour is something which is already done very well by evolved human psychological mechanisms [13], [14], [15] and [16]. In this situation it is difficult to improve on common sense, and – even without being taught – normal people already have a pretty good understanding of human motivations, incentives and deterrents, and the basic cause and effect processes of society. Because psychological and social intelligence expertise is so widespread and adaptive, in order to advertise his intelligence the social scientist must produce something systematically-different from common sense, something novel and (necessarily) counter-intuitive. And because it goes against evolved psychology, in this instance something different is likely to be something wrong. So, the professional social scientist deploying abstract reasoning on social problems is often less likely to generate a correct answer than the average member of the public who is using the common sense of evolved, spontaneous social intelligence.
In the human and social sciences there is therefore a professional incentive to be perversely wrong – to be silly, in other words. And this is indeed what we see: the more the subject matter of an academic field requires, or depends upon, common sense, the sillier it will be.
The results of cognitive stratification and IQ-advertising are therefore bad enough to have destroyed the value of whole domains of the arts and academia, and in the domain of public policy the results have been simply disastrous. Over the past four decades the dishonest fantasy-world discourse of non-biological political correctness has evolved to dominate the intellectual arena of whole nations – perhaps the whole developed world – such that wrong and ridiculous ideas have become not just mainstream, but compulsory.
Because clever silliness is not just one of several competing ideas in the elite arena – it is both intellectually- and moralistically-enforced with such zeal as utterly to exclude alternatives [32]. The first level of defence is that denying a PC assertion is taken as proof of dumbness or derangement, such that flat-denial without refutation is regarded as a sufficient response. But the toughest enforcement is moral: anyone smart and sane who disbelieves the silly clever falsehoods and asserts something different is not just denounced as dumb but actually pilloried as evil [33].
I infer that the motivation behind the moralizing venom of political correctness is the fact that spontaneous human instincts are universal and more powerfully-felt than the absurd abstractions of PC; plus the fact that common sense is basically correct while PC is perversely wrong. Hence a fair debate must be prevented at all costs if the PC consensus is to be protected: common sense must be stigmatized in order that it be neutralized.
Ultimately these manoeuvres serve to defend the power, status and distinctiveness of the intellectual elite [34]. They are socially-adaptive over the short-term, even as they are biologically-maladaptive over the longer-term.
Conclusion
Since evolved ‘common sense’ usually produces the right answers in the social domain, while the most intelligent people have personalities which over-use abstract analysis in that same domain [9] and [10], it follows that the most intelligent people are predisposed to have silly ideas and to behave maladaptively when it comes to solving social problems.
Ever since the development of cognitive stratification in modernizing societies [29], the clever sillies have been almost monopolistically ‘in charge’. They really are both clever and silly – but the cleverness is abstract while the silliness is focused on the psychological and social domains. Consequently, the fatal flaw of modern ruling elites lies in their lack of common sense – especially their misinterpretations of human psychology and socio-political affairs. My guess is that this lack of common sense is intrinsic and incorrigible – and perhaps biologically-linked with the evolution of high intelligence and the rise of modernity [35].
Stanovich has also described the over-riding of the ‘Darwinian brain’ of autonomous systems by the analytic system, and has identified this phenomenon as underlying modern non-adaptive ethical reasoning [36]. He has further noted that IQ accounts for much (but not all) of the inter-individual difference in the use of analytic evaluations. However, Stanovich regards the increased use of abstraction to replace traditional ‘common sense’ very positively: not as ‘silly’, but as a vital aspect of what he interprets as the higher status of modern social morality.
Yet, whatever else, to be a clever silly is a somewhat tragic state; because it entails being cognitively-trapped by compulsive abstraction; unable to engage directly and spontaneously with what most humans have traditionally regarded as psycho-social reality; disbarred from the common experience of humankind and instead cut-adrift on the surface of a glittering but shallow ocean of novelties: none of which can ever truly convince or satisfy. It is to be alienated from the world; and to find no stable meaning of life that is solidly underpinned by emotional conviction [37]. Little wonder, perhaps, that clever sillies usually choose sub-replacement reproduction [6].
To term the Western ruling elite ‘clever sillies’ is of course a broad generalization, but it is not merely name-calling. For, as well as political correctness being systematically dishonest [33] and [34], modern elite behaviour in relation to absolute and differential fertility is objectively maladaptive in a strictly biological sense. It remains to be seen whether the genetic self-annihilation of the IQ elite will lead-on towards self-annihilation of the societies over which they rule.
Note: I should in all honesty point-out that I recognize this phenomenon from the inside. In other words, I myself am a prime example of a ‘clever silly’; having spent much of adolescence and early adult life passively absorbing high-IQ-elite-approved, ingenious-but-daft ideas that later needed, painfully, to be dismantled. I have eventually been forced to acknowledge that when it comes to the psycho-social domain, the commonsense verdict of the majority of ordinary people throughout history is much more likely to be accurate than the latest fashionably-brilliant insight of the ruling elite. So, this article has been written on the assumption, eminently-challengeable, that although I have nearly-always been wrong in the past – I am now right…
References
[1] U. Neisser et al., Intelligence: knowns and unknowns, Am Psychol 51 (1996), pp. 77–101.
[2] N.J. Mackintosh, IQ and human intelligence, Oxford University Press (1998).
[3] A.R. Jensen, The g factor: the science of mental ability, Praeger, Westport, CT, USA (1998).
[4] I.J. Deary, Intelligence: a very short introduction, Oxford University Press, Oxford (2001).
[5] G.D. Batty, I.J. Deary and L.S. Gottfredson, Pre-morbid (early life) IQ and later mortality risk: systematic review, Ann Epidemiol 17 (2007), pp. 278–288.
[6] R. Lynn, Dysgenics, Praeger, Westport, CT, USA (1996).
[7] R. Lynn and M. Van Court, New evidence for dysgenic fertility for intelligence in the United States, Intelligence 32 (2004), pp. 193–201.
[8] D. Nettle and T.V. Pollet, Natural selection on male wealth in humans, Am Nat 172 (2008), pp. 658–666.
[9] S. Kanazawa, General Intelligence as a domain-specific adaptation, Psychol Rev 111 (2004), pp. 512–523.
[10] S. Kanazawa, IQ and the values of nations, J Biosoc Sci 41 (2009), pp. 537–556.
[11] L.S. Gottfredson, Implications of cognitive differences for schooling within diverse societies. In: C.L. Frisby and C.R. Reynolds, Editors, Comprehensive handbook of multicultural school psychology, Wiley, New York (2005), pp. 517–554.
[12] D. Lubinski and C.P. Benbow, Study of mathematically precocious youth after 35 years: uncovering antecedents for the development of math-science expertise, Perspect Psychol Sci 1 (2006), pp. 316–345.
[13] N.K. Humphrey, The social function of intellect. In: P.P.G. Bateson and R.A. Hinde, Editors, Growing points in ethology, Cambridge University Press, Cambridge, UK (1976).
[14] R.W. Byrne and A. Whiten, Editors, Machiavellian intelligence: social expertise and the evolution of intellect in monkeys, apes and humans, Clarendon Press, Oxford (1988).
[15] L. Brothers, The social brain: a project for integrating primate behavior and neurophysiology in a new domain, Concept Neurosci 1 (1990), pp. 27–51.
[16] B.G. Charlton, Theory of mind delusions and bizarre delusions in an evolutionary perspective: psychiatry and the social brain. In: M. Brüne, H. Ribbert and W. Schiefenhövel, Editors, The social brain – evolution and pathology, John Wiley & Sons, Chichester (2003), pp. 315–338.
[17] B.G. Charlton, Why it is ‘better’ to be reliable but dumb than smart but slapdash: are intelligence (IQ) and conscientiousness best regarded as gifts or virtues?, Med Hypotheses, in press, doi:10.1016/j.mehy.2009.06.048.
[18] D. Nettle, Personality: what makes you the way you are, Oxford University Press, Oxford, UK (2007).
[19] G. Miller, Spent: sex, evolution and consumer behaviour, Viking, New York (2009).
[20] I.J. Deary, G.D. Batty and C.R. Gale, Bright children become enlightened adults, Psychol Sci 19 (2008), pp. 1–6.
[21] R. Lynn, J. Harvey and H. Nyborg, Average intelligence predicts atheism rates across 137 nations, Intelligence 37 (2009), pp. 11–15.
[22] S. Baron-Cohen, The essential difference: men, women and the extreme male brain, Penguin/Basic Books, London (2003).
[23] B.G. Charlton, The rise of the boy-genius: psychological neoteny, science and modern life, Med Hypotheses 67 (2006), pp. 679–681.
[24] B.G. Charlton, Psychological neoteny and higher education: associations with delayed parenthood, Med Hypotheses 69 (2007), pp. 237–240.
[25] L. Penke, Creativity: theories, prediction, and etiology, Diploma thesis, Department of Psychology, University of Bielefeld, Germany (2003) [accessed 3.08.09].
[26] H.J. Eysenck, Genius: the natural history of creativity, Cambridge University Press, Cambridge, UK (1995).
[27] B.G. Charlton, Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity, Med Hypotheses 72 (2009), pp. 237–243.
[28] B.G. Charlton, From nutty professor to buddy love: personality types in modern science, Med Hypotheses 68 (2007), pp. 243–244.
[29] R.J. Herrnstein and C. Murray, The bell curve: intelligence and class structure in American life, Free Press, New York (1994).
[30] G. Miller, The mating mind: how sexual choice shaped the evolution of human nature, Heinemann, London (2000).
[31] A. Pierce, G.F. Miller, R. Arden and L. Gottfredson, Why is intelligence correlated with semen quality? Biochemical pathways common to sperm and neurons, and the evolutionary genetics of general fitness, Commun Integr Biol 2 (2009), pp. 1–3.
[32] B.G. Charlton, Pioneering studies of IQ by G.H. Thomson and J.F. Duff – an example of established knowledge subsequently ‘hidden in plain sight’, Med Hypotheses 71 (2008), pp. 625–628.
[33] B.G. Charlton, First a hero of science and now a martyr to science: the James Watson Affair – political correctness crushes free scientific communication, Med Hypotheses 70 (2008), pp. 1077–1080.
[34] B.G. Charlton, Replacing education with psychometrics: how learning about IQ almost-completely changed my mind about education, Med Hypotheses 73 (2009), pp. 273–277.
[35] G. Clark, A Farewell to Alms: a brief economic history of the world, Princeton University Press, Princeton, NJ, USA (2007).
[36] K.E. Stanovich, The robot’s rebellion: finding meaning in the age of Darwin, University of Chicago Press, Chicago (2004).
[37] B.G. Charlton, Alienation, recovered animism and altered states of consciousness, Med Hypotheses 68 (2007), pp. 727–731.
Tuesday, 13 October 2009
Truthfulness in science should be an iron law
Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration
Bruce G. Charlton
Medical Hypotheses. 2009; Volume 73: 633-635
***
Summary
Anyone who has been a scientist for more than a couple of decades will realize that there has been a progressive and pervasive decline in the honesty of scientific communications. Yet real science simply must be an arena where truth is the rule; or else the activity simply stops being science and becomes something else: Zombie science. Although all humans ought to be truthful at all times, science is the one area of social functioning in which truth is the primary value, and truthfulness the core evaluation. Truth-telling and truth-seeking should not, therefore, be regarded as unattainable aspirations for scientists, but as iron laws, continually and universally operative. Yet such is the endemic state of corruption that an insistence on truthfulness in science seems perverse, aggressive, dangerous, or simply utopian. Not so: truthfulness in science is not utopian and was indeed taken for granted (albeit subject to normal human imperfections) just a few decades ago. Furthermore, as Jacob Bronowski argued, humans cannot be honest only in important matters while being expedient in minor matters: truth is all of a piece. There are always so many incentives to lie that truthfulness is either a habit or else it declines. This means that in order to be truthful in the face of opposition, scientists need to find a philosophical basis which will sustain a life of habitual truth and support them through the pressure to be expedient (or agreeable) rather than honest. The best hope of saving science from a progressive descent into Zombiedom seems to be a moral Great Awakening: an ethical revolution focused on re-establishing the primary purpose of science, which is the pursuit of truth. Such an Awakening would necessarily begin with individual commitment, but to have any impact it would need to progress rapidly to institutional forms. The most realistic prospect is that some sub-specialties of science might self-identify as being engaged primarily in the pursuit of truth, might form invisible colleges, and (supported by strong ethical systems to which their participants subscribe) impose on their members a stricter and more honest standard of behaviour. From such seeds of truth, real science might again re-grow. However, at present, I can detect no sign of any principled adherence to perfect truthfulness among our complacent, arrogant and ever-more-powerful scientific leadership – and that is the group among which a Great Awakening would need to take-hold, even if the movement originated elsewhere.
***
The decline of honesty in science
Anyone who has been a scientist for more than 20 years will realize that there has been a progressive decline in the honesty of communications between scientists, between scientists and their institutions, and between scientists and their institutions and the outside world.
Yet real science must be an arena where truth is the rule; or else the activity simply stops being science and becomes something else: Zombie science. Zombie science is a science that is dead, but is artificially kept moving by a continual infusion of funding. From a distance Zombie science looks like the real thing: the surface features of a science are in place – white coats, laboratories, computer programming, Ph.D.s, papers, conferences, prizes, etc. But the Zombie is not interested in the pursuit of truth – its actions are externally-controlled and directed at non-scientific goals, and inside the Zombie everything is rotten.
The most egregious domain of untruthfulness is probably where scientists comment or write about their own work. Indeed, so pervasive are the petty misrepresentations and cautious lies that it is likely many scientists are now dishonest even with themselves, in the privacy of their own thoughts. Such things can happen to initially honest people by force of habit, because they know no better, and because lies breed lies in order to explain the discrepancies between predictions and observations.
Lying to oneself may be one cause of the remarkable incoherence of so much modern scientific thinking. It is much easier to be coherent, and to recognize incoherence, when discourse is uncontaminated by deliberate misrepresentations. There is less to cover-up. Most scientists can think-straight only by being completely honest. If scientists are not honest even with themselves, then their work will be a mess.
Scientists are usually too careful and clever to risk telling outright lies, but instead they push the envelope of exaggeration, selectivity and distortion as far as possible. And tolerance for this kind of untruthfulness has greatly increased over recent years. So it is now routine for scientists deliberately to ‘hype’ the significance of their status and performance, and ‘spin’ the importance of their research.
Furthermore, it is entirely normal and unremarkable for scientists to spend their entire professional life doing work they know in their hearts to be trivial or bogus – preferring that which promotes their career over that which has the best chance of advancing science. Indeed, such misapplication of effort is positively encouraged in many places, including some of what were the very best places, because careerism is a more reliable route to high productivity than real science – and because senior scientists in the best places are expert at hyping mundane research to create a misleading impression of revolutionary importance.
What is going on? How have matters reached this state? Everyone should be honest at all times and about everything, but especially scientists. Everyone should seriously aim for truthfulness – yet scientists, of all people, must not just aim but actually be truthful: otherwise the very raison d’etre of science is subverted.
So although truthfulness is a basic, universal moral rule, science is the one area of social functioning in which truth is the primary value, and truthfulness the core evaluation. Truth-telling and truth-seeking should not, therefore, be regarded as unattainable ideals within science, but as iron laws, continually and universally operative.
Causes of dishonesty in science
Although some scientists are selfishly dishonest simply in order to promote their own careers, for most people quasi-altruistic arguments for lying (dishonesty in the good cause of helping others, or of being an agreeable colleague) are likely to be a more powerful inducement to routine untruthfulness than is the gaining of personal advantage.
For example, scientists are pressured to be less-than-wholly-truthful for the benefit of their colleagues or institutions, or for official/political reasons. Often, scientists are unable to opt-out of administrative or managerial exercises which almost insist-upon dishonest responses – and for which colleagues expect dishonesty in order to promote the interests of the group. Project leaders may feel responsible for raising money to support their junior team members; and feel obliged to do whatever type of research is most generously funded, and to say or write whatever is necessary to obtain that funding.
So, in a bureaucratic context where cautious dishonesty is rewarded, strict truthfulness is taboo and will cause trouble for colleagues, for teams, for institutions – there may be a serious risk that funding is removed, status damaged, or worse. And when everyone else is exaggerating their achievements, a precisely-accurate person will, de facto, be judged as falling short of even their own modest claims. In this kind of situation, individual truthfulness may be interpreted as an irresponsible indulgence.
Clearly then, even in the absence of the sort of direct coercion which prevails in many un-free societies, scientists may be subjected to such pressure that they are more-or-less forced to be dishonest; and this situation can (in decent people) lead to feelings of regret and shame. Unfortunately, regret and shame may not lead to remorse, but instead to rationalization, to the elaborate construction of excuses, and eventually to a denial of dishonesty.
Yet, whatever the motivations and reasons for dishonesty, it has been by such means that modern scientists have become inculcated into habitual falsity; until we have become used to dishonesty, fail to notice dishonesty, and eventually come to expect dishonesty.
Roots of dishonesty in science
My belief is that science has rotted from the head down – and the blame mostly lies with senior scientists in combination with the massive expansion and influence of peer review until it has become the core process of scientific evaluation.
Overall, senior scientists have set a bad example of untruthfulness and self-seeking in their own behaviour, and they have also tended to administer science in such a way as to reward hype and careful-dishonesty, and to punish modesty and strict truth-telling. And although some senior scientists have laudably refused to compromise their honesty, they have done this largely by quietly ‘opting out’, rather than by using their power and influence to create and advertise alternative processes and systems in which honest scientists might work.
The corruption of science has been (mostly unintentionally) amplified by the replacement of ‘peer usage’ with peer review as the major mechanism of scientific evaluation. Peer review (of ever greater complexity) has been applied everywhere: to job appointments and promotions, to scientific publications and conferences, to ethical review and funding, to prizes and awards. And peer review processes are set-up and dominated by senior scientists.
Peer usage was the traditional process of scientific evaluation during the Golden Age of science (extending up to about the mid-1960s). Peer usage means that the validity of science is judged retrospectively by whether or not it has been used by peers, i.e. whether ideas or facts turned-out to be useful in further science done by researchers in the same field. For example, a piece of research might be evaluated by its validity in predicting future observations or as a basis for making effective interventions. Peer usage is distinctive to science, probably almost definitive of science.
Peer review, by contrast, means that science is judged by the opinion of other scientists in the same field. Peer review is not distinctive to science, but is found in all academic subjects and in many formal bureaucracies. When peer usage was replaced by peer review, all the major scientific evaluation processes – their measurement metrics, their rewards and their sanctions – were brought under the direct control of senior scientists, whose opinions thereby became the ultimate arbiter of validity. Once validity had become a mere matter of professional opinion, the crucial link between science and the natural world was broken, and the door opened to unrestrained error as well as to corruption.
The over-expansion and domination of peer review in science is therefore a sign of scientific decline and decadence, not (as so commonly asserted) a sign of increased rigour. Peer review as the ultimate arbiter represents the conversion of science to generic bureaucracy; a replacement of testing by opinion; a replacement of objectivity by subjectivity. And the increased role for subjectivity in science has created space into which dishonesty has expanded.
In a nutshell, the inducements to dishonesty have come from outside of science – from politics, government administration and the media (for example), all of which are continually attempting to distort science to the needs of their own agendas and to convert real science into Zombie science. But whatever the origin of the pressures to corrupt science, it is sadly obvious that scientific leaders have mostly themselves been corrupted by these pressures rather than courageously resisting them. And these same leaders have degraded hypothesis-testing real science into an elaborate expression of professional opinion (‘peer review’) that is formally indistinguishable from bureaucratic power-games.
Is there a future for honesty?
Such is our state of pervasive corruption that an insistence on truthfulness in science seems perverse, aggressive, dangerous, or simply utopian. Not so. Truthfulness in science is not utopian. Indeed it was mundane reality, taken for granted (albeit subject to normal human imperfections) just a few decades ago. Old-style science had many faults, but deliberate and systematic misrepresentation was not one of them.
To become systematically truthful in a modern scientific environment would be to inflict damage on one’s own career; on one’s chances of getting jobs, promotions, publications, grants and so on. And in a world of dishonesty, of hype, spin and inflated estimations – the occasional truthful individual will be judged by the prevailing corrupt standards. To be truthful would also be to risk becoming exceedingly unpopular with colleagues and employers – since a strictly honest scientist would be perceived as endangering the status and security of those around them.
Nonetheless, science must be honest, and the only answer to dishonesty is honesty; and this is up to individuals. The necessary first step is for scientists who are concerned about truth to acknowledge the prevailing state of corruption, and then to make a personal resolution to be truthful in all things at all times: to become both truth-tellers and truth-seekers.
Honest individuals are clearly necessary for an honest system of science – they are the basis of all that is good in science. However, honest individuals do not necessarily create an honest system. Individual honesty is not sufficient but needs to be supported by new social structures. Scientific truth cannot, over the long stretch, be a product of solitary activity. A solitary truth-seeker who is unsupported either by tradition or community will degenerate into mere eccentricity, eventually to be intimidated and crushed by the organized power of untruthfulness.
Furthermore, as Jacob Bronowski argued, humans cannot be honest only in important matters while being expedient in minor matters: truth is all of a piece. There are so many incentives to be untruthful that truthfulness is either a habit, or else it declines. This means that in order to retain their principles in the face of opposition, scientists need to find a philosophical basis which will sustain a life of habitual truth and support them through the pressure to be expedient (or agreeable) rather than honest.
A Great Awakening to truth in science
The best hope of saving science from a progressive descent into complete Zombiedom seems to be a moral Great Awakening: an ethical revolution focused on re-establishing the primary purpose of science: the pursuit of truth.
In using the phrase, I am thinking of something akin to the evangelical Great Awakenings which have periodically swept the USA throughout its history, and which have (arguably) served to roll-back the advance of societal corruption and to generate improved ethical behaviour.
Such an Awakening would necessarily begin with individual commitment, but to have any impact it would need to progress rapidly to institutional forms. In effect there would need to be a ‘Church’ of truth; or, rather, many such Churches – especially in the different scientific fields or invisible colleges of active scholars and researchers.
I use the word ‘Church’ because nothing less morally-potent than a Church would suffice to overcome the many immediate incentives for seeking status, power, wealth and security. Nothing less powerfully-motivating could, I feel, nurture and sustain the requisite individual commitment. If truth-pursuing groups were not actually religiously-based (and, given the high proportion of atheists in science, this is probable), then such groups would need to be sustained by secular ethical systems of at least equal strength to religion, equally devoted to transcendental ideals, equally capable of eliciting courage, self-sacrifice and adherence to principle.
The most realistic prospect is that some sub-specialties of science might self-identify as being engaged primarily in the pursuit of truth and (supported by strong ethical systems to which their participants subscribe) impose on their members a stricter and more honest standard of behaviour. Since science must be truthful in order to thrive qua science, any such truthful sub-specialties would be expected to thrive over the long term (assuming they can attract scientists of sufficient calibre, backed-up with sufficient resources). From such seeds of truth, real science might again re-grow.
Could it happen? – could there really be a Great Awakening to truth in science in which scientists in specific disciplines or en masse would simply start being truthful about all things great and small, and would swiftly organize to support each other in this principle? I am hopeful that some kind of moral renewal might potentially occur in science, but I am not optimistic. I am hopeful – or else I would not be writing this. But I am not optimistic, because there appears to be little awareness of the endemic state of corruption – presumably because the relentless but incremental expansion of dishonesty has been so gradual that it failed to cause sufficient alarm; and at each step in the decline scientists quickly habituated to the new situation.
At present, I can detect no sign of anything like a principled adherence to perfect truthfulness among our complacent, arrogant and ever-more-powerful scientific leadership – and that is the group among which a Great Awakening would need to take-hold; even if, as seems likely, the movement originated elsewhere.
Further reading: The above polemical essay builds upon the argument of several of my previous publications, including: ‘Peer usage versus peer review’ (BMJ 2007; 335:451); ‘Zombie science’ (Medical Hypotheses 2008; 71:327–329); ‘The vital role of transcendental truth in science’ (Medical Hypotheses 2009; 72:373–376); and ‘Are you an honest academic?’ (Oxford Magazine 2009; 287:8–10).
Bruce G. Charlton
Medical Hypotheses. 2009; Volume 73: 633-635
***
Summary
Anyone who has been a scientist for more than a couple of decades will realize that there has been a progressive and pervasive decline in the honesty of scientific communications. Yet real science simply must be an arena where truth is the rule; or else the activity simply stops being science and becomes something else: Zombie science. Although all humans ought to be truthful at all times, science is the one area of social functioning in which truth is the primary value, and truthfulness the core evaluation. Truth-telling and truth-seeking should not, therefore, be regarded as unattainable aspirations for scientists, but as iron laws, continually and universally operative. Yet such is the endemic state of corruption that an insistence on truthfulness in science seems perverse, aggressive, dangerous, or simply utopian. Not so: truthfulness in science is not utopian and was indeed taken for granted (albeit subject to normal human imperfections) just a few decades ago. Furthermore, as Jacob Bronowski argued, humans cannot be honest only in important matters while being expedient in minor matters: truth is all of a piece. There are always so many incentives to lie that truthfulness is either a habit or else it declines. This means that in order to be truthful in the face of opposition, scientists need to find a philosophical basis which will sustain a life of habitual truth and support them through the pressure to be expedient (or agreeable) rather than honest. The best hope of saving science from a progressive descent into Zombiedom seems to be a moral Great Awakening: an ethical revolution focused on re-establishing the primary purpose of science, which is the pursuit of truth. Such an Awakening would necessarily begin with individual commitment, but to have any impact it would need to progress rapidly to institutional forms. The most realistic prospect is that some sub-specialties of science might self-identify as being engaged primarily in the pursuit of truth, might form invisible colleges, and (supported by strong ethical systems to which their participants subscribe) impose on their members a stricter and more honest standard of behaviour. From such seeds of truth, real science might again re-grow. However, at present, I can detect no sign of anything like a principled adherence to perfect truthfulness among our complacent, arrogant and ever-more-powerful scientific leadership – and that is the group among which a Great Awakening would need to take-hold, even if the movement originated elsewhere.
***
The decline of honesty in science
Anyone who has been a scientist for more than 20 years will realize that there has been a progressive decline in the honesty of communications between scientists, between scientists and their institutions, and between scientists and their institutions and the outside world.
Yet real science must be an arena where truth is the rule; or else the activity simply stops being science and becomes something else: Zombie science. Zombie science is a science that is dead, but is artificially kept moving by a continual infusion of funding. From a distance Zombie science looks like the real thing: the surface features of a science are in place – white coats, laboratories, computer programming, PhDs, papers, conferences, prizes, etc. But the Zombie is not interested in the pursuit of truth – its actions are externally-controlled and directed at non-scientific goals, and inside the Zombie everything is rotten.
The most egregious domain of untruthfulness is probably where scientists comment or write about their own work. Indeed, so pervasive are the petty misrepresentations and cautious lies, that it is likely that many scientists are now dishonest even with themselves, in the privacy of their own thoughts. Such things can happen to initially honest people either by force of habit, or because they know no better; and because lies breed lies in order to explain the discrepancies between predictions and observations.
Lying to oneself may be one cause of the remarkable incoherence of so much modern scientific thinking. It is much easier to be coherent, and to recognize incoherence, when discourse is uncontaminated by deliberate misrepresentations. There is less to cover-up. Most scientists can think-straight only by being completely honest. If scientists are not honest even with themselves, then their work will be a mess.
Scientists are usually too careful and clever to risk telling outright lies, but instead they push the envelope of exaggeration, selectivity and distortion as far as possible. And tolerance for this kind of untruthfulness has greatly increased over recent years. So it is now routine for scientists deliberately to ‘hype’ the significance of their status and performance, and ‘spin’ the importance of their research.
Furthermore, it is entirely normal and unremarkable for scientists to spend their entire professional life doing work they know in their hearts to be trivial or bogus – preferring that which promotes their career over that which has the best chance of advancing science. Indeed, such misapplication of effort is positively encouraged in many places, including some of what were the very best places, because careerism is a more reliable route to high productivity than real science – and because senior scientists in the best places are expert at hyping mundane research to create a misleading impression of revolutionary importance.
What is going on? How have matters reached this state? Everyone should be honest at all times and about everything, but especially scientists. Everyone should seriously aim for truthfulness – yet scientists, of all people, must not just aim at truthfulness but actually be truthful: otherwise the very raison d’être of science is subverted.
So although truthfulness is a basic, universal moral rule, science is the one area of social functioning in which truth is the primary value, and truthfulness the core evaluation. Truth-telling and truth-seeking should not, therefore, be regarded as unattainable ideals within science, but as iron laws, continually and universally operative.
Causes of dishonesty in science
Although some scientists are selfishly dishonest simply in order to promote their own careers, for most people quasi-altruistic arguments for lying (dishonesty in a good cause, such as helping others or being an agreeable colleague) are likely to be a more powerful inducement to routine untruthfulness than is the gaining of personal advantage.
For example, scientists are pressured to be less-than-wholly-truthful for the benefit of their colleagues or institutions, or for official/political reasons. Often, scientists are unable to opt-out of administrative or managerial exercises which almost insist-upon dishonest responses – and for which colleagues expect dishonesty in order to promote the interests of the group. Project leaders may feel responsible for raising money to support their junior team members; and feel obliged to do whatever type of research is most generously funded, and to say or write whatever is necessary to obtain that funding.
So, in a bureaucratic context where cautious dishonesty is rewarded, strict truthfulness is taboo and will cause trouble for colleagues, for teams, for institutions – there may be a serious risk that funding is removed, status damaged, or worse. When everyone else is exaggerating their achievements, any precisely accurate person will, de facto, be judged as achieving even less than their already-modest claims suggest. In this kind of situation, individual truthfulness may be interpreted as an irresponsible indulgence.
Clearly then, even in the absence of the sort of direct coercion which prevails in many un-free societies, scientists may be subjected to such pressure that they are more-or-less forced to be dishonest; and this situation can (in decent people) lead to feelings of regret and shame. Unfortunately, regret and shame may not lead to remorse but instead to rationalization, to the elaborate construction of excuses, and eventually to a denial of dishonesty.
Yet, whatever the motivations and reasons for dishonesty, it has been by such means that modern scientists have become inculcated into habitual falsity; until we have become used-to dishonesty, don’t notice dishonesty, and eventually come to expect dishonesty.
Roots of dishonesty in science
My belief is that science has rotted from the head down – and the blame mostly lies with senior scientists in combination with the massive expansion and influence of peer review until it has become the core process of scientific evaluation.
Overall, senior scientists have set a bad example of untruthfulness and self-seeking in their own behaviour, and they have also tended to administer science in such a way as to reward hype and careful-dishonesty, and punish modesty and strict truth-telling. And although some senior scientists have laudably refused to compromise their honesty, they have done this largely by quietly ‘opting out’, and not much by using their power and influence to create and advertise alternative processes and systems in which honest scientists might work.
The corruption of science has been (mostly unintentionally) amplified by the replacement of ‘peer usage’ with peer review as the major mechanism of scientific evaluation. Peer review (of ever greater complexity) has been applied everywhere: to job appointments and promotions, to scientific publications and conferences, to ethical review and funding, to prizes and awards. And peer review processes are set-up and dominated by senior scientists.
Peer usage was the traditional process of scientific evaluation during the Golden Age of science (extending up to about the mid-1960s). Peer usage means that the validity of science is judged retrospectively by whether or not it has been used by peers, i.e. whether ideas or facts turned-out to be useful in further science done by researchers in the same field. For example, a piece of research might be evaluated by its validity in predicting future observations or as a basis for making effective interventions. Peer usage is distinctive to science, probably almost definitive of science.
Peer review, by contrast, means that science is judged by the opinion of other scientists in the same field. Peer review is not distinctive to science, but is found in all academic subjects and in many formal bureaucracies. When peer usage was replaced by peer review, then all the major scientific evaluation processes – their measurement metrics, their rewards and their sanctions – were brought under the direct control of senior scientists whose opinions thereby became the ultimate arbiter of validity. By making its validity a mere matter of professional opinion, the crucial link between science and the natural world was broken, and the door opened to unrestrained error as well as to corruption.
The over-expansion and domination of peer review in science is therefore a sign of scientific decline and decadence, not (as so commonly asserted) a sign of increased rigour. Peer review as the ultimate arbiter represents the conversion of science to generic bureaucracy; a replacement of testing by opinion; a replacement of objectivity by subjectivity. And the increased role for subjectivity in science has created space into which dishonesty has expanded.
In a nutshell, the inducements to dishonesty have come from outside of science – from politics, government administration and the media (for example), all of which are continually attempting to distort science to the needs of their own agendas and to convert real science into Zombie science. But whatever the origin of the pressures to corrupt science, it is sadly obvious that scientific leaders have mostly themselves been corrupted by these pressures rather than courageously resisting them. And these same leaders have degraded hypothesis-testing real science into an elaborate expression of professional opinion (‘peer review’) that is formally indistinguishable from bureaucratic power-games.
Is there a future for honesty?
Such is our state of pervasive corruption that an insistence on truthfulness in science seems perverse, aggressive, dangerous, or simply utopian. Not so. Truthfulness in science is not utopian. Indeed it was mundane reality, taken for granted (albeit subject to normal human imperfections) just a few decades ago. Old-style science had many faults, but deliberate and systematic misrepresentation was not one of them.
To become systematically truthful in a modern scientific environment would be to inflict damage on one’s own career; on one’s chances of getting jobs, promotions, publications, grants and so on. And in a world of dishonesty, of hype, spin and inflated estimations – the occasional truthful individual will be judged by the prevailing corrupt standards. To be truthful would also be to risk becoming exceedingly unpopular with colleagues and employers – since a strictly honest scientist would be perceived as endangering the status and security of those around them.
Nonetheless, science must be honest, and the only answer to dishonesty is honesty; and this is up to individuals. The necessary first step is for scientists who are concerned about truth to acknowledge the prevailing state of corruption, and then to make a personal resolution to be truthful in all things at all times: to become both truth-tellers and truth-seekers.
Honest individuals are clearly necessary for an honest system of science – they are the basis of all that is good in science. However, honest individuals do not necessarily create an honest system. Individual honesty is not sufficient but needs to be supported by new social structures. Scientific truth cannot, over the long stretch, be a product of solitary activity. A solitary truth-seeker who is unsupported either by tradition or community will degenerate into mere eccentricity, eventually to be intimidated and crushed by the organized power of untruthfulness.
Furthermore, as Jacob Bronowski argued, humans cannot be honest only in important matters while being expedient in minor matters: truth is all of a piece. There are so many incentives to be untruthful that truthfulness is either a habit, or else truthfulness declines. This means that in order to retain their principles in the face of opposition, scientists need to find a philosophical basis which will sustain a life of habitual truth and support them through the pressure to be expedient (or agreeable) rather than honest.
A Great Awakening to truth in science
The best hope of saving science from a progressive descent into complete Zombiedom seems to be a moral Great Awakening: an ethical revolution focused on re-establishing the primary purpose of science: the pursuit of truth.
In using the phrase, I am thinking of something akin to the periodic evangelical Great Awakenings which have swept the USA throughout its history, and have (arguably) served periodically to roll-back the advance of societal corruption, and generate improved ethical behaviour.
Such an Awakening would necessarily begin with individual commitment, but to have any impact it would need to progress rapidly to institutional forms. In effect there would need to be a ‘Church’ of truth; or, rather, many such Churches – especially in the different scientific fields or invisible colleges of active scholars and researchers.
I use the word ‘Church’ because nothing less morally-potent than a Church would suffice to overcome the many immediate incentives for seeking status, power, wealth and security. Nothing less powerfully-motivating could, I feel, nurture and sustain the requisite individual commitment. If truth-pursuing groups were not actually religiously-based (and, given the high proportion of atheists in science, this is probable), then such groups would need to be sustained by secular ethical systems of at least equal strength to religion, equally devoted to transcendental ideals, equally capable of eliciting courage, self-sacrifice and adherence to principle.
The most realistic prospect is that some sub-specialties of science might self-identify as being engaged primarily in the pursuit of truth and (supported by strong ethical systems to which their participants subscribe) impose on their members a stricter and more honest standard of behaviour. Since science must be truthful in order to thrive qua science, any such truthful sub-specialties would be expected to thrive over the long term (this is assuming they can attract scientists of sufficient calibre, backed-up with sufficient resources). From such seeds of truth, real science might again re-grow.
Could it happen? – could there really be a Great Awakening to truth in science in which scientists in specific disciplines or en masse would simply start being truthful about all things great and small, and would swiftly organize to support each other in this principle? I am hopeful that some kind of moral renewal might potentially occur in science, but I am not optimistic. I am hopeful – or else I would not be writing this. But I am not optimistic, because there appears to be little awareness of the endemic state of corruption – presumably because the relentless but incremental expansion of dishonesty has been so gradual that it failed to cause sufficient alarm; and at each step in the decline scientists quickly habituated to the new situation.
At present, I can detect no sign of anything like a principled adherence to perfect truthfulness among our complacent, arrogant and ever-more-powerful scientific leadership – and that is the group among which a Great Awakening would need to take-hold; even if, as seems likely, the movement originated elsewhere.
Further reading: The above polemical essay builds upon the argument of several of my previous publications including: ‘Peer usage versus peer review’ (BMJ 2007; 335:451); ‘Zombie science’ (Medical Hypotheses 2008; 71:327–329); ‘The vital role of transcendental truth in science’ (Medical Hypotheses 2009; 72:373–376); and ‘Are you an honest academic?’ (Oxford Magazine 2009; 287:8–10).
Monday, 31 August 2009
Reliable but dumb, or smart but slapdash?
Bruce G Charlton
Why it is ‘better’ to be reliable but dumb than smart but slapdash: Are intelligence (IQ) and Conscientiousness best regarded as gifts or virtues?
Medical Hypotheses. 2009; Volume 73: 465-467
Editorial
Summary
The psychological attributes of intelligence and personality are usually seen as being quite distinct in nature: higher intelligence being regarded as a ‘gift’ (bestowed mostly by heredity); while personality or ‘character’ is morally evaluated by others, on the assumption that it is mostly a consequence of choice. So a teacher is more likely to praise a child for their highly Conscientious personality (high ‘C’) – an ability to take the long view, work hard with self-discipline and persevere in the face of difficulty – than for possessing high IQ. Even in science, where high intelligence is greatly valued, it is seen as being more virtuous to be a reliable and steady worker. Yet it is probable that both IQ and personality traits (such as high-C) are about-equally inherited ‘gifts’ (heritability of both likely to be in excess of 0.5). Rankings of both IQ and C are generally stable throughout life (although absolute levels of both will typically increase throughout the lifespan, with IQ peaking in late-teens and C probably peaking in middle age). Furthermore, high IQ is not just an ability to be used only as required; higher IQ also carries various behavioural predispositions – as reflected in the positive correlation with the personality trait of Openness to Experience; and characteristically ‘left-wing’ or ‘enlightened’ socio-political values among high IQ individuals. However, IQ is ‘effortless’ while high-C emerges mainly in tough situations where exceptional effort is required. So we probably tend to regard personality in moral terms because this fits with a social system that provides incentives for virtuous behaviour (including Conscientiousness). In conclusion, high IQ should probably more often be regarded in morally evaluative terms because it is associated with behavioural predispositions; while C should probably be interpreted with more emphasis on its being a gift or natural ability. In particular, people with high levels of C are very lucky in modern societies, since they are usually well-rewarded for this aptitude. This includes science, where it seems that C has been selected-for more rigorously than IQ. Indeed, those ‘gifted’ with high Conscientiousness are in some ways even luckier than the very intelligent – because there are more jobs for reliable and hard-working people (even if they are relatively ‘dumb’) than for smart people with undependable personalities.
***
Moral evaluations of intelligence and personality
The psychological attributes of intelligence and personality are usually seen as being quite distinct in nature: higher intelligence being regarded as a morally-neutral aptitude which is a lucky ‘gift’; while personality or ‘character’ is morally evaluated by others, on the assumption that it is mostly a consequence of choices. So a teacher is much more likely to praise a child repeatedly for exceptional self-discipline and hard work than for being of high intelligence. In other words, virtue is seen as an aspect of character/personality rather than of intelligence.
General intelligence (aka. ‘g factor’ intelligence, or ‘intelligence quotient’ or IQ) [1], [2], [3] and [4] and the ‘Big Five’ personality trait of Conscientiousness [5], [6] and [7] are the two main measurable psychological factors, higher levels of which are predictive of better educational and job performance [8] and [9]. IQ is the aptitude that enables a person to think abstractly and logically, to solve a wide range of novel problems, and to learn rapidly.
The personality trait of Conscientiousness (‘C’) incorporates features such as perseverance, self-discipline, meticulousness, and long-termism. In a nutshell, Conscientiousness is the capacity to work hard at a task over the long-term despite finding the task uninteresting and despite receiving no immediate reward.
The usual conceptualization sees IQ as a gift and C as a virtue; i.e. intelligence as an ability available to be used when necessary and personality traits such as Conscientiousness as a moral disposition to make better or worse behavioural choices. The mainstream idea would be that people are not responsible for the level of their intelligence but are responsible for their behaviour. So apparently it makes sense to praise Conscientiousness as virtuous but not similarly to praise IQ.
However, I will argue that – while there are indeed practical reasons to praise good behaviour – in reality IQ has morally-relevant elements, while high-C (and other valued personality traits) should also be regarded as a gift. So, both intelligence and personality can be regarded either as gifts or as virtues, according to context.
Intelligence is regarded as a gift
Most people regard intelligence as a ‘gift’ – and highly intelligent children have sometimes been termed gifted. This interpretation is accurate, in the sense that the main known determinant of general intelligence is heredity: people inherit intelligence from their parents [1], [2], [3] and [4]. While bad experiences (such as starvation and disease in the womb or during infancy) can pull intelligence downwards, it is at present difficult or impossible significantly to raise a person’s real, underlying, long-term predictive general intelligence by any kind of environmental intervention [10]. (It may, however, be possible to raise IQ scores by practicing IQ tests and other focused interventions; but this does not cash-out into significant and prolonged general benefits in terms of education and employment.)
IQ is calculated by testing groups of people at different ages, and (usually) putting their scores into rank order and organizing rankings onto a normal distribution curve with a mean IQ of 100 and a standard deviation of 15. Using this type of calculation, intelligence scores/rankings are relatively stable throughout life – so that a child of 8 with high IQ will usually grow to become an adult with similarly high IQ, and vice versa [1], [2], [3] and [4].
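To make that normalization concrete, here is a minimal sketch in Python (assuming the numpy and scipy libraries; the raw test scores are invented purely for illustration) of how ranked raw scores can be mapped onto the conventional IQ scale:

import numpy as np
from scipy.stats import norm, rankdata

def raw_scores_to_iq(raw_scores, mean=100.0, sd=15.0):
    # Rank the raw scores (1 = lowest), convert ranks to percentiles,
    # then map each percentile onto a normal curve with mean 100 and SD 15.
    ranks = rankdata(raw_scores)
    percentiles = (ranks - 0.5) / len(raw_scores)  # strictly inside (0, 1)
    z = norm.ppf(percentiles)  # standard-normal quantile for each percentile
    return mean + sd * z

raw = [12, 31, 25, 44, 18, 37, 29, 22, 40, 35]  # hypothetical raw test scores
print(np.round(raw_scores_to_iq(raw), 1))

By construction, roughly two-thirds of any population scored this way falls between IQ 85 and 115, i.e. within one standard deviation of the mean.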
Because intelligence is a gift which is substantially hereditary and stable throughout life, on the whole it is regarded as a result of ‘luck’ and something for which people should be grateful; and not, therefore, as a virtue deserving of moral approbation or praise. Indeed, people with high intelligence may be given less help than they need, and may be held to a higher standard of behaviour, precisely because they are regarded as lucky.
Higher intelligence is valued more highly by society than lower intelligence, probably because people with a higher IQ are, on average, more economically productive [11]; nonetheless the most intelligent people are not usually regarded as intrinsically virtuous, nor as especially morally praiseworthy. And although it is true that people of low intelligence may attract hurtful and insulting descriptors such as dumb, dull, slow or stupid; nonetheless, a person with these attributes is not regarded as intrinsically wicked.
Personality traits are morally evaluated
There is a contrast between IQ and personality in respect of moral evaluations. While IQ is seen as a gift there is a spontaneous tendency to regard personality as a morally distinguishing feature – as a visible marker of a person’s underlying moral nature. It is quite normal to praise the most diligent people for their high capacity for hard work, and at the same time to regard them as merely fortunate if they are also of high intelligence.
Yet it is probable that both IQ and personality traits (such as the ability to work hard) are almost-equally hereditary ‘gifts’. The heritability of IQ is generally quoted as between 0.5 and 0.8 (probably at the higher end) [1], [2], [3] and [4] and the heritability of personality is quoted as being around 0.5 [5], [6] and [7]. However, the estimate of personality heritability is certainly an underestimate due to the sub-optimal conceptualization of personality traits, and especially to the lesser precision of current personality measurement methods compared with IQ tests [4]. To the extent that these things can be observed in everyday experience, both IQ and personality are probably about-equally inherited; and the high IQ and extra-hard-working person should about-equally thank their genes rather than congratulate themselves.
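For readers unfamiliar with where such heritability figures come from: they are classically approximated from twin studies using Falconer’s formula, which doubles the difference between the trait correlation in identical (MZ) twin pairs and that in fraternal (DZ) pairs. Here is a toy sketch in Python (this is not the method of the works cited above, and the correlations are invented to land near the figures just quoted):

def falconer_heritability(r_mz, r_dz):
    # Falconer's classic approximation: h^2 = 2 * (r_MZ - r_DZ).
    # Genetic influence is inferred from how much more alike identical
    # twins are than fraternal twins.
    return 2.0 * (r_mz - r_dz)

print(falconer_heritability(r_mz=0.75, r_dz=0.40))  # 0.70 - toward the high end quoted for IQ
print(falconer_heritability(r_mz=0.50, r_dz=0.25))  # 0.50 - roughly the figure quoted for personality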
Furthermore, rankings of personality, like IQ, are generally stable throughout life; so that a highly Conscientious child will probably grow into a highly Conscientious adult and vice versa (whatever their familial, educational and social experiences may be). However, it is also important to recognize that average personality traits change through the lifespan – e.g. Conscientiousness levels increase through early adult life, while Extraversion declines [12]. The high-C personality type which enables people to work hard, be self-disciplined and pursue long-term goals is therefore, in this sense, no more ‘virtuous’ than the high IQ ability quickly to do complex verbal, mathematical and symbolic puzzles.
But Conscientiousness is often regarded as highly moral behaviour, and an exceptionally-reliable individual will probably be regarded as virtuous even when they are of low IQ. In contrast, a person who is low in C is likely to be feckless, distractible, slapdash, and focused on short-term rewards – even when they are very intelligent. These behaviours are regarded as moral deficiencies; and the coexistence of high IQ in some ways makes it worse, because it is often felt that clever people ‘should know better’. Of course, low-C traits are negatively evaluated probably for the obvious reason that they are not very useful socially – indeed a person of very low Conscientiousness is likely to be a poor student and a troublesome employee under most circumstances.
As an aside, it should be noted that low-C may also be associated with some positively-evaluated attributes; especially creativity (insofar as highly creative people tend to have very high IQ and moderately high ‘Psychoticism’ – a trait which includes moderately-low Conscientiousness [13]). I have previously suggested that selecting for very high-C will therefore – as an unintended side effect – tend to reduce the average level of creativity; and that this may have happened in science over the past several decades [14].
Furthermore, it has been argued that in the hunter-gatherer societies of our ancestors it would probably have been advantageous for most people to have lower levels of C than seem to be optimal nowadays; in the sense that it was more important for hunter-gatherers to react spontaneously and quickly to immediate stimuli, and less important for them to plan far ahead, or to be able to persevere in the unrewarding and often repetitive tasks that characterize much of formal education or agricultural and industrial employment [7].
But in modern societies, it is certainly an advantage (on average) to have higher levels of C.
Moral evaluation of personality
The evidence therefore suggests that, although the two psychological attributes of IQ and C are not highly-correlated (see Ref. [13] for review), the ability to work hard with self-discipline and the aptitude of general intelligence are about-equally inherited, about-equally stable throughout life, and about-equally difficult to change either by self-determination or by the social interventions of other people. It seems that we as individuals are pretty much ‘stuck with’ the intelligence and the personalities with which we were born; and it is strange that exceptional IQ should be regarded as a gift while exceptional C is regarded as being the praiseworthy result of resolution and effort.
It might be argued that personality traits are associated with moral behaviours in a way that IQ is not. Certainly personality traits do have moral aspects. Three of the Big Five – Conscientiousness, Agreeableness and Neuroticism – have one extreme which would generally be regarded as immoral [6] and [7]. For example, it would generally be regarded as ‘bad behaviour’ to be low in Agreeableness, since this would include selfishness, uncooperativeness, emotional coldness, unfriendliness and unhelpfulness. Likewise it may be regarded as socially-undesirable to be high in Neuroticism, since this would include proneness to mood swings, irritability and anger.
But the reason that humans apparently spontaneously regard personality in moral terms is presumably because humans respond to incentives. Society would probably wish to encourage pro-social behaviour by praising it, on the basis that even though personality rankings cannot be much changed by whole-population interventions, at the individual level behaviour can be shaped by incentives – by rewards and punishments.
Furthermore, high-C behaviour takes more effort than low-C behaviour. Although the ability to work hard on topics that are uninteresting is mostly hereditary, and therefore a gift, hard work is still hard work, and it is still easier not to work hard! Slapdash, distractible behaviour is undemanding and takes less effort. So, unless there is a system of incentives which encourages hard work, the default position is to work less hard, or not to work at all.
However, when the same incentives are applied to the whole of a group of people varying in C, it is unreasonable and may be cruel to expect the Conscientiousness gap between high-C and low-C individuals to disappear. Although all students might work harder, at least while the incentives were being applied, the gap between high-C and low-C students would remain, and the size of this gap might even increase. Certainly, this is what has been found with IQ, when attempting to close various IQ-testing ‘gaps’. And, insofar as C is like IQ (heritable and stable), the possible size of improvement due to interventions is likely to be modest or negligible [2]. The accumulated experience of trying to improve general intelligence (in developed nations) is that it is difficult or impossible to produce sustained long-term improvements in intelligence, especially when the improvements are tested by independent outcomes such as performance in employment. Improvements are often superficial results of specific training, which enhance only specific types of test performance or evaluations done while under the influence of structured motivational systems [10].
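The persistence of the gap can be shown with a toy simulation in Python with numpy (every number here is invented for illustration): if an identical incentive scales up everyone’s effort, average output rises for all, but the difference between high-C and low-C individuals remains – and under a multiplicative incentive it actually widens:

import numpy as np

rng = np.random.default_rng(0)
c_trait = rng.normal(0.0, 1.0, size=10_000)   # stable, roughly normal trait levels
baseline_output = 50 + 10 * c_trait           # output tracks Conscientiousness
incentivised_output = 1.2 * baseline_output   # the same incentive applied to everyone

def high_low_gap(output):
    # Mean output of high-C individuals (> +1 SD) minus low-C (< -1 SD).
    return output[c_trait > 1].mean() - output[c_trait < -1].mean()

print(f"gap before incentives: {high_low_gap(baseline_output):.1f}")
print(f"gap after incentives:  {high_low_gap(incentivised_output):.1f}")  # about 20% larger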
Conclusion
Personality clearly has a moral dimension, but something similar could also be said of intelligence in the indirect sense that higher intelligence is associated with reduced levels of a range of social pathologies including crime and family breakdown [15].
Furthermore, intelligence is associated with several aspects of personality and behaviour. There is a positive association between IQ and the Big Five trait of Openness to Experience – which means that more-intelligent people are more likely to seek novelty, enjoy artistic experiences, and be imaginative [7]. Intelligence is also associated positively with atheism, and with what have been termed ‘enlightened’ values, such as left-wing or ‘liberal’ and anti-traditional/anti-conservative views [16]. So IQ is associated with several morally-evaluated socio-political views, which could be judged as virtuous, adaptive, mistaken or even damaging – according to one’s socio-political and religious perspective.
I do not, however, wish to press the similarity of personality and intelligence too hard since these attributes may have a somewhat distinct evolutionary rationale, and selectional basis [17]. My main point is that, although we regard intelligence and personality as different kinds of psychological attributes, in fact they are similar in several important ways.
Nonetheless, in sum, it seems that our traditional interpretations of intelligence and personality require modification. IQ is not just an ability which can be used as required; instead higher IQ is also a predisposition which on average includes a bias towards some types of behaviours and away from others. And high conscientiousness – such as the ability to take the long view, work hard and persevere in the face of difficulty – should probably be interpreted with more emphasis on its being a gift in much the same sense as high intelligence – despite the fact that IQ is ‘effortless’ while high-C emerges mainly in tough situations where exceptional diligence is required.
People with high levels of IQ are mostly very lucky, as is widely recognized; but people with high-C are very lucky too, because they are usually well-rewarded for this aptitude in modern society; and indeed rewarded in science too, where it seems that self-discipline is now selected-for more rigorously than IQ [14].
Indeed, in some ways those ‘gifted’ with high-C are even luckier than very intelligent people, because there are always going to be more jobs for reliable and hard-working people (even if they are relatively ‘dumb’) than jobs which are suitable for smart people who are undependable, short-termist and slapdash.
References
[1] U. Neisser et al., Intelligence: knowns and unknowns, Am Psychol 51 (1996), pp. 77–101.
[2] A.R. Jensen, The g factor: the science of mental ability, Praeger, Westport, CT, USA (1998).
[3] N.J. Mackintosh, IQ and human intelligence, Oxford University Press, Oxford (1998).
[4] I.J. Deary, Intelligence: a very short introduction, Oxford University Press, Oxford (2001).
[5] J.R. Harris, The nurture assumption: why children turn out the way they do, Bloomsbury, London (1998).
[6] G. Matthews, I.J. Deary and M.C. Whiteman, Personality traits, Cambridge University Press, Cambridge, UK (2003).
[7] D. Nettle, Personality: what makes you the way you are, Oxford University Press, Oxford, UK (2007).
[8] M.R. Barrick and M.K. Mount, The big five personality dimensions and job performance: a meta analysis, Pers Psychol 44 (1991), pp. 1–26.
[9] A.L. Duckworth and M.E.P. Seligman, Self-discipline outdoes IQ in predicting academic performance of adolescents, Psychol Sci 16 (2005), pp. 939–944.
[10] H.H. Spitz, The raising of intelligence: a selected history of attempts to raise retarded intelligence, Erlbaum, Hillsdale, NJ, USA (1986).
[11] L.S. Gottfredson, Implications of cognitive differences for schooling within diverse societies. In: C.L. Frisby and C.R. Reynolds, Editors, Comprehensive handbook of multicultural school psychology, Wiley, New York (2005), pp. 517–554.
[12] P.T. Costa and R.R. McCrae, Stability and change in personality from adolescence through adulthood. In: C.F. Halverson Jr, G.A. Kohnstamm and R.P. Martin, Editors, The developing structure of temperament and personality from infancy to adulthood, Lawrence Erlbaum Associates, Hillsdale, NJ, USA (1994), pp. 139–150.
[13] H.J. Eysenck, Genius: the natural history of creativity, Cambridge University Press, Cambridge, UK (1995).
[14] B.G. Charlton, Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity, Med Hypotheses 72 (2009), pp. 237–243.
[15] R.J. Herrnstein and C. Murray, The bell curve: intelligence and class structure in American life, Free Press, New York (1994).
[16] I.J. Deary, G.D. Batty and C.R. Gale, Bright children become enlightened adults, Psychol Sci 19 (2008), pp. 1–6.
[17] L. Penke, J.J. Denissen and G.F. Miller, The evolutionary genetics of personality, Eur J Personality 21 (2007), pp. 549–587.
Why it is ‘better’ to be reliable but dumb than smart but slapdash: Are intelligence (IQ) and Conscientiousness best regarded as gifts or virtues?
Medical Hypotheses. 2009; Volume 73: 465-467
Editorial
Summary
The psychological attributes of intelligence and personality are usually seen as being quite distinct in nature: higher intelligence being regarded a ‘gift’ (bestowed mostly by heredity); while personality or ‘character’ is morally evaluated by others, on the assumption that it is mostly a consequence of choice? So a teacher is more likely to praise a child for their highly Conscientious personality (high ‘C’) – an ability to take the long view, work hard with self-discipline and persevere in the face of difficulty – than for possessing high IQ. Even in science, where high intelligence is greatly valued, it is seen as being more virtuous to be a reliable and steady worker. Yet it is probable that both IQ and personality traits (such as high-C) are about-equally inherited ‘gifts’ (heritability of both likely to be in excess of 0.5). Rankings of both IQ and C are generally stable throughout life (although absolute levels of both will typically increase throughout the lifespan, with IQ peaking in late-teens and C probably peaking in middle age). Furthermore, high IQ is not just an ability to be used only as required; higher IQ also carries various behavioural predispositions – as reflected in the positive correlation with the personality trait of Openness to Experience; and characteristically ‘left-wing’ or ‘enlightened’ socio-political values among high IQ individuals. However, IQ is ‘effortless’ while high-C emerges mainly in tough situations where exceptional effort is required. So we probably tend to regard personality in moral terms because this fits with a social system that provides incentives for virtuous behaviour (including Conscientiousness). In conclusion, high IQ should probably more often be regarded in morally evaluative terms because it is associated with behavioural predispositions; while C should probably be interpreted with more emphasis on its being a gift or natural ability. In particular, people with high levels of C are very lucky in modern societies, since they are usually well-rewarded for this aptitude. This includes science, where it seems that C has been selected-for more rigorously than IQ. Indeed, those ‘gifted’ with high Conscientiousness are in some ways even luckier than the very intelligent – because there are more jobs for reliable and hard-working people (even if they are relatively ‘dumb’) than for smart people with undependable personalities.
***
Moral evaluations of intelligence and personality
The psychological attributes of intelligence and personality are usually seen as being quite distinct in nature: higher intelligence being regarded as a morally-neutral aptitude which is a lucky ‘gift’; while personality or ‘character’ is morally evaluated by others, on the assumption that it is mostly a consequence of choices. So a teacher is much more likely repeatedly to praise a child for exceptional self-discipline and hard work than for being of high intelligence. In other words, virtue is seen as an aspect of character/personality rather than intelligence.
General intelligence (aka. ‘g factor’ intelligence, or ‘intelligence quotient’ or IQ) [1], [2], [3] and [4] and the ‘Big Five’ personality trait of Conscientiousness [5], [6] and [7] are the two main measurable psychological factors, higher levels of which are predictive of better educational and job performance [8] and [9]. IQ is the aptitude that enables a person to think abstractly and logically, to solve a wide range of novel problems, and to learn rapidly.
The personality trait of Conscientiousness (‘C’) incorporates features such as perseverance, self-discipline, meticulousness, and long-termism. In a nutshell, Conscientiousness is the capacity to work hard at a task over the long-term despite finding the task uninteresting and despite receiving no immediate reward.
The usual conceptualization sees IQ as a gift and C as a virtue; i.e. intelligence as an ability available to be used when necessary and personality traits such as Conscientiousness as a moral disposition to make better or worse behavioural choices. The mainstream idea would be that people are not responsible for the level of their intelligence but are responsible for their behaviour. So apparently it makes sense to praise Conscientiousness as virtuous but not similarly to praise IQ.
However, I will argue that – while there are indeed practical reasons to praise good behaviour – in reality IQ has morally-relevant elements, while high-C (and other valued personality traits) should also be regarded as a gift. So, both intelligence and personality can be regarded either as gifts or as virtues, according to context.
Intelligence is regarded as a gift
Most people regard intelligence as a ‘gift’ – and highly intelligent children have sometimes been termed gifted. This interpretation is accurate, in the sense that the main known determinant of general intelligence is heredity: people inherit intelligence from their parents [1], [2], [3] and [4]. While bad experiences (such as starvation and disease in the womb or during infancy) can pull intelligence downwards, it is at present difficult or impossible significantly to raise a person’s real, underlying, long-term predictive general intelligence by any kind of environmental intervention[10]. (It may, however, be possible to raise IQ scores by practicing IQ tests and other focused interventions; but this does not cash-out into significant and prolonged general benefits in terms of education and employment).
IQ is calculated by testing groups of people at different ages, and (usually) putting their scores into rank order and organizing rankings onto a normal distribution curve with a mean average IQ of 100 and a standard deviation of 15. Using this type of calculation, intelligence scores/rankings are relatively stable throughout life – so that a child of 8 with high IQ will usually grow to become an adult with similarly high IQ, and vice versa [1], [2], [3] and [4].
Because intelligence is a gift which is substantially hereditary and stable throughout life, on the whole it is regarded as a result of ‘luck’ and something for which people should be grateful; and not, therefore, as a virtue deserving of moral approbation or praise. Indeed, people with high intelligence may be given less help than they need, and may be held to a higher standard of behaviour, precisely because they are regarded as lucky.
Higher intelligence is socially valued more highly than lower intelligence, probably because people with a higher IQ are on average more useful economically [11] (having higher economic productivity, on average); nonetheless the most intelligent people are not usually regarded as intrinsically virtuous nor especially morally praiseworthy. And although it is true that people of low intelligence may attract hurtful and insulting descriptors such as dumb, dull, slow or stupid; nonetheless, a person with these attributes is not regarded as intrinsically wicked.
Personality traits are morally evaluated
There is a contrast between IQ and personality in respect of moral evaluations. While IQ is seen as a gift there is a spontaneous tendency to regard personality as a morally distinguishing feature – as a visible marker of a person’s underlying moral nature. It is quite normal to praise the most diligent people for their high capacity for hard work, and at the same time to regard them as merely fortunate if they are also of high intelligence.
Yet it is probable that both IQ and personality traits (such as the ability to work hard) are almost-equally hereditary ‘gifts’. The heritability of IQ is generally quoted as between 0.5 and 0.8 (probably at the higher end) [1], [2], [3] and [4] and the heritability of personality is quoted as being around 0.5 [5], [6] and [7]. However, the estimate of personality heritability is certainly an underestimate due to the sub-optimal conceptualization of personality traits, and especially to the lesser precision of current personality measurement methods compared with IQ tests [4]. To the extent that these things can be observed in everyday experience, both IQ and personality are probably about-equally inherited; and the high IQ and extra-hard-working person should about-equally thank their genes rather than congratulate themselves.
Furthermore, rankings of personality, like IQ, are generally stable throughout life; so that a highly Conscientious child will probably grow into a highly Conscientious adult and vice versa (whatever their familial, educational and socially experiences may be). However, it is also important to recognize that average personality traits change through the lifespan – e.g. Conscientiousness levels increase through early adult life, while Extraversion declines [12]. The high-C personality type which enables people to work hard, be self-disciplined and pursue long-term goals is therefore, in this sense, no more ‘virtuous’ than the high IQ ability quickly to do complex verbal, mathematic and symbolic puzzles.
But Conscientiousness is often regarded as highly moral behaviour, and an exceptionally-reliable individual will probably be regarded as virtuous even when they are of low IQ. However, in contrast, a person who is low in C is likely to be feckless, distractible, slapdash, and focused on short-term rewards – even when they are very intelligent. These behaviours are regarded as moral deficiencies; and the coexistence of high IQ in some ways makes it worse, because it is often felt that clever people ‘should know better’. Of course, low-C traits are negatively evaluated probably for the obvious reason that they are not very useful socially – indeed a person of very low Conscientiousness is likely to be a poor student and troublesome employee under most circumstances.
Aside, it should be noted that low-C may also be associated with some positively-evaluated attributes; especially creativity (insofar as highly creative people tend to have very high IQ and moderately high ‘Psychoticism’ which trait includes moderately-low Conscientiousness [13]). I have previously suggested that selecting for very high-C will therefore – as an unintended side effect – tend to reduce the average level of creativity; and that this may have happened in science over the past several decades [14].
Furthermore, it has been argued that in the hunter gatherer societies of our ancestors it would probably have been advantageous for most people to have lower levels of C than seem to be optimal nowadays; in the sense that it was more important for hunter gatherers to react spontaneously and quickly to immediate stimuli; and less important for them to plan far ahead, or to be able to persevere in the unrewarding and often repetitive tasks that characterize much of formal education or agricultural and industrial employment [7].
But in modern societies, it is certainly an advantage (on average) to have higher levels of C.
Moral evaluation of personality
The evidence therefore suggests that it is likely that although the two psychological attributes of IQ and C are not highly-correlated (see Ref. [13] for review); the ability to work hard and with self-discipline and the ability of general intelligence are about-equally inherited, about-equally stable throughout life, and about-equally difficult to change either by self-determination or by the social interventions of other people. It seems that we as individuals are pretty much ‘stuck with’ the intelligence and the personalities with which we were born; and it is strange that exceptional IQ should be regarded as a gift while exceptional C is regarded as being the praiseworthy result of resolution and effort.
It might be argued that personality traits are associated with moral behaviours in a way that IQ is not. Certainly personality traits do have moral aspects. Three of the Big Five – Conscientiousness, Agreeableness and Neuroticism – have one extreme which would generally be immoral [6] and [7]. For example, it would generally be regarded as ‘bad behaviour’ to be low in Agreeableness since this would include selfishness, uncooperativeness, emotional coldness, unfriendliness, unhelpfulness. Likewise it may be regarded as socially-undesirable to be high in Neuroticism since this would include proneness to mood swings, irritability and anger.
But the reason that humans apparently spontaneously regard personality in moral terms is presumably because humans respond to incentives. Society would probably wish to encourage pro-social behaviour by praising it, on the basis that even though personality rankings cannot be much changed by whole-population interventions, at the individual level behaviour can be shaped by incentives – by rewards and punishments.
Furthermore, high-C behaviour takes more effort than low-C behaviour. Although the ability to work hard on topics that are uninteresting is mostly hereditary, and therefore a gift, hard work is still hard work, and it is still easier not to work hard! Slapdash, distractible behaviour is undemanding, takes less effort. So, unless there is system of incentives which encourages hard work, then the default position is to work less hard, or not to work at all.
However, when the same incentives are applied to the whole of a group of people varying in C; it is unreasonable and may be cruel to expect that the Conscientiousness gap between high and low individuals to disappear. Although all students might work harder, at least while the incentives were being applied, the gap between high-C and low-C students would remain, and the size of this gap might increase. Certainly, this is what has been found with IQ, when attempting to close various IQ-testing ‘gaps’. And, insofar as C is like IQ (heritable and stable), the possible size of improvement due to interventions is likely to be modest or negligible [2]. The accumulated experience of trying to improve general intelligence (in developed nations) is that it is difficult or impossible to produce sustained long-term improvements in intelligence, especially when the improvements are tested by independent outcomes such performance in employment. Improvements are often superficial results of specific training which only enhance specific types of test performance or evaluations done while under the influence of structured motivational systems [10].
Conclusion
Personality clearly has a moral dimension, but something similar could also be said of intelligence in the indirect sense that higher intelligence is associated with reduced levels of a range of social pathologies including crime and family breakdown [15].
Furthermore intelligence is associated with several aspects of personality and behaviour. There is a positive association between IQ and the Big Five trait of Openness to Experience – which means that more-intelligent people are more likely to seek novelty, enjoy artistic experiences, and be imaginative [7]. Furthermore, intelligence is associated positively with atheism and also with what have been termed ‘enlightened’ values such as left-wing or ‘liberal’ and anti-traditional/anti-conservative views [16]. So that IQ is associated with several morally-evaluated socio-political views which could be judged as virtuous, adaptive, mistaken or even damaging – according to one’s socio-political and religious perspective.
I do not, however, wish to press the similarity of personality and intelligence too hard since these attributes may have a somewhat distinct evolutionary rationale, and selectional basis [17]. My main point is that, although we regard intelligence and personality as different kinds of psychological attributes, in fact they are similar in several important ways.
Nonetheless, in sum, it seems that our traditional interpretations of intelligence and personality require modification. IQ is not just an ability which can be used as required; instead higher IQ is also a predisposition which on average includes a bias towards some types of behaviours and away from others. And high conscientiousness – such as the ability to take the long view, work hard and persevere in the face of difficulty – should probably be interpreted with more emphasis on its being a gift in much the same sense as high intelligence – despite the fact that IQ is ‘effortless’ while high-C emerges mainly in tough situations where exceptional diligence is required.
People with high levels of IQ are mostly very lucky, as is widely recognized; but people with high-C are very lucky too, because they are usually well-rewarded for this aptitude in modern society; and indeed rewarded in science too, where it seems that self-discipline is now selected-for more rigorously than IQ [14].
Indeed, in some ways those ‘gifted’ with high-C are even luckier than very intelligent people, because there are always going to be more jobs for reliable and hard-working people (even if they are relatively ‘dumb’) than jobs which are suitable for smart people who are undependable, short-termist and slapdash.
References
[1] U. Neisser et al., Intelligence: knowns and unknowns, Am Psychol 51 (1996), pp. 77–101.
[2] A.R. Jensen, The g factor: the science of mental ability, Praeger, Westport, CT, USA (1988).
[3] N.J. Mackintosh, IQ and human intelligence, Oxford University Press, Oxford (1998).
[4] I.J. Deary, Intelligence: a very short introduction, Oxford University Press, Oxford (2001).
[5] J.R. Harris, The nurture assumption: why children turn out the way they do, Bloomsbury, London (1998).
[6] G. Matthews, I.J. Deary and M.C. Whiteman, Personality traits, Cambridge University Press, Cambridge, UK (2003).
[7] D. Nettle, Personality: what makes you the way you are, Oxford University Press, Oxford, UK (2007).
[8] M.R. Barrick and M.K. Mount, The big five personality dimensions and job performance: a meta analysis, Pers Psychol 44 (1991), pp. 1–26.
[9] A.L. Duckworth and M.E.P. Seligman, Self-discipline outdoes IQ in predicting academic performance of adolescents, Psychol Sci 12 (2005), pp. 939–944.
[10] H.H. Spitz, The raising of intelligence: a selected history of attempts to raise retarded intelligence, Erlbaum, Hillsdale, NJ, USA (1986).
[11] L.S. Gottfredson, Implications of cognitive differences for schooling within diverse societies. In: C.L. Frisby and C.R. Reynolds, Editors, Comprehensive handbook of multicultural school psychology, Wiley, New York (2005), pp. 517–554.
[12] P.T. Costa and R.R. McCrae, Stability and change in personality from adolescence through adulthood. In: C.F. Halverson Jr, G.A. Kohnstamm and R.P. Martin, Editors, The developing structure of temperament and personality from infancy to adulthood, Lawrence Erlbaum Associates, Hillsdale, NJ, USA (1994), pp. 139–150.
[13] H.J. Eysenck, Genius: the natural history of creativity, Cambridge University Press, Cambridge, UK (1995).
[14] B.G. Charlton, Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity, Med Hypotheses 72 (2009), pp. 237–243.
[15] R.J. Herrnstein and C. Murray, The bell curve: intelligence and class structure in American life, Free Press, New York (1994).
[16] I.J. Deary, G.D. Batty and C.R. Gale, Bright children become enlightened adults, Psychol Sci 19 (2008), pp. 1–6.
[17] L. Penke, J.J. Denissen and G.F. Miller, The evolutionary genetics of personality, Eur J Personality 21 (2007), pp. 549–587.
Saturday, 11 July 2009
Replacing education with psychometrics
Bruce G Charlton
Replacing education with psychometrics: How learning about IQ almost-completely changed my mind about education.
Medical Hypotheses. 2009; 73: 273-277
***
Summary
I myself am a prime example of the way in which ignorance of IQ leads to a distorted understanding of education (and many other matters). I have been writing on the subject of education – especially higher education, science and medical education – for about 20 years, but now believe that many of my earlier ideas were wrong for the simple reason that I did not know about IQ. Since discovering the basic facts about IQ, several of my convictions have undergone a U-turn. Just how radically my ideas were changed has been brought home by two recent books: Real Education by Charles Murray and Spent by Geoffrey Miller. Since IQ and personality are substantially hereditary, and rankings (although not absolute levels) are highly stable throughout a person’s adult life, differential educational attainment within a society is mostly determined by heredity and therefore not by differences in educational experience. This implies that education is about selection more than enhancement, and educational qualifications mainly serve to ‘signal’ or quantify a person’s hereditary attributes. So education mostly functions as an extremely slow, inefficient and imprecise form of psychometric testing. It would therefore be easy to construct a modern educational system that was both more efficient and more effective than the current one. I now advocate a substantial reduction in the average amount of formal education and in the proportion of the population attending higher education institutions. At the age of about sixteen each person could leave school with a set of knowledge-based examination results demonstrating their level of competence in a core knowledge curriculum; and with usefully precise and valid psychometric measurements of their general intelligence and personality (especially their age-ranked degree of Conscientiousness). However, such change would result in a massive down-sizing of the educational system, and this is a key underlying reason why IQ has become a taboo subject. Miller suggests that academics at the most expensive, elite, intelligence-screening universities tend to be sceptical of psychometric testing precisely because they do not want to be undercut by cheaper, faster, more-reliable IQ and personality evaluations.
***
Introduction
It was only in early 2007 that I began properly to engage, for the first time in my professional career, with the literature on IQ. Surprisingly, this engagement had been stimulated by a book of economic history. And learning the basic facts about IQ rapidly changed my views on many things, none more so than education.
Just how radically my ideas about education were changed by learning about IQ has been brought home by two recent books: Real Education by Charles Murray [1] and Spent by Geoffrey Miller [2]. In line with the analyses of Murray and Miller, I would now repudiate many of my previous opinions on the subject, and advocate a substantial reduction in the average amount of formal education and in the proportion of the population attending higher education. In general, I now believe that many years of formal education can and should be substantially (but not entirely!) replaced with ‘psychometric’ measures of intelligence and personality as a basis for evaluating career potential.
In this article I use my own experience as a case study of the potentially-disruptive influence of psychometric knowledge, and discuss further the reasons why basic IQ facts have been so effectively concealed, confused and denied by mainstream elite intellectual opinion in the UK and USA.
The importance of IQ
I have been writing on the subject of education for about 20 years (especially on higher education, science and medical education), but I now believe that much of what I wrote was wrong for the simple reason that I did not know about IQ. Personality traits are important in a similar way to IQ; however, personality measurement is currently less reliable and valid than IQ testing, and less-well quantified.
In the early 2000s I argued that modern formal education should be directed primarily at inculcating the ability to think abstractly and systematically [3] and that therefore the structure and not the specific content of education was critical (although ‘science’ – broadly defined – was likely to be the best basis for this type of education [4]). I suggested that higher education should be regarded as a non-vocational process, in which most degrees are modular, and modules were optional and multi-disciplinary, so that each student would assemble their own degree program in a minimally-constrained, ‘pick and mix’ fashion [5]. I also contended that since abstract systemizing cognition was so essential to modernizing societies, a major aim of social reform should be to include as many people as possible in formal education for as long as possible [6].
All of these views I would now regard as mistaken – and the reason is mostly my new understanding of IQ [7], [8], [9] and [10]. Miller concisely explains the basic facts about IQ:
“General intelligence (a.k.a. IQ, general cognitive ability, the g factor) is a way of quantifying intelligence’s variability among people. It is the best-established, most predictive, most heritable mental trait ever discovered in psychology. Whether measured with formal IQ tests or assessed through informal conversations and observations, intelligence predicts objective performance and learning ability across all important life-domains that show reliable individual differences” [2].
The crux of my new understanding is that IQ, and to a lesser but important extent personality traits, are highly predictive of educational attainment. This is a very old finding, and scientifically uncontroversial – but the implications have still not been acknowledged.
Since IQ is very substantially inherited, with a true heritability of about 80% [7], [8], [9] and [10], and personality too has about a 50% heritability [11] and [12]; and since both IQ and personality rankings are highly stable throughout a person’s adult life [13] (it is, for example, very difficult for educational interventions to have any significant and lasting effect on underlying IQ [1]) – this implies that differential educational attainment within societies is mostly determined by heredity and therefore not by differences in educational experience.
(The other big factor which influences attainment is of course the large element of chance – which affects individuals unpredictably. However, chance is not completely random, in the sense that many outcomes such as accidental injuries and a range of illnesses are also correlated with IQ and personality [14]).
When full account has been taken of IQ and personality (and the measured effects of IQ and personality have been increased to take account of the inevitable imprecision of IQ measurements and the even greater difficulties of determining personality), and when the presumed effects of chance have also been subtracted – then there is not much variation in outcomes left over within which educational differences could have an effect. Of course there will be some systemic effect of educational differences, but the effect is likely to be very much smaller than generally assumed, and even the direction of the education effect may be hard to detect when other more powerful factors are operative [1].
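To make the variance-accounting argument concrete, here is a minimal numerical sketch in Python. All of the percentage shares are invented purely for illustration (they are not estimates from the cited literature); the point is only that once large shares of outcome variance are assigned to IQ, personality and chance, the residual available to educational differences is small.

```python
# Purely illustrative variance budget for educational attainment.
# The shares below are hypothetical assumptions, not empirical estimates.

r2_iq = 0.50           # assumed share explained by (disattenuated) IQ
r2_personality = 0.15  # assumed share explained by Conscientiousness etc.
r2_chance = 0.25       # assumed share due to illness, accidents, luck

residual = 1.0 - (r2_iq + r2_personality + r2_chance)
print(f"Variance left for educational differences: {residual:.0%}")
# -> Variance left for educational differences: 10%
```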
I found the fact that differences in educational attainment within a society are mostly due to heredity to be a stunning conclusion, which effectively demolished most of what I believed about education. My understanding of what education was doing was radically reshaped, and my beliefs about the justifiable duration and proper focus of the system of formal education were transformed. I began to realize that the educational system in modern societies was operating under false pretences. It seems that current educational systems are barely ‘fit for purpose’ and (lacking a proper understanding of IQ) are in many instances progressively getting worse rather than better.
In sum, education is more about selection than enhancement, and educational qualifications mostly serve to ‘signal’ or quantify a person’s hereditary attributes [15] – especially IQ and personality. Differential educational experience does not seem to have much of a systemic effect on people’s ability to think or work.
To put it another way – education mostly functions as an extremely slow, inefficient and imprecise form of psychometric testing. And because this fact is poorly understood, those aspects of modern education which are not psychometric are consequently neglected and misdirected.
Policy implications of psychometrics
If psychometric measures of IQ and personality were available, then it would be easy to construct a modern educational system that was both more efficient and more effective than the current one. However, such change would result in a massive down-sizing of the educational system – with substantial and permanent loss of jobs and status for educational professionals of all types including teachers, professors, administrators and managers.
According to Geoffrey Miller’s analysis [2], this impact on educational professionals is likely to be a key underlying reason why IQ has become a taboo subject, and why the basic facts of IQ have been so effectively obfuscated. Miller notes that it is the ultra-elite, most-selective and heavily research-oriented universities which are the focus of IQ resistance. At the same time more functionally-orientated institutions, such as the United States military, have for many decades quietly been using IQ as a tool to assist with selection and training allocations [16].
“Is it an accident that researchers at the most expensive, elite, IQ-screening universities tend to be most sceptical of IQ tests? I think not. Universities offer a costly, slow, unreliable intelligence-indicating product that competes directly with cheap, fast, more-reliable IQ tests. (…) Harvard and Yale sell nicely printed sheets of paper called degrees that cost about $160,000 (…). To obtain the degree, one must demonstrate a decent level of Conscientiousness, emotional stability, and openness in one’s coursework, but above all, one must have the intelligence to get admitted, based on SAT scores and high school grades. Thus the Harvard degree is basically an IQ guarantee”.
“Elite universities do not want to be undercut by competitors. They do not want their expensive IQ-warranties to suffer competition from cheap, fast IQ tests which would commodify the intelligence-display market and drive down costs. Therefore, elite universities have a hypocritical, love-hate relationship with intelligence tests”.
The elite institutions are vulnerable to IQ knowledge because most of the assumed advantages of an expensive elite education can be ascribed to their historic ability to select the top stratum of IQ (and also the most desirable personality types): given the stability and predictive power of these traits, the elite students are therefore pre-determined to be (on average) highly successful.
Consequently the most elite institutions and their graduates have in the past few decades, both via academic publications and in the mass media, thoroughly obscured the basic and validated facts about IQ. We now have a situation where the high predictive powers of IQ and personality and the stable and hereditary nature of these traits are routinely concealed, confused or (in extremis) explicitly denied by some of the most prestigious and best-educated members of modern society [17].
Four mistaken beliefs resulting from my lack of IQ knowledge
I will summarize under four headings my main pre-IQ errors regarding education.
Mistaken belief number 1: Modern formal education should be directed primarily at inculcating the ability to think abstractly and systematically [3].
Revision: Modern formal education should be directed primarily at inculcating specific knowledge content.
Abstract systematic thinking is exceptionally important in modern societies. And I used to believe that abstract systematic thinking was mostly a product of formal education – indeed I regarded this as the main function of formal education [3]. But I now recognize that abstract systematic thinking is pretty close to a definition of IQ; and that strongly IQ-related (or heavily ‘g-loaded’) educational outcomes – such as differentials in reading comprehension and mathematical ability – are very difficult or impossible to improve in a real and sustained fashion by educational interventions [1].
In other words, a person’s level of ability to think abstractly and systematically is mostly a biological given – and not a consequence of formal education. The implication is that formal education should not be focusing on trying to do what it cannot do – i.e. enhance IQ. Instead, formal education should focus on educational goals where it can make a difference: i.e. the teaching of specific knowledge [1].
Mistaken belief number 2: Structure not content of formal education is crucial [5].
Revision: Content not structure of education is crucial.
I used to think that it did not matter what subject was studied in formal education, so long as the method of education was one which nurtured abstract systematic thinking [3]. I believed that how we learned was more important than what we learned, because I believed that abstract systematic thinking was a result of formal education – and this cognitive ability was more important than any particular body of information which had been memorized.
This line of reasoning meant that I favoured ‘pick and mix’, wide choice and multi-disciplinary curricula as a method of improving motivation by allowing students to study what most interested them, and giving students practice in learning new material and applying systematic thinking in many knowledge domains [5].
The reason that I believed all this has been summarized by Geoffrey Miller:
“The highly selective credential with little relevant content [such as an elite college degree in any subject] often trumps the less-selective credential with very relevant content. Nor are such preferences irrational. General intelligence is such a powerful predictor of job performance that a content-free IQ guarantee can be much more valuable to an employer or graduate school than a set of rote-learned content with no IQ guarantee” [2].
Since IQ is such a powerful influence on educational (and other) outcomes [18], the value of specific educational content is therefore only apparent when IQ has been controlled-for. Since IQ is routinely ignored or denied, the value of educational content is not apparent in outcomes which are sensitive to differences in general intelligence.
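As a sketch of what ‘controlling for IQ’ means in practice, the following Python fragment builds synthetic data in which general intelligence dominates an outcome while curriculum content adds a small, real, independent effect. All of the coefficients are invented for demonstration. In the raw correlation the content effect is diluted by IQ-driven variance; once the IQ-predicted component is removed, it becomes more visible.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

z_iq = rng.standard_normal(n)      # standardized general intelligence
content = rng.standard_normal(n)   # exposure to specific curriculum content
noise = rng.standard_normal(n)

# Hypothetical outcome: dominated by IQ, with a small real content effect.
outcome = 1.5 * z_iq + 0.15 * content + noise

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"raw r(content, outcome)   = {r(content, outcome):.2f}")   # ~0.08

# 'Controlling for IQ': correlate content with the part of the outcome
# that IQ does not predict (the residual of a regression on IQ).
slope = np.cov(z_iq, outcome)[0, 1] / np.var(z_iq)
resid = outcome - slope * z_iq
print(f"r with IQ controlled-for  = {r(content, resid):.2f}")     # ~0.15
```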
Murray argues that variations in the structure and methods of education are not able significantly to influence those educational outcomes which are ‘g-loaded’, such as reading comprehension or mathematical reasoning [1]. Numerous attempts to raise real long-term intelligence (rather than merely raising specific test scores) have failed [19]. However, the subject matter being studied will (obviously!) make a big difference to what gets learned. Once we set aside the delusional goal of enhancing IQ by educational reform, then the subject matter – or curriculum – becomes a more important focus than educational structure and methods.
Charles Murray therefore endorses the approach to ‘Cultural Literacy’, or a core knowledge curriculum, pioneered by E.D. Hirsch (www.coreknowledge.org). This educational philosophy focuses on constructing a comprehensive curriculum of the factual material that people should know, or ‘need to know’. Over the past couple of decades some detailed and well-validated programmes of study have been developed for the USA, and these can be purchased by educational institutions and also by home-schooling parents.
It is claimed that such a core knowledge curriculum should enable the student to become a citizen participating at the highest possible social level, and that a shared education in core knowledge should hold society together with a stronger ‘cultural glue’. If such benefits are real, then school, especially between the ages of about 6 and 14, is the best place to follow such a program; since, although the core curriculum involves more than mere memorization, nonetheless memorization is an important element – and young children can memorize information much more easily and lastingly than adults [1].
Understanding IQ has therefore provoked me into a U-turn on the matter of curricula. I now believe that what we learn in formal education is more important than how we learn, because what we learn can have a lasting effect on what we know; while how we learn does not, after all, teach us how to think.
Mistaken belief number 3: A major aim of social reform should be to include as many people as possible in formal education for as long as possible. Ever-more people should get ever-more education for the foreseeable future [6].
Revision: The system of formal education is hugely over-expanded and should be substantially reduced (to considerably less than half its current size). The average person should receive fewer years of formal education, fewer people should attend higher education institutions or take bachelor’s degrees, and those in higher education should – on average – complete the process in fewer years.
The proportion of school leavers entering higher education in the UK has at least trebled over the past three decades, from around 15% to more than 45%. The rationale behind this vast expansion was based on the observation of higher all-round performance among college graduates – better performance in jobs, and also a wide range of other good outcomes including improved health and happiness [6].
However, it turns out that almost all of this differential in behaviours can be explained in terms of selection for (mostly hereditary) intelligence, rather than these improvements being something added to individuals by their educational experience. The main extra information provided by the successful completion of prolonged educational programs (i.e. extra in addition to signalling IQ) is that educational certification provides a broadly-reliable signal of a highly-Conscientious personality.
Miller has neatly described this trait: “Conscientiousness is the Big Five personality trait that includes such characteristics as integrity, reliability, predictability, consistency, and punctuality. It predicts respect for social norms and responsibilities, and the likelihood of fulfilling promises and contracts. A century ago, people would have called it character, principle, honor, or moral fiber. (…) Conscientiousness is lower on average in juveniles, and it matures slowly with age” [2].
Other attributes of a highly-Conscientious personality are self-discipline, perseverance and long-termism [20].
But a person’s degree of Conscientiousness is not a product of their educational experience; rather it is a mostly-inherited psychological attribute which develops throughout life, the relative (or differential) possession of which is stable throughout life [13]. In other words, Conscientiousness is (mostly) an innate ability in a similar sense to intelligence – and similarly difficult to influence by educational means.
It turns out that modern formal education is mainly signalling [15], or providing indirect evidence about, a person’s IQ and personality attributes, which they have mostly inherited [1] and [2]. This means that imposing an ever-increasing number of years of formal education on an ever-increasing proportion of the population is ever-increasingly inefficient – wasting years of people’s lives, wasting vast amounts of money on educational provision, and imposing huge economic and social ‘opportunity costs’ by forcing people to remain in formal education when their time would often be better spent doing something else (for example something economically-productive or something more personally-fulfilling).
Mistaken belief number 4: Higher education should be regarded as a general, non-vocational process, in which most degrees are modular and multi-disciplinary; and where specialization or vocational preparation should be a relatively brief and ‘last-minute’ training at the end of a long process of education [3], [5] and [6].
Revision: The period of general education should not extend much beyond about 16 (the approximate age of IQ maturity), and this general education should be focused on the basic skills of literacy and numeracy together with a core knowledge curriculum.
At the age of about 16 each person could potentially leave school with a set of knowledge-based examination results demonstrating their level of competence in a core knowledge curriculum; and with usefully precise and valid psychometric measurements of their general intelligence and personality (especially their age-ranked degree of Conscientiousness). The combination of psychometric measures of IQ and Conscientiousness would serve the same kind of function as educational evaluations do at present, providing a basis for employment selection or valid predictions to guide the allocation of access to further levels of formal education.
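Purely as an illustration of what such a school-leaving record might contain, here is a hypothetical sketch in Python; the field names, scales and grades are all invented for demonstration, not a specification from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SchoolLeavingRecord:
    """Hypothetical certificate combining exam results with psychometrics."""
    name: str
    age: int
    core_knowledge_results: dict = field(default_factory=dict)  # subject -> grade
    iq_score: int = 100                         # general intelligence (mean 100, SD 15)
    conscientiousness_percentile: float = 50.0  # rank within own age cohort

record = SchoolLeavingRecord(
    name="A. Student",
    age=16,
    core_knowledge_results={"mathematics": "B", "English": "A", "science": "B"},
    iq_score=115,
    conscientiousness_percentile=80.0,
)
print(record)
```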
Beyond this I believe that most education should be ‘functional’ or vocational, in the sense of being a relatively-focused training in the knowledge and skills required to do something specific. This functional post-sixteen formal higher education could vary in duration from weeks or months (for semi-skilled jobs) to several years (for access to the starting level of the most highly skilled and knowledge-intensive professions such as architecture, engineering, medicine or law).
But when IQ and personality measurements are available, then the majority of ‘white collar’ jobs – jobs such as management, administration, or school teaching (up to the age of about 16) – would no longer require a college degree. Instead specific knowledge-based training would be provided ‘on the job’, presumably by the traditional mixture of a formally-structured curriculum for imparting the core knowledge and systematic elements with apprenticeship and individual instruction in order to impart specialized skills.
Murray also suggests that much specialist educational certification for careers could in principle be better done by rigorous public examinations, such as those for accountancy, than by minimum-duration college degrees [1].
Measuring personality
The main unsolved problem for this psychometric approach is the evaluation of personality. Most of the current evidence for the predictive and explanatory power of personality comes from self-rating questionnaires, and clearly these would not be suitable for educational and job evaluations, since it is easy to learn the responses which would lead to a high rating for Conscientiousness.
Rather than being simply asserted in a questionnaire, a Conscientious, persevering, self-disciplined personality needs to be demonstrated in actual practice. The modern educational system has, inadvertently, evolved in the direction of requiring higher levels of Conscientiousness [20]. The main factor in this evolution has been the progressive lengthening of the educational process (in the UK the modal age for leaving formal education has increased from 16 to about 21 in the space of 30 years), but educational evaluations have also become less IQ-orientated (less g-loaded) and more dependent upon the ability of students frequently and punctually to complete neat and regular course work assignments [20] and [21].
However, the modern educational system is not explicitly aware that it is measuring Conscientiousness – the changes have been an accidental by-product of other trends, and there was not a deliberate attempt to enhance Conscientiousness-selectivity as a matter of policy. Because the educational system is blind to the consequences of its own actions, there are counter-pressures to make course work easier and more-interesting and to offer more choices – when in fact it would be a more efficient and accurate measure of Conscientiousness to have students complete compulsory, dull and irrelevant tasks which required a great deal of toil and effort!
However, it may be socially-preferable to have students prove their Conscientiousness in the realm of economic employment rather than by setting them pointless and grinding work in a formal educational context. There are plenty of dull and demanding but necessary jobs, the successful and sufficiently-prolonged accomplishment of which could serve as a valid and reliable signal of Conscientiousness. So it would be more useful for people to prove their level of Conscientiousness in the arena of paid work than to have this measurement task done by formal educational institutions.
An alternative suggestion for evaluating Conscientiousness comes from Geoffrey Miller, who advocates using broad surveys of opinion from families, peers, employers or any reliable and informed person who is in prolonged social contact with the subject [2].
Conclusions
I have previously written about the extraordinary way in which knowledge of IQ in particular, and psychometrics in general, is ‘hidden in plain sight’ in modern culture [17]. The basic facts about IQ are accessible, abundant and convincing for those who take the trouble to look; but modern mainstream intellectual culture has for around half a century ‘immunized’ most educated people against looking at or learning about IQ by multiple forms of misinformation and denigration [22] and [23].
The recent books of Murray and Miller marshal more strongly than before the evidence that one major reason for its taboo status is that IQ knowledge has extremely damaging implications for the vast and expanding system of formal education which employs many intellectuals directly, and which provides almost all other intellectuals with the credentials upon which their status and employability depend. Miller’s phrase is worth repeating: “they do not want their expensive IQ-warranties to suffer competition from cheap, fast IQ tests which would commodify the intelligence-display market and drive down costs” [2].
Murray argues that a properly-demanding four-year, general, core knowledge-based ‘liberal arts’ degree would be valuable as a pre-specialization education for the high-IQ intellectual elite [1]. Perhaps because I am a product of the (now disappeared) traditional English system of early educational specialization, I am unconvinced about the systematic benefits of general education at a college level. I suspect that the most efficient pattern of higher education would be to specialize at age 16 (or earlier for the highest-IQ individuals) on completion of the standard core knowledge program; and that liberal arts should mainly be seen as an avocation (done for reasons of personal fulfilment) rather than a vocation (done as a job).
In other words, a liberal arts education beyond core knowledge could, and perhaps should, be optional and provided by the market, rather than being included in the educational ‘system’. For example, in the UK such an education is universally available without any residential requirement at a reasonable price and high quality via the Open University (www3.open.ac.uk/about).
But in a system where objective IQ and personality evaluations were available as signals of aptitude, it could be left to ‘the market’ to decide whether the possession of a rigorous four-year general liberal arts degree opened more doors, or attracted any extra premium of status, salary or conditions, compared with a specialized, early vocational degree such as medicine, law, architecture, engineering, or one of the sciences. (There would presumably also be some specialist arts and humanities degrees, mainly vocationally-orientated towards training high-level school and college teachers – as was the traditional English practice until about 40 years ago [3].)
In summary, modern societies are currently vastly over-provided with formal education, and this education has the wrong emphasis. In particular, the job of sorting people by their general aptitude could be done more accurately, cheaply and quickly by using psychometrics to measure IQ and Conscientiousness. This would free up time and energy for early training in key skills such as reading, writing and mathematics, and for a focus on a core knowledge curriculum.
However, for reasons related to self-interest, the intellectual class do not want people to know the basic facts about IQ; and since the intellectual class provide the information upon which the rest of society depends for its understanding, most people consequently do not know these facts. And lacking knowledge of IQ, people are not able to understand the education system and what it actually does.
I can point to myself as a prime example of the way in which ignorance of IQ leads to a distorted understanding of education. Before I knew about the basic facts of IQ, I had articulated what seemed to be a rational and coherent set of beliefs about education. But since discovering the facts about IQ several of my convictions have undergone what amounts to a U-turn.
Acknowledgements
“A Farewell to Alms: a brief economic history of the world” by Gregory Clark (Princeton University Press: Princeton, NJ, USA, 2007) was the book of economic history which first stimulated my (belated) engagement with the scientific literature of intelligence and personality. The web pages of Steve Sailer have since provided both an invaluable introduction and also a higher education in the subject (e.g. www.isteve.com/Articles_IQ.htm).
References
[1] C. Murray, Real education: four simple truths for bringing America’s schools back to reality, Crown Forum, New York (2008).
[2] G. Miller, Spent: sex, evolution and consumer behaviour, Viking, New York (2009).
[3] B.G. Charlton and P. Andras, Auditing as a tool of public policy – the misuse of quality assurance techniques in the UK university expansion, Eur Polit Sci 2 (2002), pp. 24–35.
[4] B.G. Charlton, Science as a general education: conceptual science should constitute the compulsory core of multi-disciplinary undergraduate degrees, Med Hypotheses 66 (2006), pp. 451–453.
[5] B.G. Charlton and P. Andras, The educational function and implications for teaching of multi-disciplinary modular (MDM) undergraduate degrees, OxCHEPS Occasional Paper No. 12 (2003). http://oxcheps.new.ox.ac.uk
[6] B.G. Charlton and P. Andras, Universities and social progress in modernizing societies: how educational expansion has replaced socialism as an instrument of political reform, Crit Quart 47 (2005), pp. 30–39.
[7] N.J. Mackintosh, IQ and human intelligence, Oxford University Press, Oxford (1998).
[8] A.R. Jensen, The g factor: the science of mental ability, Praeger, Westport, CT, USA (1998).
[9] U. Neisser et al., Intelligence: knowns and unknowns, Am Psychol 51 (1996), pp. 77–101.
[10] I.J. Deary, Intelligence: a very short introduction, Oxford University Press, Oxford (2001).
[11] J.R. Harris, The nurture assumption: why children turn out the way they do, Bloomsbury, London (1998).
[12] D. Nettle, Personality: what makes you the way you are, Oxford University Press, Oxford, UK (2007).
[13] P.T. Costa and R.R. McCrae, Stability and change in personality from adolescence through adulthood. In: C.F. Halverson Jr., G.A. Kohnstamm and R.P. Martin, Editors, The developing structure of temperament and personality from infancy to adulthood, Lawrence Erlbaum Associates, Hillsdale, NJ, USA (1994), pp. 139–150.
[14] G.D. Batty, I.J. Deary and L.S. Gottfredson, Pre-morbid (early life) IQ and later mortality risk: systematic review, Ann Epidemiol 17 (2007), pp. 278–288.
[15] B. Caplan, Mixed signals: why Becker, Cowen, and Kling should reconsider the signaling model of education. Accessed 06.04.09.
[16] R.J. Herrnstein and C. Murray, The bell curve: intelligence and class structure in American life, Free Press, New York (1994).
[17] B.G. Charlton, Pioneering studies of IQ by G.H. Thomson and J.F. Duff – an example of established knowledge subsequently ‘hidden in plain sight’, Med Hypotheses 71 (2008), pp. 625–628.
[18] L.S. Gottfredson, Implications of cognitive differences for schooling within diverse societies. In: C.L. Frisby and C.R. Reynolds, Editors, Comprehensive handbook of multicultural school psychology, Wiley, New York (2005), pp. 517–554.
[19] H.H. Spitz, The raising of intelligence: a selected history of attempts to raise retarded intelligence, Erlbaum, Hillsdale, NJ, USA (1986).
[20] B.G. Charlton, Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity, Med Hypotheses 72 (2009), pp. 237–243.
[21] B.G. Charlton, Sex ratios in the most-selective elite undergraduate US colleges and universities are consistent with the hypothesis that modern educational systems increasingly select for conscientious personality compared with intelligence, Med Hypotheses (2009), in press. doi:10.1016/j.mehy.2009.03.016.
[22] A. Wooldridge, Measuring the mind: education and psychology in England, c.1860–c.1990, Cambridge University Press, Cambridge, UK (1994).
[23] L.S. Gottfredson, Logical fallacies used to dismiss the evidence on intelligence testing. In: R. Phelps, Editor, Correcting fallacies about educational and psychological testing, American Psychological Association, Washington, DC (2009), pp. 11–65.
Replacing education with psychometrics: How learning about IQ almost-completely changed my mind about education.
Medical Hypotheses. 2009; 73: 273-277
***
Summary
I myself am a prime example of the way in which ignorance of IQ leads to a distorted understanding of education (and many other matters). I have been writing on the subject of education – especially higher education, science and medical education – for about 20 years, but now believe that many of my earlier ideas were wrong for the simple reason that I did not know about IQ. Since discovering the basic facts about IQ, several of my convictions have undergone a U-turn. Just how radically my ideas were changed has been brought home by two recent books: Real Education by Charles Murray and Spent by Geoffrey Miller. Since IQ and personality are substantially hereditary and rankings (although not absolute levels) are highly stable throughout a persons adult life, this implies that differential educational attainment within a society is mostly determined by heredity and therefore not by differences in educational experience. This implies that education is about selection more than enhancement, and educational qualifications mainly serve to ‘signal’ or quantify a person’s hereditary attributes. So education mostly functions as an extremely slow, inefficient and imprecise form of psychometric testing. It would therefore be easy to construct a modern educational system that was both more efficient and more effective than the current one. I now advocate a substantial reduction in the average amount of formal education and the proportion of the population attending higher education institutions. At the age of about sixteen each person could leave school with a set of knowledge-based examination results demonstrating their level of competence in a core knowledge curriculum; and with usefully precise and valid psychometric measurements of their general intelligence and personality (especially their age ranked degree of Conscientiousness). However, such change would result in a massive down-sizing of the educational system and this is a key underlying reason why IQ has become a taboo subject. Miller suggests that academics at the most expensive, elite, intelligence-screening universities tend to be sceptical of psychometric testing; precisely because they do not want to be undercut by cheaper, faster, more-reliable IQ and personality evaluations.
***
Introduction
It was only in early 2007 that I began properly to engage, for the first time in my professional career, with the literature on IQ. Surprisingly, this engagement had been stimulated by a book of economic history. And learning the basic facts about IQ rapidly changed my views on many things, none more so than education.
Just how radically my ideas about education were changed by learning about IQ has been brought home by two recent books: Real Education By Charles Murray [1] and Spent by Geoffrey Miller [2]. In line with analyses of Murray and Miller, I would now repudiate many of my previous opinions on the subject, and advocate a substantial reduction in the average amount of formal education and the proportion of the population attending higher education. In general, I now believe that many years of formal education can and should be substantially (but not entirely!) replaced with ‘psychometric’ measures of intelligence and personality as a basis for evaluating career potential.
In this article I use my own experience as a case study of the potentially-disruptive influence of psychometric knowledge, and discuss further the reasons why basic IQ facts have been so effectively concealed, confused and denied by mainstream elite intellectual opinion in the UK and USA.
The importance of IQ
I have been writing on the subject of education for about 20 years (especially on higher education, science and medical education), but I now believe that much of what I wrote was wrong for the simple reason that I did not know about IQ. Personality traits are important in a similar way to IQ, however personality measurement is currently less reliable and valid than IQ testing, and less-well quantified.
In the early 2000s I argued that modern formal education should be directed primarily at inculcating the ability to think abstractly and systematically [3] and that therefore the structure and not the specific content of education was critical (although ‘science’ – broadly defined – was likely to be the best basis for this type of education [4]). I suggested that higher education should be regarded as a non-vocational process, in which most degrees are modular, and modules were optional and multi-disciplinary, so that each student would assemble their own degree program in a minimally-constrained, ‘pick and mix’ fashion [5]. I also contended that since abstract systemizing cognition was so essential to modernizing societies, a major aim of social reform should be to include as many people as possible in formal education for as long as possible [6].
All of these views I would now regard as mistaken – and the reason is mostly my new understanding of IQ [7], [8], [9] and [10]. Miller concisely explains the basic facts about IQ:
“General intelligence (a.k.a. IQ, general cognitive ability, the g factor) is a way of quantifying intelligence’s variability among people. It is the best-established, most predictive, most heritable mental trait ever discovered in psychology. Whether measured with formal IQ tests or assessed through informal conversations and observations, intelligence predicts objective performance and learning ability across all important life-domains that show reliable individual differences” [2].
The crux of my new understanding is that IQ, and to a lesser but important extent personality traits, are highly predictive of educational attainment. This is a very old finding, and scientifically uncontroversial – but the implications have still not been acknowledged.
Since IQ is very substantially inherited with a true heritability of about 80% [7], [8], [9] and [10] and personality too has about a 50% heritability [11] and [12]; and since both IQ and personality rankings are highly stable throughout a persons adult life [13] (it is, for example, very difficult for educational interventions to have any significant and lasting effect on underlying IQ [1]) – then this implies that differential educational attainment within societies is mostly determined by heredity and therefore not by differences in educational experience.
(The other big factor which influences attainment is of course the large element of chance – which affects individuals unpredictably. However, chance is not completely random, in the sense that many outcomes such as accidental injuries and a range of illnesses are also correlated with IQ and personality [14]).
When full account has been taken of IQ and personality (and the measured effects of IQ and personality have been increased to take account of the inevitable imprecision of IQ measurements and the even greater difficulties of determining personality), and when the presumed effects of chance have also been subtracted – then there is not much variation of outcomes left-over within which educational differences could have an effect. Of course there will be some systemic effect of educational differences, but the effect is likely to be very much smaller than generally assumed, and even the direction of the education effect may be hard to detect when other more powerful factors are operative [1].
I found the fact that differences in educational attainment within a society are mostly due to heredity to be a stunning conclusion, which effectively demolished most of what I believed about education. My understanding of what education was doing was radically reshaped, and my beliefs about the justifiable duration and proper focus of the system of formal education were transformed. I began to realize that the educational system in modern societies was operating under false pretences. It seems that current educational systems are barely ‘fit for purpose’ and (lacking a proper understanding of IQ) are in many instances progressively getting worse rather than better.
In sum, education is more about selection than enhancement, and educational qualifications mostly serve to ‘signal’ or quantify a person’s hereditary attributes [15] – especially IQ and personality. Differential educational experience does not seem to have much of a systemic effect on people’s ability to think or work.
To put it another way – education mostly functions as an extremely slow, inefficient and imprecise form of psychometric testing. And because this fact is poorly understood, those aspects of modern education which are not psychometric are consequently neglected and misdirected.
Policy implications of psychometrics
If psychometric measures of IQ and personality were available, then it would be easy to construct a modern educational system that was both more efficient and more effective than the current one. However, such change would result in a massive down-sizing of the educational system – with substantial and permanent loss of jobs and status for educational professionals of all types including teachers, professors, administrators and managers.
According to Geoffrey Miller’s analysis [2], this impact on educational professionals is likely to be a key underlying reason why IQ has become a taboo subject, and why the basic facts of IQ have been so effectively obfuscated. Miller notes that it is the ultra-elite, most-selective and heavily research-oriented universities which are the focus of IQ resistance. At the same time more functionally-orientated institutions, such as the United States military, have for many decades quietly been using IQ as a tool to assist with selection and training allocations [16].
“Is it an accident that researchers at the most expensive, elite, IQ-screening universities tend to be most sceptical of IQ tests? I think not. Universities offer a costly, slow, unreliable intelligence-indicating product that competes directly with cheap, fast, more-reliable IQ tests. (…) Harvard and Yale sell nicely printed sheets of paper called degrees that cost about $160,000 (…). To obtain the degree, one must demonstrate a decent level of Conscientiousness, emotional stability, and openness in one’s coursework, but above all, one must have the intelligence to get admitted, based on SAT scores and high school grades. Thus the Harvard degree is basically an IQ guarantee”.
“Elite universities do not want to be undercut by competitors. They do not want their expensive IQ-warranties to suffer competition from cheap, fast IQ tests which would commodify the intelligence-display market and drive down costs. Therefore, elite universities have a hypocritical, love-hate relationship with intelligence tests”.
The vulnerability of the elite institutions to IQ knowledge is because most of the assumed advantages of an expensive elite education can be ascribed to their historic ability to select the top stratum of IQ (and also the most desirable personality types): given the stability and predictive power of these traits the elite students are therefore pre-determined to be (on average) highly successful.
Consequently the most elite institutions and their graduates have in the past few decades, both via academic publications and in the mass media, thoroughly obscured the basic and validated facts about IQ. We now have a situation where the high predictive powers of IQ and personality and the stable and hereditary nature of these traits are routinely concealed, confused or (in extremis) explicitly denied by some of the most prestigious and best-educated members of modern society [17].
Four mistaken beliefs resulting from my lack of IQ knowledge
I will summarize under four heading my main pre-IQ errors regarding education.
Mistaken belief number 1: Modern formal education should be directed primarily at inculcating the ability to think abstractly and systematically [3].
Revision: Modern formal education should be directed primarily at inculcating specific knowledge content.
Abstract systematic thinking is exceptionally important in modern societies. And I used to believe that that abstract systematic thinking was mostly a product of formal education – indeed I regarded this as the main function of formal education [3]. But I now recognize that abstract systematic thinking is pretty close to a definition of IQ; and that strongly IQ related (or heavily ‘g-loaded’) educational outcomes – such as differentials in reading comprehension and mathematical ability – are very difficult/impossible to improve in a real and sustained fashion by educational interventions [1].
In other words, a person’s level of ability to think abstractly and systematically is mostly a biological given – and not a consequence of formal education. The implication is that formal education should not be focusing on trying to do what it cannot do – i.e. enhance IQ. Instead, formal education should focus on educational goals where is can make a difference: i.e. the teaching of specific knowledge [1].
Mistaken belief number 2: Structure not content of formal education is crucial [5].
Revision: Content not structure of education is crucial.
I used to think that it did not matter what subject was studied in formal education, so long as the method of education was one which nurtured abstract systematic thinking [3]. I believed that how we learned was more important than what we learned, because I believed that abstract systematic thinking was a result of formal education – and this cognitive ability was more important than any particular body of information which had been memorized.
This line of reasoning meant that I favoured ‘pick and mix’, wide choice and multi-disciplinary curricula as a method of improving motivation by allowing students to study what most interested them, and giving students practice in learning new material and applying systematic thinking in many knowledge domains [5].
The reason that I believed all this has been summarized by Geoffrey Miller:
“The highly selective credential with little relevant content [such as an elite college degree in any subject] often trumps the less-selective credential with very relevant content. Nor are such preferences irrational. General intelligence is such a powerful predictor of job performance that a content-free IQ guarantee can be much more valuable to an employer or graduate school than a set of rote-learned content with no IQ guarantee” [2].
Since IQ is such a powerful influence on educational (and other) outcomes [18], the value of specific educational content is therefore only apparent when IQ has been controlled-for. Since IQ is routinely ignored or denied, the value of educational content is not apparent in outcomes which are sensitive to differences in general intelligence.
Murray argues that variations in the structure and methods of education are not able significantly to influence those educational outcomes which are ‘g-loaded’ such as reading comprehension or mathematic reasoning [1]. Numerous attempts to raise real long-term intelligence (rather than merely raising specific test scores) have failed [19]. However, the subject matter being studied will (obviously!) make a big difference to what gets learned. Once we set aside the delusional goal of enhancing IQ by educational reform, then the subject matter – or curriculum – becomes a more important focus than educational structure and methods.
Charles Murray therefore endorses the approach to ‘Cultural Literacy’ or a core knowledge curriculum pioneered by Ed Hirsch (www.coreknowledge.org). This educational philosophy focuses on constructing a comprehensive curriculum of the factual material that people should know, or ‘need to know’. Over the past couple of decades some detailed and well-validated programmes of study have been developed for the USA, and these can be purchased by educational institutions and also home-schooling parents.
It is claimed that such a core knowledge curriculum should enable the student to become a citizen participating at the highest possible social level, and that a shared education in core knowledge should hold society together with a stronger ‘cultural glue’. If such benefits are real, then school, especially between the ages of about 6 and 14, is the best place to follow such a program; since, although the core curriculum involves more than mere memorization, nonetheless memorization is an important element – and young children can memorize information much more easily and lastingly than adults [1].
Understanding IQ has therefore provoked me into a U-turn on the matter of curricula. I now believe that what we learn in formal education is more important than how we learn, because what we learn can have a lasting effect on what we know; while how we learn does not, after all, teach us how to think.
Mistaken belief number 3: A major aim of social reform should be to include as many people as possible in formal education for as long as possible. Ever-more people should get ever-more education for the foreseeable future [6].
Revision: The system of formal education is hugely over-expanded and should be substantially reduced (to considerably less than half its current size). The average person should receive fewer years of formal education, fewer people should attend higher education institutions and do fewer bachelor’s degrees, and those in higher education should – on average – complete the process in fewer years.
The proportion of school leavers entering higher education in the UK has at least trebled over the past three decades, from around 15% to more than 45%. The rationale behind this vast expansion was based on the observation of higher all-round performance among college graduates – better performance in jobs, and also a wide range of other good outcomes including improved health and happiness [6].
However, it turns out that almost all of this differential in behaviours can be explained in terms of selection for (mostly hereditary) intelligence, rather than these improvements being something added to individuals by their educational experience. The main extra information provided by the successful completion of prolonged educational programs (i.e. extra in addition to signalling IQ) is that educational certification provides a broadly-reliable signal of a highly-Conscientious personality.
Miller has neatly described this trait: “Conscientiousness is the Big Five personality trait that includes such characteristics as integrity, reliability, predictability, consistency, and punctuality. It predicts respect for social norms and responsibilities, and the likelihood of fulfilling promises and contracts. A century ago, people would have called it character, principle, honor, or moral fiber. (…) Conscientiousness is lower on average in juveniles, and it matures slowly with age” [2].
Other attributes of a highly-Conscientious personality are self-discipline, perseverance and long-termism [20].
But a person’s degree of Conscientiousness is not a product of their educational experience; rather it is a mostly-inherited psychological attribute which develops throughout life, the relative (or differential) possession of which is stable throughout life [13]. In other words, Conscientiousness is (mostly) an innate ability in a similar sense to intelligence – and similarly difficult to influence by educational means.
It turns out that modern formal education is mainly signalling [15], or providing indirect evidence about, a person’s IQ and personality abilities which they have mostly inherited [1] and [2]. This means that imposing an ever-increasing number of years of formal education for an ever-increasing proportion of the population is ever-increasingly inefficient – and is wasting years of people’s lives, wasting vast amounts of money on the education provision, and imposing huge economic and social ‘opportunity costs’ by forcing people to remain in formal education when their time would often be better spent doing something else (for example something economically-productive or something more personally-fulfilling).
Mistaken belief number 4: Higher education should be regarded as a general, non-vocational process, in which most degrees are modular and multi-disciplinary; and where specialization or vocational preparation should be a relatively brief and ‘last-minute’ training at the end of a long process of education [3], [5] and [6].
Revision: The period of general education should not extend much beyond about 16 (the approximate age of IQ maturity), and this general education should be focused on the basic skills of literacy and numeracy together with a core knowledge curriculum.
At the age of about 16 each person could potentially leave school with a set of knowledge-based examination results demonstrating their level of competence in a core knowledge curriculum; and with usefully precise and valid psychometric measurements of their general intelligence and personality (especially their age ranked degree of Conscientiousness). The combination of psychometric measures of IQ and Conscientiousness would serve the same kind of function as educational evaluations do at present, providing a basis for employment selection or valid predictions to guide the allocation of access to further levels of formal education.
Beyond this I believe that most education should be ‘functional’ or vocational, in the sense of being a relatively-focused training in the knowledge and skills required to do something specific. This functional post-sixteen formal higher education could vary in duration from weeks or months (for semi-skilled jobs) to several years (for access to the starting level of the most highly skilled and knowledge-intensive professions such as architecture, engineering, medicine or law).
But when IQ and personality measurements are available, then the majority of ‘white collar’ jobs – jobs such as management, administration, or school teaching (up to the age of about 16) – would no longer require a college degree. Instead specific knowledge-based training would be provided ‘on the job’, presumably by the traditional mixture of a formally-structured curriculum for imparting the core knowledge and systematic elements with apprenticeship and individual instruction in order to impart specialized skills.
Murray also suggests that much specialist educational certification for careers could in principle be done better by rigorous public examinations, such as those for accountancy, than by minimum-duration college degrees [1].
Measuring personality
The main unsolved problem for this psychometric approach is the evaluation of personality. Most of the current evidence for the predictive and explanatory power of personality comes from self-rating questionnaires; and clearly these would not be suitable for educational and job evaluations, since it is easy to learn the responses which lead to a high rating for Conscientiousness.
Rather than being simply asserted in a questionnaire, a Conscientious, persevering, self-disciplined personality needs to be demonstrated in actual practice. The modern educational system has, inadvertently, evolved in the direction of requiring higher levels of Conscientiousness [20]. The main factor in this evolution has been the progressive lengthening of the educational process (in the UK the modal age for leaving formal education has increased from 16 to about 21 in the space of 30 years); but educational evaluations have also become less IQ-orientated (less g-loaded) and more dependent upon students’ ability to complete neat and regular coursework assignments frequently and punctually [20] and [21].
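(As an aside, the notion of an assessment being more or less ‘g-loaded’ can be made concrete: an assessment’s g-loading is, in essence, its correlation with the general-intelligence factor extracted from a battery of cognitive tests. The toy calculation below – using invented numbers purely for illustration – contrasts a highly g-loaded timed examination with weakly g-loaded coursework marks.)

    from statistics import correlation  # requires Python 3.10+

    # Invented figures for five students: general-factor (g) scores and
    # marks on two kinds of assessment. Illustrative only.
    g_scores   = [-1.2, -0.5, 0.0, 0.4, 1.3]
    exam_marks = [42, 55, 60, 66, 81]    # rises closely in step with g
    coursework = [70, 58, 72, 66, 69]    # nearly unrelated to g

    print(round(correlation(g_scores, exam_marks), 2))  # ~1.0: highly g-loaded
    print(round(correlation(g_scores, coursework), 2))  # ~0.15: weakly g-loaded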
However, the modern educational system is not explicitly aware that it is measuring Conscientiousness – these changes have been an accidental by-product of other trends, not a deliberate policy of enhancing Conscientiousness-selectivity. Because the educational system is blind to the consequences of its own actions, there are counter-pressures to make course work easier and more interesting and to offer more choices – when in fact it would be a more efficient and accurate measure of Conscientiousness to have students complete compulsory, dull and irrelevant tasks requiring a great deal of toil and effort!
However, it may be socially-preferable for students to prove their Conscientiousness in the realm of economic employment, rather than being set pointless and grinding work in a formal educational context. There are plenty of dull and demanding but necessary jobs, the successful and sufficiently-prolonged accomplishment of which could serve as a valid and reliable signal of Conscientiousness; so it would be more useful for people to demonstrate their level of Conscientiousness in the arena of paid work than to have this measurement task done by formal educational institutions.
An alternative suggestion for evaluating Conscientiousness comes from Geoffrey Miller, who advocates using broad surveys of opinion from families, peers, employers or any reliable and informed person who is in prolonged social contact with the subject [2].
Conclusions
I have previously written about the extraordinary way in which knowledge of IQ in particular, and psychometrics in general, is ‘hidden in plain sight’ in modern culture [17]. The basic facts about IQ are accessible, abundant and convincing for those who take the trouble to look; but for around half a century modern mainstream intellectual culture has ‘immunized’ most educated people against looking-at or learning about IQ by multiple forms of misinformation and denigration [22] and [23].
The recent books of Murray and Miller marshal more strongly than before the evidence that one major reason for the taboo status of IQ is that knowledge of it has extremely damaging implications for the vast and expanding system of formal education – a system which employs many intellectuals directly, and which provides almost all other intellectuals with the credentials upon which their status and employability depend. Miller’s phrase is worth repeating: “they do not want their expensive IQ-warranties to suffer competition from cheap, fast IQ tests which would commodify the intelligence-display market and drive down costs” [2].
Murray argues that a properly-demanding four-year, general and core knowledge-based, ‘liberal arts’ degree would be valuable as a pre-specialization education for the high-IQ intellectual elite [1]. Perhaps because I am a product of the (now disappeared) traditional English system of early educational specialization, I am unconvinced about the systematic benefits of general education at college level. I suspect that the most efficient pattern of higher education would be to specialize at age 16 (or earlier for the highest-IQ individuals), on completion of the standard core knowledge program; and that the liberal arts should mainly be seen as an avocation (pursued for personal fulfilment) rather than a vocation (done as a job).
In other words, a liberal arts education beyond core knowledge could, and perhaps should, be optional and provided by the market, rather than being included in the educational ‘system’. For example, in the UK such an education is universally available without any residential requirement at a reasonable price and high quality via the Open University (www3.open.ac.uk/about).
But in a system where objective IQ and personality evaluations were available as signals of aptitude, it could be left to ‘the market’ to decide whether the possession of a rigorous four-year general liberal arts degree opened more doors, or attracted any extra premium of status, salary or conditions, compared with a specialized early vocational degree such as medicine, law, architecture, engineering, or one of the sciences. (There would presumably also be some specialist arts and humanities degrees, mainly vocationally-orientated towards training high-level school and college teachers – as was the traditional English practice until about 40 years ago [3].)
In summary, modern societies are currently vastly over-provided with formal education, and this education has the wrong emphasis. In particular, the job of sorting people by their general aptitude could be done more accurately, cheaply and quickly by using psychometrics to measure IQ and Conscientiousness. This would free up time and energy for early training in key skills such as reading, writing and mathematics, and for a focus on a core knowledge curriculum.
However, for reasons of self-interest, the intellectual class do not want people to know the basic facts about IQ; and since the intellectual class provide the information upon which the rest of society depends for its understanding, most people do not know the basic facts about IQ. And lacking knowledge of IQ, people are unable to understand the education system and what it actually does.
I can point to myself as a prime example of the way in which ignorance of IQ leads to a distorted understanding of education. Before I knew the basic facts about IQ, I had articulated what seemed to be a rational and coherent set of beliefs about education. But since discovering the facts about IQ, several of my convictions have undergone what amounts to a U-turn.
Acknowledgements
“A Farewell to Alms: a brief economic history of the world” by Gregory Clark (Princeton University Press: Princeton, NJ, USA, 2007) was the book of economic history which first stimulated my (belated) engagement with the scientific literature of intelligence and personality. The web pages of Steve Sailer have since provided both an invaluable introduction and also a higher education in the subject (e.g. www.isteve.com/Articles_IQ.htm).
References
[1] C. Murray, Real education: four simple truths for bringing America’s schools back to reality, Crown Forum, New York (2008).
[2] G. Miller, Spent: sex, evolution and consumer behaviour, Viking, New York (2009).
[3] B.G. Charlton and P. Andras, Auditing as a tool of public policy – the misuse of quality assurance techniques in the UK university expansion, Eur Polit Sci 2 (2002), pp. 24–35.
[4] B.G. Charlton, Science as a general education: conceptual science should constitute the compulsory core of multi-disciplinary undergraduate degrees, Med Hypotheses 66 (2006), pp. 451–453.
[5] B.G. Charlton and P. Andras, The educational function and implications for teaching of multi-disciplinary modular (MDM) undergraduate degrees, OxCHEPS Occasional Paper No. 12 (2003). http://oxcheps.new.ox.ac.uk
[6] B.G. Charlton and P. Andras, Universities and social progress in modernizing societies: how educational expansion has replaced socialism as an instrument of political reform, CQ (Crit Quart) 47 (2005), pp. 30–39.
[7] N.J. Mackintosh, IQ and human intelligence, Oxford University Press, Oxford (1998).
[8] A.R. Jensen, The g factor: the science of mental ability, Praeger, Westport, CT, USA (1998).
[9] U. Neisser et al., Intelligence: knowns and unknowns, Am Psychol 51 (1996), pp. 77–101.
[10] I.J. Deary, Intelligence: a very short introduction, Oxford University Press, Oxford (2001).
[11] J.R. Harris, The nurture assumption: why children turn out the way they do, Bloomsbury, London (1998).
[12] D. Nettle, Personality: what makes you the way you are, Oxford University Press, Oxford (2007).
[13] P.T. Costa and R.R. McCrae, Stability and change in personality from adolescence through adulthood. In: C.F. Halverson Jr., G.A. Kohnstamm and R.P. Martin, Editors, The developing structure of temperament and personality from infancy to adulthood, Lawrence Erlbaum Associates, Hillsdale, NJ, USA (1994), pp. 139–150.
[14] G.D. Batty, I.J. Deary and L.S. Gottfredson, Pre-morbid (early life) IQ and later mortality risk: systematic review, Ann Epidemiol 17 (2007), pp. 278–288.
[15] B. Caplan, Mixed signals: why Becker, Cowen, and Kling should reconsider the signaling model of education.
[16] R.J. Herrnstein and C. Murray, The bell curve: intelligence and class structure in American life, Free Press, New York (1994).
[17] B.G. Charlton, Pioneering studies of IQ by G.H. Thomson and J.F. Duff – an example of established knowledge subsequently ‘hidden in plain sight’, Med Hypotheses 71 (2008), pp. 625–628.
[18] L.S. Gottfredson, Implications of cognitive differences for schooling within diverse societies. In: C.L. Frisby and C.R. Reynolds, Editors, Comprehensive handbook of multicultural school psychology, Wiley, New York (2005), pp. 517–554.
[19] H.H. Spitz, The raising of intelligence: a selected history of attempts to raise retarded intelligence, Erlbaum, Hillsdale, NJ, USA (1986).
[20] B.G. Charlton, Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity, Med Hypotheses 72 (2009), pp. 237–243.
[21] B.G. Charlton, Sex ratios in the most-selective elite undergraduate US colleges and universities are consistent with the hypothesis that modern educational systems increasingly select for conscientious personality compared with intelligence, Med Hypotheses, in press. doi:10.1016/j.mehy.2009.03.016.
[22] A. Wooldridge, Measuring the mind: education and psychology in England, c.1860–c.1990, Cambridge University Press, Cambridge, UK (1994).
[23] L.S. Gottfredson, Logical fallacies used to dismiss the evidence on intelligence testing. In: R. Phelps, Editor, Correcting fallacies about educational and psychological testing, American Psychological Association, Washington, DC (2009), pp. 11–65.