Thursday, 4 July 2013
*
Since I was sacked from editing Medical Hypotheses in May 2010, the Impact Factor (citations to Medical Hypotheses in the target year for papers published in the preceding two years - so that the 2012 IF is citations in that year to papers published in 2010 and 2011 - which means that the 2012 IF is still not free of the effect of papers I accepted while still editor in the first four months of 2010) has declined from being above average for all medical journals (and therefore considerably above average for all journals) to, well, mediocrity.
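For clarity, this is the standard two-year Impact Factor calculation being paraphrased above (a textbook definition, not text from the original post; note that the denominator - the count of citable items - is implied rather than stated):

$$\mathrm{IF}_{2012} = \frac{\text{citations received in 2012 by items published in 2010 and 2011}}{\text{number of citable items published in 2010 and 2011}}$$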
I do not note this fact merely from schadenfreude, but also because the journal which currently styles itself 'Medical Hypotheses' is a dishonest fake and a travesty of the vision bequeathed by the founder David Horrobin; and as such it ought to be closed down - and on present trends it surely will be.
Which is nice.
*
Thursday, 31 March 2011
David Horrobin's letter handing over Medical Hypotheses editorship
*
This was the letter in which he offered me the editorship of Medical Hypotheses.
This was posted on my Miscellany blog last year - but I am copying it here for the record.
***
Dear Bruce,
Although I am slowly recovering from the latest recurrence of my mantle cell lymphoma, I have to be realistic and accept the probability that I have only a year or so to live. Rather than leave everything to the last minute, I would rather put things in order now.
[Note: in fact David had less time than he hoped, and he died just a few weeks later on 1 April 2003. We had a chance to speak a couple more times on the telephone - once while he was actually having chemotherapy - but that was all.]
I am therefore writing to ask whether you would be willing to take over as Editor-in-Chief of Medical Hypotheses? Frankly you are the only person I really trust to take it over and run it in an open-minded fashion.
(...)
The primary criteria for acceptance are very different from the usual journals. In essence what I look for are answers to two questions only: Is there some biological plausibility to what the author is saying? Is the paper readable? We are NOT looking at whether or not the paper is true but merely at whether it is interesting.
I now make most of the editorial judgments myself unless I am really puzzled as to whether the paper is a lot of nonsense. I have found most referees far more trouble than they are worth: they are so used to standard refereeing which is usually aimed at trying to determine whether or not a paper is true that they are incapable of suspending judgment and end up being inappropriately hypercritical.
In the early days I used to spend a lot of time editing and rewriting papers which were poorly written or where the English was inadequate. However, I am now quite ruthless about not doing that which has greatly reduced the editorial burden. I simply return papers which are poorly written, suggesting that they are rewritten in conjunction with someone whose native language is English. If that does not produce a result then the paper is simply rejected.
I do very much hope that you will be willing to take up this proposal. Overall Medical Hypotheses is a lot of fun and it gives one access to a very wide range of interesting medical science. Since I started the journal in Newcastle in the 1970s there is an appropriateness in the possibility of it returning there.
Very best wishes, David Horrobin.
*
Saturday, 12 February 2011
Leigh Van Valen's letter of support for Medical Hypotheses
*
The late, and in my opinion almost-great, Leigh Van Valen was one of only a handful of eminent scientists who publicly supported the principles of Medical Hypotheses - indeed it was apparently one of the last things he did before dying a few months later:
http://www.nytimes.com/2010/10/31/us/31valen.html
I would like to preserve his comment from the online edition of Nature:
*
A more conspicuous statement in such a journal (perhaps with each paper) that it doesn't use peer review should be adequate warning for those who can't evaluate a paper themselves. For those who can, there are sometimes gems among the (often unintentionally humorous) matrix.
Genuine conceptual originality is by definition outside the accepted way of looking at things. It often has rough edges that can be easily refuted, thereby making its core seem suspect. And, indeed, most conceptual deviants are justifiably discarded.
Originality at the conceptual level can come from empirical discoveries. However, it can also come from looking at the world in a different way.
It's commonly recognized in the metascientific literature that conceptual originality is inversely related to publishability. As someone who has made some conceptually original contributions, I've noticed the same phenomenon myself.
More specifically, there are indeed occasional gems in Medical Hypotheses that would be difficult to publish elsewhere.
Medical Hypotheses does have mandatory publication charges, which discriminate against those without such money wherever such charges occur. Otherwise, though, I wish the journal well and hope that it will survive its current crisis.
2010-03-19 12:04:12
http://www.nature.com/news/2010/100318/full/news.2010.132.html
Posted by: leigh van valen
*
Note: "there are indeed occasional gems in Medical Hypotheses that would be difficult to publish elsewhere" - that satisfies me as an obituary for MeHy.
In fact, contrary to what LVV said, the mandatory publication charges for Medical Hypotheses had been abolished by Elsevier about a year before this letter - an action the publishers took without consulting me. The subsequent Medical Hypotheses Affair was therefore probably, *in part*, an unfortunate side effect of the resulting large (albeit self-inflicted) loss of income from the journal - which went at a stroke from being very profitable to only mildly so.
The editorial review system (and the fact that the journal was not peer reviewed) was very prominently noted on the title page of the journal, which included excerpts from and a link to an essay by me describing the rationale. A note on each paper could easily have been added, if the publisher had wanted to preserve the journal's true nature: but they wanted (and got) peer review and a mainstream, non-controversial journal - an anti-Medical Hypotheses.
Water under the bridge...
Van Valen had accepted my first evolutionary paper for publication in his own journal - years before this: http://www.hedweb.com/bgcharlton/endopara.html
*
Wednesday, 22 December 2010
Im-Personal reflections on the Medical Hypotheses Affair: Editors of major medical and scientific journals - The Dog That Didn't Bark
A brief account of the Medical Hypotheses Affair may be found here:
http://medicalhypotheses.blogspot.com/2010/05/medical-hypotheses-affair-times-higher.html
But there is one general aspect which I learned from the experience, and which is - I think - worth further emphasis.
This is the aspect of The Dog That Didn't Bark.
*
The Dogs whose silence throughout this episode was so highly significant were the editors of the major medical and scientific journals; indeed, the editors of all academic journals were silent.
Twenty-five years ago there would, without any shadow of doubt, have been vigorous comment on the happenings at Medical Hypotheses from (say) the editors of Nature, Science, the New England Journal of Medicine, JAMA, the Lancet, the British Medical Journal and others.
And the gist of this would have been: publishers must keep their hands off editorial independence.
Instead: silence.
Tumbleweed.
Crickets...
*
The MeHy Affair was a very explicit and highly public example of a publisher intervening directly to over-ride the editor of an established scholarly journal.
This was not merely affecting the conduct of academic discourse, but directly shaping the content of published academic discourse.
In their actions towards Medical Hypotheses, the publishers (Reed-Elsevier - who publish about 20 percent of the world's scholarly journals, and a higher proportion of those journals with high impact in their fields) decided what went into the scholarly literature and what did not.
More exactly, specific managers employed by a publishing corporation decided what went into the scholarly literature and what did not.
*
Specifically, the publishers of Medical Hypotheses acted unilaterally to withdraw two already e-published papers from a scholarly journal and delete them from the online records.
And then (in the period leading up to the editor being sacked) Elsevier managers continued to filter out papers that had been formally accepted for publication by the editor (in other words, papers that were officially 'in press') - but which these managers regarded as unacceptable in some way, and therefore withdrew from the publication process.
In other words, managers took direct control of the content of the published academic literature.
*
Why was The Silence of the Editors so significant?
In an abstract sense, Elsevier's behaviour contravened the basic established conduct of academic discourse - which is supposed to be independent of publishers and a matter decided between editors and scholars.
Indeed, this was, by a strict 'legalistic' definition, a direct breach of the principle of academic freedom.
So - even abstractly considered - it would be expected that leading journal editors would have raised objections to the corruption of academic discourse.
*
But there is a much more direct and personal reason to expect leading editors to comment.
Which is that condoning Elsevier's actions set a precedent for further instances whereby managers employed by publishers will simply over-ride editorial independence: managers will decide what gets into journals and what does not.
So, by remaining silent, each editor of each major journal made it more likely that in future their publisher would do the same to them as Elsevier did to me!
*
Why would leading editors of major journals condone such a thing?
There is a simple explanation: that they are afraid.
As in Vaclav Havel's Poster Test: the Silence of the Editors was a coded statement unambiguously (but deniably) meaning: "I am afraid and therefore unquestioningly obedient".
http://charltonteaching.blogspot.com/2010/08/vaclav-havels-poster-test.html
*
So now we know that the editors of leading scholarly journals are not independent.
That editors of leading journals are already doing what publishers want.
That the editors of leading journals have accepted this situation as a fait accompli.
*
This particularly applies to The Lancet, which is published by Elsevier.
In the past, the Lancet was a fiercely, indeed aggressively, independent journal.
Past editors of the Lancet would not have imagined for a moment acceding to managerial pressure from publishers.
Clearly things have changed, and the current Lancet is happy to operate as a smokescreen for the publisher's influence on the medical science literature.
*
Yet the current Lancet editors went one step further than merely acceding to pressure from the publishers: they actually assisted the publishers in over-riding editorial independence in a quasi-rational manner.
The Lancet arranged a 'show trial' whereby the papers which Elsevier management had withdrawn from Medical Hypotheses were 'refereed' by a group of anonymous persons, such that it could be claimed that the papers had been rejected by peer review.
This sham process was implemented by The Lancet despite the blazingly obvious paradox that the main point of Medical Hypotheses was that it was editorially reviewed - not peer reviewed - on the rationale that MeHy provided a forum for papers which would probably be rejected by peer review, but which merited publication as hypotheses for other reasons.
There is only one coherent conclusion: that the modern Lancet is a lap-dog of its publisher.
*
What did I conclude from the Dogs That Did Not Bark?
I realized that science was in an even worse state than I had previously recognized. That the level of corruption and deception went both deeper and further than I had supposed.
And that the role of major journals had moved beyond acquiescence with the forces of darkness and into actual collusion.
That, in fact, science was not just sick but in an advanced state of dissolution: and that indeed the head of the fish was by now dead and already putrefied.
**
Note added: Glen P Campbell - who seemed to be the senior Elsevier manager responsible for the Medical Hypotheses Affair, presumably including the over-riding of editorial autonomy - was subsequently appointed American director of the British Medical Journal in late 2013. This is consistent with the above argument.
From: International Association of Scientific, Technical & Medical Publishers - The Voice of Academic and Professional Publishing:
GLEN P. CAMPBELL, US Managing Director, The BMJ. Glen has been in STM publishing for more than 34 years, starting with Alan R. Liss in 1980. In 1984, he was appointed an Editor for books and journals in the life sciences, and continued at John Wiley & Sons after they acquired Liss. In 1990, Glen joined Elsevier as a Biomedical Journals Editor. Over more than 23 years at Elsevier, he held a number of positions with responsibility for setting strategies for the growth and development of biomedical journals in print and online. In his role as EVP, Global Medical Research, he oversaw more than 435 journals in the health sciences, including The Lancet, and many premier society journals. In his role as EVP, STM Society Publishing, Glen worked with many of the most prestigious societies in the health, life, physical, and social sciences. Glen joined BMJ late in 2013 as Managing Director for the US, and is thrilled to be working on the development and growth of The BMJ, BMJ Journals, and BMJ Clinical Improvement Products in North America. Glen is a past Chair of the Executive Council of the Professional and Scholarly Publishing Division (PSP) of the Association of American Publishers (AAP). In addition, he serves on the American Medical Publishers Committee (AMPC) of the PSP and the AMPC/National Library of Medicine Subcommittee of that group. Glen is currently Chair of the Board of Directors of the Friends of the National Library of Medicine.
Tuesday, 29 June 2010
This blog is complete
Note:
Since I am no longer the editor of Medical Hypotheses, I now regard this blog as complete, and do not intend to add to it.
I am continuing blogging at Bruce Charlton's Miscellany
http://charltonteaching.blogspot.com/
Tuesday, 11 May 2010
RIP Medical Hypotheses
Just to note that I was sacked from the editorship of Medical Hypotheses today.
Medical Hypotheses was very much a 'one man band' as a journal - its content being selected by the editor (occasionally after seeking advice from a member of the editorial advisory board) over a period of some 35 years.
The journal's essence was that it was editorially reviewed (not peer reviewed), and favoured revolutionary science over normal science; that is, it favoured ideas on the basis that they were (for example) radical, interesting, dissenting, or sometimes amusing in a way likely to stimulate thought.
The journal had just two editors during its lifespan: the founder David Horrobin from 1975 to his death in 2003; and his chosen successor: myself from 2003-2010.
As a consequence of mergers, Medical Hypotheses fell into the hands of Elsevier in 2002.
Aside from a few issues still in the pipeline, the real Medical Hypotheses is now dead: killed by Elsevier 11 May 2010. RIP.
Thursday, 6 May 2010
The Medical Hypotheses Affair - Times Higher Education
"Without prejudice" - 6 May 2010 - Times Higher Education
Bruce G Charlton
"Bruce Charlton explains why he published a paper by 'perhaps the world's most hated scientist' and the importance of airing radical ideas"
*
On 11 May, Elsevier, the multinational academic publisher, will sack me from my position as editor of Medical Hypotheses. This affair has attracted international coverage in major journals such as Nature, Science and the British Medical Journal.
How did it come to this? Last year I published two papers on Aids that led to a complaint sent to Elsevier.
This was not unexpected. Medical Hypotheses was established with the express intent of allowing ideas outside the mainstream to be aired so that they could be debated openly. Its policy had not changed since its founding more than three decades ago, and it remained unaltered under my editorship, which began in 2003.
Nevertheless, managers at Elsevier sided with those who made the complaints and against Medical Hypotheses. Glen P. Campbell, a senior vice-president at Elsevier, started a managerial process that immediately withdrew the two papers - without consulting me and without gaining editorial consent. After deliberating in private, the management at Elsevier informed me of plans to make Medical Hypotheses into an orthodox, peer-reviewed and censored journal. When I declined to implement the new policy, Elsevier gave notice to kick me out before my contract expired and without compensation.
One of the papers, by Marco Ruggiero's group at the University of Florence (doi:10.1016/j.mehy.2009.06.002), teased the Italian health ministry that its policies made it seem as if the department did not believe that HIV was the cause of Aids. The other paper, by Peter Duesberg's group at the University of California, Berkeley (doi:10.1016/j.mehy.2009.06.024), argued that HIV was not a sufficient cause of Aids.
The Ruggiero paper seems to have been an innocent bystander that was misunderstood both by those who made a complaint and by Elsevier. The real controversy focused on Duesberg's paper.
Why did I publish a paper by Duesberg - perhaps the world's most hated scientist?
Peter Duesberg is a brilliant and highly knowledgeable scientist with a track record of exceptional achievement that includes election to the US National Academy of Sciences. However, his unyielding opposition to the prevailing theory that HIV is a sufficient cause of Aids has made Duesberg an international hate figure, and his glittering career has been pretty much ruined.
I published Duesberg's paper because to do so was clearly in line with the long-term goals, practice and the explicitly stated scope and aims of Medical Hypotheses. We have published many, many such controversial and dissenting papers over the past 35 years. Duesberg is obviously a competent scientist, he is obviously the victim of an orchestrated campaign of intimidation and exclusion, and I interpret his sacrifice of status to principle as prima facie evidence of his sincerity. If I had rejected this paper for fear of the consequences, I would have been betraying the basic ethos of the journal.
Medical Hypotheses was founded 35 years ago by David Horrobin with the purpose of disseminating ideas, theories and hypotheses relating to biomedicine, and of doing so on the basis of editorial review instead of peer review. Horrobin argued that peer review intrinsically tended to exclude radical and revolutionary ideas, and that alternatives were needed. He chose me as his editorial successor because I shared these views.
Both Horrobin and I agreed that the only correct scientific way to deal with dissent was to publish it so that it could be debated, confirmed or refuted in an open and scientific forum. The alternative - suppressing scientific dissent by preventing publication using behind-the-scenes and anonymous procedures - we would both regard as extremely dangerous because it is wide open to serious abuse and manipulation by powerful interest groups.
Did I know that the Duesberg paper would be controversial?
Yes. I knew that Duesberg was being kept out of the mainstream scientific literature, and that breaching this conspiracy would annoy those who had succeeded in excluding him for so long.
When I published the Duesberg article, I envisaged it meeting one of two possible fates.
In the first scenario, the paper would be shunned or simply ignored - dropped down the memory hole. This is what has usually happened in the past when a famous scientist published ideas that their colleagues regarded as misguided or crazy. Linus Pauling (1901-94) was a Nobel prizewinner and one of the most important chemists in history. Yet his views on the medical benefits of vitamin C were regarded as wrong. He was allowed to publish them, but (rightly or wrongly) they were generally ignored in mainstream science.
In the other scenario, Duesberg's paper would attract robust criticism and (apparent) refutation. This happened with Fred Hoyle (1915-2001), a Fellow of the Royal Society whose work on the "steady state" theory of the Universe made him one of the most important cosmologists of the late 20th century. But his views on the origins of life on Earth and the Archaeopteryx fossil were generally regarded as eccentric. Hoyle's ideas were published, attracted much criticism, and were (probably) refuted.
So I expected that Duesberg's paper either would be ignored or would trigger letters and other papers countering the ideas and evidence presented. Medical Hypotheses would have published these counter-arguments, then provided space for Duesberg to respond to the criticisms and later allowed critics to reply to Duesberg's defence. That is, after all, how real science is supposed to work.
What I did not expect was that editors and scientists would be bypassed altogether, and that the matter would be settled by the senior managers of a multinational publishing corporation in consultation with pressure-group activists. Certainly, that would never have happened 25 years ago, when I began research in science.
The success of Medical Hypotheses
Nor did I expect that I would be sacked, the journal destroyed and plans made to replace it with an impostor of the same name. I did not expect this because I had been doing a good job and Medical Hypotheses was a successful journal.
Elsevier managers in the UK had frequently commended my work; I received a good salary as editor, and I was twice awarded substantial performance-related pay rises. The journal was expanded in size by 50 per cent under my editorship, and a spin-off journal, Bioscience Hypotheses (edited by William Bains), was launched in 2008 on the same principles of editorial review and a radical agenda.
The success of Medical Hypotheses is evidenced by its impact factor (average citations per paper), which under my editorship rose from about 0.6 to 1.4 - an above-average figure for biomedical journals. Download usage was also exceptionally high, with considerably more than 1,000 online readers per day (or about half a million papers downloaded per year). This level of internet usage is equivalent to that of a leading title such as Journal of Theoretical Biology.
But Medical Hypotheses was also famous for publishing some rather "eccentric" papers, which were chosen for their tendency to provoke thought, trigger discussion or amuse in a potentially stimulating way. Papers such as Georg Steinhauser's recent analysis of belly-button fluff have polarised opinion and also helped make Medical Hypotheses a cult favourite among people such as Marc Abrahams, the founder of the Ig Nobel Prizes. But they have also made it the subject of loathing and ridicule among those who demand that science and the bizarre be kept strictly demarcated (to prevent "misunderstanding").
It is hard to measure exactly the influence of a journal, but some recent papers stand out as having had an impact. A report by Lola Cuddy and Jacalyn Duffin discussed the fascinating case of an old lady with severe Alzheimer's disease who could still recognise tunes such as Oh, What a Beautiful Mornin'. This paper, which was discussed by Oliver Sacks in his book Musicophilia: Tales of Music and the Brain, seems to have helped spark a renewed interest in music in relation to brain disease.
The paper "A tale of two cannabinoids" by E. Russo and G.W. Guy suggested that a combination of marijuana products tetrahydrocannabinol (THC) and cannabidiol (CBD) would be valuable painkillers. This idea has since been widely discussed in the scientific literature.
And in 2005, Eric Altschuler published in Medical Hypotheses a letter outlining his idea that survivors of the 1918 flu epidemic might even now retain immunity to the old virus. A few 1918 flu survivors were found who still had antibodies, and cells from those people were cloned to create an antiserum that protected experimental mice against the flu virus. The work was eventually published in Nature and received wide coverage in the US media.
What is my own position on the cause of Aids?
As an editor of a radical journal, my position was resolutely agnostic - in other words, I was not pursuing an agenda. I would publish papers presenting both sides of the debate. Most of the papers I published on Aids were orthodox ideas relating to HIV as the main cause. However, as well as Duesberg's article, I published some other papers challenging the HIV causal theory and proposing different mechanisms, such as work by Lawrence Broxmeyer arguing that some Aids patients actually have tuberculosis.
As for my personal opinions on the cause of Aids, these are irrelevant to real science because the subject is too far away from my core expertise and I do not work in that area. It is clear that Duesberg understands far more about HIV than I do, and more than at least 99 per cent of his critics do. Therefore, the opinions of most of Duesberg's critics, no matter how vehement, are just as irrelevant to real science as are mine.
But for me to collude with prohibiting Duesberg from publishing, I would have needed to be 100 per cent sure that Duesberg was 100 per cent wrong. Because even if he is mostly wrong, it is possible that someone of his ability may be seeing some kind of problem with the current consensus about Aids that other people of lesser ability (that is, most of us) are missing.
And if Duesberg may be even partially correct, it is extremely dangerous that the proper scientific process has been so ruthlessly distorted and subverted simply to exclude his ideas from the official scientific literature.
Bruce G. Charlton is professor of theoretical medicine, University of Buckingham.
Sunday, 25 April 2010
Some influential papers from Medical Hypotheses
Some influential papers from the history of Medical Hypotheses
I believe that a journal editor should be ‘agnostic’ about the truth of the papers he publishes, since truth in science is not something that editors ought to guess, but should only be determined after publication by evaluation and testing within the wider scientific community.
There is, at present, no objective method of evaluating a journal's relative or unique influence either quantitatively or qualitatively. To do this would require a great investment of intelligence, time and resources.
After all, papers published in very high impact journals like Nature, Science and PNAS would - if rejected from one of these - almost certainly have been published in another of them, or elsewhere in a specialist journal with similar impact; and if this had happened, the same paper may well have had identical impact.
What is hard to get at is the _distinctive_ contribution of a _specific_ journal. At present, the best available method may be biographical: asking scientists their opinion about the importance of particular papers in particular journals: http://medicalhypotheses.blogspot.com/2010/02/medical-hypotheses-authors-letters-of.html
However, the crude influence of publications can sometimes be estimated using citation analysis – this looks at the number of times a paper has been listed in the reference section by other scientists.
Citations build up over several years, so that citation analysis is more reliable for older papers. But citations tend to reward mainstream 'methodological' papers - it is hard to estimate the importance of 'ideas' papers such as hypotheses and theories, especially as scientists do not feel obliged to cite the sources of their ideas even if they can remember them (whereas, by contrast, scientists must cite the source of any empirical data on which their own research depends).
Bearing in mind these methods and caveats, I have compiled a short list of some of the papers from Medical Hypotheses which seem to have been most influential.
There were just two editors of Medical Hypotheses in its 35-year history:
1975-2003 – David L Horrobin as Editor
In the early days of Medical Hypotheses many of the papers reflected the first Editor’s interest in nutritional topics; and Medical Hypotheses published many ideas that helped launch some of today’s mainstream ideas about diet, such as the benefits of supplementation with ‘omega’ fatty acids and antioxidants.
In 1985 AJ Verlangieri and others outlined the now widely-accepted idea that eating plenty of fruit and vegetables helps prevent heart disease in a widely quoted paper: “Fruit and vegetable consumption and cardiovascular mortality”.
Through the 1980s in Medical Hypotheses, freelance US scientist Mark F McCarty was publishing many of the early and influential papers about the importance of antioxidants in the diet, and their possible role in preventing disease. Over some three decades McCarty has published more papers in Medical Hypotheses than anyone else, and together these papers have been cited thousands of times in the scientific literature.
In 1987 the Medical Hypotheses founding editor David Horrobin published a frequently-referenced paper on the ‘omega-3’ type of essential fatty acid, which so many people now use as dietary supplements: “Low prevalences of coronary heart disease (CHD), psoriasis, asthma and rheumatoid arthritis in Eskimos: Are they caused by high dietary intake of eicosapentaenoic acid (EPA), a genetic variation of essential fatty acid (EFA) metabolism or a combination of both?”
In 1985, Clouston and Kerr published in Medical Hypotheses an influential paper called “Apoptosis, lymphocytotoxicity and the containment of viral infections”. This first described the now widely accepted idea that viruses may be fought by inducing suicide in virus-infected cells.
The most widely cited paper in Medical Hypotheses was published in 1991: “The macrophage theory of depression” by RS Smith. This is a key paper which argues that immune system chemicals may be a major cause of depression, and has been cited 242 times according to Google Scholar.
2004-2010 Bruce G Charlton as Editor
Here are some recent papers under my editorship which have already had an impact:
In 2005, Lola Cuddy and Jackie Duffin of Queen's University, Canada, published an influential paper in Medical Hypotheses based on an elderly lady with severe Alzheimer’s disease who still retained the ability to recognize music. They theorized that this might provide useful information on the nature of brain damage in Alzheimer’s, and suggested that dementia sufferers might benefit from a more musical environment. This paper was awarded the David Horrobin Prize for 2005, for the paper in Medical Hypotheses which best exemplified the intentions of the founding editor – the famous Cambridge transplant surgeon Sir Roy Calne was the judge.
In “A tale of two cannabinoids” by E Russo & GW Guy, from 2006, the authors presented the rationale for using a combination of the marijuana products tetrahydrocannabinol (THC) and cannabidiol (CBD) as useful painkilling drugs and for the treatment of several other medical conditions. This idea has since been widely discussed in the scientific literature.
In 2005 Eric Altschuler published a letter in Medical Hypotheses outlining his idea that survivors of the 1918 flu epidemic might even now retain immunity to the old virus. A few 1918 flu survivors were found who still had antibodies, and cells from these people were cloned to create an antiserum that protected experimental mice against the flu virus. The work was eventually published in Nature and received wide coverage in the media.
I believe that a journal editor should be ‘agnostic’ about the truth of the papers he publishes, since truth in science is not something that editors ought to guess, but should only be determined after publication by evaluation and testing within the wider scientific community.
There is, at present, no objective method of evaluating a journal's relative or unique influence either quantitatively or qualitatively. To do this would require a great investment of intelligence, time and resources.
After all, papers published in very high impact journals like Nature, Science and PNAS would - if rejected from one of these - almost-certainly have been published in another of these, or elsewhere in a specialist journal with similar impact; and if this had happened the same paper may well have had identical impact.
What is hard to get-at is the _distinctive_ contribution of a _specific_ journal. At present, the best avaiable method may be biographical: asking scientists their opinion about the importance of particular papers in particular journals: http://medicalhypotheses.blogspot.com/2010/02/medical-hypotheses-authors-letters-of.html
However the crude influence of publications can sometimes be estimated using citation analysis – this looks at the number of times a paper has been listed in the reference section by other scientists.
Citations build-up over several years, so that citation analysis is more reliable for older papers. But citations tend to reward mainstream 'methodological' papers - it is hard to estimate the importance of 'ideas' papers such as hypotheses and theories, especially as scientists do not feel obliged to cite the sources of their ideas even if they can remember them (whereas, by contrast, scientists must cite the source of any empirical data on which their own research depends).
Bearing in mind these methods and caveats, I have compiled a short list of some of the papers from Medical Hypotheses which seem to have been most influential.
There were just two editors of Medical Hypotheses in its 35 year history
1975-2003 – David L Horrobin as Editor
In the early days of Medical Hypotheses many of the papers reflected the first Editor’s interest in nutritional topics; and Medical Hypotheses published many ideas that helped launch some of today’s mainstream ideas about diet, such as the benefits of supplementation with ‘omega’ fatty acids and antioxidants.
In 1985 AJ Verlangieri and others outlined the now widely-accepted idea that eating plenty of fruit and vegetables helps prevent heart disease in a widely quoted paper: “Fruit and vegetable consumption and cardiovascular mortality”.
Through the 1980s in Medical Hypotheses, freelance US scientist Mark F McCarty was publishing many of the early and influential papers about the importance of antioxidants in the diet, and their possible role in preventing disease. Over some three decades McCarty has published more papers in Medical Hypotheses than anyone else, and together these papers have been cited thousands of times in the scientific literature.
In 1987 the Medical Hypotheses founding editor David Horrobin published a frequently-referenced paper on the ‘omega-3’ type of essential fatty acid, which so many people now use as dietary supplements: “Low prevalences of coronary heart disease (CHD), psoriasis, asthma and rheumatoid arthritis in Eskimos: Are they caused by high dietary intake of eicosapentaenoic acid (EPA), a genetic variation of essential fatty acid (EFA) metabolism or a combination of both?”
In 1985, Clouston and Kerr published in Medical Hypotheses an influential paper called “Apoptosis, lymphocytotoxicity and the containment of viral infections”. This first described the now widely accepted idea that viruses may be fought by inducing suicide in virus-infected cells.
The most widely cited paper in Medical Hypotheses was published in 1991: The macrophage theory of depression by RS Smith. This is a key paper which argues that immune system chemicals may be a major cause of depression, and has been cited 242 times according to Google Scholar.
2004-2010 – Bruce G Charlton as Editor
Here are some recent papers under my editorship which have already had an impact:
In 2005, Lola Cuddy and Jackie Duffin of Queen’s University, Canada, published an influential paper in Medical Hypotheses based on an elderly lady with severe Alzheimer’s disease who still retained the ability to recognize music. They theorized that this might provide useful information on the nature of brain damage in Alzheimer’s, and suggested that dementia sufferers might benefit from a more musical environment. This paper was awarded the David Horrobin Prize for 2005 for the paper in Medical Hypotheses which best exemplified the intentions of the founding editor – the famous Cambridge transplant surgeon Sir Roy Calne was the judge.
In “A tale of two cannabinoids” by E Russo & GW Guy from 2006, the authors presented the rationale for using a combination of marijuana products tetrahydrocannabinol (THC) and cannabidiol (CBD) as useful painkilling drugs and for the treatment of several other medical conditions. This idea has since been widely discussed in the scientific literature.
In 2005 Eric Altschuler published a letter in Medical Hypotheses outlining his idea that survivors of the 1918 flu epidemic might even now retain immunity to the old virus. A few 1918 flu survivors were found who still had antibodies, and cells from these people were cloned to create an antiserum that protected experimental mice against the flu virus. The work was eventually published in Nature and received wide coverage in the media.
Tuesday, 13 April 2010
The Cancer of Bureaucracy
Bruce G Charlton
The cancer of bureaucracy: how it will destroy science, medicine, education; and eventually everything else
Medical Hypotheses - 2010; 74: 961-5.
Summary
Everyone living in modernizing ‘Western’ societies will have noticed the long-term, progressive growth and spread of bureaucracy infiltrating all forms of social organization: nobody loves it, many loathe it, yet it keeps expanding. Such unrelenting growth implies that bureaucracy is parasitic and its growth uncontrollable – in other words it is a cancer that eludes the host immune system. Old-fashioned functional, ‘rational’ bureaucracy that incorporated individual decision-making is now all-but extinct, rendered obsolete by computerization. But modern bureaucracy evolved from it, the key ‘parasitic’ mutation being the introduction of committees for major decision-making or decision-ratification. Committees are a fundamentally irrational, incoherent, unpredictable decision-making procedure; which has the twin advantages that it cannot be formalized and replaced by computerization, and that it generates random variation or ‘noise’ which provides the basis for natural selection processes. Modern bureaucracies have simultaneously grown and spread in a positive-feedback cycle; such that interlinking bureaucracies now constitute the major environmental feature of human society which affects organizational survival and reproduction. Individual bureaucracies must become useless parasites which ignore the ‘real world’ in order to adapt to rapidly-changing ‘bureaucratic reality’. Within science, the major manifestation of bureaucracy is peer review, which – cancer-like – has expanded to obliterate individual authority and autonomy. There has been local elaboration of peer review and metastatic spread of peer review to include all major functions such as admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing and the award of prizes. Peer review eludes the immune system of science since it has now been accepted by other bureaucracies as intrinsically valid, such that any residual individual decision-making (no matter how effective in real-world terms) is regarded as intrinsically unreliable (self-interested and corrupt). Thus the endemic failures of peer review merely trigger demands for ever-more elaborate and widespread peer review. Just as peer review is killing science with its inefficiency and ineffectiveness, so parasitic bureaucracy is an un-containable phenomenon; dangerous to the extent that it cannot be allowed to exist unmolested, but must be utterly extirpated. Or else modernizing societies will themselves be destroyed by sclerosis, resource misallocation, incorrigibly-wrong decisions and the distortions of ‘bureaucratic reality’. However, unfortunately, social collapse is the more probable outcome, since parasites can evolve more rapidly than host immune systems.
***
Everyone in modernizing ‘Western’ societies (roughly the USA, UK, Western and Central Europe) will, no doubt, have noticed that there has been a long-term, progressive growth and spread of bureaucracy. Except during major war, this has not been a matter of pendulum swings, with sometimes less and sometimes more bureaucracy, but instead of relentless overall expansion – albeit sometimes faster and at other times slower.
The bureaucratic takeover applies to science, medicine, education, law, police, the media – indeed to almost all social functions. Such unrelenting growth implies either that 1. Bureaucracy is vital to societal functioning and the more bureaucracy we have the better for us; or that 2. Bureaucracy is parasitic and its growth is uncontrollable. Since the first alternative has become obviously absurd, I am assuming the second alternative is correct: that bureaucracy is like a cancer of modernizing societies – i.e. its expansion is malignant and its effect is first parasitic, then eventually fatal.
While it is generally recognized that modern societies are being bled-dry by the expense, delays, demoralization and reality-blindness imposed by multiple expanding and interacting bureaucracies, it is not properly recognized that bureaucratic decision-making is not merely flawed by its expense and sluggishness but also by its tendency to generate wrong answers. Modern bureaucracy, indeed, leads to irrational and unpredictable decisions; to indefensible decisions which are barely comprehensible, and cannot be justified, even by the people directly involved in them.
In what follows, I will make a distinction between, on the one hand, Weberian, functional, ‘rational’ bureaucracy which (in its ideal type, as derived from the work of Max Weber; 1864-1920) incorporated individual decision-making and was evaluated externally in terms of results and efficiency; and, on the other hand, modern ‘parasitic’ bureaucracy which (in its ideal type) deploys majority-vote committees for its major decision-making, is orientated purely towards its own growth, and which by means of its capacity to frame ‘reality’ - has become self-validating.
I will argue that parasitic bureaucracy evolved from rational bureaucracy in response to the rapidly changeable selection pressures imposed by modern society, especially the selection pressure from other bureaucracies having constructed an encompassing, virtual but dominant system of ‘bureaucratic reality’; and that the system of rational bureaucracy is by now all-but extinct – having been rendered obsolete by computerization.
The problem of parasitic bureaucracy
It is a striking feature of modern bureaucracy that nobody loves it, many loathe it (even, or especially, the bureaucrats themselves), yet it keeps growing and spreading. One reason is that bureaucracy is able to frame reality, such that the more that bureaucracy dominates society, the more bureaucracy seems to be needed; hence the response to any bureaucracy-generated problem is always to make more and bigger bureaucracies. It is this positive feedback system which is so overwhelming. Mere human willpower is now clearly inadequate to combat bureaucratic expansionism. Bureaucracy has become like the Borg on Star Trek: The Next Generation: it feeds-upon and assimilates opposition.
Bureaucracies are indeed no longer separable but form a linked web; such that to cut one bureaucracy seems always to imply another, and larger, bureaucracy to do the cutting. When the dust has settled, it is invariably found that the total sum and scope of societal bureaucratic activity has increased. And it is well recognized that modern bureaucracies tend to discourse-about, but never to eradicate, problems – it is as-if the abstract bureaucratic system somehow knew that its survival depended upon continually working-on, but never actually solving problems... Indeed, ‘problems’ seldom even get called problems nowadays, since problems imply the need and expectation for solutions; instead problems get called ‘issues’, a term which implies merely the need to ‘work-on’ them indefinitely. To talk in terms of solving problems is actually regarded as naïve and ‘simplistic’; even when, as a matter of empirical observation, these exact same problems were easily solved in the past, as a matter of record.
Over much of the world, public life is now mostly a matter of ‘bureaucracy speaking unto bureaucracy’. Observations and opinions from individual humans simply don’t register – unless, of course, individual communications happen to provide inputs which bureaucracies can use to create more regulations, more oversight, hence create more work for themselves. So individual complaints which can be used to trigger bureaucratic activity may be noted and acted-upon, or personal calls for more bureaucratic oversight may be amplified, elaborated and implemented. But anything which threatens the growth and spread of bureaucracy (i.e. anything simple that is also worryingly swift, efficient or effective) is ignored; or in extremis attacked with lethal intent.
The main self-defence of modern bureaucracy, however, is to frame reality. Since bureaucracies now dominate society, that which bureaucracies recognize and act-upon is ‘reality’; while that which bureaucracies do not recognize does not, for practical purposes, exist. Bureaucracy-as-a-system therefore constructs a 'reality' which is conducive to the thriving of bureaucracy-as-a-system.
When a powerful bureaucracy does not recognize a communication as an input, then that communication is rendered anecdotal and irrelevant. Information which the bureaucracy rejects takes-on an unreal, subjective quality. Even if everybody, qua individual, knows that some thing is real and true – it becomes possible for modern bureaucracy implicitly to deny that thing's existence simply by disregarding it as an input, and instead responding to different inputs that are more conducive to expansion, and these are then rendered more significant and 'realer' than actual reality.
For many people, the key defining feature of a bureaucracy (as described by Weber) is that ideally it is an information-processing organization that has established objective procedures which it implements impartially. It is these quasi-mechanical procedures which are supposed to link aims to outcomes; and to ensure that, given appropriate inputs, a bureaucracy almost-automatically generates predictable and specific outputs and outcomes.
However modern bureaucracies do not work like that. Indeed, such has been the breakdown in relationship between input and output that modern bureaucracies devote immense resources to change pure-and-simple; for example continually changing the recognition of input measures (i.e. continually redefining 'reality'), re-defining an organization’s mission and aims (i.e. rendering the nature of the organization different-from and incommensurable-with the past organization), and repeatedly altering the organizational outcomes regarded as relevant (re-definitions which make any decline in the efficiency of the organization formally un-measurable).
Such change may be externally- or internally-triggered: either triggered by the external demands of other bureaucracies which constitute the organizational environment, or triggered by the innate noise-generating tendencies of committees.
With endlessly-altering inputs, processes and outputs, bureaucratically-dominated organizations are impossible to critique in terms of functionality: their effectiveness is impossible to measure, and if or when they may be counter-productive (in terms of their original real world purpose) this will also be unknowable. Individual functional organizations disappear and all bureaucracies blend into a Borg-like web of interdependent growth.
The nature of bureaucracy: rational versus parasitic
What is bureaucracy? The traditional definition emphasises that bureaucracy entails a rational human organization which is characterized by hierarchy and specialization of function, and that the organization deploys explicit procedures or regulations that are impartially administered by the personnel. A rational ‘Weberian’ bureaucracy was probably, on the whole, performing a useful function reasonably efficiently – in other words its effectiveness was perceived in terms of externally-pre-decided criteria, and its growth and spread were circumscribed.
In medical terms, Weberian bureaucracy was therefore – at worst - a benign tumour; potentially able to overgrow locally and exert pressure on its surroundings; but still under control from, and held in check by, the larger host organism of society.
But, just as cancers usually evolve from benign precursors, so it was that modern parasitic and useless bureaucracies evolved from the rational and functional bureaucracies of an earlier era. Probably the key trigger factor in accelerating the rate of this evolution has been the development of computers, which have the potential to do – almost instantly, and at near zero cost – exactly the kind of rational information processing which in the past could only be done (much more slowly, expensively, and erratically) by Weberian bureaucracy. My contention is that large scale rational, functional bureaucracies are now all-but extinct, destroyed by computerization.
I assume that, when rational bureaucracy was facing extinction from computerization, there was a powerful selection pressure for the evolution of new forms of irrational bureaucracy – since rational procedures could be converted into algorithms, formalized and done mechanically; while irrational procedures were immune from this competition.
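The point about formalization can be seen in miniature (a sketch in Python; the rules below are invented purely for illustration, not taken from any real rule-book): a rational procedure is an explicit, impartial mapping from inputs to outputs, and exactly such mappings are what computers execute perfectly.

```python
# A rational, 'Weberian' procedure: explicit rules, applied impartially,
# so the same inputs always yield the same output. Because the mapping
# is fully specified, a computer can replace the human clerk outright.
# (All rules and thresholds here are hypothetical.)

def assess_claim(amount: float, documented: bool, on_time: bool) -> str:
    if not documented:
        return "reject: missing documentation"
    if not on_time:
        return "reject: filed after the deadline"
    if amount > 10_000:
        return "refer: above the clerk's approval threshold"
    return "approve"

print(assess_claim(500.0, documented=True, on_time=True))     # approve
print(assess_claim(50_000.0, documented=True, on_time=True))  # refer
```

A majority-vote committee, by contrast, cannot be written down as such a function, since (as argued below) its output is not a stable function of its inputs.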
The outcome is that, despite retaining a vast structure of procedure and regulation, and the organizational principles of hierarchy and specialization, those powerful modern bureaucracies that survived the challenge of computerization and are still alive and growing nowadays are non-rational in their core attributes. Irrationality is indeed an essential aspect of a modern bureaucracy’s ability to survive and thrive. Those bureaucracies which remain and are expanding in this post-computerization era are neither rational nor functional.
This evolution towards pure parasitism – with no performance of a substantive real-world function - is only possible because, for any specific bureaucracy, its relevant environment now substantially consists of other bureaucracies. It is 'other bureaucracies' that are the main selection pressure: other bureaucracies pose the main threat to survival and reproduction. A modern bureaucracy therefore must respond primarily to ‘bureaucratic reality’ – and any engagement with ‘real life’ (e.g. life as it is perceived by alert and informed individual human beings) simply stands in the way of this primary survival task.
So, the best adapted modern bureaucracies are those which most efficiently play the game of satisfying the constantly- and rapidly-changing requirements of other major bureaucracies. Success brings expansion by local growth and metastatic spread. By contrast, satisfying the stable requirements of ‘real life’ and human nature brings a bureaucracy little or no reward, and a greater possibility of extinction from the actions of other bureaucracies.
The role of committees in the evolution of bureaucracy
I will argue that the major mechanism by which irrationality has been introduced into bureaucracies is the committee which makes decisions by majority voting.
Committees now dominate almost all the major decision-making in modernizing societies – whether in the mass committee of eligible voters in elections, or such smaller committees as exist in corporations, government or in the US Supreme Court: it seems that modern societies always deploy a majority vote to decide or ratify all questions of importance. Indeed, it is all-but-inconceivable that any important decision be made by an individual person – it seems both natural and inevitable that such judgments be made by group vote.
Yet although nearly universal among Western ruling elites, this fetishizing of committees is a truly bizarre attitude; since there is essentially zero evidence that group voting leads to good, or even adequate, decisions – and much evidence that group voting leads to unpredictable, irrational and bad decisions.
The nonsense of majority voting was formally described by Nobel economics laureate Kenneth Arrow (1921-) in the 1950s, but it is surely obvious to anyone who has had dealings with committees and maintains independent judgement. It can be demonstrated using simple mathematical formulations that a majority vote may lead to unstable cycles of decisions, or to a decision which not one single member of the committee would regard as optimal. For example, in a job appointments panel, it sometimes happens that there are two strong candidates who split the panel, so the winner is a third-choice candidate whom no panel member would regard as the best candidate. In other words any individual panel member would make a better choice than derives from majority voting.
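The cycle can be demonstrated in a few lines (a Python sketch of Condorcet's voting paradox, which underlies Arrow's result; the three panel members and their rankings are hypothetical): every pairwise majority vote is won by a different candidate, so the 'group preference' goes round in a circle.

```python
# Condorcet's voting paradox: three voters, three candidates, and the
# pairwise majority votes form a cycle A > B > C > A.
# All rankings are hypothetical, chosen only to show the effect.

rankings = [
    ["A", "B", "C"],  # panel member 1: prefers A, then B, then C
    ["B", "C", "A"],  # panel member 2
    ["C", "A", "B"],  # panel member 3
]

def majority_prefers(x, y):
    """True if a majority of panel members rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    assert majority_prefers(x, y)
    print(f"majority prefers {x} over {y}")

# Prints: A over B, B over C, C over A - a cycle, so there is no
# stable 'committee choice', even though each individual member's
# ranking is perfectly coherent.
```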
Furthermore, because of this type of phenomenon, and the way that majority decisions do not necessarily reflect any individual's opinion, committee decisions carry no responsibility. After all, how could anyone be held responsible for outcomes which nobody intended and to which nobody agrees? So that committees exert de facto power without responsibility. Indeed most modern committees are typically composed of a variable selection from a number of eligible personnel, so that it is possible that the same committee may never contain the same personnel twice. The charade is kept going by the necessary but meaningless fiction of ‘committee responsibility’, maintained by the enforcement of a weird rule that committee members must undertake, in advance of decisions, to abide by whatever outcome (however irrational, unpredictable, unjustified and indefensible) the actual contingent committee deliberations happen to lead-to. This near-universal rule and practice simply takes ‘irresponsibility’ and re-names it ‘responsibility’…
Given that committee decisions are neither rational nor coherent, and are therefore radically unpredictable, what is their effect? In a nutshell, the answer is that committees – overall and in the long term – generate random ‘noise’. Committees almost certainly increase the chances that a decision is wrong – but overall they probably do not lead to any specifically biased direction of wrongness. While some committees using some procedures are biased in one direction, others are biased in other directions, and in the end I think the only thing that we can be sure about is that committees widen the range of unpredictability of decisions.
Now, if we ask what is the role of randomness in complex systems, the answer is that random noise provides the variations which are the subject of selection processes. For example, in biology the random errors of genetic replication provide genetic variation which affects traits that are then subjected to natural selection. So, it seems reasonable to infer that committees generate random variations in organizational characteristics which are then acted-upon by selection mechanisms. Some organizational variations are amplified and thrive, while other variations are suppressed and dwindle. Overall, this enables bureaucracies rapidly to evolve – to survive, to grow and to spread.
How much random noise is needed in a bureaucracy (or any evolving system)? The short answer is that the stronger the selection pressure, and the greater the necessity for rapid evolution, the more noise is needed; bearing in mind the trade-off by which an increased error rate in reproduction also reduces the ability of an evolving system accurately to reproduce itself. A system under strong selection pressure (e.g. a bureaucracy in a rapidly-changing modernizing society) tends to allow or generate more noise to create a wider range of variation for selection to act upon and thereby enable faster evolution – at the expense of less exact replication. By contrast, a system under weaker selection pressure (such as the Weberian bureaucracies of the early 20th century – for instance the British Civil Service) has greater fidelity of replication (less noise), but at the expense of a reduced ability to change rapidly in response to changing selection pressures.
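This trade-off can be made concrete with a toy simulation (a Python sketch; the population size, mutation rates and 'fitness' measure are all invented for illustration, not a model of any actual organization): with zero copying noise a population cannot adapt at all, while too much noise degrades the fidelity of replication.

```python
# Toy illustration of the noise/fidelity trade-off: an 'organization'
# is a bit-string, fitness is how well it matches a target environment,
# and reproduction copies an organization with per-trait noise.
import random

random.seed(1)
TARGET = [1] * 20  # the 'bureaucratic reality' an organization must match

def fitness(org):
    # Number of traits matching the current environment.
    return sum(a == b for a, b in zip(org, TARGET))

def reproduce(org, noise):
    # Copy an organization, flipping each trait with probability `noise`.
    return [1 - t if random.random() < noise else t for t in org]

def evolve(noise, generations=100, pop_size=50):
    pop = [[0] * 20 for _ in range(pop_size)]  # start fully maladapted
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection keeps the better half
        offspring = [reproduce(random.choice(survivors), noise)
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(fitness(org) for org in pop)

for noise in (0.0, 0.02, 0.25):
    print(f"noise={noise}: best fitness {evolve(noise)} / 20")

# With zero noise the population never changes; moderate noise lets it
# adapt to the target; very high noise also adapts, but copies itself
# so inaccurately that gains are hard to retain.
```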
I am saying here that committees using majority voting are responsible for the evolution of malignant bureaucratic growth in modern bureaucracies, and that this is why majority-vote decision-making permeates modern societies from the top to the bottom.
Although almost all major decision-making in the ‘Western’ world is now by majority voting there may be two significant exceptions: firstly military decision-making in time of war; secondly the personal authority of the Pope in the Roman Catholic Church. In both these types of organization there seems to be a greater emphasis on individual decision-making than on committee voting. Military command structures and the Roman Catholic hierarchy are therefore probably both closer to the ideal type of a Weberian rational bureaucracy than to the ideal type of a modern parasitic bureaucracy.
If so, the only major exceptions to majority rule decision-making at a world level, and probably not by coincidence, are the oldest and longest-enduring bureaucratic structures: that is, organizations which have retained functionality and have not themselves been destroyed by bureaucratic cancer.
Why are there committees at all?
Although they may nowadays be almost wholly damaging, committees cannot in their origins have been entirely useless or harmful; or else the form would never have survived its first appearance. If we acknowledge that individuals have the potential for better (i.e. more rational and coherent) decision-making than committees, then the decline of individual decision-making must be due not to any lack of advantages so much as to the perceived problems of individual decision-making.
The problems of individual decision-making are the same as the problems of individual power: in essence these problems are self-interest (i.e. the observation that power will be deployed differentially to benefit the power-holder) and corruption (i.e. the observation that over time power will corrupt, making the individual progressively a worse-and-worse decision-maker until he is not merely self-interested but progressively driven mad: power-mad).
Since humans are self-centred beings living in an imperfect world, all individuals tend to be both self-interested and corruptible (albeit to widely-varying degrees!). Of course, self-interest and corruptibility applies equally to people 'serving' on committees - each of whom is wielding lesser but anonymous and irresponsible power. Nonetheless, it seems to me that committees are mostly favoured because they are seen as a solution to these intrinsic problems of individual power. The implicit assumption is that when a committee is run by majority voting then individual self-interests will cancel-out. Furthermore, that since power is spread-around more people on a committee, then the inevitably corrupting effect of power will be similarly diluted.
In reality, committees mostly solve the problems of power to the extent that they reduce the effective deployment of power. So that, if committees are indeed less self-interested and less prone to corruption than individuals, this is achieved mainly because the committee structure and procedures make decision-making so unpredictable and incoherent that committees are rendered ineffective: ineffective to such an extent that committees cannot even manage consistently to be self-interested or corrupt! Therefore, the problems of power are ‘solved’, not by reducing the biases or corruptions of power, but simply by reducing the effectiveness of power; by introducing inefficiencies and obscuring the clarity of self-interest with the labile confusions of group dynamics. Power is not controlled but destroyed…
Therefore, if committees were introduced to reduce the abuse of power, then instead of achieving this, their actual outcome is that committees reduce power itself, and society is made docile when confronted by significant problems which could be solved, but are not. And surely this is precisely what we observe in the West, on an hourly basis?
Committee-based bureaucracy is predicated on an ethic of power-as-evil: it functions as a sort of unilateral disarmament that would be immediately obvious as self-defeating or maladaptive unless it arose in a context of already-existing domination. And a system of committee-based bureaucracy can only survive for as long as its opponents can be rendered even-weaker by even-more-virulent affliction with the same disease: which perhaps explains the extraordinarily venomous and dishonest pseudo-moralizing aggression which committee bureaucracy adopts towards other simpler, more-efficient or more-effective organizational systems that still use individual decision-making.
If we assume that committees were indeed introduced as a purported solution to (real or imagined, actual or potential) abuses of individual power, then committees do usually achieve this goal; so long as the quality of decision-making is ignored, committees seem to be successful. Committees can therefore be seen as a typical product of one-sided and unbalanced moralism that has discarded the Aristotelian maxim of moderation in all things. Bureaucracy adopts instead a unilateral moralism which aims at the complete avoidance of one kind of sin, even at the cost of falling into another, contrasting kind of sin (so pride is avoided by encouraging submission, and aggression is avoided by imposing sloth).
However the subject matter of ‘trade-offs’ is avoided; and the inevitable self-created problems of single issue moral action are instead fed-upon by bureaucracy, leading (of course!) to further expansion.
Hence, modern decision-making means that societal capability has declined in many areas. It has become at best slow and expensive, and at worst impossible, to achieve things which were done quickly, efficiently and effectively under systems based on individual decision-making. To avoid the corruption of individual authority, society has been rendered helpless in the face of threats which could have been combated.
Bureaucracy in science – the cancer of peer review
This situation can readily be seen in science. Although modern science is massively distorted and infiltrated by the action of external bureaucracies in politics, public administration, law, business and the media (for example), the major manifestation of bureaucracy actually within science is of course peer review.
Over the last half-century or so, the growth and metastatic spread of peer review as a method of decision-making in science has been truly amazing. Individual decision-making has been all-but obliterated at every level and for almost every task. The elaborateness of peer review has increased (e.g. the number of referees, the number of personnel on evaluating panels, the amount of information input demanded by these groups). And peer review or other types of committee are now used for admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing, the award of prizes… the list just goes on and on. Clearly, peer review fits the pattern of malignant expansion of bureaucracy that is seen in the rest of modern society.
And, as with the rest of society, the cancer of bureaucratic peer review eludes the immune system of science. It has now been widely accepted, by the other bureaucracies of modern society in particular, that peer review is intrinsically valid; and that any other form of decision-making is intrinsically corrupt or unreliable. This belief is not merely implicit, but frequently explicit: with ignorant and nonsensical statements about the vital and defining role of peer review in science being the norm in mainstream communication.
The irresistible rise of peer review can be seen most starkly in the way that any deficiency in peer review triggers demands (especially from other bureaucracies) for more elaborate and widespread peer review. So the endemic failure of increased journal peer review to maintain quality, or to eliminate what it is purported to detect - such as deliberate fraud, multiple publication, or serious error - inevitably leads to plans for further increases in peer review. So there is peer review of greater elaborateness, with further steps added to the process, and extra layers of monitoring by new types of larger committees. The ultimate validity of peer review is simply an assumption; and no amount of contrary evidence of its stultifying inefficiency, its harmful biases, and its distorting exclusions can ever prove anything except the need for more of the same.
Yet the role of peer review in the progress of science remains, as it always has been, conjectural and unverified. The processes of gathering and collating peer opinion as a method of decision-making are neither rational nor transparent – and indeed (as argued above) this irrationality and unpredictability is in fact a necessary factor in the ability of committee systems such as peer review to expand without limit.
In the past, the ultimate, bottom-line, within-science validation of science came not from the committee opinions of peer reviewers but from the emergent phenomenon of peer usage – which refers to the actual deployment of previous science (theories, facts, techniques) in the ongoing work of later scientists. This was an implicit, aggregate but not quantified outcome of a multitude of individual decisions among peers (co-workers in the same domain) about what aspects of previous science they would use in their own research: each user of earlier work was betting their time, effort and reputation on the validity of the previous research which they chose to use. When their work bore fruit, this was a validation of previous research (in the sense that, having survived this attempt at refutation, the old science now commanded greater confidence); but when previous research was faulty it 'sabotaged' any later research built upon it, in terms of correctly predicting or effectively intervening-in the natural world. Beyond this lies the commonsensical evaluation of science in terms of ‘what works’ – especially what works outside of science, by people such as engineers and doctors whose job is to apply science in the natural world.
But now that committee-based peer review has been explicitly accepted as the ‘gold standard’ of scientific validity, we see the bizarre situation that actual scientific usage and even what works is regarded as less important than the ‘bureaucratic reality’ of peer review evaluations. Mere opinions trump observations of objective reality. Since ‘bureaucratic reality’ is merely a construct of interacting bureaucracies, this carries the implication that scientific reality is now, to an ever-increasing extent, simply another aspect of, and seamlessly-continuous-with, mainstream 'bureaucratic reality'. Science is merely a subdivision of that same bureaucratic reality seen in politics, public administration, law, the media and business. The whole thing is just one gigantic virtual world. It seems probable that much of peer reviewed ‘science’ nowadays therefore carries no implication of being useful in understanding, predicting or intervening-in the natural world.
In other words, when science operates on the basis of peer review and committee decision, it is not really science at all. The cancer of bureaucracy has killed real science wherever it dominates. Much of mainstream science is now ‘Zombie Science’: that is, something which superficially looks-like science, but which is actually dead inside, and kept-moving only by continuous infusion of research funds. So far as bureaucratic reality is concerned - i.e. the reality acknowledged among the major bureaucracies - real science likely now exists only at an unofficial, unacknowledged level, below the radar; among that minority of scholars and researchers who still deploy the original scientific evaluation mechanisms such as individual judgement, peer usage and real-world effectiveness.
What will happen?
The above analysis suggests that parasitic bureaucracy is so dangerous in the context of a modernizing society that it cannot be allowed to exist; it simply must be destroyed in its entirety or else any residuum will re-grow, metastasize and colonize society all over again. The implication is that a future society which intends to survive in the long-term would need to be one that prevents parasitic bureaucracy from even getting a toe-hold.
The power of parasitic bureaucracy to expand and to trigger further parasitic bureaucracies is now rendered de facto un-stoppable by the power of interacting bureaucracies to frame and construct perceived reality in bureaucratic terms. Since bureaucratic failure is eliminated by continual re-definition of success, and since any threats to bureaucratic expansion are eliminated by exclusion or lethal attack; the scope of bureaucratic takeover from now on can be limited only by collapse of the social system as a whole.
So, if the above analysis is correct, there can be only two outcomes. Either the cancer of modern bureaucracy will be extirpated: destroyed utterly. In other words, the host immune system will evolve the ability to destroy the parasite. Maybe all majority-voting committees will coercively be replaced by individuals who have the authority to make decisions and responsibility for those decisions.
Or the cancer of bureaucracy will kill the host. In other words, the parasite will continue to elude the immune system. Modernizing societies will sooner-or-later be destroyed by a combination of resource starvation plus accumulative damage from delayed and wrong decisions based on the exclusions and distortions of ‘bureaucratic reality’.
Then the most complex rapidly-growing modernizing Western societies will be replaced by, or will regress into, zero-growth societies with a lower level of complexity - probably about the level of the agrarian societies of the European or Asian Middle Ages.
My prediction is that outcome two – societal collapse - is at present the more probable, on the basis that parasites can evolve more rapidly than host immune systems. Although as individuals we can observe the reality of approaching disaster, to modern parasitic bureaucracies the relevant data is either trivial or simply invisible.
***
Further reading: Although I do not mention it specifically above, the stimulus to writing this essay came from Mark A Notturno’s Science and the open society: the future of Karl Popper’s philosophy (Central European University Press: Budapest, 2000) – in particular the account of Popper’s views on induction. It struck me that committee decision-making by majority vote is a form of inductive reasoning, hence non-valid; and that inductive reasoning is in practice no more than a form of ‘authoritarianism’ (as Notturno terms it). In the event, I decided to exclude this line of argument from the essay because I found it too hard to make the point interesting and accessible. Nonetheless, I am very grateful to have had it explained to me.
I should also mention that various analyses of the pseudonymous blogger Mencius Moldbug, who writes at Unqualified Reservations, likely had a significant role in developing the above ideas.
This argument builds upon several previous pieces of mine including: ‘Conflicts of interest in medical science: peer usage, peer review and CoI consultancy’ (Medical Hypotheses 2004; 63: 181-186); Charlton BG, Andras P. ‘What is management and what do managers do? A systems theory account’ (Philosophy of Management 2004; 3: 3-15); ‘Peer usage versus peer review’ (BMJ 2007; 335: 451); Charlton BG, Andras P. ‘Medical research funding may have over-expanded and be due for collapse’ (QJM 2005; 98: 53–55); ‘Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing’ (Medical Hypotheses 2008; 71: 475–480); ‘Zombie science’ (Medical Hypotheses 2008; 71: 327–329); ‘The vital role of transcendental truth in science’ (Medical Hypotheses 2009; 72: 373–376); ‘Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration’ (Medical Hypotheses 2009; 73: 633-635); and ‘After science: has the tradition been broken?’ (Medical Hypotheses, in the press).
The cancer of bureaucracy: how it will destroy science, medicine, education; and eventually everything else
Medical Hypotheses - 2010; 74: 961-5.
Summary
Everyone living in modernizing ‘Western’ societies will have noticed the long-term, progressive growth and spread of bureaucracy infiltrating all forms of social organization: nobody loves it, many loathe it, yet it keeps expanding. Such unrelenting growth implies that bureaucracy is parasitic and its growth uncontrollable – in other words it is a cancer that eludes the host immune system. Old-fashioned functional, ‘rational’ bureaucracy that incorporated individual decision-making is now all-but extinct, rendered obsolete by computerization. But modern bureaucracy evolved from it, the key ‘parasitic’ mutation being the introduction of committees for major decision-making or decision-ratification. Committees are a fundamentally irrational, incoherent, unpredictable decision-making procedure; which has the twin advantages that it cannot be formalized and replaced by computerization, and that it generates random variation or ‘noise’ which provides the basis for natural selection processes. Modern bureaucracies have simultaneously grown and spread in a positive-feedback cycle; such that interlinking bureaucracies now constitute the major environmental feature of human society which affects organizational survival and reproduction. Individual bureaucracies must become useless parasites which ignore the ‘real world’ in order to adapt to rapidly-changing ‘bureaucratic reality’. Within science, the major manifestation of bureaucracy is peer review, which – cancer-like – has expanded to obliterate individual authority and autonomy. There has been local elaboration of peer review and metastatic spread of peer review to include all major functions such as admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing and the award of prizes. Peer review eludes the immune system of science since it has now been accepted by other bureaucracies as intrinsically valid, such that any residual individual decision-making (no matter how effective in real-world terms) is regarded as intrinsically unreliable (self-interested and corrupt). Thus the endemic failures of peer review merely trigger demands for ever-more elaborate and widespread peer review. Just as peer review is killing science with its inefficiency and ineffectiveness, so parasitic bureaucracy is an un-containable phenomenon; dangerous to the extent that it cannot be allowed to exist unmolested, but must be utterly extirpated. Or else modernizing societies will themselves be destroyed by sclerosis, resource misallocation, incorrigibly-wrong decisions and the distortions of ‘bureaucratic reality’. However, unfortunately, social collapse is the more probable outcome, since parasites can evolve more rapidly than host immune systems.
***
Everyone in modernizing ‘Western’ societies (roughly the USA, UK, Western and Central Europe) will, no doubt, have noticed that there has been a long-term, progressive growth and spread of bureaucracy. Except during major war; this has not been a matter of pendulum swings, with sometimes less and sometimes more bureaucracy, but instead of relentless overall expansion – albeit sometimes faster and at other times slower.
The bureaucratic takeover applies to science, medicine, education, law, police, the media – indeed to almost all social functions. Such unrelenting growth implies either that 1. Bureaucracy is vital to societal functioning and the more bureaucracy we have the better for us; or that 2. Bureaucracy is parasitic and its growth is uncontrollable. Since the first alternative has become obviously absurd, I am assuming the second alternative is correct: that bureaucracy is like a cancer of modernizing societies – i.e. its expansion is malignant and its effect is first parasitic, then eventually fatal.
While it is generally recognized that modern societies are being bled-dry by the expense, delays, demoralization and reality-blindness imposed by multiple expanding and interacting bureaucracies, it is not properly recognized that bureaucratic decision-making is not merely flawed by its expense and sluggishness but also by its tendency to generate wrong answers. Modern bureaucracy, indeed, leads to irrational and unpredictable decisions; to indefensible decisions which are barely comprehensible, and cannot be justified, even by the people directly involved in them.
In what follows, I will make a distinction between, on the one hand, Weberian, functional, ‘rational’ bureaucracy which (in its ideal type, as derived from the work of Max Weber; 1864-1920) incorporated individual decision-making and was evaluated externally in terms of results and efficiency; and, on the other hand, modern ‘parasitic’ bureaucracy which (in its ideal type) deploys majority-vote committees for its major decision-making, is orientated purely towards its own growth, and which by means of its capacity to frame ‘reality’ - has become self-validating.
I will argue that parasitic bureaucracy evolved from rational bureaucracy in response to the rapidly changeable selection pressures imposed by modern society, especially the selection pressure from other bureaucracies having constructed a encompassing, virtual but dominant system of ‘bureaucratic reality’; and that the system of rational bureaucracy is by now all-but extinct – having been rendered obsolete by computerization.
The problem of parasitic bureaucracy
It is a striking feature of modern bureaucracy that nobody loves it, many loathe it (even, or especially, the bureaucrats themselves), yet it keeps growing and spreading. One reason is that bureaucracy is able to frame reality, such that the more that bureaucracy dominates society, the more bureaucracy seems to be needed; hence the response to any bureaucracy-generated problem is always to make more and bigger bureaucracies. It is this positive feedback system which is so overwhelming. Mere human willpower is now clearly inadequate to combat bureaucratic expansionism. Bureaucracy has become like The Borg on Star Trek: the next generation: it feeds-upon and assimilates opposition.
Bureaucracies are indeed no longer separable but form a linked web; such that to cut one bureaucracy seems always to imply another, and larger, bureaucracy to do the cutting. When the dust has settled, it is invariably found that the total sum and scope of societal bureaucratic activity has increased. And it is well recognized that modern bureaucracies tend to discourse-about, but never to eradicate, problems – it is as-if the abstract bureaucratic system somehow knew that its survival depended upon continually working-on, but never actually solving problems... Indeed, ‘problems’ seldom even get called problems nowadays, since problems imply the need and expectation for solutions; instead problems get called ‘issues’, a term which implies merely the need to ‘work-on’ them indefinitely. To talk in terms of solving problems is actually regarded as naïve and ‘simplistic’; even when, as a matter of empirical observation, these exact same problems were easily solved in the past, as a matter of record.
Over much of the world, public life is now mostly a matter of ‘bureaucracy speaking unto bureaucracy’. Observations and opinions from individual humans simply don’t register – unless, of course, individual communications happen to provide inputs which bureaucracies can use to create more regulations, more oversight, hence create more work for themselves. So individual complaints which can be used to trigger bureaucratic activity may be noted and acted-upon, or personal calls for more bureaucratic oversight may be amplified, elaborated and implemented. But anything which threatens the growth and spread of bureaucracy (i.e. anything simple that is also worryingly swift, efficient or effective) is ignored; or in extremis attacked with lethal intent.
The main self-defence of modern bureaucracy, however, is to frame reality. Since bureaucracies now dominate society, that which bureaucracies recognize and act-upon is ‘reality’; while that which bureaucracies do not recognize does not, for practical purposes, exist. Bureaucracy-as-a-system, therefore constructs a 'reality' which is conducive to the thriving of bureaucracy-as-a-system.
When a powerful bureaucracy does not recognize a communication as an input, then that communication is rendered anecdotal and irrelevant. Information which the bureaucracy rejects takes-on an unreal, subjective quality. Even if everybody, qua individual, knows that some thing is real and true – it becomes possible for modern bureaucracy implicitly to deny that thing's existence simply by disregarding it as an input, and instead responding to different inputs that are more conducive to expansion, and these are then rendered more significant and 'realer' than actual reality.
For many people, the key defining feature of a bureaucracy (as described by Weber) is that ideally it is an information-processing organization that has established objective procedures which it implements impartially. It is these quasi-mechanical procedures which are supposed to link aims to outcomes; and to ensure that, given appropriate inputs a bureaucracy almost-automatically generate predictable and specific outputs and outcomes.
However modern bureaucracies do not work like that. Indeed, such has been the breakdown in relationship between input and output that modern bureaucracies devote immense resources to change pure-and-simple; for example continually changing the recognition of input measures (i.e. continually redefining 'reality') and re-defining an organization’s mission and aims (i.e. rendering the nature of the organization different-from and incommensurable-with the past organization) and repeatedly altering the organizational outcomes regarded as relevant (re-defining making any decline in the efficiency of the organization formally un-measurable).
Such change may be externally- or internally-triggered: either triggered by the external demands of other bureaucracies which constitute the organizational environment, or triggered by the innate noise-generating tendencies of committees.
With endlessly-altering inputs, processes and outputs, bureaucratically-dominated organizations are impossible to critique in terms of functionality: their effectiveness is impossible to measure, and if or when they may be counter-productive (in terms of their original real world purpose) this will also be unknowable. Individual functional organizations disappear and all bureaucracies blend into a Borg-like web of interdependent growth.
The nature of bureaucracy: rational versus parasitic
What is bureaucracy? The traditional definition emphasises that bureaucracy entails a rational human organization which is characterized by hierarchy and specialization of function, and that the organization deploys explicit procedures or regulations that are impartially administered by the personnel. A rational ‘Weberian’ bureaucracy was probably, on the whole, performing a useful function reasonably efficiently – in other words its effectiveness was perceived in terms of externally-pre-decided criteria, and its growth and spread were circumscribed.
In medical terms, Weberian bureaucracy was therefore – at worst - a benign tumour; potentially able to overgrow locally and exert pressure on its surroundings; but still under control from, and held in check by, the larger host organism of society.
But, just as cancers usually evolve from benign precursors, so it was that modern parasitic and useless bureaucracies evolved from the rational and functional bureaucracies of an earlier era. Probably the key trigger factor in accelerating the rate of this evolution has been the development of computers, which have the potential to do – almost instantly, and at near zero cost – exactly the kind of rational information processing which in the past could only be done (much more slowly, expensively, and erratically) by Weberian bureaucracy. My contention is that large scale rational, functional bureaucracies are now all-but extinct, destroyed by computerization.
I assume that, when rational bureaucracy was facing extinction from computerization, there was a powerful selection pressure for the evolution of new forms of irrational bureaucracy – since rational procedures could be converted into algorithms, formalized and done mechanically; while irrational procedures were immune from this competition.
The outcome is that, despite retaining a vast structure of procedure and regulation, and the organizational principles of hierarchy and specialization, those powerful modern bureaucracies that survived the challenge of computerization and are still alive and growing nowadays are non-rational in their core attributes. Irrationality is indeed an essential aspect of a modern bureaucracy’s ability to survive and thrive. Those bureaucracies which remain and are expanding in this post-computerization era are neither rational nor functional.
This evolution towards pure parasitism – with no performance of a substantive real-world function - is only possible because, for any specific bureaucracy, its relevant environment now substantially consists of other bureaucracies. It is 'other bureaucracies' that are the main selection pressure: other bureaucracies pose the main threat to survival and reproduction. A modern bureaucracy therefore must respond primarily to ‘bureaucratic reality’ – and any engagement with ‘real life’ (e.g. life as it is perceived by alert and informed individual human beings) simply stands in the way of this primary survival task.
So, the best adapted modern bureaucracies are those which most efficiently play the game of satisfying the constantly-and rapidly-changing requirements of other major bureaucracies. Success brings expansion by local growth and metastatic spread. But, in contrast, satisfying the stable requirements of ‘real life’ and human nature, by contrast, brings a bureaucracy little or no rewards, and a greater possibility of extinction from the actions of other bureaucracies.
The role of committees in the evolution of bureaucracy
I will argue that the major mechanism by which irrationality has been introduced into bureaucracies is the committee which makes decisions by majority voting.
Committees now dominate almost all the major decision-making in modernizing societies – whether in the mass committee of eligible voters in elections, or such smaller committees as exist in corporations, government or in the US Supreme Court: it seems that modern societies always deploy a majority vote to decide or ratify all questions of importance. Indeed, it is all-but-inconceivable that any important decision be made by an individual person – it seems both natural and inevitable that such judgments be made by group vote.
Yet although nearly universal among Western ruling elites, this fetishizing of committees is a truly bizarre attitude; since there is essentially zero evidence that group voting leads to good, or even adequate, decisions – and much evidence that group voting leads to unpredictable, irrational and bad decisions.
The nonsense of majority voting was formally described by Nobel economics laureate Kenneth Arrow (1921-) in the 1960s, but it is surely obvious to anyone who has had dealings with committees and maintains independent judgement. It can be demonstrated using simple mathematical formulations that a majority vote may lead to unstable cycles of decisions, or a decision which not one single member of the committee would regard as optimal. For example, in a job appointments panel, it sometimes happens that there are two strong candidates who split the panel, so the winner is a third choice candidate whom no panel member would regard as the best candidate. In other words any individual panel member would make a better choice than derives from majority voting.
Furthermore, because of this type of phenomenon, and the way that majority decisions do not necessarily reflect any individual's opinion, committee decisions carry no responsibility. After all, how could anyone be held responsible for outcomes which nobody intended and to which nobody agrees? So that committees exert de facto power without responsibility. Indeed most modern committees are typically composed of a variable selection from a number of eligible personnel, so that it is possible that the same committee may never contain the same personnel twice. The charade is kept going by the necessary but meaningless fiction of ‘committee responsibility’, maintained by the enforcement of a weird rule that committee members must undertake, in advance of decisions, to abide by whatever outcome (however irrational, unpredictable, unjustified and indefensible) the actual contingent committee deliberations happen to lead-to. This near-universal rule and practice simply takes ‘irresponsibility’ and re-names it ‘responsibility’…
Given that committee decisions are neither rational nor coherent, and are therefore radically unpredictable, what is their effect? In a nutshell the short answer is that committees – overall and in the long term – generate random ‘noise’. Committees almost certainly increase the chances that a decision is wrong – but overall they probably do not have lead to any specifically biased direction of wrongness. While some committees using some procedures are biased in one direction, others are biased in other directions, and in the end I think the only thing that we can be sure about is that committees widen the range of unpredictability of decisions.
Now, if we ask what is the role of randomness in complex systems? - the answer is that random noise provides the variations which are the subject of selection processes. For example, in biology the random errors of genetic replication provide genetic variation which affects traits that are then subjected to natural selection. So, it seems reasonable to infer that committees generate random changes that generate variations in organizational characteristics which are then acted-upon by selection mechanisms. Some organizational variations are amplified and thrive, while other variations are suppressed and dwindle. Overall, this enables bureaucracies rapidly to evolve – to survive, to grow and to spread.
How much random noise is needed in a bureaucracy (or any evolving system)? The short answer is that the stronger the selection pressure, and the greater the necessity for rapid evolution, the more noise is needed; bearing in mind the trade-off by which an increased error rate in reproduction also reduces the ability of an evolving system accurately to reproduce itself. A system under strong selection pressure (e.g. a bureaucracy in a rapidly-changing modernizing society) tends to allow or generate more noise, creating a wider range of variation for selection to act upon and thereby enabling faster evolution – at the expense of less exact replication. By contrast, a system under weaker selection pressure (such as the Weberian bureaucracies of the early twentieth century – for instance the British Civil Service) has greater fidelity of replication (less noise), but at the expense of a reduced ability to change rapidly in response to changing selection pressures.
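This trade-off can be illustrated with a toy simulation of my own devising (the parameters are arbitrary and illustrative, not taken from any evolutionary model in the literature): a population of bit-strings is selected toward a target under different mutation ('noise') rates. Zero noise gives perfect replication but no adaptation at all; moderate noise gives rapid adaptation; high noise still adapts, but at the cost of collapsed replication fidelity:

    import random

    def evolve(mutation_rate, generations=200, pop_size=50, length=40, seed=0):
        """Toy evolution of bit-strings toward an all-ones target.
        Returns (best final fitness as a fraction, replication fidelity)."""
        rng = random.Random(seed)
        target = [1] * length
        pop = [[0] * length for _ in range(pop_size)]
        faithful_copies, total_copies = 0, 0
        for _ in range(generations):
            # Selection: keep the fitter half (fitness = bits matching the target).
            pop.sort(key=lambda g: sum(a == b for a, b in zip(g, target)), reverse=True)
            parents = pop[: pop_size // 2]
            offspring = []
            for parent in parents:
                for _ in range(2):  # each surviving parent leaves two offspring
                    child = [bit ^ (rng.random() < mutation_rate) for bit in parent]
                    faithful_copies += (child == parent)
                    total_copies += 1
                    offspring.append(child)
            pop = offspring
        best = max(sum(a == b for a, b in zip(g, target)) for g in pop) / length
        return best, faithful_copies / total_copies

    for rate in (0.0, 0.01, 0.10):
        best, fidelity = evolve(rate)
        print(f"mutation rate {rate:.2f}: best fitness {best:.2f}, fidelity {fidelity:.2f}")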
I am saying here that committees using majority voting are responsible for the evolution of malignant growth in modern bureaucracies; and that this is why majority-vote decision-making permeates modern societies from top to bottom.
Although almost all major decision-making in the ‘Western’ world is now by majority voting there may be two significant exceptions: firstly military decision-making in time of war; secondly the personal authority of the Pope in the Roman Catholic Church. In both these types of organization there seems to be a greater emphasis on individual decision-making than on committee voting. Military command structures and the Roman Catholic hierarchy are therefore probably both closer to the ideal type of a Weberian rational bureaucracy than to the ideal type of a modern parasitic bureaucracy.
If so, the only major exceptions to majority rule decision-making at a world level, and probably not by coincidence, are the oldest and longest-enduring bureaucratic structures: that is, organizations which have retained functionality and have not themselves been destroyed by bureaucratic cancer.
Why are there committees at all?
Although they may nowadays be almost wholly damaging, committees cannot in their origins have been entirely useless or harmful; or else the form would never have survived its first appearance. If we acknowledge that individuals have the potential for better (i.e. more rational and coherent) decision-making than committees, then the decline of individual decision-making must be due not to any lack of advantages, but rather to the perceived problems of individual decision-making.
The problems of individual decision-making are the same as the problems of individual power: in essence these problems are self-interest (i.e. the observation that power will be deployed differentially to benefit the power-holder) and corruption (i.e. the observation that over time power will corrupt, making the individual a progressively worse-and-worse decision-maker until he is not merely self-interested but progressively driven mad: power mad).
Since humans are self-centred beings living in an imperfect world, all individuals tend to be both self-interested and corruptible (albeit to widely-varying degrees!). Of course, self-interest and corruptibility apply equally to people 'serving' on committees - each of whom wields a lesser but anonymous and irresponsible power. Nonetheless, it seems to me that committees are mostly favoured because they are seen as a solution to these intrinsic problems of individual power. The implicit assumption is that when a committee is run by majority voting then individual self-interests will cancel-out; and furthermore, since power is spread-around more people on a committee, the inevitably corrupting effect of power will be similarly diluted.
In reality, committees mostly solve the problems of power to the extent that they reduce the effective deployment of power. So that, if committees are indeed less self-interested and less prone to corruption than individuals, this is achieved mainly because the committee structure and procedures make decision-making so unpredictable and incoherent that committees are rendered ineffective: ineffective to such an extent that committees cannot even manage consistently to be self-interested or corrupt! Therefore, the problems of power are ‘solved’, not by reducing the biases or corruptions of power, but simply by reducing the effectiveness of power; by introducing inefficiencies and obscuring the clarity of self-interest with the labile confusions of group dynamics. Power is not controlled but destroyed…
Therefore, if committees were introduced to reduce the abuse of power, then instead of achieving this, their actual outcome is that committees reduce power itself, and society is made docile when confronted by significant problems which could be solved, but are not. And surely this is precisely what we observe in the West, on an hourly basis?
Because committee-based bureaucracy is predicated on an ethic of power as evil: it functions as a sort of unilateral disarmament that would be immediately obvious as self-defeating or maladaptive unless arising in a context of already-existing domination. And a system of committee-based bureaucracy can only survive for as long as its opponents can be rendered even-weaker by even-more virulent affliction with the same disease: which perhaps explains the extraordinarily venomous and dishonest pseudo-moralizing aggression which committee bureaucracy adopts towards other simpler, more-efficient or more-effective organizational systems that still use individual decision-making.
If we assume that committees were indeed introduced as a purported solution to (real or imagined, actual or potential) abuses of individual power, then committees do usually achieve this goal – but only in the sense described above: so long as the quality of decision-making is ignored, committees seem successful. Committees can therefore be seen as a typical product of one-sided and unbalanced moralism that has discarded the Aristotelian maxim of moderation in all things. Bureaucracy adopts instead a unilateral moralism which aims at the complete avoidance of one kind of sin, even at the cost of falling into another, contrasting kind of sin (so pride is avoided by encouraging submission, and aggression is avoided by imposing sloth).
However, the subject of ‘trade-offs’ is avoided; and the inevitable self-created problems of single-issue moral action are instead fed-upon by bureaucracy, leading (of course!) to further expansion.
Hence, modern decision-making means that societal capability has declined in many areas. It has become at best slow and expensive, and at worst impossible, to achieve things which were done quickly, efficiently and effectively under systems based on individual decision-making. To avoid the corruption of individual authority, society has been rendered helpless in the face of threats which could have been combated.
Bureaucracy in science – the cancer of peer review
This situation can readily be seen in science. Although modern science is massively distorted and infiltrated by the action of external bureaucracies in politics, public administration, law, business and the media (for example), the major manifestation of bureaucracy actually within science is of course peer review.
Over the last half-century or so, the growth and metastatic spread of peer review as a method of decision-making in science has been truly amazing. Individual decision-making has been all-but obliterated at every level and for almost every task. The elaborateness of peer review has increased (e.g. the number of referees, the number of personnel on evaluating panels, the amount of information input demanded by these groups). And peer review or other types of committee are now used for admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing, the award of prizes… the list just goes on and on. Clearly, peer review fits the pattern of malignant expansion of bureaucracy that is seen in the rest of modern society.
And, as with the rest of society, the cancer of bureaucratic peer review eludes the immune system of science. It has now been widely accepted, by the other bureaucracies of modern society in particular, that peer review is intrinsically valid; and that any other form of decision-making is intrinsically corrupt or unreliable. This belief is not merely implicit, but frequently explicit: with ignorant and nonsensical statements about the vital and defining role of peer review in science being the norm in mainstream communication.
The irresistible rise of peer review can be seen most starkly in the way that any deficiency in peer review triggers demands (especially from other bureaucracies) for more elaborate and widespread peer review. So the endemic failure of increased journal peer review to maintain quality, or to eliminate what it is purported to detect – such as deliberate fraud, multiple publication, or serious error – inevitably leads to plans for further increases in peer review: peer review of greater elaborateness, with further steps added to the process, and extra layers of monitoring by new types of larger committees. The ultimate validity of peer review is simply an assumption; and no amount of contrary evidence of its stultifying inefficiency, its harmful biases and its distorting exclusions can ever prove anything except the need for more of the same.
Yet the role of peer review in the progress of science remains, as it always has been, conjectural and unverified. The processes of gathering and collating peer opinion as a method of decision-making are neither rational nor transparent – and indeed (as argued above) this irrationality and unpredictability is in fact a necessary factor in the ability of committee systems such as peer review to expand without limit.
In the past, the ultimate, bottom-line, within-science validation of science came not from the committee opinions of peer reviewers but from the emergent phenomenon of peer usage – which refers to the actual deployment of previous science (theories, facts, techniques) in the ongoing work of later scientists. This was an implicit, aggregate but unquantified outcome of a multitude of individual decisions among peers (co-workers in the same domain) about what aspects of previous science they would use in their own research: each user of earlier work was betting their time, effort and reputation on the validity of the previous research which they chose to use. When their work bore fruit, this was a validation of previous research (in the sense that, having survived this attempt at refutation, the old science now commanded greater confidence); but when previous research was faulty it 'sabotaged' any later research building upon it, in terms of correctly predicting or effectively intervening-in the natural world. Beyond this lies the commonsensical evaluation of science in terms of ‘what works’ – especially what works outside of science, by people such as engineers and doctors whose job is to apply science in the natural world.
But now that committee-based peer review has been explicitly accepted as the ‘gold standard’ of scientific validity, we see the bizarre situation that actual scientific usage, and even what works, is regarded as less important than the ‘bureaucratic reality’ of peer review evaluations. Mere opinions trump observations of objective reality. Since ‘bureaucratic reality’ is merely a construct of interacting bureaucracies, this carries the implication that scientific reality is now, to an ever-increasing extent, just another aspect of, and seamlessly-continuous-with, mainstream 'bureaucratic reality'. Science is merely a subdivision of that same bureaucratic reality seen in politics, public administration, law, the media and business. The whole thing is just one gigantic virtual world. It seems probable that much of peer reviewed ‘science’ nowadays therefore carries no implication of being useful in understanding, predicting or intervening-on the natural world.
In other words, when science operates on the basis of peer review and committee decision, it is not really science at all. The cancer of bureaucracy has killed real science wherever it dominates. Much of mainstream science is now ‘Zombie Science’: that is, something which superficially looks-like science, but which is actually dead inside, and kept-moving only by continuous infusion of research funds. So far as bureaucratic reality is concerned, i.e. the reality as acknowledged among the major bureaucracies, real science likely now exists at an unofficial, unacknowledged level, below the radar – only among that minority of scholars and researchers who still deploy the original scientific evaluation mechanisms such as individual judgement, peer usage and real-world effectiveness.
What will happen?
The above analysis suggests that parasitic bureaucracy is so dangerous in the context of a modernizing society that it cannot be allowed to exist; it simply must be destroyed in its entirety or else any residuum will re-grow, metastasize and colonize society all over again. The implication is that a future society which intends to survive in the long-term would need to be one that prevents parasitic bureaucracy from even getting a toe-hold.
The power of parasitic bureaucracy to expand and to trigger further parasitic bureaucracies is now rendered de facto unstoppable by the power of interacting bureaucracies to frame and construct perceived reality in bureaucratic terms. Since bureaucratic failure is eliminated by continual re-definition of success, and since any threats to bureaucratic expansion are eliminated by exclusion or lethal attack, the scope of bureaucratic takeover from now on can be limited only by collapse of the social system as a whole.
So, if the above analysis is correct, there can be only two outcomes. Either the cancer of modern bureaucracy will be extirpated: destroyed utterly – in other words, the host immune system will evolve the ability to destroy the parasite. Maybe all majority-voting committees will coercively be replaced by individuals who have both the authority to make decisions and the responsibility for those decisions.
Or the cancer of bureaucracy will kill the host – in other words, the parasite will continue to elude the immune system. Modernizing societies will sooner-or-later be destroyed by a combination of resource starvation plus cumulative damage from delayed and wrong decisions based on the exclusions and distortions of ‘bureaucratic reality’.
Then the most complex rapidly-growing modernizing Western societies will be replaced by, or will regress into, zero-growth societies with a lower level of complexity - probably about the level of the agrarian societies of the European or Asian Middle Ages.
My prediction is that outcome two – societal collapse - is at present the more probable, on the basis that parasites can evolve more rapidly than host immune systems. Although as individuals we can observe the reality of approaching disaster, to modern parasitic bureaucracies the relevant data is either trivial or simply invisible.
***
Further reading: Although I do not mention it specifically above, the stimulus to writing this essay came from Mark A Notturno’s Science and the open society: the future of Karl Popper’s philosophy (Central European University Press: Budapest, 2000) – in particular the account of Popper’s views on induction. It struck me that committee decision-making by majority vote is a form of inductive reasoning, hence non-valid; and that inductive reasoning is in practice no more than a form of ‘authoritarianism’ (as Notturno terms it). In the event, I decided to exclude this line of argument from the essay because I found it too hard to make the point interesting and accessible. Nonetheless, I am very grateful to have had it explained to me.
I should also mention that various analyses by the pseudonymous blogger Mencius Moldbug, who writes at Unqualified Reservations, likely had a significant role in developing the above ideas.
This argument builds upon several previous pieces of mine, including: Conflicts of interest in medical science: peer usage, peer review and ‘CoI consultancy’ (Medical Hypotheses 2004; 63: 181-186); Charlton BG, Andras P. What is management and what do managers do? A systems theory account (Philosophy of Management 2004; 3: 3-15); Peer usage versus peer review (BMJ 2007; 335: 451); Charlton BG, Andras P. Medical research funding may have over-expanded and be due for collapse (QJM 2005; 98: 53-55); Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing (Medical Hypotheses 2008; 71: 475-480); Zombie science (Medical Hypotheses 2008; 71: 327-329); The vital role of transcendental truth in science (Medical Hypotheses 2009; 72: 373-376); Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration (Medical Hypotheses 2009; 73: 633-635); and After science: has the tradition been broken? (Medical Hypotheses, in the press).
Sunday, 4 April 2010
Covert drug dependence
Covert drug dependence should be the null hypothesis for explaining drug-withdrawal-induced clinical deterioration: The necessity for placebo versus drug withdrawal trials on normal control subjects
Bruce G. Charlton
Medical Hypotheses. 2010; 74: 761-763.
***
Summary
Just as a placebo can mimic an immediately effective drug so chronic drug dependence may mimic an effective long-term or preventive treatment. The discovery of the placebo had a profound result upon medical practice, since it became recognized that it was much harder to determine the therapeutic value of an intervention than was previously assumed. Placebo is now the null hypothesis for therapeutic improvement. As David Healy describes in the accompanying editorial on treatment induced stress syndromes [1], an analogous recognition of the effect of drug dependence is now overdue. Drug dependence and withdrawal effects should in future become the null hypothesis when there is clinical deterioration following cessation of treatment. The ideal methodology for detecting drug dependence and withdrawal is a double-blind placebo controlled and randomized trial using disease-free normal control subjects. Normal controls are necessary to ensure that the possibility of underlying chronic disease is eliminated: so long as subjects begin the trial as ‘normal controls’ it is reasonable to infer that any clinical or psychological problems (above placebo levels) which they experience following drug withdrawal can reasonably be attributed to the effects of the drug. This is important because the consequences of failing to detect the risk of covert drug dependence may be considerably worse than failing to detect a placebo effect. Drug dependent patients not only fail to receive benefit and suffer continued inconvenience, expense and side effects; but the drug has actually created and sustained a covert chronic pathology. However, the current situation for drug evaluation is so irrational that it would allow chronic alcohol treatment to be regarded as a cure for alcoholism on the basis that delirium tremens follows alcohol withdrawal and alcohol can be used to treat delirium tremens! Therefore, just as placebo controlled trials of drugs are necessary to detect ineffective drugs, so drug withdrawal trials on normal control subjects should be regarded as necessary to detect dependence-producing drugs.
***
Just as a placebo can mimic an immediately effective drug, so chronic drug dependence may mimic an effective long-term or preventive treatment
The discovery of the placebo had a profound result upon medical practice. After the placebo effect was discovered it was recognized that it was much harder to determine the therapeutic value of an intervention than previously assumed. As David Healy describes in the accompanying editorial on treatment induced stress syndromes [1], an analogous recognition of the effect of drug dependence is now overdue, especially in relation to psychoactive drugs.
Therefore, just as placebo controlled trials of drugs are regarded as necessary to detect ineffective drugs, so drug withdrawal trials on normal control subjects should be regarded as necessary to detect dependence-producing drugs.
Determining the specific benefit of a drug
Throughout most of the history of medicine it was naively assumed that when a patient improved following a specific therapy, then this positive change could confidently be attributed to the beneficial effects of that specific therapy. But it is now recognized that clinical improvement may have nothing to do with the specific treatment but may instead have general psychological causes to do with a patient’s expectations. So that when a drug treatment is begun and the patient gets better, the change may not be due to the drug but some or all of the observed benefit could be due to the placebo effect.
Indeed, nowadays the placebo effect is routinely assumed to be the cause of patient improvement unless proven otherwise. Placebo effect is therefore the null hypothesis used to explain therapeutic improvements.
This tendency to regard the placebo effect as the default explanation for clinical improvement has led to major methodological changes in the evaluation of putative drug therapies; because the first aim of drug evaluation is now to show that measured benefits cannot wholly be explained by placebo. This has led to widespread adoption of placebo controlled trials which compare the effect of the putative drug with a placebo. Only when the drug produces a greater effect than placebo alone, is it recognized as a potentially effective therapy.
The effect of withdrawing a drug upon which a subject has become dependent can be regarded as analogous to the placebo effect, in the sense that drug dependence resembles the placebo effect in being able to mislead concerning clinical effectiveness.
It may routinely be assumed that if a patient gets worse when drug treatment is stopped, then this change is due to the patient losing the beneficial effects of the drug, so that the underlying disease (for which the drug was being prescribed) has re-emerged. That is, when a patient does better when taking a drug than after cessation, it seems apparent that the patient benefits from this drug. So the naïve assumption would be that worsening of a patient’s condition on withdrawal implies that the patient had a long-term illness which was being treated by the drug, and the chronic illness was revealed when drug treatment was withdrawn.
However, this naïve assumption is certainly unjustified as a general rule, because drug dependence produces exactly the same effect. When a patient has become dependent on a drug, then adverse consequences following withdrawal may have nothing to do with revealing an underlying, long-term illness. Instead, chronic drug use has actually made the patient ill: the drug has created a new but covert pathology; the body has adapted to the presence of the drug and now needs the drug in order to function normally, such that the covert pathology only emerges when the drug is removed and body systems are disrupted by its absence.
In other words, the drug dependent patient may have had independent pathology which has disappeared, or else drug treatment may have been the sole cause of pathology. But either way, clinical deterioration following withdrawal is mainly or wholly a consequence of drug dependence and not a consequence of underlying independent chronic pathology.
So, before assuming that the patient benefits from a drug the possibility of covert drug dependence must first be eliminated as an explanation. Healy’s argument is that drug dependence and withdrawal effects should in future become the null hypothesis in evaluating the chronic need for therapy in the same way as placebo is now a null hypothesis for clinical improvement following drug therapy. Worsening of the patient’s condition following cessation or dose reduction of a drug should therefore be assumed to be caused by withdrawal unless otherwise proven.
However, current methods of therapeutic evaluation cannot reliably detect stress induced drug dependence. This implies that a new kind of clinical trial is required explicitly to test for covert drug dependence and withdrawal effects in a manner analogous to the placebo controlled therapeutic trial.
Assumptions about the cause of post-withdrawal clinical deterioration
It has not yet been generally recognized that eliminating drug dependence as an explanation for withdrawal effects cannot be achieved in the context of normal clinical practice, nor by the standard formal methodologies of controlled clinical trials.
Just as eliminating the possibility of placebo effects requires specially designed placebo controlled therapeutic trials, so eliminating covert drug dependence as an explanation also requires specially designed withdrawal trials on normal control subjects.
At present, it is usual to assume a drug does not cause dependence, except when it is proved that a specific drug does cause dependence. This means that when no information on dependence is available, or when the information about dependence on a particular drug is either incomplete or inconclusive, then the standard accepted inference is that the drug does not cause dependence. In effect, the onus of proof is currently upon those who are trying to argue that a drug causes dependence.
The situation for withdrawal trials testing for dependence is therefore exactly the opposite of that applying to therapeutic trials and the placebo effect. Consequently, as Healy describes, prevailing clinical evaluation procedures may be systematically incapable of detecting withdrawal effects. Even worse, current procedures systematically tend to misattribute the creation of dependence, and the harm following withdrawal, as evidence of drug benefit – with the implication that continued treatment of a supposed chronic illness is necessary.
The currently prevailing presumption therefore favours new drugs about which little is known; and it favours a perpetuation of the state of ignorance, since 'no evidence of dependence' is almost invariably interpreted as 'evidence of no dependence'. In other words, as things stand, a drug that actually creates chronic dependence is instead credited with curing a chronic disease – despite the fact that the chronic disease is actually a stress-syndrome disease state which that same drug has caused.
The current situation is equivalent to chronic alcohol treatment being regarded as a cure for alcoholism on the (warped) basis that delirium tremens follows alcohol withdrawal and alcohol can be used to treat delirium tremens!
When to suspect covert dependence
The almost-total lack of awareness of covert drug dependence and withdrawal problems need not be accidental, but could be a consequence of the fact that unrecognized drug dependence is financially advantageous for the pharmaceutical companies who fund and conduct most clinical trials.
Although there are signs which may warn of dependence on a drug, and the possibility of withdrawal effects (e.g. dwindling effects of a drug, or the need for escalating doses in order to maintain its effect) – none of these are easy to discriminate from therapeutic effects.
But dependence may be suspected when what was perceived as an acute and self-limiting illness requiring a time-limited course of treatment, gradually becomes perceived as a chronic disorder requiring long-term drug treatment. This has been a pattern observed for several psychiatric conditions including depression and acute psychosis. Naturally, there can be rationalizations for this – for example, that the disease was previously unrecognized or under-treated.
Nonetheless, the difficulty of resolving such disputes serves to make clear the need for establishing a presumption of drugs being dependence-producing, and the necessity that this possibility be eliminated by withdrawal trials at an early stage in the evaluation of the drug.
Covert dependence generates a long-term demand for drugs by converting acute into chronic disease among the legitimate therapeutic target community. For example, acute and self-limiting depressive illness can be made into an apparent chronic disease if antidepressants create dependence such that drug withdrawal provokes depressed mood – such that a lifetime of antidepressant treatment can then be justified as ‘preventing’ a supposed chronic recurrent depressive disorder which is actually itself a product of drug administration.
Another way in which covert dependence is advantageous for pharmaceutical companies arises when the inclusiveness of diagnostic criteria is expanded: the more patients that are treated (on whatever excuse), the more dependence is produced and the more people then require chronic drug administration.
Possible examples are when the threshold for prescribing a dependence-producing drug is reduced – such as the suggestion that early or preventive treatment of psychosis is beneficial, using an ‘atypical’ or traditional antipsychotic/neuroleptic; and because withdrawal of antipsychotics increases the likelihood of psychotic breakdown, preventive drug treatment becomes an apparently self-fulfilling prophecy. Or when a new and allegedly high-prevalence disease category such as ‘bipolar disorder’ is created, along with indications for treatment by dependence-producing drugs; this will tend to generate a new cohort of drug dependent patients whose long-term dependence on drugs can be disguised as a newly-discovered and previously-unsuspected type of severe and chronic psychiatric pathology.
In other words, under currently prevailing research standards, mass creation and exploitation of drug dependence may actually be spun as evidence of medical progress!
The necessity for drug-withdrawal trials on normal control subjects
Drug dependence needs a level of recognition comparable to the placebo effect because it is more damaging than the placebo effect. The main problem of failing to detect a placebo effect is that patients may be unnecessarily exposed to the expense and side effects of a drug. So the placebo effect may be clinically desirable, so long as the placebo is inexpensive and harmless.
But the consequences of failing to detect covert drug dependence may be considerably worse than this. When dependence is a problem, patients who receive chronic drug treatment may not only fail to receive any benefit (and thereby suffer unnecessary risk of side effects and expense) but the drug may actually create increasingly severe covert pathology. If a patient is prescribed a drug inappropriately, then they may become drug dependent even when ineffectiveness, inconvenience, expense or treatment side effects mean that they wish (or need) to stop.
In a nutshell, the problem with placebos is merely that a drug fails to treat pathology, but the problem with dependence is that a drug has created pathology.
Clearly, the ideal – and perhaps indispensable – methodology for detecting covert drug dependence is a double-blind placebo controlled and randomized trial using disease-free normal control subjects. Normal controls are necessary to ensure that the possibility of chronic disease is eliminated: since controls begin the trial as ‘normal’ it is reasonable to infer that any clinical or psychological problems (above placebo levels) which they experience following drug withdrawal can reasonably be attributed to the effects of the drug.
A withdrawal trial needs to be prolonged to include not just sufficient chronicity of treatment by the active drug or placebo; but also a sufficient follow-up period after stopping the drug or placebo, during which it can be discovered whether there is any worsening of conditions following withdrawal and an increase in new pathologies. Specifically, what needs to be measured is a comparison of the frequency of post-withdrawal problems in the two randomly-assigned placebo and active drug groups.
Since the nature of withdrawal effects will not be known in advance, such a trial cannot rely upon highly focused and pre-specified questionnaires, but would need to include general questioning about feelings of well-being and quality of life, and any signs of problems as perceived by observers. Follow-up could include measures such as all-cause mortality, all-source morbidity, and the frequency of adverse events such as suicide, accidents, medical contacts and hospital admissions.
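To make the proposed end-point concrete, here is a minimal sketch (the counts are invented purely for illustration, and are not data from any actual trial or from this paper) of how the frequency of post-withdrawal problems in the two randomized arms might be compared, using Fisher's exact test on a 2x2 table:

    from scipy.stats import fisher_exact

    # Hypothetical illustrative counts - NOT data from any actual trial.
    drug_problems, drug_n = 18, 100        # post-withdrawal problems, active-drug arm
    placebo_problems, placebo_n = 6, 100   # post-withdrawal problems, placebo arm

    # 2x2 contingency table: rows = trial arm, columns = problems / no problems.
    table = [
        [drug_problems, drug_n - drug_problems],
        [placebo_problems, placebo_n - placebo_problems],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
    # A significant excess of problems in the drug arm (above placebo levels)
    # would indicate the dependence/withdrawal effects the trial is designed to detect.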
In conclusion, covert drug dependence should be the null hypothesis explanation for post-withdrawal clinical deterioration, especially for new drugs and even more so for drugs acting on the brain. A default assumption is required that lack of evidence concerning drug dependence implies that a drug is dependence-producing.
Unless covert drug dependence becomes a default assumption, it remains advantageous for pharmaceutical companies self-servingly to maintain the current state of ignorance, in which recommendations for chronic drug treatment are enforced by drug dependence that is systematically misinterpreted as therapeutic effectiveness.
References
[1] Healy D. Treatment induced stress syndromes. Med Hypotheses, in press. doi:10.1016/j.mehy.2010.01.038.
Sunday, 21 March 2010
After science: Has the tradition been broken?
After science: Has the tradition been broken?
Bruce G. Charlton
Medical Hypotheses. 2010; 74: 623-625
Summary
The majority of professional scientists make use of the artefacts of science but lack understanding of what these mean; raising the question: has the tradition of science been broken? Explicit knowledge is only a selective summary but practical capability derives from implicit, traditional or ‘tacit’ knowledge that is handed on between- and across-generations by slow, assimilative processes requiring extended human contact through a wide range of situations. This was achieved mainly by prolonged apprenticeship to a Master. Such methods recognize the gulf between being able to do something and knowing how you have done it; and the further gap between knowing how you have done something and being able to teach it by explicit instructions. Yet the ‘Master–apprentice’ model of education has been almost discarded from science over recent decades and replaced with bureaucratic regulation. The main reason is probably that scientific manpower has expanded so rapidly and over such a long period as to overwhelm the slow, sure and thorough traditional methods. In their innocence of scientific culture, the younger generation of scientists are like children who have been raised by wolves; they do not talk science but spout bureaucratic procedures. It has now become accepted among the mass of professional ‘scientists’ that the decisions which matter most in science are those imposed upon science by outside forces: for example by employers, funders, publishers, regulators, and the law courts. It is these bureaucratic mechanisms that now constitute the ‘bottom line’ for scientific practice. Most of modern science is therefore apparently in the post-holocaust situation described in A canticle for Leibowitz and After Virtue, but the catastrophe was bureaucratic, rather than violent. So, the tradition has indeed been broken. However, for as long as the fact is known that the tradition has been broken, and living representatives of the tradition are still alive and active, there still exists a remote possibility that the tradition could be revived.
***
After science: has the tradition been broken?
Imagine that the natural sciences were to suffer the effects of a catastrophe. A series of environmental disasters are blamed by the general public on the scientists. Widespread riots occur, laboratories are burnt down, physicists are lynched, books and instruments are destroyed. Finally a know-nothing political movement takes power and successfully abolishes science teaching in schools and universities, imprisoning and executing the remaining scientists. Later still there is a reaction against this destructive movement and enlightened people seek to revive science, although they have largely forgotten what it was. But all that they possess are fragments: a knowledge of experiments detached from any knowledge of the theoretical context which gave them significance; parts of theories unrelated either to the other bits and pieces of theory which they possess or to experiment; instruments whose use has been forgotten; half-chapters from books, single pages from articles, not always fully legible because torn and charred. Nonetheless all these fragments are re-embodied in a set of practices which go under the revived names of physics, chemistry and biology. Adults argue with each other about the respective merits of relativity theory, evolutionary theory and phlogiston theory, although they possess only a very partial knowledge of each. Children learn by heart the surviving portions of the periodic table and recite as incantations some of the theorems of Euclid. Nobody, or almost nobody, realizes that what they are doing is not natural science in any proper sense at all. For everything that they do and say conforms to certain canons of consistency and coherence and those contexts which would be needed to make sense of what they are doing have been lost, perhaps irretrievably.
From After Virtue – Alasdair MacIntyre [1]
The classic science fiction novel A canticle for Leibowitz by Walter Miller [2] portrays a post-nuclear-holocaust world in which the tradition of scientific practice – previously handed-down from one generation of scientists to the next – has been broken. Only a few scientific artefacts remain, such as fragments of electronic equipment. It turns out that after the tradition has been broken, the scientific artefacts make no sense and are wildly misinterpreted. For instance a blueprint is regarded as if it were a beautiful illuminated manuscript, and components such as diodes are regarded as magical talismans or pills.
I will argue that modern science may have entered a similar state in which for the majority of professional scientists the artefacts of science remain – such as the academic hierarchy, laboratory techniques and machines, statistical methods, and the peer review mechanism – but understanding of what these mean has apparently been lost; raising the question: has the tradition been broken?
A theme associated with philosophers such as Polanyi [3] and Oakeshott [4] is that explicit knowledge – such as is found in textbooks and scientific articles – is only a selective summary, and that the most important practical capability derives instead from implicit, traditional or ‘tacit’ knowledge. It is this un-articulated knowledge that leads to genuine human understanding of the natural world, accurate prediction and the capacity to make effective interventions.
Tacit knowledge is handed on between- and across-generations by slow, assimilative processes which require extended, relatively unstructured and only semi-purposive human contact. What is being transmitted and inculcated is an over-arching purpose, a style of thought, a learned but then spontaneous framing of reality, a sense of how problems should be tackled, and a gut-feeling for evaluating the work or oneself, as well as others.
This kind of process was in the past achieved by such means as familial vocations, prolonged apprenticeship, co-residence and extended time spent in association with a Master – and by the fact that the Master and apprentice personally selected each other. The pattern was seen in all areas of life where independence, skill and depth of knowledge were expected: crafts, arts, music, scholarship – and science.
Although such methods sound a bit mysterious, not to say obscurantist, to modern ears – in fact they are solid realism and common sense. Such methods for ensuring the transmission of subtle knowledge recognize the gulf between being able to do something and knowing how you have done it; and the further gap between knowing how you have done something and being able to teach it by explicit instructions.
Such systems as apprenticeship recognize that the most important aspects of knowledge may be those which are not known or understood to be the most important, or may even be in opposition to that which is believed or supposed to be important. The educational ‘method’ was that an apprentice should spend a lot of time with the Master in many situations; and as for educational evaluation, the best way for a Master to know that his skill really has been passed-on, is for him to spend a lot of time with the apprentice in many situations.
Imperfect as it inevitably was, traditions were maintained and often improved over centuries by means of apprenticeship – which was regarded as the safest and surest way of ensuring that the knowledge and skills could be sustained and developed.
However, priorities have changed. The preservation and development of high-level human skills and expertise is no longer regarded as a priority, something to which many other concerns will inevitably need to be subordinated. And the ‘Master–apprentice’ model of education, which stretches back in human history as far as we know, has been all-but discarded from science (and much of mainstream culture) over recent decades. Indeed the assumptions have been reversed.
It is important to recognize that the discarding of traditions of apprenticeship and prolonged human contact in science was not due to any new discovery that apprenticeship was – after all – unnecessary, let alone that the new bureaucratic systems of free-standing explicit aims and objectives, summaries and lists of core knowledge and competencies etc. were superior to apprenticeship. Indeed there is nothing to suggest that they are remotely the equal of apprenticeship. Rather, the Master–apprentice system has been discarded despite the evidence of its superiority; and has been replaced by the growth of bureaucratic regulation.
The main reason is probably that scientific manpower, personnel or ‘human resources’ (as they are now termed) have expanded vastly over the past 60 years – probably about tenfold. So there was no possibility of such rapid and sustained quantitative expansion (accompanied, almost-inevitably, by massive decline in average quality) being achieved using the labour-intensive apprenticeship methods of the past. The tradition was discarded because it stood in the path of the expansion of scientific manpower.
Among the mass of mainstream professional scientists, science – as a distinctive mode of human enquiry – now has no meaning whatsoever. Among these same scientists, who dominate the social system of science both in terms of power and numbers, the resolution of scientific disputes and disagreements is a matter of power, not reason – and relevant ‘evidence’ is narrowly restricted to bureaucratically-enforced operational variables. The tradition seems to have been broken.
I first observed this when I worked in epidemiology, and I realized that most epidemiologists did not understand science and were not scientists – but they did not realize it [5]. They believed that what they did was science, since it had many of the explicit characteristics of science, it involved making measurements and doing statistics, it was accepted as science by many other people, and (most importantly!) epidemiology got funded as science. But most epidemiology was not science, as any real scientist could easily recognize – it was no more science than were those market researchers with clipboards who question pedestrians on the high street. I saw a similar picture in almost all the vast amount of ‘functional brain imagining’ which was the dominant and most prestigious type of Neuroscience. And again in the people who were mapping the average human genome – then (presumably) going on to map the genome of every individual human, then perhaps every creature on the planet?
As Jacob Bronowski once remarked: science is not a loose leaf folder of ‘facts’; not the kind of thing which can be expanded ad infinitum – simply by iterative addition of ever-more observations. Science is instead the creation of structured knowledge, with the emphasis on structure [6]. The modern scientific literature is ballooning exponentially with published stuff and ever-inflated claims about its significance – but, lacking structure, this malignantly-expanding mass adds-up to less-and-less. Meanwhile, understanding, prediction and the ability to intervene on the natural world to attain pre-specified objectives all dwindle; because real science is a living tradition not a dead archive.
The younger generation of scientists are like children who have been raised by wolves. They have learned the techniques but have no feel for the proper aims, attitudes and evaluations of science. What little culture they have comes not from science but from bureaucrats: they utterly lack scientific culture; they do not talk science, instead they spout procedures.
It has now become implicitly accepted among the mass of professional ‘scientists’ that the decisions which matter most in science are those imposed upon science by outside forces: by employers (who gets the jobs, who gets promotion), funders (who gets the big money), publishers (who gets their work in the big journals), bureaucratic regulators (who gets allowed to do work), and the law courts (whose ideas get backed-up, or criminalized, by the courts). It is these bureaucratic mechanisms that constitute ‘real life’ and the ‘bottom line’ for scientific practice. The tradition has been broken.
A minority of young scientists have, by dedication or luck, absorbed the tradition of real science, yet because their wisdom is tacit and is not shared by the majority of the bureaucratically-minded, they will almost-inevitably be held back from status and excluded from influence. It is bureaucracy that now controls ‘science’, and that which bureaucracy cannot or will not acknowledge might as well not exist, so far as the direction of ‘science’ is concerned.
Most of modern science is therefore apparently in pretty much the post-holocaust situation described in A canticle for Liebowitz and After Virtue – the transmission of tacit knowledge has been broken. But the catastrophe was bureaucratic, rather than violent – and few seem to have noticed the scale of destruction.
But, it might be asked, supposing the tradition had indeed been broken; if this was true, then how would we know it was true? – given that the point of MacIntyre’s and Miller’s fables was that when a tradition is broken people do not realize it. The answer is that we know at the moment that the tradition has been broken, but this knowledge is on the verge of extinction.
The sources of evidence are at least fourfold. If we judge the rate of scientific progress by individualistic common sense criteria (rather than bureaucratic indices), it is obvious that the rate of progress has declined in at least two major areas: physics and medical research [7], [8] and [9]. Furthermore there has been a decline in the number of scientific geniuses, which is now near-zero [10]. If geniuses are vital to overall scientific progress, then progress probably stopped a while ago [11].
In addition, the actual practice of science has transformed profoundly [12] – the explicit aims of scientists, their truthfulness, what scientists do on a day by day basis, the procedures by which their work is evaluated… all of these have changed so much over the past 50 years that it is reasonable to conclude that science now is performing an almost completely different function than it was 50 years ago. After all, if modern science neither looks nor quacks like a duck, why should we believe it is a duck? Just because science has the same name, does not mean it is them same thing when almost-everything about it has been transformed!
And finally we might believe that the tradition has been broken because this has been a frequently implicit, sometimes explicit, theme of some of the most original and informed scientists for several decades: from the Feynman and Crick through to Brenner – take your pick. It seems to me that they have for many years been warning us that science was on a wrong track, and the warnings have not been heeded.
So: the tradition has been broken. However, for as long as the fact is known that the tradition has been broken, and representatives of the tradition are still alive and active, there still exists a remote possibility that the tradition could be revived.
Acknowledgement
Some of these ideas emerged in conversations with Jonathan Rees, and quite a few were derived from him.
References
[1] A. MacIntyre, After virtue: a study in moral theory, Duckworth, London (1981).
[2] W.M. Miller, A canticle for Liebowitz, Weidenfeld and Nicholson, London (1960).
[3] M. Polanyi, Personal knowledge: towards a post-critical philosophy, University of Chicago Press, Chicago, USA (1958).
[4] M. Oakeshott, Rationalism in politics and other essays, Methuen, London (1962).
[5] B.G. Charlton, Should epidemiologists be pragmatists, biostatisticians or clinical scientists?, Epidemiology 7 (1996), pp. 552–554. View Record in Scopus | Cited By in Scopus (8)
[6] J. Bronowski, Science and human values, Harper Colophon, New York (1975).
[7] L. Smolin, The trouble with physics, Penguin, London (2006).
[8] D.F. Horrobin, Scientific medicine – success or failure?. In: D.J. Weatherall, J.G.G. Ledingham and D.A. Warrell, Editors, Oxford textbook of medicine (2nd ed.), Oxford University Press, Oxford (1987), pp. 2.1–2.3.
[9] B.G. Charlton and P. Andras, Medical research funding may have over-expanded and be due for collapse, QJM 98 (2005), pp. 53–55. Full Text via CrossRef | View Record in Scopus | Cited By in Scopus (15)
[10] B.G. Charlton, The last genius? – reflections on the death of Francis Crick, Med Hypotheses 63 (2004), pp. 923–924. Article | PDF (209 K) | View Record in Scopus | Cited By in Scopus (2)
[11] C. Murray, Human accomplishment. The pursuit of excellence in the arts and sciences 800 BC to 1950, HarperCollins, New York (2003).
[12] J. Ziman, Real science, Cambridge University Press, Cambridge, UK (2000).
Bruce G. Charlton
Medical Hypotheses. 2010; 74: 623-625
Summary
The majority of professional scientists make use of the artefacts of science but lack understanding of what these mean, raising the question: has the tradition of science been broken? Explicit knowledge is only a selective summary, but practical capability derives from implicit, traditional or ‘tacit’ knowledge that is handed on between- and across-generations by slow, assimilative processes requiring extended human contact through a wide range of situations. This was achieved mainly by prolonged apprenticeship to a Master. Such methods recognize the gulf between being able to do something and knowing how you have done it; and the further gap between knowing how you have done something and being able to teach it by explicit instructions. Yet the ‘Master–apprentice’ model of education has been almost discarded from science over recent decades and replaced with bureaucratic regulation. The main reason is probably that scientific manpower has expanded so rapidly and over such a long period as to overwhelm the slow, sure and thorough traditional methods. In their innocence of scientific culture, the younger generation of scientists are like children who have been raised by wolves; they do not talk science but spout bureaucratic procedures. It has now become accepted among the mass of professional ‘scientists’ that the decisions which matter most in science are those imposed upon science by outside forces: for example by employers, funders, publishers, regulators, and the law courts. It is these bureaucratic mechanisms that now constitute the ‘bottom line’ for scientific practice. Most of modern science is therefore apparently in the post-holocaust situation described in A Canticle for Leibowitz and After Virtue, but the catastrophe was bureaucratic, rather than violent. So, the tradition has indeed been broken. However, for as long as the fact is known that the tradition has been broken, and representatives of the tradition are still alive and active, there still exists a remote possibility that the tradition could be revived.
***
After science: has the tradition been broken?
Imagine that the natural sciences were to suffer the effects of a catastrophe. A series of environmental disasters are blamed by the general public on the scientists. Widespread riots occur, laboratories are burnt down, physicists are lynched, books and instruments are destroyed. Finally a know-nothing political movement takes power and successfully abolishes science teaching in schools and universities, imprisoning and executing the remaining scientists. Later still there is a reaction against this destructive movement and enlightened people seek to revive science, although they have largely forgotten what it was. But all that they possess are fragments: a knowledge of experiments detached from any knowledge of the theoretical context which gave them significance; parts of theories unrelated either to the other bits and pieces of theory which they possess or to experiment; instruments whose use has been forgotten; half-chapters from books, single pages from articles, not always fully legible because torn and charred. Nonetheless all these fragments are re-embodied in a set of practices which go under the revived names of physics, chemistry and biology. Adults argue with each other about the respective merits of relativity theory, evolutionary theory and phlogiston theory, although they possess only a very partial knowledge of each. Children learn by heart the surviving portions of the periodic table and recite as incantations some of the theorems of Euclid. Nobody, or almost nobody, realizes that what they are doing is not natural science in any proper sense at all. For everything that they do and say conforms to certain canons of consistency and coherence and those contexts which would be needed to make sense of what they are doing have been lost, perhaps irretrievably.
From After Virtue – Alasdair MacIntyre [1]
The classic science fiction novel A Canticle for Leibowitz by Walter Miller [2] portrays a post-nuclear-holocaust world in which the tradition of scientific practice – previously handed-down from one generation of scientists to the next – has been broken. Only a few scientific artefacts remain, such as fragments of electronic equipment. It turns out that after the tradition has been broken, the scientific artefacts make no sense and are wildly misinterpreted. For instance a blueprint is regarded as if it were a beautiful illuminated manuscript, and components such as diodes are regarded as magical talismans or pills.
I will argue that modern science may have entered a similar state in which, for the majority of professional scientists, the artefacts of science remain – such as the academic hierarchy, laboratory techniques and machines, statistical methods, and the peer review mechanism – but understanding of what these mean has apparently been lost, raising the question: has the tradition been broken?
A theme associated with philosophers such as Polanyi [3] and Oakeshott [4] is that explicit knowledge – such as is found in textbooks and scientific articles – is only a selective summary, and that the most important capability derives from implicit, traditional or ‘tacit’ knowledge. It is this un-articulated knowledge that leads to genuine human understanding of the natural world, accurate prediction and the capacity to make effective interventions.
Tacit knowledge is handed on between- and across-generations by slow, assimilative processes which require extended, relatively unstructured and only semi-purposive human contact. What is being transmitted and inculcated is an over-arching purpose, a style of thought, a learned but then spontaneous framing of reality, a sense of how problems should be tackled, and a gut-feeling for evaluating the work of oneself, as well as of others.
This kind of process was in the past achieved by such means as familial vocations, prolonged apprenticeship, co-residence and extended time spent in association with a Master – and by the fact that the Master and apprentice personally selected each other. The pattern was seen in all areas of life where independence, skill and depth of knowledge were expected: crafts, arts, music, scholarship – and science.
Although such methods sound a bit mysterious, not to say obscurantist, to modern ears – in fact they are solid realism and common sense. Such methods for ensuring the transmission of subtle knowledge recognize the gulf between being able to do something and knowing how you have done it; and the further gap between knowing how you have done something and being able to teach it by explicit instructions.
Such systems as apprenticeship recognize that the most important aspects of knowledge may be those which are not known or understood to be the most important, or may even be in opposition to that which is believed or supposed to be important. The educational ‘method’ was that an apprentice should spend a lot of time with the Master in many situations; and as for educational evaluation, the best way for a Master to know that his skill really has been passed on is for him to spend a lot of time with the apprentice in many situations.
Imperfect as it inevitably was, apprenticeship maintained and often improved traditions over centuries – and it was regarded as the safest and surest way of ensuring that the knowledge and skills could be sustained and developed.
However, priorities have changed. The preservation and development of high-level human skills and expertise is no longer regarded as a priority to which many other concerns must inevitably be subordinated. And the ‘Master–apprentice’ model of education, which stretches back in human history as far as we know, has been all but discarded from science (and much of mainstream culture) over recent decades. Indeed the assumptions have been reversed.
It is important to recognize that the discarding of traditions of apprenticeship and prolonged human contact in science was not due to any new discovery that apprenticeship was – after all – unnecessary, let alone that the new bureaucratic systems of free-standing explicit aims and objectives, summaries and lists of core knowledge and competencies etc. were superior to apprenticeship. Indeed there is nothing to suggest that they are remotely the equal of apprenticeship. Rather, the Master–apprentice system has been discarded despite the evidence of its superiority; and has been replaced by the growth of bureaucratic regulation.
The main reason is probably that scientific manpower, personnel or ‘human resources’ (as they are now termed) have expanded vastly over the past 60 years – probably about tenfold. So there was no possibility of such rapid and sustained quantitative expansion (accompanied, almost-inevitably, by massive decline in average quality) being achieved using the labour-intensive apprenticeship methods of the past. The tradition was discarded because it stood in the path of the expansion of scientific manpower.
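A rough arithmetic check helps fix the scale of that claim (taking the essay's own estimates at face value: the tenfold multiple and the 60-year span are rough figures, not measured data). Tenfold growth over 60 years implies a compound annual growth rate of about 3.9 per cent, which means the scientific workforce doubling roughly every 18 years. A minimal sketch of the calculation, in Python:

    import math

    # Back-of-envelope only: the tenfold multiple and the 60-year span are
    # the essay's own rough estimates, not measured data.
    multiple = 10   # assumed overall growth in scientific manpower
    years = 60      # assumed period of expansion

    annual_rate = multiple ** (1 / years) - 1                 # compound annual growth rate
    doubling_time = math.log(2) / math.log(1 + annual_rate)   # years per doubling

    print(f"annual growth ~{annual_rate:.1%}, doubling every ~{doubling_time:.0f} years")
    # prints: annual growth ~3.9%, doubling every ~18 years

Even at that modest-sounding annual rate, each generation of scientists is roughly twice the size of the one that trained it, which makes vivid why a labour-intensive, one-Master-to-a-few-apprentices system could not keep pace.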
Among the mass of mainstream professional scientists, science – as a distinctive mode of human enquiry – now has no meaning whatsoever. Among these same scientists, who dominate the social system of science both in terms of power and numbers, the resolution of scientific disputes and disagreements is a matter of power, not reason – and relevant ‘evidence’ is narrowly restricted to bureaucratically-enforced operational variables. The tradition seems to have been broken.
I first observed this when I worked in epidemiology, where I realized that most epidemiologists did not understand science and were not scientists – but they did not realize it [5]. They believed that what they did was science, since it had many of the explicit characteristics of science: it involved making measurements and doing statistics, it was accepted as science by many other people, and (most importantly!) epidemiology got funded as science. But most epidemiology was not science, as any real scientist could easily recognize – it was no more science than the work of market researchers with clipboards questioning pedestrians on the high street. I saw a similar picture in almost all of the vast amount of ‘functional brain imaging’ which was the dominant and most prestigious type of neuroscience. And again in the people who were mapping the average human genome – then (presumably) going on to map the genome of every individual human, then perhaps of every creature on the planet?
As Jacob Bronowski once remarked: science is not a loose-leaf folder of ‘facts’; not the kind of thing which can be expanded ad infinitum, simply by the iterative addition of ever-more observations. Science is instead the creation of structured knowledge, with the emphasis on structure [6]. The modern scientific literature is ballooning exponentially with published stuff and ever-inflated claims about its significance – but, lacking structure, this malignantly-expanding mass adds up to less and less. Meanwhile, understanding, prediction and the ability to intervene on the natural world to attain pre-specified objectives all dwindle; because real science is a living tradition, not a dead archive.
The younger generation of scientists are like children who have been raised by wolves. They have learned the techniques but have no feel for the proper aims, attitudes and evaluations of science. What little culture they have comes not from science but from bureaucrats: they utterly lack scientific culture; they do not talk science, instead they spout procedures.
It has now become implicitly accepted among the mass of professional ‘scientists’ that the decisions which matter most in science are those imposed upon science by outside forces: by employers (who gets the jobs, who gets promotion), funders (who gets the big money), publishers (who gets their work into the big journals), bureaucratic regulators (who is allowed to do the work), and the law courts (whose ideas get backed up, or criminalized). It is these bureaucratic mechanisms that constitute ‘real life’ and the ‘bottom line’ for scientific practice. The tradition has been broken.
A minority of young scientists have, by dedication or luck, absorbed the tradition of real science, yet because their wisdom is tacit and is not shared by the majority of the bureaucratically-minded, they will almost-inevitably be held back from status and excluded from influence. It is bureaucracy that now controls ‘science’, and that which bureaucracy cannot or will not acknowledge might as well not exist, so far as the direction of ‘science’ is concerned.
Most of modern science is therefore apparently in pretty much the post-holocaust situation described in A Canticle for Leibowitz and After Virtue – the transmission of tacit knowledge has been broken. But the catastrophe was bureaucratic, rather than violent – and few seem to have noticed the scale of destruction.
But, it might be asked: supposing the tradition had indeed been broken, how would we know it? After all, the point of MacIntyre’s and Miller’s fables is that when a tradition is broken, people do not realize it. The answer is that, at the moment, we do know that the tradition has been broken – but this knowledge is on the verge of extinction.
The sources of evidence are at least fourfold. If we judge the rate of scientific progress by individualistic common-sense criteria (rather than bureaucratic indices), it is obvious that the rate of progress has declined in at least two major areas: physics and medical research [7], [8] and [9]. Furthermore there has been a decline in the number of scientific geniuses, which is now near zero [10]. If geniuses are vital to overall scientific progress, then progress probably stopped a while ago [11].
In addition, the actual practice of science has been transformed profoundly [12] – the explicit aims of scientists, their truthfulness, what scientists do on a day-to-day basis, the procedures by which their work is evaluated… all of these have changed so much over the past 50 years that it is reasonable to conclude that science now performs an almost completely different function than it did 50 years ago. After all, if modern science neither looks nor quacks like a duck, why should we believe it is a duck? Just because science keeps the same name does not mean it is the same thing, when almost everything about it has been transformed!
And finally, we might believe that the tradition has been broken because this has been a frequently implicit, sometimes explicit, theme of some of the most original and informed scientists for several decades: from Feynman and Crick through to Brenner – take your pick. It seems to me that they have for many years been warning us that science was on the wrong track, and the warnings have not been heeded.
So: the tradition has been broken. However, for as long as the fact is known that the tradition has been broken, and representatives of the tradition are still alive and active, there still exists a remote possibility that the tradition could be revived.
Acknowledgement
Some of these ideas emerged in conversations with Jonathan Rees, and quite a few were derived from him.
References
[1] A. MacIntyre, After virtue: a study in moral theory, Duckworth, London (1981).
[2] W.M. Miller, A Canticle for Leibowitz, Weidenfeld and Nicolson, London (1960).
[3] M. Polanyi, Personal knowledge: towards a post-critical philosophy, University of Chicago Press, Chicago, USA (1958).
[4] M. Oakeshott, Rationalism in politics and other essays, Methuen, London (1962).
[5] B.G. Charlton, Should epidemiologists be pragmatists, biostatisticians or clinical scientists?, Epidemiology 7 (1996), pp. 552–554.
[6] J. Bronowski, Science and human values, Harper Colophon, New York (1975).
[7] L. Smolin, The trouble with physics, Penguin, London (2006).
[8] D.F. Horrobin, Scientific medicine – success or failure? In: D.J. Weatherall, J.G.G. Ledingham and D.A. Warrell, Editors, Oxford textbook of medicine (2nd ed.), Oxford University Press, Oxford (1987), pp. 2.1–2.3.
[9] B.G. Charlton and P. Andras, Medical research funding may have over-expanded and be due for collapse, QJM 98 (2005), pp. 53–55.
[10] B.G. Charlton, The last genius? – reflections on the death of Francis Crick, Med Hypotheses 63 (2004), pp. 923–924.
[11] C. Murray, Human accomplishment. The pursuit of excellence in the arts and sciences 800 BC to 1950, HarperCollins, New York (2003).
[12] J. Ziman, Real science, Cambridge University Press, Cambridge, UK (2000).