Some influential papers from the history of Medical Hypotheses
I believe that a journal editor should be ‘agnostic’ about the truth of the papers he publishes, since truth in science is not something that editors ought to guess, but should only be determined after publication by evaluation and testing within the wider scientific community.
There is, at present, no objective method of evaluating a journal's relative or unique influence either quantitatively or qualitatively. To do this would require a great investment of intelligence, time and resources.
After all, papers published in very high-impact journals like Nature, Science and PNAS would – if rejected from one of these – almost certainly have been published in another of them, or elsewhere in a specialist journal of similar impact; and if this had happened the same paper may well have had an identical impact.
What is hard to get at is the _distinctive_ contribution of a _specific_ journal. At present, the best available method may be biographical: asking scientists their opinion about the importance of particular papers in particular journals: http://medicalhypotheses.blogspot.com/2010/02/medical-hypotheses-authors-letters-of.html
However the crude influence of publications can sometimes be estimated using citation analysis, which counts the number of times a paper has been listed in the reference sections of other scientists' publications.
Citations build up over several years, so citation analysis is more reliable for older papers. But citations tend to reward mainstream 'methodological' papers, and it is hard to estimate the importance of 'ideas' papers such as hypotheses and theories; especially since scientists do not feel obliged to cite the sources of their ideas even when they can remember them (whereas, by contrast, scientists must cite the source of any empirical data on which their own research depends).
Bearing in mind these methods and caveats, I have compiled a short list of some of the papers from Medical Hypotheses which seem to have been most influential.
There were just two editors of Medical Hypotheses in its 35-year history:
1975-2003 – David L Horrobin as Editor
In the early days of Medical Hypotheses many of the papers reflected the first Editor’s interest in nutritional topics; and Medical Hypotheses published many ideas that helped launch some of today’s mainstream ideas about diet, such as the benefits of supplementation with ‘omega’ fatty acids and antioxidants.
In 1985 AJ Verlangieri and others outlined the now widely-accepted idea that eating plenty of fruit and vegetables helps prevent heart disease in a widely quoted paper: “Fruit and vegetable consumption and cardiovascular mortality”.
Through the 1980s in Medical Hypotheses, freelance US scientist Mark F McCarty was publishing many of the early and influential papers about the importance of antioxidants in the diet, and their possible role in preventing disease. Over some three decades McCarty has published more papers in Medical Hypotheses than anyone else, and together these papers have been cited thousands of times in the scientific literature.
In 1987 the Medical Hypotheses founding editor David Horrobin published a frequently-referenced paper on the ‘omega-3’ type of essential fatty acid, which so many people now use as dietary supplements: “Low prevalences of coronary heart disease (CHD), psoriasis, asthma and rheumatoid arthritis in Eskimos: Are they caused by high dietary intake of eicosapentaenoic acid (EPA), a genetic variation of essential fatty acid (EFA) metabolism or a combination of both?”
In 1985, Clouston and Kerr published in Medical Hypotheses an influential paper called “Apoptosis, lymphocytotoxicity and the containment of viral infections”. This first described the now widely accepted idea that viruses may be fought by inducing suicide in virus-infected cells.
The most widely cited paper in Medical Hypotheses was published in 1991: “The macrophage theory of depression” by RS Smith. This is a key paper which argues that immune system chemicals may be a major cause of depression, and it has been cited 242 times according to Google Scholar.
2004-2010 Bruce G Charlton as Editor
Here are some recent papers under my editorship which have already had an impact:
In 2005, Lola Cuddy and Jackie Duffin of Queen’s University, Canada, published an influential paper in Medical Hypotheses based on an elderly lady with severe Alzheimer’s disease who still retained the ability to recognize music. They theorized that this might provide useful information on the nature of brain damage in Alzheimer’s, and suggested that dementia sufferers might benefit from a more musical environment. This paper was awarded the David Horrobin Prize for 2005 for the paper in Medical Hypotheses which best exemplified the intentions of the founding editor; the famous Cambridge transplant surgeon Sir Roy Calne was the judge.
In “A tale of two cannabinoids” by E Russo & GW Guy, from 2006, the authors presented the rationale for using a combination of the cannabis constituents tetrahydrocannabinol (THC) and cannabidiol (CBD) as painkilling drugs and for the treatment of several other medical conditions. This idea has since been widely discussed in the scientific literature.
In 2005 Eric Altschuler published a letter in Medical Hypotheses outlining his idea that survivors of the 1918 flu epidemic might even now retain immunity to the old virus. A few 1918 flu survivors were found who still had antibodies, and cells from these people were cloned to create an antiserum that protected experimental mice against the flu virus. The work was eventually published in Nature and received wide coverage in the media.
Tuesday, 13 April 2010
The Cancer of Bureaucracy
Bruce G Charlton
The cancer of bureaucracy: how it will destroy science, medicine, education; and eventually everything else
Medical Hypotheses - 2010; 74: 961-5.
Summary
Everyone living in modernizing ‘Western’ societies will have noticed the long-term, progressive growth and spread of bureaucracy infiltrating all forms of social organization: nobody loves it, many loathe it, yet it keeps expanding. Such unrelenting growth implies that bureaucracy is parasitic and its growth uncontrollable – in other words it is a cancer that eludes the host immune system. Old-fashioned functional, ‘rational’ bureaucracy that incorporated individual decision-making is now all-but extinct, rendered obsolete by computerization. But modern bureaucracy evolved from it, the key ‘parasitic’ mutation being the introduction of committees for major decision-making or decision-ratification. Committees are a fundamentally irrational, incoherent, unpredictable decision-making procedure; which has the twin advantages that it cannot be formalized and replaced by computerization, and that it generates random variation or ‘noise’ which provides the basis for natural selection processes. Modern bureaucracies have simultaneously grown and spread in a positive-feedback cycle; such that interlinking bureaucracies now constitute the major environmental feature of human society which affects organizational survival and reproduction. Individual bureaucracies must become useless parasites which ignore the ‘real world’ in order to adapt to rapidly-changing ‘bureaucratic reality’. Within science, the major manifestation of bureaucracy is peer review, which – cancer-like – has expanded to obliterate individual authority and autonomy. There has been local elaboration of peer review and metastatic spread of peer review to include all major functions such as admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing and the award of prizes. Peer review eludes the immune system of science since it has now been accepted by other bureaucracies as intrinsically valid, such that any residual individual decision-making (no matter how effective in real-world terms) is regarded as intrinsically unreliable (self-interested and corrupt). Thus the endemic failures of peer review merely trigger demands for ever-more elaborate and widespread peer review. Just as peer review is killing science with its inefficiency and ineffectiveness, so parasitic bureaucracy is an un-containable phenomenon; dangerous to the extent that it cannot be allowed to exist unmolested, but must be utterly extirpated. Or else modernizing societies will themselves be destroyed by sclerosis, resource misallocation, incorrigibly-wrong decisions and the distortions of ‘bureaucratic reality’. However, unfortunately, social collapse is the more probable outcome, since parasites can evolve more rapidly than host immune systems.
***
Everyone in modernizing ‘Western’ societies (roughly the USA, UK, Western and Central Europe) will, no doubt, have noticed that there has been a long-term, progressive growth and spread of bureaucracy. Except during major wars, this has not been a matter of pendulum swings, with sometimes less and sometimes more bureaucracy, but rather one of relentless overall expansion – albeit sometimes faster and at other times slower.
The bureaucratic takeover applies to science, medicine, education, law, police, the media – indeed to almost all social functions. Such unrelenting growth implies either that 1. Bureaucracy is vital to societal functioning and the more bureaucracy we have the better for us; or that 2. Bureaucracy is parasitic and its growth is uncontrollable. Since the first alternative has become obviously absurd, I am assuming the second alternative is correct: that bureaucracy is like a cancer of modernizing societies – i.e. its expansion is malignant and its effect is first parasitic, then eventually fatal.
While it is generally recognized that modern societies are being bled-dry by the expense, delays, demoralization and reality-blindness imposed by multiple expanding and interacting bureaucracies, it is not properly recognized that bureaucratic decision-making is not merely flawed by its expense and sluggishness but also by its tendency to generate wrong answers. Modern bureaucracy, indeed, leads to irrational and unpredictable decisions; to indefensible decisions which are barely comprehensible, and cannot be justified, even by the people directly involved in them.
In what follows, I will make a distinction between, on the one hand, Weberian, functional, ‘rational’ bureaucracy which (in its ideal type, as derived from the work of Max Weber, 1864-1920) incorporated individual decision-making and was evaluated externally in terms of results and efficiency; and, on the other hand, modern ‘parasitic’ bureaucracy which (in its ideal type) deploys majority-vote committees for its major decision-making, is orientated purely towards its own growth, and which – by means of its capacity to frame ‘reality’ – has become self-validating.
I will argue that parasitic bureaucracy evolved from rational bureaucracy in response to the rapidly changeable selection pressures imposed by modern society, especially the selection pressure from other bureaucracies having constructed an encompassing, virtual but dominant system of ‘bureaucratic reality’; and that the system of rational bureaucracy is by now all-but extinct – having been rendered obsolete by computerization.
The problem of parasitic bureaucracy
It is a striking feature of modern bureaucracy that nobody loves it, many loathe it (even, or especially, the bureaucrats themselves), yet it keeps growing and spreading. One reason is that bureaucracy is able to frame reality, such that the more that bureaucracy dominates society, the more bureaucracy seems to be needed; hence the response to any bureaucracy-generated problem is always to make more and bigger bureaucracies. It is this positive-feedback system which is so overwhelming. Mere human willpower is now clearly inadequate to combat bureaucratic expansionism. Bureaucracy has become like The Borg on Star Trek: The Next Generation: it feeds upon and assimilates opposition.
Bureaucracies are indeed no longer separable but form a linked web; such that to cut one bureaucracy seems always to imply another, and larger, bureaucracy to do the cutting. When the dust has settled, it is invariably found that the total sum and scope of societal bureaucratic activity has increased. And it is well recognized that modern bureaucracies tend to discourse about, but never to eradicate, problems – it is as if the abstract bureaucratic system somehow knew that its survival depended upon continually working on, but never actually solving, problems... Indeed, ‘problems’ seldom even get called problems nowadays, since problems imply the need and expectation for solutions; instead problems get called ‘issues’, a term which implies merely the need to ‘work on’ them indefinitely. To talk in terms of solving problems is actually regarded as naïve and ‘simplistic’; even when, as a matter of record, these exact same problems were easily solved in the past.
Over much of the world, public life is now mostly a matter of ‘bureaucracy speaking unto bureaucracy’. Observations and opinions from individual humans simply don’t register – unless, of course, individual communications happen to provide inputs which bureaucracies can use to create more regulations, more oversight, hence create more work for themselves. So individual complaints which can be used to trigger bureaucratic activity may be noted and acted-upon, or personal calls for more bureaucratic oversight may be amplified, elaborated and implemented. But anything which threatens the growth and spread of bureaucracy (i.e. anything simple that is also worryingly swift, efficient or effective) is ignored; or in extremis attacked with lethal intent.
The main self-defence of modern bureaucracy, however, is to frame reality. Since bureaucracies now dominate society, that which bureaucracies recognize and act upon is ‘reality’; while that which bureaucracies do not recognize does not, for practical purposes, exist. Bureaucracy-as-a-system therefore constructs a 'reality' which is conducive to the thriving of bureaucracy-as-a-system.
When a powerful bureaucracy does not recognize a communication as an input, then that communication is rendered anecdotal and irrelevant. Information which the bureaucracy rejects takes on an unreal, subjective quality. Even if everybody, qua individual, knows that something is real and true, it becomes possible for modern bureaucracy implicitly to deny that thing's existence simply by disregarding it as an input, and instead responding to different inputs that are more conducive to expansion – which are then rendered more significant and 'realer' than actual reality.
For many people, the key defining feature of a bureaucracy (as described by Weber) is that ideally it is an information-processing organization that has established objective procedures which it implements impartially. It is these quasi-mechanical procedures which are supposed to link aims to outcomes; and to ensure that, given appropriate inputs, a bureaucracy almost-automatically generates predictable and specific outputs and outcomes.
However modern bureaucracies do not work like that. Indeed, such has been the breakdown in the relationship between input and output that modern bureaucracies devote immense resources to change pure-and-simple; for example continually changing the recognition of input measures (i.e. continually redefining 'reality'), re-defining the organization’s mission and aims (i.e. rendering the nature of the organization different-from and incommensurable-with the past organization), and repeatedly altering the organizational outcomes regarded as relevant (thereby making any decline in the efficiency of the organization formally un-measurable).
Such change may be externally- or internally-triggered: either triggered by the external demands of other bureaucracies which constitute the organizational environment, or triggered by the innate noise-generating tendencies of committees.
With endlessly-altering inputs, processes and outputs, bureaucratically-dominated organizations are impossible to critique in terms of functionality: their effectiveness is impossible to measure, and if or when they may be counter-productive (in terms of their original real world purpose) this will also be unknowable. Individual functional organizations disappear and all bureaucracies blend into a Borg-like web of interdependent growth.
The nature of bureaucracy: rational versus parasitic
What is bureaucracy? The traditional definition emphasises that bureaucracy entails a rational human organization which is characterized by hierarchy and specialization of function, and that the organization deploys explicit procedures or regulations that are impartially administered by the personnel. A rational ‘Weberian’ bureaucracy was probably, on the whole, performing a useful function reasonably efficiently – in other words its effectiveness was perceived in terms of externally-pre-decided criteria, and its growth and spread were circumscribed.
In medical terms, Weberian bureaucracy was therefore – at worst - a benign tumour; potentially able to overgrow locally and exert pressure on its surroundings; but still under control from, and held in check by, the larger host organism of society.
But, just as cancers usually evolve from benign precursors, so it was that modern parasitic and useless bureaucracies evolved from the rational and functional bureaucracies of an earlier era. Probably the key trigger factor in accelerating the rate of this evolution has been the development of computers, which have the potential to do – almost instantly, and at near zero cost – exactly the kind of rational information processing which in the past could only be done (much more slowly, expensively, and erratically) by Weberian bureaucracy. My contention is that large scale rational, functional bureaucracies are now all-but extinct, destroyed by computerization.
I assume that, when rational bureaucracy was facing extinction from computerization, there was a powerful selection pressure for the evolution of new forms of irrational bureaucracy – since rational procedures could be converted into algorithms, formalized and done mechanically; while irrational procedures were immune from this competition.
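To make the point about formalizability concrete, here is a minimal, purely hypothetical sketch (in Python, with invented names and thresholds – nothing from the paper): a Weberian rule of the form ‘grant X when conditions A, B and C are met’ reduces directly to a deterministic function, exactly the kind of explicit, impartial procedure a computer applies faster and more consistently than any clerk.

```python
from dataclasses import dataclass

@dataclass
class Application:
    # Hypothetical inputs to a Weberian, rule-based procedure
    age: int
    years_of_service: int
    documents_complete: bool

def eligible_for_pension(app: Application) -> bool:
    """A purely 'rational' bureaucratic rule: explicit, impartial, and
    therefore trivially formalizable as an algorithm."""
    return app.age >= 65 and app.years_of_service >= 10 and app.documents_complete

print(eligible_for_pension(Application(age=67, years_of_service=12, documents_complete=True)))   # True
print(eligible_for_pension(Application(age=67, years_of_service=12, documents_complete=False)))  # False
```

No such reduction is possible for a majority-vote committee, whose output depends on who happens to turn up and how the group dynamics play out on the day – which is precisely the ‘immunity’ from computerization described above.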
The outcome is that, despite retaining a vast structure of procedure and regulation, and the organizational principles of hierarchy and specialization, those powerful modern bureaucracies that survived the challenge of computerization and are still alive and growing nowadays are non-rational in their core attributes. Irrationality is indeed an essential aspect of a modern bureaucracy’s ability to survive and thrive. Those bureaucracies which remain and are expanding in this post-computerization era are neither rational nor functional.
This evolution towards pure parasitism – with no performance of a substantive real-world function - is only possible because, for any specific bureaucracy, its relevant environment now substantially consists of other bureaucracies. It is 'other bureaucracies' that are the main selection pressure: other bureaucracies pose the main threat to survival and reproduction. A modern bureaucracy therefore must respond primarily to ‘bureaucratic reality’ – and any engagement with ‘real life’ (e.g. life as it is perceived by alert and informed individual human beings) simply stands in the way of this primary survival task.
So, the best adapted modern bureaucracies are those which most efficiently play the game of satisfying the constantly- and rapidly-changing requirements of other major bureaucracies. Success brings expansion by local growth and metastatic spread. By contrast, satisfying the stable requirements of ‘real life’ and human nature brings a bureaucracy little or no reward, and a greater possibility of extinction from the actions of other bureaucracies.
The role of committees in the evolution of bureaucracy
I will argue that the major mechanism by which irrationality has been introduced into bureaucracies is the committee which makes decisions by majority voting.
Committees now dominate almost all the major decision-making in modernizing societies – whether in the mass committee of eligible voters in elections, or such smaller committees as exist in corporations, government or in the US Supreme Court: it seems that modern societies always deploy a majority vote to decide or ratify all questions of importance. Indeed, it is all-but-inconceivable that any important decision be made by an individual person – it seems both natural and inevitable that such judgments be made by group vote.
Yet although nearly universal among Western ruling elites, this fetishizing of committees is a truly bizarre attitude; since there is essentially zero evidence that group voting leads to good, or even adequate, decisions – and much evidence that group voting leads to unpredictable, irrational and bad decisions.
The nonsense of majority voting was formally described by the Nobel economics laureate Kenneth Arrow (1921-) in the early 1950s, but it is surely obvious to anyone who has had dealings with committees and maintains independent judgement. It can be demonstrated using simple mathematical formulations that a majority vote may lead to unstable cycles of decisions, or to a decision which not one single member of the committee would regard as optimal. For example, in a job appointments panel, it sometimes happens that two strong candidates split the panel, so the winner is a third-choice candidate whom no panel member would regard as the best; in other words, any individual panel member would make a better choice than emerges from majority voting.
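The classic voting cycle can be reproduced in a few lines of Python (a minimal illustration with three hypothetical panel members, not taken from Arrow’s own formulation): each member holds a perfectly coherent individual ranking of candidates A, B and C, yet the pairwise majority votes run in a circle, so the group as a whole has no stable ‘best’ choice.

```python
from itertools import combinations

# Three hypothetical panel members, each with a coherent individual ranking (best first).
ballots = [
    ["A", "B", "C"],   # member 1 prefers A > B > C
    ["B", "C", "A"],   # member 2 prefers B > C > A
    ["C", "A", "B"],   # member 3 prefers C > A > B
]

def pairwise_winner(x, y, ballots):
    """Return the candidate preferred by a simple majority in a head-to-head vote."""
    votes_x = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if votes_x > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, ballots)}")

# Output:
#   A vs B: majority prefers A
#   A vs C: majority prefers C
#   B vs C: majority prefers B
# A beats B, B beats C, yet C beats A: the committee's collective preference
# cycles, even though every individual ranking is internally consistent.
```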
Furthermore, because of this type of phenomenon, and because majority decisions do not necessarily reflect any individual's opinion, committee decisions carry no responsibility. After all, how could anyone be held responsible for outcomes which nobody intended and to which nobody agrees? So committees exert de facto power without responsibility. Indeed most modern committees are composed of a variable selection from a pool of eligible personnel, so that the same committee may never contain the same personnel twice. The charade is kept going by the necessary but meaningless fiction of ‘committee responsibility’, maintained by the enforcement of a weird rule that committee members must undertake, in advance of decisions, to abide by whatever outcome (however irrational, unpredictable, unjustified and indefensible) the actual contingent committee deliberations happen to lead to. This near-universal rule and practice simply takes ‘irresponsibility’ and re-names it ‘responsibility’…
Given that committee decisions are neither rational nor coherent, and are therefore radically unpredictable, what is their effect? In a nutshell, committees – overall and in the long term – generate random ‘noise’. Committees almost certainly increase the chances that a decision is wrong, but overall they probably do not produce any consistent bias in the direction of wrongness. While some committees using some procedures are biased in one direction, others are biased in other directions; in the end, the only thing we can be sure about is that committees widen the range of unpredictability of decisions.
Now, what is the role of randomness in complex systems? The answer is that random noise provides the variations upon which selection processes act. For example, in biology the random errors of genetic replication provide genetic variation, which affects traits that are then subjected to natural selection. So it seems reasonable to infer that committees generate random changes producing variations in organizational characteristics, which are then acted upon by selection mechanisms. Some organizational variations are amplified and thrive, while other variations are suppressed and dwindle. Overall, this enables bureaucracies rapidly to evolve – to survive, to grow and to spread.
How much random noise is needed in a bureaucracy (or any evolving system)? The short answer is that the stronger the selection pressure, and the greater the necessity for rapid evolution, the more noise is needed; bearing in mind the trade-off by which an increased error rate in reproduction also reduces the ability of an evolving system accurately to reproduce itself. A system under strong selection pressure (e.g. a bureaucracy in a rapidly-changing modernizing society) tends to allow or generate more noise, creating a wider range of variation for selection to act upon and thereby enabling faster evolution – at the expense of less exact replication. By contrast, a system under weaker selection pressure (such as the Weberian bureaucracies of the early 20th century – for instance the British Civil Service) has greater fidelity of replication (less noise), but at the expense of a reduced ability to change rapidly in response to changing selection pressures.
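This trade-off can be made concrete with a toy simulation (a minimal sketch of my own, with arbitrary assumed parameters – not part of the original argument): a population of bit-strings is selected to match a target, and the target either stays fixed (a stable environment) or shifts repeatedly (a rapidly-changing environment). Under these toy settings, a low ‘mutation rate’ typically tracks a stable target with high fidelity but adapts poorly when the target keeps moving, while a high mutation rate adapts faster to shifts at the cost of less exact replication.

```python
import random

random.seed(1)

def evolve(mutation_rate, shift_generations, genome_len=20, pop_size=50, generations=40):
    """Toy model: bit-string population selected to match a (possibly moving) target.
    Returns mean fitness (fraction of matching bits) at the end. Illustrative only."""
    target = [0] * genome_len
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    fitness = lambda g: sum(1 for a, b in zip(g, target) if a == b)
    for gen in range(generations):
        if gen in shift_generations:          # the selection pressure changes abruptly
            target = [random.randint(0, 1) for _ in range(genome_len)]
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]        # keep the best 20% ('strong selection')
        pop = [[bit ^ 1 if random.random() < mutation_rate else bit
                for bit in random.choice(parents)]
               for _ in range(pop_size)]      # offspring = mutated copies ('noise')
    return sum(fitness(g) for g in pop) / (pop_size * genome_len)

for rate in (0.01, 0.10):                     # low-noise vs high-noise system
    stable = evolve(rate, shift_generations=set())
    changing = evolve(rate, shift_generations={10, 20, 30})
    print(f"mutation rate {rate:.2f}: stable environment {stable:.2f}, "
          f"changing environment {changing:.2f}")
```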
I am saying here that committees using majority voting are responsible for the evolution of malignant bureaucratic growth in modern bureaucracies, and that this is why majority-vote decision-making permeates modern societies from the top to the bottom.
Although almost all major decision-making in the ‘Western’ world is now by majority voting there may be two significant exceptions: firstly military decision-making in time of war; secondly the personal authority of the Pope in the Roman Catholic Church. In both these types of organization there seems to be a greater emphasis on individual decision-making than on committee voting. Military command structures and the Roman Catholic hierarchy are therefore probably both closer to the ideal type of a Weberian rational bureaucracy than to the ideal type of a modern parasitic bureaucracy.
If so, the only major exceptions to majority rule decision-making at a world level, and probably not by coincidence, are the oldest and longest-enduring bureaucratic structures: that is, organizations which have retained functionality and have not themselves been destroyed by bureaucratic cancer.
Why are there committees at all?
Although they may nowadays be almost wholly damaging, committees cannot in their origins have been entirely useless or harmful; or else the form would never have survived its first appearance. If we acknowledge that individuals have the potential for better (i.e. more rational and coherent) decision-making than committees, then the decline of individual decision-making must be due not to any lack of advantages so much as to the perceived problems of individual decision-making.
The problems of individual decision-making are the same as the problems of individual power: in essence these problems are self-interest (i.e. the observation that power will be deployed differentially to benefit the power-holder) and corruption (i.e. the observation that over time power corrupts, making the individual a progressively worse decision-maker until he is not merely self-interested but driven mad: power-mad).
Since humans are self-centred beings living in an imperfect world, all individuals tend to be both self-interested and corruptible (albeit to widely-varying degrees!). Of course, self-interest and corruptibility apply equally to people 'serving' on committees – each of whom is wielding lesser but anonymous and irresponsible power. Nonetheless, it seems to me that committees are mostly favoured because they are seen as a solution to these intrinsic problems of individual power. The implicit assumption is that when a committee is run by majority voting then individual self-interests will cancel out. Furthermore, since power is spread around more people on a committee, the inevitably corrupting effect of power will be similarly diluted.
In reality, committees mostly solve the problems of power to the extent that they reduce the effective deployment of power. So that, if committees are indeed less self-interested and less prone to corruption than individuals, this is achieved mainly because the committee structure and procedures make decision-making so unpredictable and incoherent that committees are rendered ineffective: ineffective to such an extent that committees cannot even manage consistently to be self-interested or corrupt! Therefore, the problems of power are ‘solved’, not by reducing the biases or corruptions of power, but simply by reducing the effectiveness of power; by introducing inefficiencies and obscuring the clarity of self-interest with the labile confusions of group dynamics. Power is not controlled but destroyed…
Therefore, if committees were introduced to reduce the abuse of power, then instead of achieving this, their actual outcome is that committees reduce power itself, and society is made docile when confronted by significant problems which could be solved, but are not. And surely this is precisely what we observe in the West, on an hourly basis?
Committee-based bureaucracy is predicated on an ethic of power as evil: it functions as a sort of unilateral disarmament that would be immediately obvious as self-defeating or maladaptive unless arising in a context of already-existing domination. And a system of committee-based bureaucracy can only survive for as long as its opponents can be rendered even weaker by even more virulent affliction with the same disease: which perhaps explains the extraordinarily venomous and dishonest pseudo-moralizing aggression which committee bureaucracy adopts towards other simpler, more-efficient or more-effective organizational systems that still use individual decision-making.
If we assume that committees were indeed introduced as a purported solution to (real or imagined, actual or potential) abuses of individual power, then committees will usually achieve this goal – and so long as the quality of decision-making is ignored, committees seem to be successful. Committees can therefore be seen as a typical product of one-sided and unbalanced moralism that has discarded the Aristotelian maxim of moderation in all things. Bureaucracy adopts instead a unilateral moralism which aims at the complete avoidance of one kind of sin, even at the cost of falling into a contrasting kind of sin (so pride is avoided by encouraging submission, and aggression is avoided by imposing sloth).
However, the subject of ‘trade-offs’ is avoided; and the inevitable self-created problems of single-issue moral action are instead fed upon by bureaucracy, leading (of course!) to further expansion.
Hence, modern decision-making means that societal capability has declined in many areas. It has become at best slow and expensive, and at worst impossible, to achieve things which were done quickly, efficiently and effectively under systems based on individual decision-making. To avoid the corruption of individual authority, society has been rendered helpless in the face of threats which could have been combated.
Bureaucracy in science – the cancer of peer review
This situation can readily be seen in science. Although modern science is massively distorted and infiltrated by the action of external bureaucracies in politics, public administration, law, business and the media (for example), the major manifestation of bureaucracy actually within science is of course peer review.
Over the last half-century or so, the growth and metastatic spread of peer review as a method of decision-making in science has been truly amazing. Individual decision-making has been all-but obliterated at every level and for almost every task. The elaborateness of peer review has increased (e.g. the number of referees, the number of personnel on evaluating panels, the amount of information input demanded by these groups). And peer review or other types of committee are now used for admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing, the award of prizes… the list just goes on and on. Clearly, peer review fits the pattern of malignant expansion of bureaucracy that is seen in the rest of modern society.
And, as with the rest of society, the cancer of bureaucratic peer review eludes the immune system of science. It has now been widely accepted, by the other bureaucracies of modern society in particular, that peer review is intrinsically valid; and that any other form of decision-making is intrinsically corrupt or unreliable. This belief is not merely implicit, but frequently explicit: with ignorant and nonsensical statements about the vital and defining role of peer review in science being the norm in mainstream communication.
The irresistible rise of peer review can be seen most starkly in the fact that any deficiencies in peer review trigger demands (especially from other bureaucracies) for more elaborate and widespread peer review. So the endemic failure of increased journal peer review to maintain quality, or to eliminate what it is purported to detect – such as deliberate fraud, multiple publication, or serious error – inevitably leads to plans for further increases in peer review: peer review of greater elaborateness, with further steps added to the process, and extra layers of monitoring by new and larger types of committee. The ultimate validity of peer review is simply an assumption; and no amount of contrary evidence of its stultifying inefficiency, its harmful biases, and its distorting exclusions can ever prove anything except the need for more of the same.
Yet the role of peer review in the progress of science remains, as it always has been, conjectural and unverified. The processes of gathering and collating peer opinion as a method of decision-making are neither rational nor transparent – and indeed (as argued above) this irrationality and unpredictability is in fact a necessary factor in the ability of committee systems such as peer review to expand without limit.
In the past, the ultimate, bottom-line, within-science validation of science came not from the committee opinions of peer reviewers but from the emergent phenomenon of peer usage – which refers to the actual deployment of previous science (theories, facts, techniques) in the ongoing work of later scientists. This was an implicit, aggregate but not quantified outcome of a multitude of individual decisions among peers (co-workers in the same domain) about what aspects of previous science they would use in their own research: each user of earlier work was betting their time, effort and reputation on the validity of the previous research which they chose to use. When their work bore fruit, this was a validation of previous research (in the sense that, having survived this attempt at refutation, the old science now commanded greater confidence); but when previous research was faulty it 'sabotaged' any later research building upon it, in terms of correctly predicting or effectively intervening in the natural world. Beyond this lies the commonsensical evaluation of science in terms of ‘what works’ – especially what works outside of science, by people such as engineers and doctors whose job is to apply science in the natural world.
But now that committee-based peer review has been explicitly accepted as the ‘gold standard’ of scientific validity, we see the bizarre situation that actual scientific usage, and even ‘what works’, is regarded as less important than the ‘bureaucratic reality’ of peer review evaluations. Mere opinions trump observations of objective reality. Since ‘bureaucratic reality’ is merely a construct of interacting bureaucracies, this carries the implication that scientific reality is now, to an ever-increasing extent, just another aspect of, and seamlessly continuous with, mainstream 'bureaucratic reality'. Science is merely a subdivision of that same bureaucratic reality seen in politics, public administration, law, the media and business. The whole thing is just one gigantic virtual world. It seems probable that much of peer-reviewed ‘science’ nowadays therefore carries no implication of being useful in understanding, predicting or intervening in the natural world.
In other words, when science operates on the basis of peer review and committee decision, it is not really science at all. The cancer of bureaucracy has killed real science wherever it dominates. Much of mainstream science is now ‘Zombie Science’: that is, something which superficially looks-like science, but which is actually dead inside, and kept-moving only by continuous infusion of research funds. So far as bureaucratic reality is concerned, i.e. the reality as acknowledged among the major bureaucracies; real science likely now exists at an unofficial, unacknowledged level, below the radar; only among that minority of scholars and researchers who still deploy the original scientific evaluation mechanisms such as individual judgement, peer usage and real-world effectiveness.
What will happen?
The above analysis suggests that parasitic bureaucracy is so dangerous in the context of a modernizing society that it cannot be allowed to exist; it simply must be destroyed in its entirety or else any residuum will re-grow, metastasize and colonize society all over again. The implication is that a future society which intends to survive in the long-term would need to be one that prevents parasitic bureaucracy from even getting a toe-hold.
The power of parasitic bureaucracy to expand and to trigger further parasitic bureaucracies is now rendered de facto unstoppable by the power of interacting bureaucracies to frame and construct perceived reality in bureaucratic terms. Since bureaucratic failure is eliminated by continual re-definition of success, and since any threats to bureaucratic expansion are eliminated by exclusion or lethal attack, the scope of bureaucratic takeover from now on can be limited only by collapse of the social system as a whole.
So, if the above analysis is correct, there can be only two outcomes. Either the cancer of modern bureaucracy will be extirpated: destroyed utterly. In other words, the host immune system will evolve the ability to destroy the parasite. Maybe all majority-voting committees will coercively be replaced by individuals who have both the authority to make decisions and responsibility for those decisions.
Or the cancer of bureaucracy will kill the host. In other words, the parasite will continue to elude the immune system. Modernizing societies will sooner or later be destroyed by a combination of resource starvation plus cumulative damage from delayed and wrong decisions based on the exclusions and distortions of ‘bureaucratic reality’.
Then the most complex rapidly-growing modernizing Western societies will be replaced by, or will regress into, zero-growth societies with a lower level of complexity - probably about the level of the agrarian societies of the European or Asian Middle Ages.
My prediction is that outcome two – societal collapse - is at present the more probable, on the basis that parasites can evolve more rapidly than host immune systems. Although as individuals we can observe the reality of approaching disaster, to modern parasitic bureaucracies the relevant data is either trivial or simply invisible.
***
Further reading: Although I do not mention it specifically above, the stimulus to writing this essay came from Mark A Notturno’s Science and the open society: the future of Karl Popper’s philosophy (Central European University Press: Budapest, 2000) – in particular the account of Popper’s views on induction. It struck me that committee decision-making by majority vote is a form of inductive reasoning, hence non-valid; and that inductive reasoning is in practice no more than a form of ‘authoritarianism’ (as Notturno terms it). In the event, I decided to exclude this line of argument from the essay because I found it too hard to make the point interesting and accessible. Nonetheless, I am very grateful to have had it explained to me.
I should also mention that various analyses of the pseudonymous blogger Mencius Moldbug, who writes at Unqualified Reservations, likely had a significant role in developing the above ideas.
This argument builds upon several previous pieces of mine, including: Conflicts of interest in medical science: peer usage, peer review and ‘CoI consultancy’ (Medical Hypotheses 2004; 63: 181-186); Charlton BG, Andras P. What is management and what do managers do? A systems theory account (Philosophy of Management 2004; 3: 3-15); Peer usage versus peer review (BMJ 2007; 335: 451); Charlton BG, Andras P. Medical research funding may have over-expanded and be due for collapse (QJM 2005; 98: 53-55); Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing (Medical Hypotheses 2008; 71: 475-480); Zombie science (Medical Hypotheses 2008; 71: 327-329); The vital role of transcendental truth in science (Medical Hypotheses 2009; 72: 373-376); Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration (Medical Hypotheses 2009; 73: 633-635); and After science: has the tradition been broken? (Medical Hypotheses, in the press).
The cancer of bureaucracy: how it will destroy science, medicine, education; and eventually everything else
Medical Hypotheses - 2010; 74: 961-5.
Summary
Everyone living in modernizing ‘Western’ societies will have noticed the long-term, progressive growth and spread of bureaucracy infiltrating all forms of social organization: nobody loves it, many loathe it, yet it keeps expanding. Such unrelenting growth implies that bureaucracy is parasitic and its growth uncontrollable – in other words it is a cancer that eludes the host immune system. Old-fashioned functional, ‘rational’ bureaucracy that incorporated individual decision-making is now all-but extinct, rendered obsolete by computerization. But modern bureaucracy evolved from it, the key ‘parasitic’ mutation being the introduction of committees for major decision-making or decision-ratification. Committees are a fundamentally irrational, incoherent, unpredictable decision-making procedure; which has the twin advantages that it cannot be formalized and replaced by computerization, and that it generates random variation or ‘noise’ which provides the basis for natural selection processes. Modern bureaucracies have simultaneously grown and spread in a positive-feedback cycle; such that interlinking bureaucracies now constitute the major environmental feature of human society which affects organizational survival and reproduction. Individual bureaucracies must become useless parasites which ignore the ‘real world’ in order to adapt to rapidly-changing ‘bureaucratic reality’. Within science, the major manifestation of bureaucracy is peer review, which – cancer-like – has expanded to obliterate individual authority and autonomy. There has been local elaboration of peer review and metastatic spread of peer review to include all major functions such as admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing and the award of prizes. Peer review eludes the immune system of science since it has now been accepted by other bureaucracies as intrinsically valid, such that any residual individual decision-making (no matter how effective in real-world terms) is regarded as intrinsically unreliable (self-interested and corrupt). Thus the endemic failures of peer review merely trigger demands for ever-more elaborate and widespread peer review. Just as peer review is killing science with its inefficiency and ineffectiveness, so parasitic bureaucracy is an un-containable phenomenon; dangerous to the extent that it cannot be allowed to exist unmolested, but must be utterly extirpated. Or else modernizing societies will themselves be destroyed by sclerosis, resource misallocation, incorrigibly-wrong decisions and the distortions of ‘bureaucratic reality’. However, unfortunately, social collapse is the more probable outcome, since parasites can evolve more rapidly than host immune systems.
***
Everyone in modernizing ‘Western’ societies (roughly the USA, UK, Western and Central Europe) will, no doubt, have noticed that there has been a long-term, progressive growth and spread of bureaucracy. Except during major war; this has not been a matter of pendulum swings, with sometimes less and sometimes more bureaucracy, but instead of relentless overall expansion – albeit sometimes faster and at other times slower.
The bureaucratic takeover applies to science, medicine, education, law, police, the media – indeed to almost all social functions. Such unrelenting growth implies either that 1. Bureaucracy is vital to societal functioning and the more bureaucracy we have the better for us; or that 2. Bureaucracy is parasitic and its growth is uncontrollable. Since the first alternative has become obviously absurd, I am assuming the second alternative is correct: that bureaucracy is like a cancer of modernizing societies – i.e. its expansion is malignant and its effect is first parasitic, then eventually fatal.
While it is generally recognized that modern societies are being bled-dry by the expense, delays, demoralization and reality-blindness imposed by multiple expanding and interacting bureaucracies, it is not properly recognized that bureaucratic decision-making is not merely flawed by its expense and sluggishness but also by its tendency to generate wrong answers. Modern bureaucracy, indeed, leads to irrational and unpredictable decisions; to indefensible decisions which are barely comprehensible, and cannot be justified, even by the people directly involved in them.
In what follows, I will make a distinction between, on the one hand, Weberian, functional, ‘rational’ bureaucracy which (in its ideal type, as derived from the work of Max Weber; 1864-1920) incorporated individual decision-making and was evaluated externally in terms of results and efficiency; and, on the other hand, modern ‘parasitic’ bureaucracy which (in its ideal type) deploys majority-vote committees for its major decision-making, is orientated purely towards its own growth, and which by means of its capacity to frame ‘reality’ - has become self-validating.
I will argue that parasitic bureaucracy evolved from rational bureaucracy in response to the rapidly changeable selection pressures imposed by modern society, especially the selection pressure from other bureaucracies having constructed a encompassing, virtual but dominant system of ‘bureaucratic reality’; and that the system of rational bureaucracy is by now all-but extinct – having been rendered obsolete by computerization.
The problem of parasitic bureaucracy
It is a striking feature of modern bureaucracy that nobody loves it, many loathe it (even, or especially, the bureaucrats themselves), yet it keeps growing and spreading. One reason is that bureaucracy is able to frame reality, such that the more that bureaucracy dominates society, the more bureaucracy seems to be needed; hence the response to any bureaucracy-generated problem is always to make more and bigger bureaucracies. It is this positive feedback system which is so overwhelming. Mere human willpower is now clearly inadequate to combat bureaucratic expansionism. Bureaucracy has become like The Borg on Star Trek: the next generation: it feeds-upon and assimilates opposition.
Bureaucracies are indeed no longer separable but form a linked web; such that to cut one bureaucracy seems always to imply another, and larger, bureaucracy to do the cutting. When the dust has settled, it is invariably found that the total sum and scope of societal bureaucratic activity has increased. And it is well recognized that modern bureaucracies tend to discourse-about, but never to eradicate, problems – it is as-if the abstract bureaucratic system somehow knew that its survival depended upon continually working-on, but never actually solving problems... Indeed, ‘problems’ seldom even get called problems nowadays, since problems imply the need and expectation for solutions; instead problems get called ‘issues’, a term which implies merely the need to ‘work-on’ them indefinitely. To talk in terms of solving problems is actually regarded as naïve and ‘simplistic’; even when, as a matter of empirical observation, these exact same problems were easily solved in the past, as a matter of record.
Over much of the world, public life is now mostly a matter of ‘bureaucracy speaking unto bureaucracy’. Observations and opinions from individual humans simply don’t register – unless, of course, individual communications happen to provide inputs which bureaucracies can use to create more regulations, more oversight, hence create more work for themselves. So individual complaints which can be used to trigger bureaucratic activity may be noted and acted-upon, or personal calls for more bureaucratic oversight may be amplified, elaborated and implemented. But anything which threatens the growth and spread of bureaucracy (i.e. anything simple that is also worryingly swift, efficient or effective) is ignored; or in extremis attacked with lethal intent.
The main self-defence of modern bureaucracy, however, is to frame reality. Since bureaucracies now dominate society, that which bureaucracies recognize and act-upon is ‘reality’; while that which bureaucracies do not recognize does not, for practical purposes, exist. Bureaucracy-as-a-system, therefore constructs a 'reality' which is conducive to the thriving of bureaucracy-as-a-system.
When a powerful bureaucracy does not recognize a communication as an input, then that communication is rendered anecdotal and irrelevant. Information which the bureaucracy rejects takes-on an unreal, subjective quality. Even if everybody, qua individual, knows that some thing is real and true – it becomes possible for modern bureaucracy implicitly to deny that thing's existence simply by disregarding it as an input, and instead responding to different inputs that are more conducive to expansion, and these are then rendered more significant and 'realer' than actual reality.
For many people, the key defining feature of a bureaucracy (as described by Weber) is that ideally it is an information-processing organization that has established objective procedures which it implements impartially. It is these quasi-mechanical procedures which are supposed to link aims to outcomes; and to ensure that, given appropriate inputs a bureaucracy almost-automatically generate predictable and specific outputs and outcomes.
However modern bureaucracies do not work like that. Indeed, such has been the breakdown in relationship between input and output that modern bureaucracies devote immense resources to change pure-and-simple; for example continually changing the recognition of input measures (i.e. continually redefining 'reality') and re-defining an organization’s mission and aims (i.e. rendering the nature of the organization different-from and incommensurable-with the past organization) and repeatedly altering the organizational outcomes regarded as relevant (re-defining making any decline in the efficiency of the organization formally un-measurable).
Such change may be externally- or internally-triggered: either triggered by the external demands of other bureaucracies which constitute the organizational environment, or triggered by the innate noise-generating tendencies of committees.
With endlessly-altering inputs, processes and outputs, bureaucratically-dominated organizations are impossible to critique in terms of functionality: their effectiveness is impossible to measure, and if or when they may be counter-productive (in terms of their original real world purpose) this will also be unknowable. Individual functional organizations disappear and all bureaucracies blend into a Borg-like web of interdependent growth.
The nature of bureaucracy: rational versus parasitic
What is bureaucracy? The traditional definition emphasises that bureaucracy entails a rational human organization which is characterized by hierarchy and specialization of function, and that the organization deploys explicit procedures or regulations that are impartially administered by the personnel. A rational ‘Weberian’ bureaucracy was probably, on the whole, performing a useful function reasonably efficiently – in other words its effectiveness was perceived in terms of externally-pre-decided criteria, and its growth and spread were circumscribed.
In medical terms, Weberian bureaucracy was therefore – at worst - a benign tumour; potentially able to overgrow locally and exert pressure on its surroundings; but still under control from, and held in check by, the larger host organism of society.
But, just as cancers usually evolve from benign precursors, so it was that modern parasitic and useless bureaucracies evolved from the rational and functional bureaucracies of an earlier era. Probably the key trigger factor in accelerating the rate of this evolution has been the development of computers, which have the potential to do – almost instantly, and at near zero cost – exactly the kind of rational information processing which in the past could only be done (much more slowly, expensively, and erratically) by Weberian bureaucracy. My contention is that large scale rational, functional bureaucracies are now all-but extinct, destroyed by computerization.
I assume that, when rational bureaucracy was facing extinction from computerization, there was a powerful selection pressure for the evolution of new forms of irrational bureaucracy – since rational procedures could be converted into algorithms, formalized and done mechanically; while irrational procedures were immune from this competition.
The outcome is that, despite retaining a vast structure of procedure and regulation, and the organizational principles of hierarchy and specialization, those powerful modern bureaucracies that survived the challenge of computerization and are still alive and growing nowadays are non-rational in their core attributes. Irrationality is indeed an essential aspect of a modern bureaucracy’s ability to survive and thrive. Those bureaucracies which remain and are expanding in this post-computerization era are neither rational nor functional.
This evolution towards pure parasitism – with no performance of a substantive real-world function - is only possible because, for any specific bureaucracy, its relevant environment now substantially consists of other bureaucracies. It is 'other bureaucracies' that are the main selection pressure: other bureaucracies pose the main threat to survival and reproduction. A modern bureaucracy therefore must respond primarily to ‘bureaucratic reality’ – and any engagement with ‘real life’ (e.g. life as it is perceived by alert and informed individual human beings) simply stands in the way of this primary survival task.
So, the best adapted modern bureaucracies are those which most efficiently play the game of satisfying the constantly-and rapidly-changing requirements of other major bureaucracies. Success brings expansion by local growth and metastatic spread. But, in contrast, satisfying the stable requirements of ‘real life’ and human nature, by contrast, brings a bureaucracy little or no rewards, and a greater possibility of extinction from the actions of other bureaucracies.
The role of committees in the evolution of bureaucracy
I will argue that the major mechanism by which irrationality has been introduced into bureaucracies is the committee which makes decisions by majority voting.
Committees now dominate almost all the major decision-making in modernizing societies – whether in the mass committee of eligible voters in elections, or such smaller committees as exist in corporations, government or in the US Supreme Court: it seems that modern societies always deploy a majority vote to decide or ratify all questions of importance. Indeed, it is all-but-inconceivable that any important decision be made by an individual person – it seems both natural and inevitable that such judgments be made by group vote.
Yet although nearly universal among Western ruling elites, this fetishizing of committees is a truly bizarre attitude; since there is essentially zero evidence that group voting leads to good, or even adequate, decisions – and much evidence that group voting leads to unpredictable, irrational and bad decisions.
The nonsense of majority voting was formally described by Nobel economics laureate Kenneth Arrow (1921-) in the early 1950s, but it is surely obvious to anyone who has had dealings with committees and maintains independent judgement. It can be demonstrated using simple mathematical formulations that a majority vote may lead to unstable cycles of decisions, or to a decision which not one single member of the committee would regard as optimal. For example, in a job appointments panel it sometimes happens that there are two strong candidates who split the panel, so the winner is a third-choice candidate whom no panel member would regard as the best candidate. In other words, any individual panel member would make a better choice than the one that derives from majority voting.
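To make this concrete, here is a minimal sketch (in Python, using hypothetical preference orderings rather than data from any real panel) of the classic cycling phenomenon: three panel members, three candidates, and every pairwise majority vote won by a different candidate.

from itertools import combinations

# Hypothetical preference orderings for a three-member appointments panel,
# each ranked from most to least preferred.
panel = {
    "member 1": ["A", "B", "C"],
    "member 2": ["B", "C", "A"],
    "member 3": ["C", "A", "B"],
}

def pairwise_winner(x, y, preferences):
    # Candidate preferred by a majority in a head-to-head vote between x and y.
    votes_for_x = sum(1 for ranking in preferences.values()
                      if ranking.index(x) < ranking.index(y))
    return x if votes_for_x > len(preferences) - votes_for_x else y

for x, y in combinations(["A", "B", "C"], 2):
    print(f"{x} versus {y}: the majority prefers {pairwise_winner(x, y, panel)}")

# Prints that A beats B, B beats C, and yet C beats A: the majority preference
# cycles, so whichever candidate is appointed, a majority of the panel would
# have preferred somebody else - although no individual member holds so
# incoherent a view.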
Furthermore, because of this type of phenomenon, and the way that majority decisions do not necessarily reflect any individual's opinion, committee decisions carry no responsibility. After all, how could anyone be held responsible for outcomes which nobody intended and to which nobody agrees? So that committees exert de facto power without responsibility. Indeed most modern committees are typically composed of a variable selection from a number of eligible personnel, so that it is possible that the same committee may never contain the same personnel twice. The charade is kept going by the necessary but meaningless fiction of ‘committee responsibility’, maintained by the enforcement of a weird rule that committee members must undertake, in advance of decisions, to abide by whatever outcome (however irrational, unpredictable, unjustified and indefensible) the actual contingent committee deliberations happen to lead-to. This near-universal rule and practice simply takes ‘irresponsibility’ and re-names it ‘responsibility’…
Given that committee decisions are neither rational nor coherent, and are therefore radically unpredictable, what is their effect? In a nutshell, the answer is that committees – overall and in the long term – generate random ‘noise’. Committees almost certainly increase the chances that a decision is wrong – but overall they probably do not lead to any specifically biased direction of wrongness. While some committees using some procedures are biased in one direction, others are biased in other directions, and in the end I think the only thing we can be sure about is that committees widen the range of unpredictability of decisions.
Now, if we ask what role randomness plays in complex systems, the answer is that random noise provides the variations which are the subject of selection processes. For example, in biology the random errors of genetic replication provide genetic variation which affects traits that are then subjected to natural selection. So, it seems reasonable to infer that committees introduce random changes, producing variations in organizational characteristics which are then acted upon by selection mechanisms. Some organizational variations are amplified and thrive, while other variations are suppressed and dwindle. Overall, this enables bureaucracies rapidly to evolve – to survive, to grow and to spread.
How much random noise is needed in a bureaucracy (or any evolving system)? The short answer is that the stronger the selection pressure, and the greater the necessity for rapid evolution, the more noise is needed; bearing in mind the trade-off by which an increased error rate in reproduction also reduces the ability of an evolving system accurately to reproduce itself. A system under strong selection pressure (e.g. a bureaucracy in a rapidly-changing modernizing society) tends to allow or generate more noise, creating a wider range of variation for selection to act upon and thereby enabling faster evolution – at the expense of less exact replication. By contrast, a system under weaker selection pressure (such as the Weberian bureaucracies of the early 20th century – for instance the British Civil Service) has greater fidelity of replication (less noise), but at the expense of a reduced ability to change rapidly in response to changing selection pressures.
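As an illustration of this trade-off, the toy simulation below (purely schematic, and not a model of any actual organization) evolves two populations of bit-string ‘bureaucracies’ whose fitness is their match to an environmental target that abruptly changes: the low-noise population replicates itself almost perfectly but adapts slowly to the new environment, while the high-noise population adapts much faster at the cost of never copying itself exactly.

import random

random.seed(1)
GENOME, POP, GENERATIONS = 40, 60, 60

def match(organisation, target):
    # Fitness: the number of traits matching the current environment.
    return sum(1 for a, b in zip(organisation, target) if a == b)

def evolve(error_rate):
    target = [0] * GENOME                            # initial environment
    population = [[0] * GENOME for _ in range(POP)]  # perfectly adapted at first
    best_per_generation = []
    for generation in range(GENERATIONS):
        if generation == 10:
            target = [1] * GENOME                    # abrupt environmental change
        # Selection: only the fitter half of the population reproduces.
        population.sort(key=lambda org: match(org, target), reverse=True)
        parents = population[: POP // 2]
        # Reproduction with copying errors ('noise').
        population = []
        for _ in range(POP):
            child = list(random.choice(parents))
            for i in range(GENOME):
                if random.random() < error_rate:
                    child[i] = 1 - child[i]
            population.append(child)
        best_per_generation.append(max(match(org, target) for org in population))
    return best_per_generation

low_noise = evolve(error_rate=0.001)    # high-fidelity replication
high_noise = evolve(error_rate=0.05)    # noisy replication
print(f"best fitness 20 generations after the environment changed (maximum {GENOME}):")
print("  low-noise population:", low_noise[29])
print("  high-noise population:", high_noise[29])
# Typically the noisy population has largely re-adapted to the new environment
# by this point, while the high-fidelity population lags far behind - the
# trade-off described above.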
I am saying here that committees using majority voting are responsible for the evolution of malignant bureaucratic growth in modern bureaucracies, and that this is why majority-vote decision-making permeates modern societies from the top to the bottom.
Although almost all major decision-making in the ‘Western’ world is now by majority voting there may be two significant exceptions: firstly military decision-making in time of war; secondly the personal authority of the Pope in the Roman Catholic Church. In both these types of organization there seems to be a greater emphasis on individual decision-making than on committee voting. Military command structures and the Roman Catholic hierarchy are therefore probably both closer to the ideal type of a Weberian rational bureaucracy than to the ideal type of a modern parasitic bureaucracy.
If so, the only major exceptions to majority rule decision-making at a world level, and probably not by coincidence, are the oldest and longest-enduring bureaucratic structures: that is, organizations which have retained functionality and have not themselves been destroyed by bureaucratic cancer.
Why are there committees at all?
Although they may nowadays be almost wholly damaging, committees cannot in their origins have been entirely useless or harmful; or else the form would never have survived its first appearance. If we acknowledge that individuals have the potential for better (i.e. more rational and coherent) decision-making than committees, then the decline of individual decision-making must be due not to any lack of advantages, so much as to the perceived problems of individual decision-making.
The problems of individual decision-making are the same as the problems of individual power: in essence these problems are self-interest (i.e. the observation that power will be deployed differentially to benefit the power-holder) and corruption (i.e. the observation that over time power will corrupt, making the individual progressively a worse-and-worse decision-maker until he is not merely self-interested but progressively driven mad: power mad).
Since humans are self-centred beings living in an imperfect world, all individuals tend to be both self-interested and corruptible (albeit to widely-varying degrees!). Of course, self-interest and corruptibility apply equally to people 'serving' on committees - each of whom wields lesser but anonymous and irresponsible power. Nonetheless, it seems to me that committees are mostly favoured because they are seen as a solution to these intrinsic problems of individual power. The implicit assumption is that when a committee is run by majority voting then individual self-interests will cancel out. Furthermore, that since power is spread around more people on a committee, the inevitably corrupting effect of power will be similarly diluted.
In reality, committees mostly solve the problems of power to the extent that they reduce the effective deployment of power. So that, if committees are indeed less self-interested and less prone to corruption than individuals, this is achieved mainly because the committee structure and procedures make decision-making so unpredictable and incoherent that committees are rendered ineffective: ineffective to such an extent that committees cannot even manage consistently to be self-interested or corrupt! Therefore, the problems of power are ‘solved’, not by reducing the biases or corruptions of power, but simply by reducing the effectiveness of power; by introducing inefficiencies and obscuring the clarity of self-interest with the labile confusions of group dynamics. Power is not controlled but destroyed…
Therefore, if committees were introduced to reduce the abuse of power, then instead of achieving this, their actual outcome is that committees reduce power itself, and society is made docile when confronted by significant problems which could be solved, but are not. And surely this is precisely what we observe in the West, on an hourly basis?
This is because committee-based bureaucracy is predicated on an ethic of power as evil: it functions as a sort of unilateral disarmament that would be immediately obvious as self-defeating or maladaptive unless arising in a context of already-existing domination. And a system of committee-based bureaucracy can only survive for as long as its opponents can be rendered even weaker by even more virulent affliction with the same disease: which perhaps explains the extraordinarily venomous and dishonest pseudo-moralizing aggression which committee bureaucracy adopts towards other simpler, more-efficient or more-effective organizational systems that still use individual decision-making.
If we assume that committees were indeed introduced as a purported solution to (real or imagined, actual or potential) abuses of individual power, then committees do usually achieve this goal – so long as the quality of decision-making is ignored, committees seem to be successful. Committees can therefore be seen as a typical product of one-sided and unbalanced moralism that has discarded the Aristotelian maxim of moderation in all things. Bureaucracy instead adopts a unilateral moralism which aims at the complete avoidance of one kind of sin, even at the cost of falling into another, contrasting kind of sin (so pride is avoided by encouraging submission, and aggression is avoided by imposing sloth).
However, the subject of ‘trade-offs’ is avoided; and the inevitable self-created problems of single-issue moral action are instead fed upon by bureaucracy, leading (of course!) to further expansion.
Hence, modern decision-making means that societal capability has declined in many areas. It has become at best slow and expensive, and at worst impossible, to achieve things which were done quickly, efficiently and effectively under systems based on individual decision-making. To avoid the corruption of individual authority, society has been rendered helpless in the face of threats which could have been combated.
Bureaucracy in science – the cancer of peer review
This situation can readily be seen in science. Although modern science is massively distorted and infiltrated by the action of external bureaucracies in politics, public administration, law, business and the media (for example), the major manifestation of bureaucracy actually within science is of course peer review.
Over the last half-century or so, the growth and metastatic spread of peer review as a method of decision-making in science has been truly amazing. Individual decision-making has been all-but obliterated at every level and for almost every task. The elaborateness of peer review has increased (e.g. the number of referees, the number of personnel on evaluating panels, the amount of information input demanded by these groups). And peer review or other types of committee are now used for admissions, appointments, promotions, grant review, project management, research evaluation, journal and book refereeing, the award of prizes… the list just goes on and on. Clearly, peer review fits the pattern of malignant expansion of bureaucracy that is seen in the rest of modern society.
And, as with the rest of society, the cancer of bureaucratic peer review eludes the immune system of science. It has now been widely accepted, by the other bureaucracies of modern society in particular, that peer review is intrinsically valid; and that any other form of decision-making is intrinsically corrupt or unreliable. This belief is not merely implicit, but frequently explicit: with ignorant and nonsensical statements about the vital and defining role of peer review in science being the norm in mainstream communication.
The irresistible rise of peer review can be seen most starkly in the fact that any deficiency in peer review triggers demands (especially from other bureaucracies) for more elaborate and widespread peer review. The endemic failure of increased journal peer review to maintain quality, or to eliminate what it is purported to detect – such as deliberate fraud, multiple publication, or serious error – inevitably leads to plans for further increases in peer review. So there is peer review of greater elaborateness, with further steps added to the process, and extra layers of monitoring by new types of larger committees. The ultimate validity of peer review is simply an assumption; and no amount of contrary evidence of its stultifying inefficiency, its harmful biases and its distorting exclusions can ever prove anything except the need for more of the same.
Yet the role of peer review in the progress of science remains, as it always has been, conjectural and unverified. The processes of gathering and collating peer opinion as a method of decision-making are neither rational nor transparent – and indeed (as argued above) this irrationality and unpredictability is in fact a necessary factor in the ability of committee systems such as peer review to expand without limit.
In the past, the ultimate, bottom-line, within-science validation of science came not from the committee opinions of peer reviewers but from the emergent phenomenon of peer usage – which refers to the actual deployment of previous science (theories, facts, techniques) in the ongoing work of later scientists. This was an implicit, aggregate but not quantified outcome of a multitude of individual decisions among peers (co-workers in the same domain) about what aspects of previous science they would use in their own research: each user of earlier work was betting their time, effort and reputation on the validity of the previous research which they chose to use. When their work bore fruit, this was a validation of the previous research (in the sense that, having survived this attempt at refutation, the old science now commanded greater confidence); but when previous research was faulty it ‘sabotaged’ any later research built upon it, in terms of correctly predicting or effectively intervening in the natural world. Beyond this lies the commonsensical evaluation of science in terms of ‘what works’ – especially what works outside of science, judged by people such as engineers and doctors whose job is to apply science in the natural world.
But now that committee-based peer review has been explicitly accepted as the ‘gold standard’ of scientific validity, we see the bizarre situation that actual scientific usage and even what works is regarded as less important than the ‘bureaucratic reality’ of peer review evaluations. Mere opinions trump observations of objective reality. Since ‘bureaucratic reality’ is merely a construct of interacting bureaucracies, this carries the implication that scientific reality is now, to an ever-increasing extent, simply another aspect of, and seamlessly continuous with, mainstream ‘bureaucratic reality’. Science is merely a subdivision of that same bureaucratic reality seen in politics, public administration, law, the media and business. The whole thing is just one gigantic virtual world. It seems probable that much of peer-reviewed ‘science’ nowadays therefore carries no implication of being useful in understanding, predicting or intervening in the natural world.
In other words, when science operates on the basis of peer review and committee decision, it is not really science at all. The cancer of bureaucracy has killed real science wherever it dominates. Much of mainstream science is now ‘Zombie Science’: that is, something which superficially looks like science, but which is actually dead inside, and kept moving only by a continuous infusion of research funds. So far as bureaucratic reality is concerned – i.e. the reality as acknowledged among the major bureaucracies – real science likely now exists at an unofficial, unacknowledged level, below the radar; only among that minority of scholars and researchers who still deploy the original scientific evaluation mechanisms such as individual judgement, peer usage and real-world effectiveness.
What will happen?
The above analysis suggests that parasitic bureaucracy is so dangerous in the context of a modernizing society that it cannot be allowed to exist; it simply must be destroyed in its entirety or else any residuum will re-grow, metastasize and colonize society all over again. The implication is that a future society which intends to survive in the long-term would need to be one that prevents parasitic bureaucracy from even getting a toe-hold.
The power of parasitic bureaucracy to expand and to trigger further parasitic bureaucracies is now rendered de facto unstoppable by the power of interacting bureaucracies to frame and construct perceived reality in bureaucratic terms. Since bureaucratic failure is eliminated by continual re-definition of success, and since any threats to bureaucratic expansion are eliminated by exclusion or lethal attack, the scope of bureaucratic takeover from now on can be limited only by collapse of the social system as a whole.
So, if the above analysis is correct, there can be only two outcomes. Either that the cancer of modern bureaucracy will be extirpated: destroyed utterly. In other words, the host immune system will evolve the ability to destroy the parasite. Maybe all majority-voting committees will coercively be replaced by individuals who have the authority to make decisions and responsibility for those decisions.
Or that the cancer of bureaucracy will kill the host. In other words, the parasite will continue to elude the immune system. Modernizing societies will sooner or later be destroyed by a combination of resource starvation plus cumulative damage from delayed and wrong decisions based on the exclusions and distortions of ‘bureaucratic reality’.
Then the most complex rapidly-growing modernizing Western societies will be replaced by, or will regress into, zero-growth societies with a lower level of complexity - probably about the level of the agrarian societies of the European or Asian Middle Ages.
My prediction is that outcome two – societal collapse - is at present the more probable, on the basis that parasites can evolve more rapidly than host immune systems. Although as individuals we can observe the reality of approaching disaster, to modern parasitic bureaucracies the relevant data is either trivial or simply invisible.
***
Further reading: Although I do not mention it specifically above, the stimulus to writing this essay came from Mark A Notturno’s Science and the open society: the future of Karl Popper’s philosophy (Central European University Press: Budapest, 2000) – in particular the account of Popper’s views on induction. It struck me that committee decision-making by majority vote is a form of inductive reasoning, hence non-valid; and that inductive reasoning is in practice no more than a form of ‘authoritarianism’ (as Notturno terms it). In the event, I decided to exclude this line of argument from the essay because I found it too hard to make the point interesting and accessible. Nonetheless, I am very grateful to have had it explained to me.
I should also mention that various analyses by the pseudonymous blogger Mencius Moldbug, who writes at Unqualified Reservations, likely had a significant role in developing the above ideas.
This argument builds upon several previous pieces of mine, including: Conflicts of interest in medical science: peer usage, peer review and ‘CoI consultancy’ (Medical Hypotheses 2004; 63: 181-186); Charlton BG, Andras P. What is management and what do managers do? A systems theory account (Philosophy of Management 2004; 3: 3-15); Peer usage versus peer review (BMJ 2007; 335: 451); Charlton BG, Andras P. Medical research funding may have over-expanded and be due for collapse (QJM 2005; 98: 53-55); Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing (Medical Hypotheses 2008; 71: 475-480); Zombie science (Medical Hypotheses 2008; 71: 327-329); The vital role of transcendental truth in science (Medical Hypotheses 2009; 72: 373-376); Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration (Medical Hypotheses 2009; 73: 633-635); and After science: has the tradition been broken? (Medical Hypotheses, in the press).
Sunday, 4 April 2010
Covert drug dependence
Covert drug dependence should be the null hypothesis for explaining drug-withdrawal-induced clinical deterioration: The necessity for placebo versus drug withdrawal trials on normal control subjects
Bruce G. Charlton
Medical Hypotheses. 2010; 74: 761-763.
***
Summary
Just as a placebo can mimic an immediately effective drug so chronic drug dependence may mimic an effective long-term or preventive treatment. The discovery of the placebo had a profound result upon medical practice, since it became recognized that it was much harder to determine the therapeutic value of an intervention than was previously assumed. Placebo is now the null hypothesis for therapeutic improvement. As David Healy describes in the accompanying editorial on treatment induced stress syndromes [1], an analogous recognition of the effect of drug dependence is now overdue. Drug dependence and withdrawal effects should in future become the null hypothesis when there is clinical deterioration following cessation of treatment. The ideal methodology for detecting drug dependence and withdrawal is a double-blind placebo controlled and randomized trial using disease-free normal control subjects. Normal controls are necessary to ensure that the possibility of underlying chronic disease is eliminated: so long as subjects begin the trial as ‘normal controls’ it is reasonable to infer that any clinical or psychological problems (above placebo levels) which they experience following drug withdrawal can reasonably be attributed to the effects of the drug. This is important because the consequences of failing to detect the risk of covert drug dependence may be considerably worse than failing to detect a placebo effect. Drug dependent patients not only fail to receive benefit, and suffer continued inconvenience, expense and side effects; but the drug has actually created and sustained a covert chronic pathology. However, the current situation for drug evaluation is so irrational that it would allow chronic alcohol treatment to be regarded as a cure for alcoholism on the basis that delirium tremens follows alcohol withdrawal and alcohol can be used to treat delirium tremens! Therefore, just as placebo controlled trials of drugs are necessary to detect ineffective drugs, so drug withdrawal trials on normal control subjects should be regarded as necessary to detect dependence-producing drugs.
***
Just as a placebo can mimic an immediately effective drug, so chronic drug dependence may mimic an effective long-term or preventive treatment
The discovery of the placebo had a profound result upon medical practice. After the placebo effect was discovered it was recognized that it was much harder to determine the therapeutic value of an intervention than previously assumed. As David Healy describes in the accompanying editorial on treatment induced stress syndromes [1], an analogous recognition of the effect of drug dependence is now overdue, especially in relation to psychoactive drugs.
Therefore, just as placebo controlled trials of drugs are regarded as necessary to detect ineffective drugs, so drug withdrawal trials on normal control subjects should be regarded as necessary to detect dependence-producing drugs.
Determining the specific benefit of a drug
Throughout most of the history of medicine it was naively assumed that when a patient improved following a specific therapy, then this positive change could confidently be attributed to the beneficial effects of that specific therapy. But it is now recognized that clinical improvement may have nothing to do with the specific treatment but may instead have general psychological causes to do with a patient’s expectations. So that when a drug treatment is begun and the patient gets better, the change may not be due to the drug but some or all of the observed benefit could be due to the placebo effect.
Indeed, nowadays the placebo effect is routinely assumed to be the cause of patient improvement unless proven otherwise. Placebo effect is therefore the null hypothesis used to explain therapeutic improvements.
This tendency to regard the placebo effect as the default explanation for clinical improvement has led to major methodological changes in the evaluation of putative drug therapies; because the first aim of drug evaluation is now to show that measured benefits cannot wholly be explained by placebo. This has led to widespread adoption of placebo controlled trials which compare the effect of the putative drug with a placebo. Only when the drug produces a greater effect than placebo alone, is it recognized as a potentially effective therapy.
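Schematically, the comparison at the heart of a placebo-controlled trial can be reduced to a single question: is the improvement in the drug arm larger than the improvement in the placebo arm by more than chance alone would produce? The sketch below (in Python, with invented improvement scores rather than data from any actual trial) illustrates one simple way of asking this, using a permutation test.

import random

random.seed(0)

# Invented improvement scores (e.g. change on a symptom scale) for the two arms.
drug_arm    = [6, 4, 7, 5, 8, 3, 6, 7, 5, 6]
placebo_arm = [4, 3, 5, 2, 6, 4, 3, 5, 4, 3]

def mean(xs):
    return sum(xs) / len(xs)

observed_difference = mean(drug_arm) - mean(placebo_arm)

# Permutation test: how often would a random relabelling of the same subjects
# produce a drug-minus-placebo difference at least as large as the observed one?
pooled = drug_arm + placebo_arm
extreme = 0
N_PERMUTATIONS = 10_000
for _ in range(N_PERMUTATIONS):
    random.shuffle(pooled)
    if mean(pooled[:10]) - mean(pooled[10:]) >= observed_difference:
        extreme += 1

print(f"mean improvement: drug {mean(drug_arm):.1f}, placebo {mean(placebo_arm):.1f}")
print(f"one-sided permutation p-value: {extreme / N_PERMUTATIONS:.3f}")
# Only if the drug arm's advantage is unlikely to have arisen by chance (a small
# p-value) is the drug treated as doing more than a placebo would.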
The effect of withdrawing a drug upon which a subject has become dependent can be regarded as analogous to the placebo effect, in the sense that drug dependence resembles the placebo effect in being able to mislead concerning clinical effectiveness.
It may routinely be assumed that if a patient gets worse when drug treatment is stopped, then this change is due to the patient losing the beneficial effects of the drug, so that the underlying disease (for which the drug was being prescribed) has re-emerged. That is, when a patient does better when taking a drug than after cessation, it seems apparent that the patient benefits from this drug. So the naïve assumption would be that worsening of a patient’s condition on withdrawal implies that the patient had a long-term illness which was being treated by the drug, and the chronic illness was revealed when drug treatment was withdrawn.
However, this naïve assumption is certainly unjustified as a general rule, because drug dependence produces exactly the same effect. When a patient has become dependent on a drug, then adverse consequences following withdrawal may have nothing to do with revealing an underlying, long-term illness. Instead, chronic drug use has actually made the patient ill: the drug has created a new but covert pathology; the body has adapted to the presence of the drug and now needs it in order to function normally, such that the covert pathology only emerges when the drug is removed and body systems are disrupted by its absence.
In other words, the drug dependent patient may have had independent pathology which has disappeared, or else drug treatment may have been the sole cause of pathology. But either way, clinical deterioration following withdrawal is mainly or wholly a consequence of drug dependence and not a consequence of underlying independent chronic pathology.
So, before assuming that the patient benefits from a drug the possibility of covert drug dependence must first be eliminated as an explanation. Healy’s argument is that drug dependence and withdrawal effects should in future become the null hypothesis in evaluating the chronic need for therapy in the same way as placebo is now a null hypothesis for clinical improvement following drug therapy. Worsening of the patient’s condition following cessation or dose reduction of a drug should therefore be assumed to be caused by withdrawal unless otherwise proven.
However, current methods of therapeutic evaluation cannot reliably detect stress induced drug dependence. This implies that a new kind of clinical trial is required explicitly to test for covert drug dependence and withdrawal effects in a manner analogous to the placebo controlled therapeutic trial.
Assumptions about the cause of post-withdrawal clinical deterioration
It has not yet been generally recognized that eliminating drug dependence as an explanation for withdrawal effects cannot be achieved in the context of normal clinical practice, nor by the standard formal methodologies of controlled clinical trials.
Just as eliminating the possibility of placebo effects requires specially designed placebo controlled therapeutic trials, so eliminating the possibility of covert drug dependence also requires specially designed withdrawal trials on normal control subjects.
At present, it is usual to assume a drug does not cause dependence, except when it is proved that a specific drug does cause dependence. This means that when no information on dependence is available, or when the information about dependence on a particular drug is either incomplete or inconclusive, then the standard accepted inference is that the drug does not cause dependence. In effect, the onus of proof is currently upon those who are trying to argue that a drug causes dependence.
The situation for withdrawal trials testing for dependence is therefore exactly the opposite of that applying to therapeutic trials and the placebo effect. Consequently, as Healy describes, prevailing clinical evaluation procedures may systematically be incapable of detecting withdrawal effects. Even worse, current procedures systematically tend to misattribute the creation of dependence, and the harm following withdrawal, as instead being evidence of drug benefit – with the implication that continued treatment of a supposed chronic illness is necessary.
The currently prevailing presumption therefore favours new drugs about which little is known; and it favours a perpetuation of the state of ignorance, since ‘no evidence of dependence’ is almost invariably interpreted as ‘evidence of no dependence’. In other words, as things stand, a drug that actually creates chronic dependence is instead credited with curing a chronic disease – even though that chronic disease is actually a stress syndrome which the same drug has caused.
The current situation is equivalent to chronic alcohol treatment being regarded as a cure for alcoholism on the (warped) basis that delirium tremens follows alcohol withdrawal and alcohol can be used to treat delirium tremens!
When to suspect covert dependence
The almost-total lack of awareness of covert drug dependence and withdrawal problems need not be accidental, but could be a consequence of the fact that unrecognized drug dependence is financially advantageous for the pharmaceutical companies who fund and conduct most clinical trials.
Although there are signs which may warn of dependence on a drug, and the possibility of withdrawal effects (e.g. dwindling effects of a drug, or the need for escalating doses in order to maintain its effect) – none of these are easy to discriminate from therapeutic effects.
But dependence may be suspected when what was perceived as an acute and self-limiting illness requiring a time-limited course of treatment, gradually becomes perceived as a chronic disorder requiring long-term drug treatment. This has been a pattern observed for several psychiatric conditions including depression and acute psychosis. Naturally, there can be rationalizations for this – for example, that the disease was previously unrecognized or under-treated.
Nonetheless, the difficulty of resolving such disputes serves to make clear the need for establishing a presumption of drugs being dependence-producing, and the necessity that this possibility be eliminated by withdrawal trials at an early stage in the evaluation of the drug.
Covert dependence generates a long-term demand for drugs by converting acute into chronic disease among the legitimate therapeutic target community. For example, acute and self-limiting depressive illness can be made into an apparent chronic disease if antidepressants create dependence such that drug withdrawal provokes depressed mood – so that a lifetime of antidepressant treatment can then be justified as ‘preventing’ a supposed chronic recurrent depressive disorder which is actually itself a product of drug administration.
Another way in which covert dependence is advantageous for pharmaceutical companies happens when the inclusiveness of diagnostic criteria is expanded: the more patients that are treated (on whatever excuse), the more dependence is produced, and the more people then require chronic drug administration.
Possible examples are when the threshold for prescribing a dependence-producing drug is lowered – such as the suggestion that early or preventive treatment of psychosis is beneficial, using an ‘atypical’ or traditional antipsychotic/neuroleptic. And because withdrawal of antipsychotics causes an increased likelihood of psychotic breakdown, preventive drug treatment is an apparently self-fulfilling prophecy. Or when a new and allegedly high-prevalence disease category such as ‘bipolar disorder’ is created, along with indications for treatment by dependence-producing drugs; this will tend to generate a new cohort of drug dependent patients whose long-term dependence on drugs can be disguised as a newly-discovered and previously-unsuspected type of severe and chronic psychiatric pathology.
In other words, under currently prevailing research standards, mass creation and exploitation of drug dependence may actually be spun as evidence of medical progress!
The necessity for drug-withdrawal trials on normal control subjects
Drug dependence needs a level of recognition comparable to the placebo effect because it is more damaging than the placebo effect. The main problem of failing to detect a placebo effect is that patients may be unnecessarily exposed to the expense and side effects of a drug. So the placebo effect may be clinically desirable, so long as the placebo is inexpensive and harmless.
But the consequences of failing to detect covert drug dependence may be considerably worse than this. When dependence is a problem, patients who receive chronic drug treatment may not only fail to receive any benefit (and thereby suffer unnecessary risk of side effects and expense) but the drug may actually create increasingly severe covert pathology. If a patient is prescribed a drug inappropriately, then they may become drug dependent – and remain so even when ineffectiveness, inconvenience, expense or treatment side effects mean that they wish (or need) to stop.
In a nutshell, the problem with placebos is merely that a drug fails to treat pathology, but the problem with dependence is that a drug has created pathology.
Clearly, the ideal – and perhaps indispensable – methodology for detecting covert drug dependence is a double-blind placebo controlled and randomized trial using disease-free normal control subjects. Normal controls are necessary to ensure that the possibility of chronic disease is eliminated: since controls begin the trial as ‘normal’ it is reasonable to infer that any clinical or psychological problems (above placebo levels) which they experience following drug withdrawal can reasonably be attributed to the effects of the drug.
A withdrawal trial needs to be prolonged to include not just sufficient chronicity of treatment by the active drug or placebo; but also a sufficient follow-up period after stopping the drug or placebo, during which it can be discovered whether there is any worsening of conditions following withdrawal and an increase in new pathologies. Specifically, what needs to be measured is a comparison of the frequency of post-withdrawal problems in the two randomly-assigned placebo and active drug groups.
Since the nature of withdrawal effects will not be known in advance, such a trial cannot rely upon highly focused and pre-specified questionnaires, but would need to include general questioning about overall feelings of well-being and quality of life, and any signs of problems as perceived by observers. Follow-up could include measures such as all-cause mortality and all-source morbidity, and measures of the frequency of adverse events such as suicide, accidents, medical contacts and hospital admissions.
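To illustrate the primary comparison such a withdrawal trial would make, here is a minimal sketch (in Python; every number is invented for illustration, not an estimate for any real drug): normal controls are randomized to active drug or placebo, both arms are later withdrawn, and the frequency of new problems during post-withdrawal follow-up is compared between arms.

import random

random.seed(7)

N_PER_ARM = 200   # hypothetical number of normal controls per arm

# Invented 'true' risks of developing any new clinical or psychological problem
# during post-withdrawal follow-up - the quantity the trial is designed to reveal.
TRUE_POST_WITHDRAWAL_RISK = {"placebo": 0.08, "active drug": 0.25}

def follow_up(arm):
    # Simulate, for each subject in the arm, whether problems emerge after withdrawal.
    risk = TRUE_POST_WITHDRAWAL_RISK[arm]
    return sum(1 for _ in range(N_PER_ARM) if random.random() < risk)

problems = {arm: follow_up(arm) for arm in TRUE_POST_WITHDRAWAL_RISK}

for arm, count in problems.items():
    print(f"{arm}: {count}/{N_PER_ARM} subjects with post-withdrawal problems "
          f"({count / N_PER_ARM:.1%})")

risk_difference = (problems["active drug"] - problems["placebo"]) / N_PER_ARM
print(f"excess risk attributable to the drug: {risk_difference:.1%}")
# Because the subjects started as disease-free normal controls, any excess of
# problems in the drug arm over the placebo arm points to covert dependence
# created by the drug itself; a formal analysis would put a confidence interval
# around this risk difference.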
In conclusion, covert drug dependence should be the null hypothesis explanation for post-withdrawal clinical deterioration, especially for new drugs and even more so for drugs acting on the brain. A default assumption is required that lack of evidence concerning drug dependence implies that a drug is dependence-producing.
Unless covert drug dependence becomes the default assumption, it remains advantageous for pharmaceutical companies self-servingly to maintain the current state of ignorance, in which recommendations for chronic drug treatment are enforced by drug dependence that is systematically misinterpreted as therapeutic effectiveness.
References
[1] Healy D. Treatment induced stress syndromes. Med Hypotheses, in press. doi:10.1016/j.mehy.2010.01.038.