Conflicts of interest in medical science: peer usage, peer review and ‘CoI consultancy'
Bruce G. Charlton
Medical Hypotheses 2004; 63: 181-186
In recent years, the perception has grown that conflicts of interest are having a detrimental effect on medical science as it influences health policy and clinical practice. Medical journals have responded by enforcing self-declaration of potential biases, in an attempt to counteract or compensate for the problem.
Conflict of interest (CoI) declarations have traditionally been considered inappropriate in pure science since its evaluation systems themselves constitute a mechanism for eliminating the effect of individual biases. Pure science is primarily evaluated by ‘peer usage', in which scientific information is ‘replicated' by being incorporated in the work of other scientists, and tested by further observation of the natural world. Over the long-term, the process works because significant biases impair the quality of science, and bad science tends to be neglected or refuted. However, scientific evaluation operates slowly over years and decades, and only a small proportion of published work is ever actually evaluated.
But most of modern medical science no longer conforms to the model of pure science, and may instead be conceptualized as a system of ‘applied' science having different aims and evaluation processes. The aim of applied medical science is to solve pre-specified problems, and to provide scientific information ready for implementation immediately following publication. The primary evaluation process of applied science is peer review, not peer usage. Peer review is much more rapid (with a timescale of weeks or months) and cheaper than peer usage and (consequently) has a much wider application: peer review is a prospective validation while peer usage is retrospective. Since applied science consists of incremental advances on existing knowledge achieved using established techniques, its results can usually be reliably evaluated by peer review.
However, despite its considerable convenience, peer review has significant limitations related to its reliance on opinion. One major limitation of peer review has proved to be its inability to deal with conflicts of interest, especially in a ‘big science' context when prestigious scientists may have similar biases, and conflicts of interest are widely shared among peer reviewers. When applied medical science has been later checked against the slower but more valid processes of peer usage, it seems that reliance on peer review may allow damaging distortions to become ‘locked-in' to clinical practice and health policy for considerable periods.
Scientific progress is generally underpinned by increasing specialization. Medical journals should specialize in the communication of scientific information, and they have neither the resources nor the motivation to investigate and measure conflicts of interest. Effectively dealing with the problem of conflicts of interest in applied medical science firstly requires a more explicit demarcation between the communications media of pure medical science and applied medical science. Greater specialization of these activities would then allow distinctive aims and evaluation systems to evolve with the expectation of improved performance in both pure and applied systems.
In future, applied medical science should operate with an assumption of bias, with the onus of proof on applied medical scientists to facilitate the ‘data transparency' necessary to validate their research. Journals of applied medical science will probably require more rigorous processes of peer review than at present, since their publications are intended to be ready for implementation. But since peer review does not adequately filter-out conflicts of interest in applied medical science, there is a need for the evolution of specialist post-publication institutional mechanisms. The suggested solution is to encourage the establishment of independent ‘CoI consultancy' services, whose role would be to evaluate conflicts of interest and other biases in published applied medical science prior to their implementation. Such services would be paid-for by the groups who intend to implement applied medical research.
Conflicts of interest in medical science
The conflict of interest debate has arisen mainly because of concern that articles published in some of the most prestigious, high-impact medical journals – including articles that have influenced clinical practice and health policy – appear to have been biased by the commercial or other interests of their authors (the situation has been analyzed in most detail for psychiatry [1, 2 and 3]). Some medical journal editors have tried to tackle this problem by requiring authors to publish a declaration or statement of their possible conflicts of interest.
Interests or biases in science are nothing new, nor are they confined to commercial pressures. Individual biases are pervasive in science, as in life, and provide the motivation for much scientific activity. But it is intrinsic to the nature of scientific evaluation processes that they are structured such that personal interests do not invalidate science. Indeed, given the ubiquity of individual bias, if science had not been able to filter-out personal partiality, there would never have been any such thing as science in the first place.
However, since the social structures of science have been operating fairly effectively for a few centuries, something must have changed in recent decades such that conflicts of interest in medical science are now perceived to be a significant problem. I suggest that the major change has been the nature of evaluation in medical research, in particular the shift away from the reliable but slow and expensive method of evaluating science by ‘peer usage', towards an ever-increasing reliance on the faster and cheaper but much more bias-prone methods of peer review. To distinguish the traditional practice of medical science from the newer processes I have adopted the terms ‘pure' and ‘applied' medical science.
Pure medical science – peer usage evaluations
‘Pure' scientific evaluations can best be understood to operate by allocating status to scientific work (and to scientists) on the basis of how their work stands-up to testing against observations of the natural world. This entails discovering whether published information is replicated by future observation: for instance whether theories are predictive of future observations and whether empirical results provide a reliable basis for further extrapolations. In practice, operationally, this amounts to a process by which ideas (both theories and empirical information) are taken-up, and incorporated as the basis of work, by other scientists. If the other scientists' later work based on earlier ideas is successful, then the earlier ideas are (indirectly and provisionally) validated. This process could be termed scientific evaluation by ‘peer usage' since it depends on published work being used by other scientists working in the same field.
Evaluation by peer usage is the process which lends science its exceptional reliability, but it is a slow process with a timescale of years and even decades. Published ideas must be noticed, read, understood, incorporated; new work must be done, and this new work must be published, noticed, read etc. Naturally, most published scientific work falls at one or another of these hurdles, usually at the earliest ones. This means that not only is peer usage slow, but it is incomplete. Only a small percentage of published science ever actually gets evaluated: those parts which other scientists deem necessary to their work, and information upon the validity of which they are confident enough to build their own work. Peer usage is such a powerful method of evaluation partly because using another scientist's work is not a mere expression of opinion concerning its validity, but amounts to ‘betting your career' on its validity. If a scientist's evaluation of other scientists' work is wrong, when they try to use it, their own research will be impaired (either by being held-back by wrongly ignoring important correct research, or by building-upon erroneous research such that their scientific work is unsuccessful).
The point to emphasize is that the traditional indifference of pure science journals to the biases and conflicts of interest of authors is not an accidental oversight. It relates to the status of the published literature. In a nutshell, pure science is published essentially for the purpose of communication, and pure science publications are intended for further evaluation. Publication is not taken to imply the ‘truth' or intrinsic validity of what is published. Validation, if it comes, will come later as a consequence of peer usage.
Applied medical science – peer review evaluations
Having said all this, the great bulk of modern medical science is not pure science and is not evaluated by peer usage. Most published medical science is now applied science, not pure science; and is evaluated by peer review not by peer usage.
Applied medical science is intended to solve specified problems as quickly, efficiently and (especially) as reliably as possible. The most reliable way to solve problems in the short term is to ensure that the questions asked are relatively modest, incremental extrapolations of current established knowledge; and to use standard, established methods to investigate them – such investigations being performed by researchers with validated competence and experience. The outcomes of such research can be pre-determined to a high degree and the answers can be obtained with a high level of reliability. Indeed, in order to receive significant amounts of research funding nowadays, it is usually necessary to describe a problem precisely, to detail the methods used to tackle it, and to predict the probable nature of the outcome (all this standing in contrast to the situation for pure science).
Applied medical science is evaluated primarily by peer review. Peers, who work in the same scientific field, are pretty much the same people as would be used to evaluate research by peer usage; but with peer review the criterion is the predictive opinion of peers as to whether information is probably valid, whereas with peer usage the evaluation is retrospective and depends upon further successful scientific work which incorporates the information to be evaluated.
Given the constraint of incremental extrapolation using established methodologies, peer reviewers should be able to make a relatively reliable judgment concerning the professional competence of authors, the probable appropriateness of techniques, and whether theorizing is sufficiently modest in its assumptions. The great advantages of peer review are that: (1) it can provide rapid evaluations on a timescale of weeks and months (rather than years and decades for peer usage); (2) peer review is much cheaper than peer usage, since peer usage requires the personnel, time and resources to perform further research, while peer review requires only the personnel, time and resources to give an opinion; (3) (precisely because it is more rapid and cheaper) peer review can in principle be applied to all applied scientific publications – while peer usage applies to only a small percentage of published work.
Naturally, this distinction between pure and applied medical science is somewhat idealized. There is a significant element of pre-publication selection at work even in pure medical science (otherwise these communications could not be identifiable as pure science), and even the most speculative of pure science relies substantially on established scientific knowledge. Furthermore, peer review also has an important role to play in many of the processes of pure science. However, the role of peer review of publications is secondary in pure science, related to increasing the efficiency of scientific communications rather than their validity (after all, peer review is used in many non-scientific academic fields such as English literature and theology).
Despite such caveats, it seems likely that modern medical science should be considered as mostly a matter of incremental extrapolations of established knowledge being evaluated by peer review, and implicitly published as ready for implementation. So long as newly published results fit into established practice (in the opinion of peer reviewers) then most published medical science will never be evaluated by pure scientific peer usage, and even where it is so evaluated this will come several years after its implementation. It is indeed striking that some of the recently evolved forms of prestigious and influential applied medical science, such as randomized mega-trials, are probably intrinsically inappropriate for evaluation by peer usage since they are non-theoretical.
The observation that applied medical science has evolved in its actual practice, if not in its self-descriptions, into a system that is fundamentally dependent on peer review mechanisms, can explain the profound shift in the perceived status of applied medical science publications.
Limitations of peer review
To recapitulate: published pure science is implicitly un-validated, being published as communication intended to lead to further evaluation by peer usage; applied science is implicitly validated before publication (by peer review) and is therefore published not for further evaluation but for pretty much immediate implementation. Hence, any deficiencies or biases in applied medical science that are uncorrected by the peer review process tend not to be corrected, at least not for several years; during which time the incremental extrapolations allowed by peer reviewed evaluations may gradually build-up into a substantial edifice of error [1, 2 and 3].
The intrinsic assumption that published applied science is already validated is the loophole through which conflicts of interest have been able to insert themselves into medical science. The extra convenience of peer review evaluations is bought at the price of intrinsic limitations in a system based on personal opinion as a filter for eliminating the individual biases in published science. Research may conform to professional standards of methodology, yet still be significantly biased by commercial, bureaucratic or other interests. Authors and research institutions can trade on their established status and methodological competence in order to increase the probability that their published work is implemented in ways that enhance their own personal and professional self-interest, or the interests of their employers or paymasters. Because peer usage evaluations are so delayed, and can only be applied to a small percentage of the published literature, the chances are that subtly dishonest or unwittingly debased applied medical scientists will ‘get away with it' under the present system.
Conflict of interest declarations and statements are clearly an inadequate response to this problem. Medical journals lack the resources to police conflicts of interest, and the volume of published work would make this task logistically impossible. Furthermore, even in principle, journals probably should not do this job: firstly because it interferes with their primary role, and it is more efficient for journals to specialize at doing what they should be doing; and secondly because journals themselves suffer from conflicts of interest in relation to evaluating their authors' conflicts of interest. The most significantly damaging conflicts of interest are those which influence the most powerful and prestigious scientists, institutions and funding agencies and these are the ones that are most likely to distort medical policy and practice [1, 2 and 3]. Yet prestigious individuals and groups are precisely those which medical journals can least afford to offend.
Since the policing of conflicts of interest in all published applied medical science publications would be impossibly slow, expensive and inefficient; the implication is that this job should be done after publication and selectively focused on that small percentage of published work which is likely to be implemented.
Conflict of interest consultancy
The greatest motivation to evaluate conflicts of interest in applied medical science is probably found among the people who want to implement this work – ‘implementers' such as politicians, policy-makers, clinicians, managers, patients and patient representatives. These are the people who have the most direct interest in knowing about conflicts of interest. The crux of the problem is that these implementers often lack both the resources and specialist ‘inside knowledge' required to detect and evaluate damaging conflicts of interest; especially when such conflicts are widespread among peer reviewers, as happens when a whole field of medical science is funded by one (or just a few similar) agencies (whether commercial or governmental).
Conflict of interest evaluation should be confined to that relatively small percentage of peer-reviewed and published scientific information which is potentially going to be implemented. Therefore, prior to implementation, there needs to be an extra process of independent investigation of probable conflicts of interest which may have generated a significant bias in published research. This job would require new forms of specialist expertise, including inside-knowledge of the area of science in question, and the time and resources to explore funding arrangements, personal reputation, networks of influence, etc.
In conclusion, there needs to be a new kind of professional institutional service for the implementers of applied medical science – a ‘CoI consultancy' service – whose job would be to provide conflict of interest evaluations of publications, scientists, teams and institutions. Logically, such CoI consultants would be hired by the groups who intend to use or implement the results of applied medical science. Of course, a new kind of consultancy service would be costly and would itself tend to generate new conflicts of interest. For example, commercial consultancy services have commercial biases; political or civil service consultancies would have political or administrative biases, etc. For this reason I do not think this work should be done by a single soi-disant authoritative monopoly provider (such as a government agency). Probably the best way to control both interests and costs would be for CoI consultancy services to be provided by a multiplicity of organizations in a competitive market (since establishing social systems structured around selection processes is usually found to be the most efficient way of fulfilling functions in modernizing societies).
If CoI consultancy services fulfill a need, then they will probably arise as soon as the users of applied medical research realize that medical journals cannot adequately do this job.
Linked to the problem of conflicts of interest is the need for scrutiny of research data, and the difficulties which some researchers and funders place in the path of those who challenge their findings. Legal and regulatory solutions are likely to be cumbersome, slow and expensive.
In pure science there has always been an assumption of ‘transparency': that the raw data of research should, in principle, be available for scrutiny if and when requested. Of course, raw data was very seldom requested in pure science; only in borderline cases relating to potentially important research, where the work in question was neither quite good enough to be trusted in the published form, nor quite bad enough to be rejected without further investigation. And indeed, it would be massively inefficient to require full data disclosure and independent re-analysis for every item of published science. But the threat was there.
The sanction which enforced data transparency was not legal or regulatory, but the fact that if a scientist refused to make their raw data available when requested, this would be taken as prima facie evidence of ‘guilt' (dishonesty or incompetence). Such scientific work would then be disregarded as intrinsically unreliable, and this interpretation would be disseminated around the scientific peer group so that other scientists would not want to ‘bet their career' on such work, and would tend not to use it. In other words, the sanction against data non-transparency was denying the status of peer usage to ‘non-transparent' scientists. If a scientist wanted their work to be used by their peers, then they had to provide raw data if and when asked.
A system in which research-implementers used CoI consultancy to investigate potential biases could have the same advantage. If the authors of applied medical science wanted the status of having their work implemented, they would need to provide access to any data that the implementers and their CoI consultants needed to scrutinize in order to guarantee research objectivity. (Presumably, the system would be similar to that operated by patent agencies, who are allowed access to otherwise confidential information for the purpose of checking but not for the purpose of use.) There would then be a prima facie assumption among the implementers of applied medical science that data could not be trusted if applied scientists refused access to raw results.
In other words, applied medical science would in future operate under the assumption of bias. All peer reviewed and published applied science would be assumed to embody conflicts of interest, therefore published science should not be implemented without such assumed biases being evaluated. The work of ‘non-transparent' applied scientists would be ignored by research-implementers on the grounds that it was presumably biased. The onus of proof would lie with the producers of applied medical science to cooperate with investigations to demonstrate that their work was free from damaging distortion.
The guarantee that research implementers do their job comes, in turn, from the various types of accountability which govern research implementation – political, managerial and ultimately legal. In a situation in which conflicts of interest are known to distort applied medical science, the research-implementers such as government agencies, health service organizations, or clinical professions arguably have a responsibility to take reasonable measures to investigate bias in the published research which underpins their policy decisions.
Failure to perform CoI investigation prior to implementation of applied medical science may eventually come to be regarded as a kind of negligence that would be liable to legal sanction.
Demarcation of pure and applied medical science
The conflict of interest problem would be clarified if pure and applied medical science were explicitly separated, with each system having its own media of communication that reflected their different needs and evaluation processes. The vast majority of medical researchers should be recognized as practicing applied medical science, and the minority of pure scientists would then evolve into identifiable interactive social groupings based around scientific specializations.
At present pure medical science suffers from being evaluated by the inappropriate application of peer review mechanisms which intrinsically operate to block the publication of ‘speculative' or ‘qualitative' extrapolations from current knowledge – especially when such extrapolations are supported by novel methodologies [6 and 9]. Peer reviewers, used to the requirements of applied science, are reluctant to fund or allow publication of audacious new work that cannot be regarded as probably correct-by-existing-criteria. Yet pure science is supposed to be working towards qualitative breakthroughs, and the publication of pure science is not an assertion of its validity but merely intended to communicate information for future evaluation by peer usage.
On the other hand, applied science is supposed to be ready for implementation at the time of publication (assuming that conflicts of interest are not a major problem), so that peer review is crucially important. For applied medical science, it is necessary that peer reviewers should ensure that the published work is confined to being a quantitative, incremental extrapolation of existing and validated work, that methods used are of established validity, and that speculation on the significance of results is not excessive. Unless this peer-evaluation process is done rigorously, the published work will not be dependable at the time of publication. Clearly much present peer review of applied medical science fails to achieve this ‘ready for implementation' standard. Yet, when their methods are criticized, applied medical scientists often inappropriately cry caveat emptor and claim the pure science ‘privilege' of publishing merely for further evaluation.
In future I would hope to see more clearly differentiated medical science communication systems for pure and applied branches of medical science, including separate journals, conferences and funding arrangements, and probably also distinct processes of training, accreditation and career evaluation. It should be unambiguous whether communications relating to medical science are intended for further evaluation or to be ‘ready for implementation'. By the above account, Medical Hypotheses is a journal operating explicitly in the system of pure medical science; which I hope explains and justifies the journal's distinctive procedures of editorial review, and our policy that conflict of interest statements are not required of authors.
These opinions are the author's own, but thanks are due to David L. Hull, John Ziman, David Healy, Peter Andras, Jonathan Rees, the late David F. Horrobin, and members of the current editorial advisory board of Medical Hypotheses who shared their views on this issue.
1. D. Healy, The antidepressant era, Harvard University Press, Cambridge, MA (1997).
2. D. Healy, The creation of psychopharmacology, Harvard University Press, Cambridge, MA (2002).
3. D. Healy, Let them eat Prozac: the unhealthy relationship between the pharmaceutical industry and depression (disease and desire), New York University Press, New York (2004).
4. D.L. Hull, Science as a process: an evolutionary account of the social and conceptual development of science, Chicago University Press, Chicago (1988).
5. J. Ziman, Peer review: a brief guide to practice. EASST Newsletter 12 (1993), pp. 23–26.
6. B.G. Charlton, Inaugural editorial. Med. Hypotheses 62 (2004), pp. 1–2.
7. B.G. Charlton, Fundamental deficiencies in the megatrial methodology. Curr. Controlled Trials Cardiovasc. Med. 2 (2001), pp. 2–7.
8. B.G. Charlton and P. Andras, The modernization imperative, Imprint Academic, Exeter, UK (2004).
9. D.F. Horrobin, The philosophical basis of peer review and the suppression of innovation. JAMA 263 (1990), pp. 1438–1441.