Sunday, February 15, 2009

Scientific misconduct as a principal-agent problem


How does an organization ensure that its agents perform their duties truthfully and faithfully? We have ample evidence of the other kind of performance -- theft, misappropriation, lies, fraud, diversion of assets for personal use, and a variety of deceptive accounting schemes. And we have whole professions devoted to detecting and punishing these forms of dishonesty -- accountants, investigative reporters, management consultants, insurance experts, prosecutors and their investigators. And yet dishonest behavior remains common in business, finance, government, and even the food industry. (See several earlier postings for discussions of the issues of corruption and trust in society.)

Here I'm especially interested in a particular kind of activity -- scientific and medical research. Consider a short but sobering list of scientific and medical frauds in the past fifty years: Cyril Burt's intelligence studies, Dr. Hwang Woo-suk's stem cell cloning fraud, the Anjan Kumar Banerjee case in Britain, the MMR vaccine-autism case, a spate of recent cases in China, and numerous other examples. And consider recent reports that some scientific images in leading publications had been photoshopped in ways that favored the researchers' findings (link). (Here are some comments by Edward Tufte on the issue of scientific imaging, and here are some journal guidelines from the Council of Science Editors attempting to regulate the issue.) Plainly, fraud and misconduct sometimes occur within the institutions of scientific and medical research. And each case has consequences -- for citizens, for patients, and for the future course of research.

Here is how the editor of Family Practice describes the problem of research misconduct in a review of Fraud and Misconduct in Medical Research (third edition):
Fraud and misconduct are, it seems, endemic in scientific research. Even Galileo, Newton and Mendel appear to have fudged some of their results. From palaeontology to nanotechnology, scientific fraud reappears with alarming regularity. The Office of Research Integrity in the USA investigated 127 serious allegations of scientific fraud last year. The reasons for conducting fraudulent research and misrepresenting research in scientific publications are complex. The pressures to publish and to achieve career progression and promotion and the lure of fame and money may all play a part, but deeper forces often seem to be at work.

How important are fraud and misconduct in primary care research? As far as Family Practice goes, mercifully rare, as I pointed out in a recent editorial. Sadly, however, there are examples, all along the continuum from the beginning of a clinical trial to submission of a final manuscript, of dishonesty and deceit in general practice and primary care research. Patients have been invented to increase numbers (and profits) in clinical trials, ethical guidance on consent and confidentiality have been breached, and ‘salami’ and duplicate publication crop up from time to time.

The problem is particularly acute in scientific and medical research because the public at large has very little ability to evaluate the validity of a research finding independently, let alone validate the integrity of the research behind it. And this extends largely to science and medicine journalists as well, since they are rarely given access to the underlying records and data for a study.

The stakes are high -- dishonest research can cost lives or delay legitimate research, not to speak of the cost of supporting the fraudulent research in the first place. The temptations for researchers are large as well -- funding from drug and device makers, the incentives and pressures of career advancement, and pure vanity, to name several. And we know that instances of fraud and other forms of serious scientific misconduct continue to occur.

So, thinking of this topic as an organizational problem -- what measures can be taken to minimize the incidence of fraud and misconduct in scientific research?

One way of describing the situation is as a gigantic principal-agent problem. (Khalid Abdalla provides a simple explanation of the principal-agent problem here.) It falls within the scope of the more general challenge of motivating, managing, and supervising highly skilled and independent professionals. The "agent" is the individual researcher or research team. And the "principal" may be construed at a range of levels: society at large, the Federal government, the NIH, the research institute, or the department chair. But it seems likely that the problem is most tractable if we focus attention on the more proximate relationships -- the NIH, the research institute, and the researcher.
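
To make this framing concrete, here is a minimal sketch of the agent's side of the calculation. It is a toy model, not an estimate: the benefit, detection probability, and sanction values are entirely hypothetical, chosen only to expose the structure of the incentive.

```python
# A stylized model of the researcher-as-agent's choice. All parameter
# values are hypothetical and serve only to illustrate the structure.

def expected_payoff_of_fraud(benefit, p_detect, sanction, integrity_cost=0.0):
    """Expected net payoff of fabricating a result.

    benefit        -- career gain (publication, funding) if undetected
    p_detect       -- probability that oversight or peer review catches it
    sanction       -- penalty if caught (debarment, prosecution, disgrace)
    integrity_cost -- the agent's internal "conscience" cost of cheating
    """
    return (1 - p_detect) * benefit - p_detect * sanction - integrity_cost

# The principal cannot observe honesty directly; it can only tune
# p_detect (oversight) and sanction (enforcement).
for p in (0.01, 0.10, 0.50):
    payoff = expected_payoff_of_fraud(benefit=10.0, p_detect=p, sanction=50.0)
    print(f"p_detect={p:.2f}: expected payoff of fraud = {payoff:+.2f}")
```

On this toy model, fraud stops paying only when p_detect exceeds benefit / (benefit + sanction), so deterrence turns as much on the probability of detection as on the severity of punishment -- a point that will recur below.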

So this is a good problem to consider from the point of view of institutional design and complex interactive social behavior. We know what kind of behavior we want; the problem is to create the institutional settings and motivational processes through which the desired behavior is encouraged and the undesired behavior is detected and punished.

One response from the research institutions (research universities, institutes, and medical schools) is to emphasize training programs in scientific professional ethics, to instill the values of strict scientific integrity more deeply in each researcher and each institution. The hope is that pervasive attention to the importance of scientific integrity will reduce the incidence of misconduct. A second approach, from universities, research organizations, and journals, is to strengthen the oversight and internal controls that surround research. One example -- some journals require that the statistical analysis of results be performed by a qualified, independent academic statistician. Strict requirements governing conflicts of interest are another institutional response. And a third approach, from institutions such as the NIH and NSF, is to ratchet up the consequences of misconduct. The United States Office of Research Integrity (link) has a number of training and enforcement programs designed to minimize scientific misconduct, and the British government has set up a similar organization to combat research fraud, the UK Research Integrity Office (link). Individuals found culpable can be denied access to research funds -- effectively halting their scientific careers -- and criminal prosecution is possible as well. So the sanctions for misconduct are significant. (Here's an egregious example leading to criminal prosecution.)

And, of course, the first and last line of defense against scientific misconduct is the fundamental requirement of peer review. Scientific journals use expert peers to evaluate the research to be considered for publication, and universities turn to expert peers when they consider scientists for promotion and tenure. Both processes create a strong likelihood of detecting fraud if it exists. Who is better qualified to detect a potentially fraudulent research finding than a researcher in the same field?
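
How strong is that likelihood? A back-of-the-envelope way to think about it is to treat each expert review as an independent check with some per-review chance q of catching a fabricated result. Both assumptions are generous -- reviewers see the same polished manuscript rather than the underlying data, and the values of q below are pure guesses -- but the arithmetic is still instructive.

```python
# Stylized: treat n expert reviews as independent checks, each catching
# a fabricated result with probability q. The q values are hypothetical.

def detection_probability(q, n_reviews):
    """P(at least one of n independent reviews catches the fraud)."""
    return 1 - (1 - q) ** n_reviews

for q in (0.05, 0.20):
    for n in (2, 5, 10):
        p = detection_probability(q, n)
        print(f"q={q:.2f}, n={n:2d} reviews: P(detect) = {p:.2f}")
```

Even on these optimistic assumptions, a fabricated result can survive several rounds of review with substantial probability when q is small.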

But is all of this sufficient? It's unclear. The most favorable interpretation is that this combination of motivational factors and local and global institutional constraints will contain the problem at an acceptable level. But is there empirical evidence for this optimism? Or is misconduct becoming more widespread over time? Efforts to deepen researchers' attachment to a code of research integrity are certainly positive -- but what about the small percentage of people who are not motivated by an internal compass? Greater internal controls are certainly a good idea -- but they are surely less effective in research than accounting controls are in the financial arena; oversight is simply harder to achieve in scientific research. (And of course we all know how porous those controls are in the financial sector -- witness Enron and other accounting frauds.) And if the likelihood of detection is low, then the threat of punishment is weakened. So the measures mentioned here have serious limitations in their likely effectiveness.
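
One way to see the limitation is to extend the toy model above to a whole population of researchers, most of whom carry a high internal integrity cost (the "internal compass") and a small minority of whom do not. The population share and payoff numbers below are invented purely for illustration.

```python
import random

# Toy simulation, continuing the hypothetical payoffs above. Most agents
# never cheat regardless of oversight; a small, invented minority weigh
# only the external deterrent p_detect * sanction.

random.seed(0)
BENEFIT, SANCTION = 10.0, 50.0
N_AGENTS = 10_000
P_NO_COMPASS = 0.02  # hypothetical share with no internal integrity cost

def fraud_rate(p_detect):
    """Fraction of agents for whom fabrication has positive expected payoff."""
    frauds = 0
    for _ in range(N_AGENTS):
        integrity_cost = 0.0 if random.random() < P_NO_COMPASS else 100.0
        if (1 - p_detect) * BENEFIT - p_detect * SANCTION - integrity_cost > 0:
            frauds += 1
    return frauds / N_AGENTS

for p in (0.01, 0.10, 0.20):
    print(f"p_detect={p:.2f}: fraud rate ~ {fraud_rate(p):.2%}")
```

On this picture, ethics training raises the integrity cost for the majority but does nothing for the residual minority; only pushing the probability of detection above the break-even threshold deters them -- and detection is precisely where oversight in research is weakest.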

Brian Deer is one of Britain's leading journalists covering medical research (website). His work in the Sunday Times of London established the medical fraud underlying the spurious claim, mentioned above, that the MMR vaccine causes autism. Following a recent public lecture to a medical audience, he was asked how we can get a handle on frauds like these. His answer was blunt: snap inspections, investigative policing, and serious penalties. In his view, the stakes are too high to leave the matter to professional ethics.

It perhaps goes without saying that the vast majority of scientific researchers are honest investigators guided by the advancement of science and medicine. But it is also apparent that there is a small number of researchers of whom this is not true. And the problem confronting the organizations of scientific research is a hard one: how to create institutional structures in which misconduct is unlikely to occur and likely to be detected when it does.

There is one other connection that strikes me as important, and it is a connection to the philosophy of science. It is an article of faith for philosophers of science that the scientific enterprise is truth-enhancing, in this sense: the community of researchers follows a set of institutionally embodied processes that are well designed to enhance the comprehensiveness and veridicality of our theories and to weed out false ones. Our theories get better through the empirical and logical testing that occurs within these socially embodied procedures of science. But if the corrupting influences mentioned above are indeed common, then the confidence we can have in the epistemic value of the procedures of science takes a big hit. And that is worrisome news indeed.

5 comments:

Anonymous said...

The one thing I take from your post, and from almost every other text on scientific misconduct, is that we know little about it beyond individual cases. There is no trustworthy information on the scope of the problem. (If it even is a problem.) The most vexing thing is that even defining scientific misconduct is a highly problematic enterprise. (E.g., there is not even a thin line between misconduct and creativity; very often they are the same thing.)
If we accept that misconduct might be a problem in science, then I am sceptical about using a principal-agent model for analysis. As can be seen from your post, no new or creative solutions emanate from this perspective. Professional ethics, oversight and internal controls, training and enforcement, and peer review are obviously ineffective, especially when the problem cannot be clearly specified. Given that the available evidence is only anecdotal, Philip Mirowski's macro-historical approach to this topic seems to me most appropriate. Even David Guston, who also tackles the topic with a principal-agent approach, remains exclusively at the macro level. From their point of view, commercialization might be the main culprit, and principal-agent approaches seem to be blind to this aspect.

Nancy Walton, Ph.D. said...

I think that this is an important perspective from which to view research integrity. Those people involved in research ethics and the oversight of scientific and medical research should have an awareness of this approach.

Here's my own blog entry citing yours.

http://www.researchethics.ca/blog/2009/02/research-integrity-and-principal-agent.html

Nancy

Dan Little said...

Nancy,

Thanks for the discussion. I visited your site and found it very interesting! The Wakefield posting is particularly pertinent. Your blog is a great resource for people interested in research ethics. Dan

Lee Redding said...

I think framing misconduct in research as a principal-agent problem is interesting. Usually when principal-agent problems are discussed in employment situations, it is considered important that the agent be given strong incentives -- a virtually guaranteed job at a fixed salary would be a disaster, since the employee would have insufficient incentive to produce results.

In this setting, however, the identified potential problem is that the agent has too much incentive to produce "results," so that fake results may be reported. The obvious solution, then, is that once the agent has proven herself to be of high quality, she should be given a reasonably secure job at a relatively fixed salary. This, of course, is a rough description of the academic tenure process.

Dan Little said...

Lee, Thanks for the interesting observation. You're right that the incentives associated with academic tenure are highly relevant to faculty productivity. Wouldn't an organizational behavior theorist say that there needs to be a differential incentive distinguishing the high-productivity, medium-productivity, and low-productivity professionals? But then this brings us back to the need for an effective and visible enforcement regime to discourage false results.