Teaching and Assessing the Responsible Conduct of Research: A Delphi Consensus Panel Report.

2009 
Introduction

Responsible Conduct of Research (RCR) education and training too often emphasize rules like "Do not falsify data" or "Do not plagiarize." These are simple extrapolations of what most researchers learned in kindergarten: lying and stealing are wrong. Reminding researchers of such rules involves stating the obvious, with the result that RCR education and training may be perceived as boring, unnecessary, and ineffective.

However, not all issues in research ethics are so clear-cut. In a survey by Martinson, Anderson, & de Vries (2005) of over 1,700 researchers, 33% reported engaging in so-called "questionable research practices," such as dropping data points from analyses based on a hunch or inappropriately assigning authorship. The example of inappropriate authorship is particularly instructive. First, practices for assigning authorship vary across disciplines (Steneck, 2004). Second, even in a discipline such as medicine, in which international standards have been published (International Committee of Medical Journal Editors, 2007), authorship assignment has not become standardized. A recent review of 234 biomedical journals found that 41% gave no guidance about authorship and only 19% based their guidance on the current criteria of the International Committee of Medical Journal Editors (Wager, 2007). Uncertainty about criteria helps to explain the high rates at which researchers admit to assigning authorship in a questionable manner. Yet, given a lack of standardized criteria within professions, even RCR instructors are uncertain what should be taught in the area of authorship.

While rates of strict research misconduct (data falsification, fabrication, or plagiarism) are much lower than rates of questionable practices, they are also higher than many might assume. A survey by the U.S. Office of Research Integrity (ORI) of researchers holding National Institutes of Health (NIH) funding at 605 different institutions inquired into the number of times researchers had observed suspected research misconduct in their own departments over the previous three academic years (Titus, Wells, & Rhoades, 2008). A total of 2,212 researchers completed the survey (yielding a 51% response rate); they reported observing a total of 201 instances. By extrapolating this rate of observed suspected misconduct--assuming that the 49% who did not respond observed no instances of misconduct--the authors estimated that there are more than 2,300 observations of likely misconduct per year in research funded by the U.S. Department of Health and Human Services (DHHS); a rough version of this arithmetic is sketched at the end of this section. Given that ORI receives an average of only 24 institutional investigation reports per year (approximately 1% of the estimated incidences observed), these numbers suggest the need for RCR education and training--not only to reduce rates of misconduct, but also to provide guidance to researchers on how to respond to observed misconduct.

Yet this topic is also controversial. Real-world decisions regarding whistle-blowing are often far more complex (Smith, 2006), and their consequences far more devastating (Couzin, 2006), than ethics textbooks suggest. Thus, although merely reminding researchers of a duty to report misconduct is not sufficient, it remains unclear precisely what content or standards should be taught.
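As a rough check on the figures from the Titus, Wells, & Rhoades (2008) survey cited above (this is a back-of-the-envelope restatement, not the authors' exact estimation procedure; the size of the DHHS-funded researcher population used in their extrapolation is not reproduced in this excerpt):

% Observed rate among the 2,212 respondents, who reported 201 instances over 3 years:
\[
  r \;=\; \frac{201}{2{,}212 \times 3} \;\approx\; 0.03 \ \text{observed instances per researcher per year.}
\]
% Scaling r to the full population of DHHS-funded researchers (N not given here)
% yields the authors' estimate of more than 2,300 observations per year.
% Comparing that estimate with ORI's average annual intake of investigation reports:
\[
  \frac{24 \ \text{reports per year}}{2{,}300 \ \text{estimated observations per year}} \;\approx\; 1\%.
\]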
In 2000, ORI identified nine core areas that RCR courses should address: (1) data acquisition, management, sharing, and ownership; (2) mentor/trainee responsibilities; (3) publication practices and responsible authorship; (4) peer review; (5) collaborative science; (6) human subjects; (7) research involving animals; (8) research misconduct; and (9) conflict of interest and commitment. While these core areas provide a useful initial framework, there is no evidence of professional consensus that ORI's list includes the most important areas of RCR, nor about what content should be taught and assessed within the core areas (Steneck & Bulger, 2007). …