25 April 2012

Questionable Research Practices: The "Steroids of Scientific Competition"

A new research paper just out in the journal Psychological Science by John et al. seeks to quantify the incidence of what are called "questionable research practices" in psychological research.  They write:
Although cases of overt scientific misconduct have received significant media attention recently (Altman, 2006; Deer, 2011; Steneck, 2002, 2006), exploitation of the gray area of acceptable practice is certainly much more prevalent, and may be more damaging to the academic enterprise in the long run, than outright fraud. Questionable research practices (QRPs), such as excluding data points on the basis of post hoc criteria, can spuriously increase the likelihood of finding evidence in support of a hypothesis. Just how dramatic these effects can be was demonstrated by Simmons, Nelson, and Simonsohn (2011) in a series of experiments and simulations that showed how greatly QRPs increase the likelihood of finding support for a false hypothesis. QRPs are the steroids of scientific competition, artificially enhancing performance and producing a kind of arms race in which researchers who strictly play by the rules are at a competitive disadvantage. QRPs, by nature of the very fact that they are often questionable as opposed to blatantly improper, also offer considerable latitude for rationalization and self-deception.
John et al. used multiple methods to assess the prevalence of questionable research practices among psychology researchers. They found a surprisingly high prevalence of such practices in their study.
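To make the mechanism concrete, here is a minimal simulation of my own; it is not taken from John et al. or from Simmons et al., and the group sizes, the t-test, and the exclusion rule are all illustrative assumptions. Under a true null hypothesis (both groups drawn from the same distribution), allowing a single post hoc re-analysis step (dropping the most extreme point in each group and re-testing whenever the first test misses significance) pushes the false positive rate above the nominal 5%.

import numpy as np
from scipy import stats

# Sketch only: illustrative sample size, test, and exclusion rule,
# not the specific procedures studied in the paper.
rng = np.random.default_rng(0)
n_sims, n_per_group, alpha = 10_000, 20, 0.05

naive_hits = qrp_hits = 0
for _ in range(n_sims):
    # Both groups come from the same distribution, so the null is true and
    # an honest test should reject about 5% of the time.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p = stats.ttest_ind(a, b).pvalue
    naive_hits += p < alpha

    if p >= alpha:
        # QRP: post hoc exclusion of the most extreme observation in each
        # group, then re-test and keep whichever analysis "worked".
        a2 = np.delete(a, np.argmax(np.abs(a - a.mean())))
        b2 = np.delete(b, np.argmax(np.abs(b - b.mean())))
        p = stats.ttest_ind(a2, b2).pvalue
    qrp_hits += p < alpha

print(f"false positive rate, honest analysis:         {naive_hits / n_sims:.3f}")
print(f"false positive rate, with post hoc exclusion: {qrp_hits / n_sims:.3f}")

Any rule of the form "try a second analysis and keep whichever result is significant" can only raise the hit rate above 5%, which is the general point the quoted passage is making.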

They note that some questionable research practices are indeed merely questionable rather than clearly improper, but that the researchers surveyed nonetheless rated many of the practices as unjustifiable:
As noted in the introduction, there is a large gray area of acceptable practices. Although falsifying data (Item 10 in our study) is never justified, the same cannot be said for all of the items on our survey; for example, failing to report all of a study’s dependent measures (Item 1) could be appropriate if two measures of the same construct show the same significant pattern of results but cannot be easily combined into one measure. Therefore, not all self-admissions represent scientific felonies, or even misdemeanors; some respondents provided perfectly defensible reasons for engaging in the behaviors. Yet other respondents provided justifications that, although self categorized as defensible, were contentious (e.g., dropping dependent measures inconsistent with the hypothesis because doing so enabled a more coherent story to be told and thus increased the likelihood of publication). It is worth noting, however, that in the follow-up survey—in which participants rated the behaviors regardless of personal engagement—the defensibility ratings were low. This suggests that the general sentiment is that these behaviors are unjustifiable.
Even so, there are incentives in the research world to engage in questionable research practices:
We assume that the vast majority of researchers are sincerely motivated to conduct sound scientific research. Furthermore, most of the respondents in our study believed in the integrity of their own research and judged practices they had engaged in to be acceptable. However, given publication pressures and professional ambitions, the inherent ambiguity of the defensibility of “questionable” research practices, and the well-documented ubiquity of motivated reasoning (Kunda, 1990), researchers may not be in the best position to judge the defensibility of their own behavior. This could in part explain why the most egregious practices in our survey (e.g., falsifying data) appear to be less common than the relatively less questionable ones (e.g., failing to report all of a study’s conditions). It is easier to generate a post hoc explanation to justify removing nuisance data points than it is to justify outright data falsification, even though both practices produce similar consequences.
The authors suggest that the prevalence of questionable research practices may help to explain the finding that many studies cannot be replicated:
QRPs can waste researchers’ time and stall scientific progress, as researchers fruitlessly pursue extensions of effects that are not real and hence cannot be replicated. More generally, the prevalence of QRPs raises questions about the credibility of research findings and threatens research integrity by producing unrealistically elegant results that may be difficult to match without engaging in such practices oneself. This can lead to a “race to the bottom,” with questionable research begetting even more questionable research. If reforms would effectively reduce the prevalence of QRPs, they not only would bolster scientific integrity but also could reduce the pressure on researchers to produce unrealistically elegant results.
I think I am on safe ground when I say that the problem of questionable research practices goes well beyond the discipline of psychology.

10 comments:

  1. The only way to put a dent in QRPs would be to require a full public description of your research plan, from beginning to end, before the work is started. If you have to detail your hypothesis, experimental design and analysis ahead of time, you can't data mine or change statistical analysis to get 'significant' results.

    Of course, you'd also be revealing your research direction long before you publish. And keeping the secret is more important than scientific integrity.

    ReplyDelete
  2. -1- Mark B,

    I would disagree. Currently, journals only ever publish novel positive results. If they started publishing negative results, then there would be an incentive to publish studies that could be replicated (i.e., an incentive for fewer QRPs).

    This assumes, of course, that follow-ups that could not reproduce the result would cause a researcher's reputation to suffer. Also, the researcher could more easily meet publishing requirements without having to find only novel positive results.

    But the current system seems to be built to encourage QRPs, so why are we surprised at the results?

    ReplyDelete
  3. MattL identifies the problem. We need to go to online journals and rid ourselves of the tyranny and space limitations of dead-tree journals. Negative results are very important to the progress of science and need to be published so that later researchers can reference them.

    ReplyDelete
  4. This is but one side of the coin. What are the ethics of avoiding research subjects that might upset politically correct findings, for fear of losing access to future funding?
    A good example, given the growing costs associated with controls on environmental (reactive) nitrogen, is the drinking water standard for nitrate, which is based on two highly flawed 60-year-old studies. Don't hold your breath waiting for a researcher dependent on EPA funds to revisit this MCL. Sometimes the big problems are what you don't see.

    ReplyDelete
  5. I think the problem is deeper than research plans or losing negative results.

    Over a 37-year career in academics I came to believe that about 30% of my colleagues had engaged in some sort of misconduct, some of it serious. What is worse, in two egregious cases of misconduct, deans tried to suppress the problem. In one case the dean did cover up the misconduct, but in the other my department's senior faculty forced the dean to act, and a plagiarist was dismissed from the university.

    Part of the problem is that academics are not policed. Every claim by Big Pharma is scrutinized and investigated, but faculty researchers can rest assured that no one will ever do that to them.

    ReplyDelete
  6. There seems to be some confusion about negative results. In molecular biology, 'negative results' generally means you did something wrong. Sloppy work in the lab does not contribute to scientific advance - it's just poor work. And in many fields, one could propose an endless number of hypotheses. Ruling one of them out is hardly a contribution to science - it only existed because you thought it up. Following that logic, one could produce an endless number of papers, all with negative results, and all worthless.

    Negative results CAN be valuable to know, but only in special cases. If there is a favored hypothesis that hasn't been proven, but is generally accepted as most probable, it might be useful to publish a refutation, so that people don't go on accepting it.

    ReplyDelete
  7. I like the subliminals! Smoothing the line between Psychology and more locally relevant matters. Kind of an image pun . . .

    "This can lead to a “race to the bottom,” with questionable research begetting even more questionable research." I like this, it reminds me of when the "tell-all" graph is emblazoned in a power point in front of a large audience. Sometimes people cheer, but perhaps one person in the audience knows that there are serious assumptions hidden in the graph that the graph was built on, itself based on the other graphs. But it is a graph none-the-less. It is hard to dispute such a powerfully simple argument as the graph. If you move away from the simplicity back into argument mode, you look the madman.

    But I'm not sure that assumptions are even in the gray area; bias may steer toward the QRPs, and there is perhaps a lot worse.

    ReplyDelete
  8. -6- Mark B,

    Then why do the experiment in the first place?!

    But it's true: you can't simply be willing to publish negative results; you must also be willing to publish positive replications of other research. But my understanding is that such replications often get turned away as not being novel results. Sure, you don't want to fill every journal with this stuff, but if you don't do it at all, we get blog posts like this one.

    Then we could get to a more correct popular understanding of peer review, in that it's not worth very much absent replication.

    I suppose some could abuse this system by postulating absurd theories, but that supposes that someone was willing to fund those studies.

    ReplyDelete
  9. I would like to discuss the recent paper by William Nordhaus: http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9779.2011.01544.x/full

    Can I post an introductory comment and ask some questions about that paper? How?

    ReplyDelete
  10. Given the recent article regarding cancer research... money quote:

    "We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."**

    (widely covered; Reuters article here: http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328 ) it becomes difficult to shake the sense that much of what has been published over the past 20 years in academia may be more "best story" than best science.

    Where then is trust? Why then the marginalization of gadflies and skeptics?

    ReplyDelete