Written by Fiona Fidler, Associate Professor, School of Historical and Philosophical Studies, University of Melbourne

Cherry picking or hiding results, excluding data to meet statistical thresholds, and presenting unexpected findings as though they were predicted all along – these are just some of the “questionable research practices” implicated in the replication crisis that psychology and medicine have faced over the last half-decade or so.

Read more: Science is in a reproducibility crisis – how do we resolve it?

We recently surveyed more than 800 ecologists and evolutionary biologists and found high rates of many of these practices. We believe this to be the first documentation of these behaviours in these fields of science.

Our pre-print results have a certain shock value, and their release attracted a lot of attention on social media.

  • 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking)

  • 42% had collected more data after inspecting whether results were statistically significant (a form of “p hacking”; see the simulation sketch after this list)

  • 51% reported an unexpected finding as though it had been hypothesised from the start (known as “HARKing”, or Hypothesising After Results are Known).
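
The optional stopping in that second bullet is easy to demonstrate. Below is a minimal simulation sketch – our own illustration, not an analysis from the survey, with hypothetical parameters throughout – of why repeatedly testing as data accumulate, and stopping as soon as p < 0.05, inflates the false-positive rate well beyond the nominal 5%.

    # Sketch of "p hacking" by optional stopping. Illustrative only;
    # requires numpy and scipy. Under a true null, a single fixed-n test
    # is "significant" about 5% of the time; peeking after every new
    # batch of data inflates that rate.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def false_positive_rate(peeking, n_start=20, n_step=10, n_max=100,
                            alpha=0.05, n_sims=5000):
        hits = 0
        for _ in range(n_sims):
            a = list(rng.normal(size=n_start))  # two groups, no true difference
            b = list(rng.normal(size=n_start))
            while True:
                if stats.ttest_ind(a, b).pvalue < alpha:
                    hits += 1                   # false positive: the null is true
                    break
                if not peeking or len(a) >= n_max:
                    break
                a.extend(rng.normal(size=n_step))  # collect more data, re-test
                b.extend(rng.normal(size=n_step))
        return hits / n_sims

    print("single fixed-n test:  ", false_positive_rate(peeking=False))  # roughly 0.05
    print("test after each batch:", false_positive_rate(peeking=True))   # well above 0.05

Nothing in the sketch is fraudulent in the data themselves; the inflation comes entirely from when the researcher decides to stop collecting.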

Although these results are very similar to those that have been found in psychology, reactions suggest that they are surprising – at least to some ecology and evolution researchers.

There are many possible interpretations of our results. We expect there will also be many misconceptions about them and unjustified extrapolations. We talk through some of these below.

Read more: How we edit science part 2: significance testing, p-hacking and peer review

It’s fraud!

It’s not fraud. Scientific fraud involves fabricating data and carries heavy criminal penalties. The questionable research practices we focus on are by definition questionable: they sit in a grey area between acceptable practices and scientific misconduct.

Not crazy. Not kooky. Scientists are just humans. (Image: www.shutterstock.com)

We did ask one question about fabricating data, and the answers offered further evidence that fabrication is very rare, consistent with findings from other fields.

Read more: Research fraud: the temptation to lie – and the challenges of regulation

Scientists lack integrity and we shouldn’t trust them

There are a few reasons why this should not be the take-home message of our paper.

First, reactions to our results so far suggest an engaged, mature scientific community, ready to acknowledge and address these problems.

If anything, this sort of engagement should increase our trust in these scientists and their commitment to research integrity.

Second, the results tell us much more about incentive structures and institutions than they do about individuals and their personal integrity.

Read more: Publish or perish culture encourages scientists to cut corners

For example, these results tell us about the institution of scientific publishing, where negative (non-statistically significant) results are all but banished from most journals in most fields of science, and where replication studies are virtually never published because of a relentless focus on novel, “ground-breaking” results.
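
The consequence of banishing negative results can be made concrete with another small sketch – again our own hypothetical numbers, not the survey’s: if journals accept only statistically significant findings, the published literature systematically overstates the true effect.

    # Sketch of publication bias. Illustrative numbers only; requires
    # numpy and scipy. A small true effect studied with underpowered
    # samples is rarely significant, and the studies that do get
    # "published" exaggerate it.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.2   # small true standardised effect
    n = 30              # per-group sample size (underpowered for this effect)

    published = []
    for _ in range(10_000):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            published.append(treated.mean() - control.mean())  # observed effect

    print("true effect:               ", true_effect)
    print("share of studies published:", len(published) / 10_000)  # roughly 0.1
    print("mean published effect:     ", np.mean(published))       # roughly 0.6, inflated

Every individual study in this sketch is honest; the distortion is created entirely by the filter on what gets published.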

The survey results also tell us about scientific funding, where again “novel” (meaning positive, significant) findings are valued more than careful, cautious procedures and replication. They also tell us about universities, and about the hiring and promotion practices within academic science that focus on publication metrics and overvalue quantity at the expense of quality.

So what do they mean, these questionable research practices admitted by the scientists in our survey? We think they are best understood as the inevitable outcome of publication bias, funding protocols and an ever-increasing pressure to publish.

Read more: Novelty in science – real necessity or distracting obsession?

We can’t base important decisions on current scientific evidence

There’s a risk our results will feed into a view that our science is not policy-ready. In many areas, such as health and the environment, this could be very damaging, even disastrous.

One reason this view is unwarranted is that climate science is a model-based science, and there have been many independent replications of these models. The same is true of immunisation trials.

We know that any criticism of scientific practice runs a risk in the context of anti-science sentiment, but such criticism is fundamental to the success of science.

Remaining open to criticism is science’s most powerful self-correction mechanism, and ultimately what makes the scientific evidence base trustworthy.

Transparency can build trust in science and scientists. (Image: www.shutterstock.com)

Scientists are human and we need safeguards

This is an interpretation we wholeheartedly endorse. Scientists are human and subject to the same suite of cognitive biases – like confirmation bias – as the rest of us.

As we learn more about cognitive biases and how best to mitigate them in different circumstances, we need to feed this back into the norms of scientific practice.

Read more: Confirmation bias: A psychological phenomenon that helps explain why pundits got it wrong

The same is true of our knowledge about how people function under different incentive structures and conditions. This is the basis of many of the initiatives designed to make science more open and transparent.

The open science movement is about developing initiatives that protect against the influence of cognitive bias, and that alter incentive structures so that questionable research practices stop being rewarded.

Some of these initiatives have been enthusiastically adopted by many scientists and journal editors. For example, many journals now publish analysis code and data along with their articles, and many have signed up to the Transparency and Openness Promotion (TOP) guidelines.

Other initiatives offer great promise too. For example, registered reports are now offered by some journals, mostly in psychology and medicine. In a registered report, the article is reviewed on the strength of its underlying premise and approach, before any data are collected. This removes the temptation to select only positive results, or to apply different standards of rigour to negative results. In short, it thwarts publication bias.

We hope that by drawing attention to the prevalence of questionable research practices, our research will encourage support of these initiatives, and importantly, encourage institutions to support researchers in their own efforts to align their practice with their scientific values.

Read more: http://theconversation.com/our-survey-found-questionable-research-practices-by-ecologists-and-biologists-heres-what-that-means-94421