To discourage academics from publishing eye-catching but irreproducible results, researchers have put forward an ambitious new metric to rate the reproducibility of a scientific paper – the R-factor.

It is estimated that half of all research funding is spent on work that cannot be reproduced. Academics write and publish papers to propel their careers and accumulate citations – but how many of those papers actually report valid results?

Attempts have been made to tackle the problem, including replicating previous studies at specialised reproducibility centres. However, this approach has raised its own concerns about cost and effectiveness.

The proposed R-factor, by contrast, aims to measure the factual accuracy of a paper and to address the growing “crisis” of scientific credibility.

R-factor: The ratio of confirming studies to total attempts

Once published, a scientific paper can be cited by later papers, each of which confirms, refutes or merely mentions its claims. The R-factor comes into play by indicating how often a claim has been confirmed.

It is calculated by dividing the number of subsequent published reports that verified a scientific claim by the number of attempts to do so.

In other words, the R-factor is the proportion of confirmations among the total number of attempts. It runs from zero to one: the closer the R-factor is to 1, the more likely the claim is to be true.
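To make the arithmetic concrete, here is a minimal sketch in Python. The counts are invented for illustration, and the function name is hypothetical – it is not part of any Verum Analytics tool:

```python
def r_factor(confirmations: int, attempts: int) -> float:
    """R-factor of a claim: confirming reports divided by all attempts to test it."""
    if attempts == 0:
        raise ValueError("R-factor is undefined when no replication has been attempted")
    return confirmations / attempts

# Hypothetical claim tested 10 times and confirmed in 8 of those attempts
print(r_factor(confirmations=8, attempts=10))  # 0.8 – close to 1, so likely true
```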

The R-factor stands for “reproducibility, reputation, responsibility and robustness”. Photo credit: Verum Analytics

Attaching an R-factor to a paper should give researchers “a bit of pause before they actually publish stuff” – because the metric will rise or fall, depending on whether later work corroborates their findings, asserted Josh Nicholson, one of the inventors of the metric and chief research officer at Authorea, a New York-based research writing software company.

R-factors can also be assigned to investigators, journals or institutions, whereby their R-factor would be the average of the R-factors of the claims that they have reported.
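The article describes this aggregation only as a simple average, so a sketch with invented numbers might look like this – any weighting by study quality or sample size is deliberately absent:

```python
from statistics import mean

# Hypothetical R-factors of the individual claims one investigator has reported
claim_r_factors = [0.8, 0.6, 1.0, 0.5]

# The investigator's own R-factor is the plain average of those values
investigator_r = mean(claim_r_factors)
print(f"{investigator_r:.2f}")  # 0.72
```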

Freely and openly accessible

Verum Analytics, a website aimed at popularising the R-factor, is working towards making the R-factors of scientific claims freely and openly accessible.

They hope to store the metric in a database that can be easily accessed through an interactive interface. They also want the R-factor to be displayed on the first page of every scientific report, so that readers can gauge its credibility.

However, the major challenge at present lies in the overwhelming task of poring through and interpreting countless articles to arrive at the R-factor of a single paper: every paper that cites a study must be identified, and each attempt to confirm its claims manually tallied.

The team has so far done this painstakingly for 12,000 papers, and has now launched Verum Analytics to seek help from the public. The aim is to gather enough labelled examples to train a machine learning system to do the categorisation automatically.
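The article does not specify what kind of system the team has in mind. As a rough illustration of the categorisation task involved, the sketch below trains a simple text classifier with scikit-learn on a toy set of hand-labelled citation sentences – all of them invented here, standing in for the crowdsourced labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: citation sentences hand-labelled as confirming,
# refuting or merely mentioning a claim (invented for illustration)
sentences = [
    "We replicated the effect reported by Smith et al.",
    "Our results are consistent with the original findings.",
    "We failed to reproduce the reported effect.",
    "Contrary to earlier work, we observed no such association.",
    "The claim was first proposed by Smith et al.",
    "See Smith et al. for background on this hypothesis.",
]
labels = ["confirm", "confirm", "refute", "refute", "mention", "mention"]

# Bag-of-words features feeding a linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)

# Categorise a new citing sentence
print(model.predict(["Our experiments confirm the original result."]))
```

In practice, the examples tallied by hand – and those contributed by the public – would play the role of this toy training set.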

Can the R-factor be the ultimate solution?

Having an R-factor assigned to scientific papers would deter academics from rushing to publish unverified claims, the team argues. It could also correct the existing imbalance of incentives, as a scientist with a higher R-factor could attract more funding.

The work may be ambitious, but critics have countered that such a calculation could be too simplistic, because it gives all subsequent studies equal weight, even when some have much larger sample sizes than others.

It also “glosses over hard questions”. Consider, for instance, calculating the R-factor for the claim that ‘antidepressants cause suicide’. If a paper reported that antidepressants increase suicide attempts but not suicide deaths, would that confirm the hypothesis, refute it, or neither? Opinions will differ, and different judgements would produce different R-factors for the same literature.

Critics also say that the R-factor offers no advantage over a proper meta-analysis, since both involve the same process of combing through multiple papers to reach a conclusion.

The R-factor also takes the published literature ‘at face value’, leaving it exposed to publication bias, p-hacking and other distortions.

The research team has so far focused on examples from cancer biology, which critics say is a limitation. Molecular biology studies do not usually rely on statistics, and their results are presented qualitatively. The R-factor may therefore be useful in such fields – but not in sciences that lean heavily on statistics, such as psychology and neuroscience. MIMS

Read more:
How can doctors tackle medical misinformation?
Star Wars hoax makes four scientific journals look like a joke
When it comes to reports and papers – go easy on those jargons, scientists

Sources:
https://replicationnetwork.com/2017/08/20/is-the-r-factor-the-answer/
http://www.biorxiv.org/content/biorxiv/early/2017/08/09/172940.full.pdf
https://www.timeshighereducation.com/news/r-factor-new-way-rate-journal-articles
http://blogs.discovermagazine.com/neuroskeptic/2017/08/21/r-factor-fix-science/#.WaTRNj4jGUk