Over the years, scientists worldwide have reached beyond boundaries to address various issues – and their efforts have no doubt made huge contributions to our society.

Contributions that have affected each one of us, in one way or another.

However, a recent study by Dr John Ioannidis, a professor at Stanford University and, somewhat ironically, himself a well-known meta-researcher, suggested that “the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, or conflicted”. He added that the value of such studies published in recent years is now in doubt.

Meta-analyses and systematic reviews are retrospective studies in which researchers collect and scrutinise data from previous studies in order to achieve a better understanding of a topic. These methods are typically considered among the highest standards of scientific evidence, and researchers use them to distil accumulated findings and address gaps in their field of study.
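To make the idea concrete, the core calculation behind a simple fixed-effect meta-analysis can be sketched as follows. The function name and the study numbers below are purely illustrative assumptions, not figures from any study discussed in this article:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: each study's effect
    estimate is weighted by the inverse of its variance, so more
    precise studies count for more in the combined result."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical studies of the same treatment effect
effects = [0.30, 0.10, 0.25]    # e.g. standardised mean differences
variances = [0.04, 0.01, 0.02]  # squared standard errors

est, se = pooled_effect(effects, variances)
print(f"pooled effect = {est:.3f} ± {1.96 * se:.3f} (95% CI half-width)")
```

The pooled estimate sits closest to the most precise study — which is exactly why the quality of the inputs matters so much: a biased set of input studies yields a confidently wrong combined answer.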

Ioannidis: Most studies published in recent years are redundant

Before the 1980s, systematic reviews and meta-analyses were not common methods among researchers. In his study, however, Ioannidis showed that there has been a staggering increase in the number of publications based on these methods – roughly a 2,500% rise from 1991 to 2014.

The growth, he theorised, was due to the proliferation of papers published by experts across scientific disciplines over the years, which created a pent-up demand for newer scientists to compile, analyse and make sense of the masses of evidence that had accumulated.

While these methods can draw meaningful and statistically valid conclusions, Ioannidis contended that many of the studies published in recent years are unnecessary, reasoning that most of the topics addressed are already covered by multiple overlapping, redundant meta-analyses.

Systematic reviews may be used as a marketing tool

“Some fields produce massive numbers of meta-analyses; for example, 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. These meta-analyses are often produced either by industry employees or by authors with industry ties and results are aligned with sponsor interests,” he wrote, hypothesising that only approximately 3% of all these reviews provide accurate and essential information.

According to Ioannidis, pharmaceutical companies and corporations in similar industries have taken advantage of systematic reviews as a marketing tool and have begun engaging contractors who conduct meta-analyses for a fee. The concern is that these contractors report only results that favour their clients, distorting the evidence and data that are ultimately published.

“If the paying customer doesn’t want to see the results because they are negative, the contractor doesn’t publish them,” he said, adding that a large portion of analyses carried out by these contractors were never published.

These reviews are usually published in respectable journals and frequently cited, and the hazard of inaccurate data from such influential sources “is worse than when unimportant studies are wrong.”

“Surprising results” given incentives as scientists race to publish

A separate study led by Paul Smaldino, a cognitive scientist at the University of California, Merced, has also suggested that scientists are often incentivised to publish surprising results, even when such findings are likely to be inaccurate. This substandard practice is driven by the highly competitive environment of academia, Smaldino said, and will continue so long as such incentives remain in place.

“This doesn’t require anyone to actively game the system or violate any ethical standards. Competition for limited resources – in this case jobs and funding – will do all the work,” he explained. One “survival of the fittest” example he gave is the problem of “low statistical power”: findings in human behaviour, health or psychology based on a sample of people too small to support any statistically sound conclusion.
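The low-power problem can be illustrated with a quick calculation. The sketch below approximates the power of a two-sided two-sample test using the standard normal distribution; the effect size, the group sizes and the conventional 0.80 power threshold are illustrative assumptions, not figures from Smaldino's study:

```python
import math

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(effect_size, n_per_group, z_crit=1.959964):
    """Approximate power of a two-sided two-sample test (normal
    approximation, unit variance): the probability of detecting a
    true standardised effect of the given size at alpha = 0.05."""
    se = math.sqrt(2.0 / n_per_group)  # SE of the difference in group means
    z = effect_size / se
    return normal_cdf(z - z_crit) + normal_cdf(-z - z_crit)

# A "medium" effect (d = 0.5) with only 15 subjects per group
# gives power far below the conventional 0.80 threshold:
print(f"n=15 per group: power ≈ {power_two_sample(0.5, 15):.2f}")
# Around 64 per group is needed to reach roughly 0.80:
print(f"n=64 per group: power ≈ {power_two_sample(0.5, 64):.2f}")
```

With so little power, most true effects go undetected – and the "significant" results that do surface in small samples are disproportionately likely to be flukes or inflated estimates, which is precisely what makes them look surprising and publishable.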

Despite the race to publish, Vince Walsh, a professor of neuroscience at University College London, stated that he has yet to be convinced of the existence of a “replication crisis.”

However, he agreed that the pressure to publish was “anti-intellectual”. “Scientists are just humans and if organisations are dumb enough to rate them on sales figures, they will do discounts to reach the targets, just like any sales person,” he said.

How can the situation be improved?

According to Smaldino, the circumstances can change sooner if more people are made aware of these problems and commit to improving the situation.

One approach is to encourage more prospective reviews.

Currently, most systematic reviews and meta-analyses are retrospective, meaning that scientists collect and analyse data from previous studies in order to draw a conclusion. Retrospective reviews can be flawed, however, because the researchers who conducted the original studies may have adhered to different protocols, producing results of varying quality despite addressing the same question.

In such cases, these studies may not be comparable and the data collected would need to be tweaked in order to allow for fair evaluation. However, this risks yielding inaccurate results.

It may also be difficult for researchers carrying out the reviews to contact the authors of older studies in order to retrieve additional data that were not published.

Prospective reviews are one way to overcome these limitations. According to Ioannidis, an initiative website already allows scientists to register their research methods ahead of time, and if registration on the site becomes a requirement at top journals, the number of prospective reviews published will increase dramatically.

An alternative solution would be to fix the inputs: scientists would need to be more transparent about their methods of work. At present, the quality of a systematic review or meta-analysis is only as good as the original studies being reviewed, which means that if the compiled data are biased by selective publication, the resulting review will ultimately produce a biased and incomplete conclusion on the topic being analysed.

On 14 September 2016, the United Nations called for global action on transparency in clinical trials by asking government bodies worldwide to pass legislation that will require registration of clinical trials and full disclosure of methods and results in publications.

Another method to address the input issue would be to publish systematic reviews as “living documents” that allow other researchers who are interested in the same subject area to progressively edit or update reviews using a standard methodology, according to Ioannidis.

Meanwhile, Malcolm Macleod, professor of neurology and translational neuroscience at Edinburgh University, said that Ioannidis’ work is the first to reveal the staggering number of substandard reviews being carried out.

Kay Dickersin, director of the Center for Clinical Trials and Evidence Synthesis at Johns Hopkins University in Maryland, said she hoped such analyses would bring attention to the problem. MIMS
