A study led by researchers from the Stanford University School of Medicine has revealed that the biggest source of bias in science, across all disciplines, is the overestimation of effect size.

Other risk factors for unreliable results include a scientist’s early career status, the degree of isolation from other researchers, and a prior record of misconduct. The study reviewed more than 3,000 meta-analyses from 22 fields of science, comprising nearly 50,000 individual research studies.

“Our study tested with much greater accuracy than before several hypotheses about the prevalence and causes of bias,” said Stanford senior research scientist and lead author of the paper, Daniele Fanelli. “Our results send a reassuring message, but only in part.”

However, professor of medicine and of health research and policy John Ioannidis said that in the data they examined, the influence of different kinds of bias changed over time and seemed to depend on the individual scientist. “We show that some of the patterns and risk factors seem to be getting worse in intensity over time,” he said.

Several types of hypothesised scientific bias were analysed:

1. Small-study effect

In small studies, it is harder to tell whether an observed trend holds for the population in general or is merely statistical fluctuation. Hence, small trials have a greater tendency to report large treatment benefits. If many such small studies have been done, a systematic review that draws on all of these biased results will end up slanted, too.
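A quick simulation makes the mechanism concrete. The sketch below is purely illustrative and not taken from the study: it assumes a true effect of 0.1, unit-variance outcomes, and that only clearly positive results get published, then compares what the "published" record looks like for small versus large trials.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.1  # assumed true mean difference between arms (illustrative)

def trial_estimate(n):
    """Simulate one two-arm trial with n patients per arm."""
    treated = rng.normal(TRUE_EFFECT, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    return treated.mean() - control.mean()

for n in (20, 2000):
    estimates = np.array([trial_estimate(n) for _ in range(10_000)])
    se = np.sqrt(2.0 / n)  # standard error of the mean difference
    # Suppose only clearly positive results (z > 1.96) see print:
    published = estimates[estimates > 1.96 * se]
    print(f"n={n:4d}: mean of all trials = {estimates.mean():+.3f}, "
          f"mean of 'published' trials = {published.mean():+.3f}")
```

In this toy model, the published small trials (20 patients per arm) report an average effect several times the true 0.1, while the published large trials barely overshoot it.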

Such small, biased studies abound in the field of homeopathy, an alternative form of medicine that most doctors look askance at for good reason. For example, a small 1994 study of 170 participants reported that homeopathic medicines had a modest benefit for recurrent upper respiratory tract infections, a finding that has since been largely disproved.

2. Grey literature bias

Grey literature refers to materials and research produced outside traditional academic publishing and distribution channels such as peer-reviewed journals. It includes government reports, conference proceedings and PhD theses, and it tends to report smaller or statistically insignificant effects, so meta-analyses that leave it out will tend to overestimate the pooled effect. Different research disciplines, however, rely on grey literature to differing degrees.

3. United States effect

US scientists, especially those who investigate human behaviour, are more likely to report exaggerated results than their counterparts in other countries. “We think the most likely explanation is that it’s about the research environment in the US,” says Fanelli.

"Somehow the researchers there are subtly more pressured than elsewhere in the world to make strong discoveries. This very idea that you do science to make strong discoveries is natural but it's a problem to science itself.”

This statement underscores the possible common root cause of most of these biases: the desire and need to publish something of significance. This tendency is, unsurprisingly, amplified in scientists who are just starting out in their careers and need to prove themselves, and in researchers with a history of misconduct. Those working in small or long-distance collaborations, too, were found to be more likely to overestimate effect sizes.

4. Early-extremes effect and decline effect

Excitement, or the desire to shock, over potentially extreme or controversial findings may prompt researchers to publish their work early, in what is known as the early-extremes effect. The decline effect describes the pattern that often follows: initial reports of extreme effects give way to subsequent reports of smaller ones.

This was the case for the obesity drug Contrave, where the first interim analysis, indicating high effectiveness (a 41% reduction in the relative risk of cardiovascular events), was leaked. Later findings suggested a far more modest reduction of about 12%, resulting in a scandal.
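As a side note on the arithmetic, relative risk reduction compares event rates between trial arms, and relative figures can sound far more impressive than absolute ones. The counts below are hypothetical, chosen only to reproduce a 41% figure; they are not the Contrave trial’s data.

```python
# Hypothetical event counts, not the Contrave trial's data.
events_treated, n_treated = 59, 1000
events_control, n_control = 100, 1000

risk_treated = events_treated / n_treated    # 5.9% event rate
risk_control = events_control / n_control    # 10.0% event rate
relative_risk = risk_treated / risk_control  # 0.59
rrr = 1 - relative_risk                      # relative risk reduction: 41%
arr = risk_control - risk_treated            # absolute risk reduction: 4.1 points
print(f"relative risk reduction = {rrr:.0%}, absolute = {arr:.1%}")
```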

5. Citation bias

This refers to the tendency of scientists to cite papers that report larger effect sizes. The more a paper is cited, the greater its perceived impact, and this can distort the scientific consensus on an entire topic. One example is the widely accepted belief that the protein β-amyloid, which plays a role in brain degeneration in Alzheimer’s disease, is also produced by and harms skeletal muscle fibres in the progressive muscle-wasting disease sporadic inclusion body myositis.

Despite their technical weaknesses, four of the papers supporting this claim were heavily cited, accounting for 94% of the 214 citations made on the topic over the following 12 years, and so came to be perceived as authoritative.
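A toy model hints at how such concentration can arise. In the sketch below, every number is invented (this is not Greenberg’s actual network): each of 214 citations goes to a paper with probability proportional to its reported effect size times one plus its existing citation count, so a preference for large effects compounds with visibility.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented corpus: 50 papers on the topic, each reporting some effect size.
effects = np.abs(rng.normal(0.2, 0.3, size=50)) + 0.01
counts = np.zeros(50)

# 214 citations, matching the count in the Greenberg example; each one
# favours big reported effects and already-visible papers.
for _ in range(214):
    weights = effects * (1 + counts)
    counts[rng.choice(50, p=weights / weights.sum())] += 1

top4_share = np.sort(counts)[-4:].sum() / counts.sum()
print(f"top 4 papers capture {top4_share:.0%} of all citations")
```

The share will not match the real network’s 94%, but a few early winners reliably end up dominating the citation count.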

6. Industry bias

Industry funding can skew findings in favour of the effectiveness of a touted product or of a conclusion that supports corporate goals. A 2006 analysis of cardiovascular drug trials found that 65.5% of industry-funded studies endorsed the benefits of the newer treatment, while only 39.5% of studies funded by non-profits did so. MIMS


Sources:
http://med.stanford.edu/news/all-news/2017/03/studies-of-scientific-bias-targeting-the-right-problems.html
Nüesch, E., Trelle, S., Reichenbach, S., Rutjes, A. W. S., Tschannen, B., Altman, D. G., ... Jüni, P. (2010). Small study effects in meta-analyses of osteoarthritis trials: Meta-epidemiological study. BMJ: British Medical Journal, 341(7766), 241. doi:10.1136/bmj.c3515
http://www.iflscience.com/health-and-medicine/homeopathy-ineffective-study-concludes/
http://cardiobrief.org/2016/03/08/controversial-ill-fated-obesity-drug-trial-published-in-jama/
https://p2c.com/students/should-we-have-faith-in-science-part-iii-citation-bias/
Greenberg, S. A. (2009). How citation distortions create unfounded authority: Analysis of a citation network. BMJ: British Medical Journal, 339(7714), 210-213. doi:10.1136/bmj.b2680
http://theness.com/neurologicablog/index.php/citation-bias-confirmation-bias-for-scientists/
https://blogs.scientificamerican.com/guest-blog/can-the-source-of-funding-for-medical-research-affect-the-results/
https://www.theguardian.com/science/2013/aug/26/research-funding-exaggerates-results