Large sums of money are being channelled into clinical trials, but the question remains: do clinical trials still produce statistically significant results that can guide best practice in medicine?
Are clinical trials worth the billions of dollars invested?

Clinical trials have been regarded by the medical community as the gold standard for research in science, serving two main purposes – the approval of new drugs for safe use in humans and the comparison of efficacy among existing treatments.
The former is usually funded by medical companies, whose interests lie in the successful development of a drug, and conducted in private laboratories.
Trials comparing treatments, however, are often publicly funded and tend to take place in universities, informing decisions by the government, healthcare providers and patients. Although their budgets are smaller than those of novel drug trials, they are still costly: the National Institute for Health Research spent £74 million on trials in 2014 and 2015.
Similarly, the Singapore Economic Development Board (EDB), the statutory board responsible for grants, offers research grants ranging from S$1 million over two years to S$25 million over five years. The Prime Minister of Singapore, Lee Hsien Loong, has also announced that the government will invest a staggering S$4 billion per annum on research activities until the year 2020.
With such volumes of publicly funded trials, uncertainty over quality looms, with some experts estimating that nearly 50% of trials produce statistically uncertain results and yield inaccurate information about the effectiveness of treatments.
To make matters worse, a large number of scientific publications have been found to contain unnecessary, misleading, and conflicted information.
Clinical trials gold standard – only with good sample size

One of the bigger issues is the large sample size required to ensure statistically significant data, usually in later-stage trials that recruit human participants.
Before the commencement of a trial, researchers are required to calculate the appropriate sample size, based on the minimum clinically important difference and the variance of the outcome being measured, and to publish these numbers with the trial results for peer review.
For example, the Add-Aspirin trial in the UK is seeking 11,000 patients to investigate the effects of aspirin on the recurrence of cancer. If fewer participants are recruited than the calculated sample size requires, researchers cannot be confident that their conclusions are valid – even if a difference is detected during the trial – and may end up being wrong.
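The calculation described above can be sketched with the standard formula for a two-arm trial comparing means. This is a minimal illustration, not the method of any trial mentioned here; the significance level, power, effect size and standard deviation below are illustrative assumptions.

```python
# Sketch of a standard per-arm sample-size calculation for a two-arm
# trial comparing means: n = 2 * ((z_alpha + z_beta) * sd / mcid)^2.
# All numeric inputs below are illustrative, not from a real trial.
from statistics import NormalDist
import math

def per_arm_sample_size(mcid: float, sd: float,
                        alpha: float = 0.05, power: float = 0.90) -> int:
    """Participants needed per arm to detect a difference of `mcid`
    with a two-sided test at significance `alpha` and the given power,
    assuming the outcome has standard deviation `sd` in both arms."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~1.28 for 90% power
    n = 2 * ((z_alpha + z_beta) * sd / mcid) ** 2
    return math.ceil(n)

# Example: detecting a difference of 0.5 units when the outcome SD is 2
# requires 337 participants per arm.
print(per_arm_sample_size(mcid=0.5, sd=2.0))
```

Note how sensitive the requirement is: halving the detectable difference quadruples the required sample size, which is why trials of modest treatment effects, like Add-Aspirin, need thousands of participants.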
Inadequate sample size overlooked, clinical trials proceed

The problem is that some of these studies are still being evaluated despite poor enrolment of participants.
An analysis of trials that were funded by two of the UK’s largest funding bodies and performed between 1994 and 2002 revealed that over two-thirds failed to recruit the required numbers. Of these, 53% were granted an extension and funding, yet 80% still failed to meet their targets.
A subsequent follow-up analysis of trials performed between 2002 and 2008 found that only 55% of the trials managed to meet their target numbers.
With the UK – established as a world leader in clinical trials – struggling with recruitment, the chances of clinical trials meeting their recruitment targets elsewhere may be much lower.
One publication found that telephoning people who had not responded to invitations increased recruitment by 6%.
Other proposed interventions are disappointing: one suggestion, informing participants that they are in the control group, would violate blinding, a key tenet of clinical trials.
Many researchers who are aware of the issue are working to find more effective recruitment methods. Unfortunately, with funding increasingly tightened, addressing recruitment does not appear to be a priority in policy. MIMS