Assessments of harms using unpublished clinical study reports

Trials recently published an article on the reporting of harm outcomes and the use of unpublished clinical trials. We asked co-author Alex Hodkinson to tell us more about the work.

Randomized controlled trials (RCTs) are considered the gold standard to assess the beneficial effects of healthcare interventions. However, they are not always suitable to evaluate harms.

Randomized trials frequently lack pre-specified hypotheses for harms, and are usually designed to evaluate beneficial effects as their primary objective. They often lack large enough sample sizes and can be vulnerable to reporting biases, which can lead to distorted conclusions about harms when data are unpublished, partially reported, downplayed, or omitted.

Early evidence showed that harms reporting in RCTs was largely inadequate, which prompted a revision of the reporting standards.

Members of the Consolidated Standards of Reporting Trials (CONSORT) group convened in May 2003 to develop an extension guideline (CONSORT-harms) to the standard CONSORT statement. This extension provides a set of 10 specific and comprehensive recommendations for harms reporting in RCTs.

Do authors and journals follow guidelines?

After the release of the extension, it remained unknown whether authors and journals routinely endorse the guidelines. To address this uncertainty, in 2013 we published a systematic review of other empirical reviews that assessed the reporting of harms in RCTs using CONSORT-harms as a benchmark.

Our review showed inadequate and inconsistent reporting of harms across seven reviews spanning varied clinical areas. Adverse events were poorly defined, with six of the seven studies failing to exceed 50% adherence to CONSORT-harms. We encourage wider adoption of CONSORT-harms to enhance the reporting of harms in RCTs.

However, it is widely recognized in the research community that journal publications are frequently constrained by word limits, which can restrict the outcomes that are reported. This in turn can lead to reporting biases.

Selective outcome reporting bias

Selective outcome reporting bias has been identified as a major problem in systematic reviews. Not only can it affect the reliability of reviews of beneficial outcomes, it can also undermine reviews of harms when the evidence is distorted.

A recent study of 92 Cochrane reviews reported that 86% of the reviews did not include full data on the main harm outcome of interest, raising suspicion of outcome reporting bias.

The last several years have seen an increasing focus on the importance of opening up access to clinical trials, both to build public confidence and to facilitate research. There are three interlinked elements to the transparency discussions: registration of trials, improved reporting of summary results, and opening up access to the underlying raw data.

In 2009, a Cochrane review seeking to verify the safety and effectiveness of the drug oseltamivir was hampered by the limited published data available. This led to the updated Cochrane review on neuraminidase inhibitors (oseltamivir and zanamivir), the first Cochrane systematic review to be based solely on unpublished data from clinical study reports (CSRs) and regulators' comments.

The review found little evidence of benefit and increased risks of harms when neuraminidase inhibitors were used in patients with influenza symptoms or illnesses. The study underlines the importance of CSRs, for both past and future trials, in achieving unbiased trial evaluation.

Unlike journal publications, clinical study reports have no word restrictions; they provide a highly structured report that allows integrated and complete reporting of the planning, execution, results, and analysis of a clinical trial.

In recent years a number of studies have seen researchers successfully gain access to CSRs to investigate other medical products, and in some cases the evidence base established in journal publications was overturned by the findings from the CSRs.

What did we set out to find?

However, none of these studies had assessed the reporting of harms in a more narrative setting using CSRs. In this study, we therefore compared the quality and quantity of harms information reported in journal publications and CSRs, using a case study of orlistat trials.

Our results showed differences in the completeness and quality of harms-related information between journal publications and CSRs. Substantial amounts of information on patient-relevant harm outcomes, including serious adverse events, required for unbiased trial evaluation were missing from the publicly available journal article. The additional data obtained from the CSRs resulted in five statistically significant outcomes: three adverse events and two serious adverse events.

The CSRs provided more complete and robust information on harms than journal publications. Restricting an evidence synthesis to journal publications would effectively have missed these potential harms, which could have major implications for the safety of the product.

Open data-sharing platforms for clinical trial results are currently limited. Studies like ours will help to promote the use of CSRs in validity studies, and also provide some initial guidance on how to include these data in an evidence synthesis.
