
  • Commentary
  • Open Access

Why clinical trial outcomes neglect to translate into benefits for patients


Abstract

Clinical research should ultimately improve patient care. For this to be possible, trials must evaluate outcomes that genuinely reflect real-world settings and concerns. However, many trials continue to measure and report outcomes that fall short of this clear requirement. We highlight problems with trial outcomes that make evidence difficult or impossible to interpret and that undermine the translation of research into practice and policy. These complex issues include the use of surrogate, composite and subjective endpoints; a failure to take account of patients' perspectives when designing research outcomes; publication and other outcome reporting biases, including the under-reporting of adverse events; the reporting of relative measures at the expense of more informative absolute outcomes; misleading reporting; multiplicity of outcomes; and a lack of core outcome sets. Trial outcomes can be developed with patients in mind, however, and can be reported completely, transparently and competently. Clinicians, patients, researchers and those who pay for health services are entitled to demand reliable evidence demonstrating whether interventions improve patient-relevant clinical outcomes.

Peer Review reports

Background

Clinical trials are the most rigorous way of testing how novel treatments compare with existing treatments for a given outcome. Well-conducted clinical trials have the potential to make a significant impact on patient care and therefore should be designed and conducted to achieve this goal. One way to do this is to ensure that trial outcomes are relevant, appropriate and of importance to patients in real-world clinical settings. However, relatively few trials make a meaningful contribution to patient care, often as a result of the way that the trial outcomes are chosen, collected and reported. For example, authors of a recent analysis of cancer drugs approved by the U.S. Food and Drug Administration (FDA) reported a lack of clinically meaningful benefit in many post-marketing studies, owing to the use of surrogates, which undermines the ability of physicians and patients to make informed treatment decisions [1].

Such examples are concerning, given how critical trial outcomes are to clinical decision making. The World Health Organisation (WHO) recognises that 'choosing the most important outcome is critical to producing a useful guideline' [2]. A survey of 48 U.K. clinical trials units identified 'choosing appropriate outcomes to measure' as one of the top three priorities for methods research [3]. Yet, despite the importance of carefully selected trial outcomes to clinical practice, relatively little is understood about the components of outcomes that are critical to decision making.

Most articles on trial outcomes focus on one or two aspects of their development or reporting. Assessing the extent to which outcomes are critical, however, requires a comprehensive understanding of all the shortcomings that can undermine their validity (Fig. 1). The problems we set out are complex, frequently coexist and can interact, contributing to a situation where clinical trial outcomes commonly fail to translate into clinical benefits for patients.

Fig. 1 Why clinical trial outcomes fail to translate into benefits for patients

Main text

Badly chosen outcomes

Surrogate outcomes

Surrogate markers are frequently used to infer or predict a more direct patient-oriented outcome, such as death or functional capacity. Such outcomes are popular because they are often cheaper to measure and because changes may emerge faster than the real clinical outcome of interest. This can be a valid approach when the surrogate marker has a strong association with the real outcome of interest. For example, intra-ocular pressure in glaucoma and blood pressure in cardiovascular disease are well-established markers. However, for many surrogates, such as glycated haemoglobin, bone mineral density and prostate-specific antigen, there are considerable doubts about their correlation with disease [4]. Caution is therefore required in their interpretation [5]. Authors of an analysis of 626 randomised controlled trials (RCTs) reported that 17% of trials used a surrogate primary outcome, but only one-third discussed their validity [6]. Surrogates generally provide less directly relevant evidence than studies using patient-relevant outcomes [5, 7], and over-estimation runs the risk of incorrect interpretations because changes may not reflect important changes in outcomes [8]. As an example, researchers in a well-conducted clinical trial of the diabetes drug rosiglitazone reported that it effectively lowered blood glucose (a surrogate) [9]; however, the drug was subsequently withdrawn in the European Union because of increased cardiovascular events, the patient-relevant outcome [10].

Composite outcomes

The use of combination measures is highly prevalent in, for example, cardiovascular research. Nonetheless, their use can frequently lead to exaggerated estimates of treatment effects or render a trial report uninterpretable. Authors of an analysis of 242 cardiovascular RCTs, published in six high-impact medical journals, found that in 47% of the trials, researchers reported a composite outcome [11]. Authors of a further review of 40 trials, published in 2008, found that composites often had little justification for their choice [12], were inconsistently defined, and often the outcome combinations did not make clinical sense [13]. Individual outcomes within a composite can vary in the severity of their effects, which may be misleading when the most important outcomes, such as death, make relatively little contribution to the overall outcome measure [14]. Having more outcome data by using a composite does allow more precise effect estimation. Interpretation, however, is particularly problematic when data are missing. Authors of an analysis of 51 rheumatoid arthritis RCTs reported that >20% of data were missing for the composite primary outcomes in 39% of the trials [15]. Missing data often require imputation; however, the optimal method to address this remains unknown [15].

Subjective outcomes

Where an observer exercises judgment while assessing an outcome, or where the outcome is self-reported, the outcome is considered subjective [16]. In trials with such outcomes, effects are often exaggerated, particularly when methodological biases occur (i.e., when outcome assessors are not blinded) [17, 18]. In a systematic review of observer bias, non-blinded outcome assessors exaggerated odds ratios in RCTs by 36% compared with blinded assessors [19]. In addition, trials with inadequate or unclear sequence generation also produced biased estimates when outcomes were subjective [20]. Yet, despite these shortcomings, subjective outcomes are highly prevalent in trials as well as systematic reviews: In a study of 43 systematic reviews of drug interventions, researchers reported the primary outcome was objective in only 38% of the pooled analyses [21].

Complex scales

Combinations of symptoms and signs can be used to form outcome scales, which can also prove to be problematic. A review of 300 trials from the Cochrane Schizophrenia Group's register revealed that trials were more likely to be positive when unpublished, unreliable and non-validated scales were used [22]. Furthermore, changes to the measurement scale used during the trial (a form of outcome switching) was one of the possible causes for the high number of results favouring new rheumatoid arthritis drugs [23]. Clinical trials require rating scales that are rigorous, but this is difficult to achieve [24]. Moreover, patients want to know the extent to which they are free of a symptom or a sign, more so than the mean change in a score.

Lack of relevance to patients and decision makers

Interpretation of changes in trial outcomes needs to go beyond a simple discussion of statistical significance to include clinical significance. Sometimes, however, such interpretation does not happen: In a review of 57 dementia drug trials, researchers found that less than half (46%) discussed the clinical significance of their results [17]. Furthermore, authors of a systematic assessment of the prevalence of patient-reported outcomes in cardiovascular trials published in the ten leading medical journals found that important outcomes for patients, such as death, were reported in just 23% of the 413 included trials. In 40% of the trials, patient-reported outcomes were judged to be of little added value, and 70% of the trials were missing crucial outcome data relevant to clinical decision making (mainly due to use of composite outcomes and under-reporting of adverse events) [25]. There has been some improvement over time in reporting of patient-relevant outcomes such as quality of life, but the situation remains dire: by 2010, only 16% of cardiovascular disease trials reported quality of life, a threefold increase from 1997. Use of surrogate, composite and subjective outcomes further undermines relevance to patients [26] and often accompanies issues with reporting and interpretation [25].

Studies often undermine decision making by failing to determine thresholds of practical importance to patient care. The smallest difference a patient, or the patient's clinician, would be willing to accept to use a new intervention is the minimal clinically important difference (MCID). Crucially, clinicians and patients can help in developing MCIDs; however, to date, such joint working is rare, and use of MCIDs has remained limited [27].

Problems are further compounded by the lack of consistency in the application of subjective outcomes across different interventions. Guidelines, for instance, reject the use of antibiotics in sore throat [28] owing to their minimal effects on symptoms; yet similar guidelines approve the use of antivirals because of their effects on symptoms [29], despite similarly limited effects [30]. This contradiction occurs because decision makers, and particularly guideline developers, frequently lack understanding of the MCIDs required to alter therapeutic decision making. Caution is also warranted, though, when it comes to assessing minimal effects: Authors of an analysis of 51 trials found small outcome effects were usually reported and often eliminated by the presence of minimal bias [31]. Also, MCIDs may not necessarily reflect what patients consider to be important for decision making. Researchers in a study of patients with rheumatoid arthritis reported that the difference they considered really important was up to three to four times greater than MCIDs [32]. Moreover, inadequate duration of follow-up and trials that are stopped too early also contribute to a lack of reliable evidence for decision makers. For example, authors of systematic reviews of patients with mild depression have reported that only a handful of trials in primary care provide outcome data on the long-term effectiveness (beyond 12 weeks) of anti-depressant drug treatments [33]. Furthermore, results of simulation studies show that trials halted too early, with modest effects and few events, will result in large overestimates of the treatment effect [34].

Badly collected outcomes

Missing data

Issues with missing data occur in nearly all research: its presence reduces study power and can easily lead to false conclusions. Authors of a systematic review of 235 RCTs found that 19% of the trials were no longer significant, based on assumptions that the losses to follow-up actually had the outcome of interest. This figure was 58% in a worst-case scenario, where all participants lost to follow-up in the intervention group and none in the control group had the outcome of interest [35]. The 'five and twenty rule' (i.e., if >20% of data are missing, then the study is highly biased; if <5%, then there is a low risk of bias) exists to aid understanding. However, interpretation of the outcomes is seriously problematic when the absolute effect size is less than the loss to follow-up. Despite the development of a number of different ways of handling missing data, the only real solution is to prevent it from happening in the first place [36].
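As an illustration of how far loss to follow-up can move a result, the sketch below (invented numbers and a simple Wald interval, not taken from any of the studies cited above) recomputes a risk difference under the worst-case assumption that every participant lost in the intervention arm had the outcome and none of those lost in the control arm did:

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference (arm A minus arm B) with an approximate 95% Wald CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical trial: 25/180 events on treatment vs 50/185 on control,
# with 20 and 15 participants lost to follow-up, respectively.
diff, (lo, hi) = risk_difference_ci(25, 180, 50, 185)
print(f"as observed: {diff:+.3f}  95% CI ({lo:+.3f}, {hi:+.3f})")

# Worst case for the treatment: all 20 lost treatment participants had the
# outcome; none of the 15 lost control participants did.
diff_w, (lo_w, hi_w) = risk_difference_ci(25 + 20, 180 + 20, 50, 185 + 15)
print(f"worst case:  {diff_w:+.3f}  95% CI ({lo_w:+.3f}, {hi_w:+.3f})")
```

With these invented numbers the as-observed interval excludes zero while the worst-case interval crosses it, which is exactly the kind of reversal the review of 235 RCTs describes.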

Poorly specified outcomes

It is important to determine the exact definitions for trial outcomes because poorly specified outcomes can lead to confusion. As an example, in a Cochrane review on neuraminidase inhibitors for preventing and treating influenza, the diagnostic criteria for pneumonia could be either (1) laboratory-confirmed diagnosis (e.g., based on radiological evidence of infection); (2) clinical diagnosis by a doctor without laboratory confirmation; or (3) some other type of diagnosis, such as self-report by the patient. Treatment effects for pneumonia were statistically different, depending on which diagnostic criteria were used. Furthermore, who actually assesses the outcome is important. Self-report measures are particularly prone to bias, owing to their subjectivity, but even the type of clinician assessing the outcome can affect the estimate: Stroke risk because of carotid endarterectomy differs depending on whether patients are assessed by a neurologist or a surgeon [37].

Selectively reported outcomes

Publication bias

Problems with publication bias are well documented. Among cohort studies following registered or ethically approved trials, half go unpublished [38], and trials with positive outcomes are twice as likely to be published, and published faster, compared with trials with negative outcomes [39, 40]. The International Committee of Medical Journal Editors has stated the importance of trial registration to address the issue of publication bias [41]. Its policy requires 'investigators to deposit information about trial design into an accepted clinical trials registry before the onset of patient enrolment'. Despite this initiative, publication bias remains a major issue contributing to translational failure. This led to the AllTrials campaign, which calls for all past and present clinical trials to be registered and their results reported [42].

Reporting bias

Outcome reporting bias occurs when a study has been published but some of the outcomes measured and analysed have not been reported. Reporting bias is an under-recognised problem that significantly affects the validity of results. Authors of a review of 283 Cochrane reviews found that more than half did not include data on the primary outcome [43]. One manifestation of reporting bias is the under-reporting of adverse events.

Under-reporting of adverse events

Interpreting the net benefits of treatments requires full outcome reporting of both the benefits and the harms in an unbiased manner. A review of recombinant human bone morphogenetic protein 2 used in spinal fusion, however, showed that data from publications substantially underestimated adverse events when compared with individual participant data or internal industry reports [44]. A further review of 11 studies comparing adverse events in published and unpublished documents reported that 43% to 100% (median 64%) of adverse events (including outcomes such as death or suicide) were missed when journal publications were solely relied on [45]. Researchers in multiple studies have found that journal publications under-report side effects and therefore exaggerate treatment benefits when compared with more complete data presented in clinical study reports [46], FDA reviews [47], ClinicalTrials.gov study reports [48] and reports obtained through litigation [49].

The aim of the Consolidated Standards of Reporting Trials (CONSORT) statement, currently endorsed by 585 medical journals, is to improve reporting standards. However, despite CONSORT's attempts, both publication and reporting bias remain a substantial problem. This substantially impacts the results of systematic reviews. Authors of an analysis of 322 systematic reviews found that 79% did not include full data on the main harm outcome. This was due mainly to poor reporting in the included primary studies; in nearly two-thirds of the primary studies, outcome reporting bias was suspected [50]. The aim of updates to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist for systematic reviews is to improve the current situation by ensuring that a minimal set of adverse event items is reported [51].

Outcome switching is the failure to correctly report pre-specified outcomes; it remains highly prevalent and presents significant problems in interpreting results [52]. Authors of a systematic review of selective outcome reporting, including 27 analyses, found that the median proportion of trials with a discrepancy between the registered and published primary outcome was 31% [53]. Researchers in a recent study of 311 manuscripts submitted to The BMJ found that 23% of outcomes pre-specified in the protocol went unreported [54]. Furthermore, many trial authors and editors seem unaware of the ramifications of incorrect outcome reporting. The Centre for Evidence-Based Medicine Outcome Monitoring Project (COMPare) prospectively monitored all trials in five journals and submitted correction letters in real time on all misreported trials, but the majority of correction letters submitted were rejected by journal editors [55].

Inappropriately interpreted outcomes

Relative measures

Relative measures can exaggerate findings of modest clinical benefit and can frequently be uninterpretable, such as when control event rates are not reported. Authors of a 2009 review of 344 journal articles reporting on health inequalities research found that, of the 40% of abstracts reporting an effect measure, 88% reported just the relative measure, 9% an absolute measure and just 2% reported both [56]. In contrast, 75% of all full-text articles reported relative effects, and only 7% reported both absolute and relative measures in the full text, despite reporting guidelines, such as CONSORT, recommending using both measures whenever possible [57].
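The gap between the two kinds of measure is simple arithmetic. With invented event rates of 2% on control and 1% on treatment (for illustration only), a headline '50% relative risk reduction' corresponds to an absolute reduction of just one percentage point:

```python
# Illustrative (invented) event rates: 2% on control, 1% on treatment.
control_rate = 0.02
treatment_rate = 0.01

relative_risk_reduction = 1 - treatment_rate / control_rate  # 0.50
absolute_risk_reduction = control_rate - treatment_rate      # 0.01
number_needed_to_treat = 1 / absolute_risk_reduction         # ~100

print(f"relative risk reduction: {relative_risk_reduction:.0%}")  # sounds large
print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")  # 1 point
print(f"number needed to treat:  {number_needed_to_treat:.0f}")   # per benefit
```

Reporting only the 50% figure, without the 1-percentage-point absolute difference or the roughly 100 patients needed to treat for one to benefit, is precisely how relative measures can exaggerate modest clinical benefit.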

Spin

Misleading reporting by presenting a study in a more positive way than the actual results reflect constitutes 'spin' [58]. Authors of an analysis of 72 trials with non-significant results reported it was a common phenomenon, with 40% of the trials containing some form of spin. Strategies included reporting on statistically significant results for within-group comparisons, secondary outcomes or subgroup analyses and not the primary outcome, or focussing the reader on another study objective away from the statistically non-significant outcome [59]. Additionally, the results revealed the common occurrence of spin in the abstract, the most accessible and most read part of a trial report. In a study that randomised 300 clinicians to two versions of the same abstract (the original with spin and a rewritten version without spin), researchers found there was no difference in clinicians' rating of the importance of the study or the need for a further trial [60]. Spin is also often found in systematic reviews; authors of an analysis found that spin was present in 28% of the 95 included reviews of psychological therapies [61]. A consensus process among members of the Cochrane Collaboration has identified 39 different types of spin, 13 of which were specific to systematic reviews. Of these, the three most serious were recommendations for practice not supported by findings in the conclusion, misleading titles and selective reporting [62].

Multiplicity

Appropriate attention has to be paid to the multiplicity of outcomes present in nearly all clinical trials. The higher the number of outcomes, the more chance there is of false-positive results and unsubstantiated claims of effectiveness [63]. The problem is compounded when trials have multiple time points, further increasing the number of outcomes. For licensing applications, secondary outcomes are considered insufficiently convincing to establish the main body of evidence and are intended to provide supporting evidence in relation to the primary outcome [63]. Furthermore, almost one-half of all trials make further claims by undertaking subgroup analysis, but caution is warranted when interpreting their effects. An analysis of 207 studies found that 31% claimed a subgroup effect for the primary outcome; yet such subgroups were often not pre-specified (a form of outcome switching) and frequently formed part of a large number of subgroup analyses [64]. At a minimum, triallists should perform a test of interaction, and journals should ensure it is done, to examine whether treatment effects actually differ among subpopulations [64], and decision makers should be very wary of high numbers of outcomes included in a trial report.
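The scale of the multiplicity problem follows from basic probability. Assuming, for illustration only, independent outcomes each tested at the 5% level (real trial outcomes are usually correlated, which softens but does not remove the problem), the chance of at least one spurious 'significant' result grows quickly with the number of outcomes:

```python
# Family-wise false-positive risk under multiple independent outcome tests,
# each performed at alpha = 0.05 (an illustrative simplification).
alpha = 0.05

for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:>2} outcomes: P(at least one false positive) = {p_any:.0%}")
```

Under these assumptions, ten independent outcomes already carry a roughly 40% chance of at least one false-positive finding, which is why unadjusted claims drawn from long outcome lists deserve scepticism.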

Core outcome sets

Core outcome sets could facilitate comparative effectiveness research and evidence synthesis. As an example, all of the top-cited Cochrane reviews in 2009 described problems with inconsistencies in their reported outcomes [65]. Standardised core outcome sets take account of patient preferences that should be measured and reported in all trials for a specific therapeutic area [65]. Since 1992, the Outcome Measures in Rheumatoid Arthritis Clinical Trials (OMERACT) collaboration has advocated the use of core outcome sets [66], and the Core Outcome Measures in Effectiveness Trials (COMET) Initiative collates relevant resources to facilitate core outcome development and user engagement [67, 68]. Consequently, their use is on the increase, and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group recommends up to seven patient-important outcomes be listed in the 'summary of findings' tables in systematic reviews [69].

Conclusions

The treatment choices of patients and clinicians should ideally be informed by evidence that interventions improve patient-relevant outcomes. Too often, medical research falls short of this modest ideal. Nonetheless, there are ways forward. One of these is to ensure that trials are conceived and designed with greater input from end users, such as patients. The James Lind Alliance (JLA) brings together clinicians, patients and carers to identify areas of practice where uncertainties exist and to prioritise clinical research questions to answer them. The aim of such 'priority setting partnerships' (PSPs) is to develop research questions using measurable outcomes of direct relevance to patients. For example, a JLA PSP on dementia research generated a list of key measures, including quality of life, independence, management of behaviour and effect on progression of disease, as outcomes that were relevant to both persons with dementia and their carers [70].

However, identifying best practice is only the beginning of a wider process to change the culture of research. The ecosystem of evidence-based medicine is broad, including ethics committees, sponsors, regulators, triallists, reviewers and journal editors. All these stakeholders need to ensure that trial outcomes are developed with patients in mind, that unbiased methods are adhered to, and that results are reported in full and in line with those pre-specified at the trial outset. Until addressed, the problems of how outcomes are chosen, collected, reported and subsequently interpreted will continue to make a significant contribution to the reasons why clinical trial outcomes frequently fail to translate into clinical benefit for patients.

Abbreviations

COMET:

Core Outcome Measures in Effectiveness Trials

COMPare:

Centre for Evidence-Based Medicine Outcome Monitoring Project

CONSORT:

Consolidated Standards of Reporting Trials

FDA:

Food and Drug Administration

GRADE:

Grading of Recommendations Assessment, Development and Evaluation

JLA:

James Lind Alliance

MCID:

Minimal clinically important difference

OMERACT:

Outcome Measures in Rheumatoid Arthritis Clinical Trials

PRISMA:

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PSP:

Priority setting partnership

RCT:

Randomised controlled trial

WHO:

World Health Organisation

References

  1. Rupp T, Zuckerman D. Quality of life, overall survival, and costs of cancer drugs approved based on surrogate endpoints. JAMA Intern Med. 2017;177:276–7. doi:10.1001/jamainternmed.2016.7761.

  2. World Health Organisation (WHO). WHO handbook for guideline developers. Geneva: WHO; 2012. http://apps.who.int/iris/bitstream/10665/75146/1/9789241548441_eng.pdf?ua=1. Accessed 11 Mar 2017.

  3. Tudur Smith C, Hickey H, Clarke M, Blazeby J, Williamson P. The trials methodological research agenda: results from a priority setting exercise. Trials. 2014;15:32. doi:10.1186/1745-6215-15-32.


  4. Twaddell S. Surrogate outcome markers in research and clinical practice. Aust Prescr. 2009;32:47–50. doi:10.18773/austprescr.2009.023.


  5. Yudkin JS, Lipska KJ, Montori VM. The idolatry of the surrogate. BMJ. 2011;343:d7995. doi:10.1136/bmj.d7995.


  6. la Cour JL, Brok J, Gøtzsche PC. Inconsistent reporting of surrogate outcomes in randomised clinical trials: cohort study. BMJ. 2010;341:c3653. doi:10.1136/bmj.c3653.


  7. Atkins D, Briss PA, Eccles M, Flottorp S, Guyatt GH, Harbour RT, et al. Systems for grading the quality of evidence and the strength of recommendations II: pilot study of a new system. BMC Health Serv Res. 2005;5:25. doi:10.1186/1472-6963-5-25.

  8. D'Agostino Jr RB. Debate: the slippery slope of surrogate outcomes. Curr Control Trials Cardiovasc Med. 2000;1:76–8. doi:10.1186/cvm-1-2-076.

  9. DREAM (Diabetes REduction Assessment with ramipril and rosiglitazone Medication) Trial Investigators. Effect of rosiglitazone on the frequency of diabetes in patients with impaired glucose tolerance or impaired fasting glucose: a randomised controlled trial. Lancet. 2006;368:1096–105. doi:10.1016/S0140-6736(06)69420-8. A published erratum appears in Lancet. 2006;368:1770.

  10. Cohen D. Rosiglitazone: what went wrong? BMJ. 2010;341:c4848. doi:10.1136/bmj.c4848.

  11. Ferreira-González I, Busse JW, Heels-Ansdell D, Montori VM, Akl EA, Bryant DM, et al. Problems with use of composite end points in cardiovascular trials: systematic review of randomised controlled trials. BMJ. 2007;334:786. doi:10.1136/bmj.39136.682083.AE.

  12. Freemantle N, Calvert MJ. Interpreting composite outcomes in trials. BMJ. 2010;341:c3529. doi:10.1136/bmj.c3529.

  13. Cordoba G, Schwartz L, Woloshin S, Bae H, Gøtzsche PC. Definition, reporting, and interpretation of composite outcomes in clinical trials: systematic review. BMJ. 2010;341:c3920. doi:10.1136/bmj.c3920.

  14. Lim E, Brown A, Helmy A, Mussa S, Altman DG. Composite outcomes in cardiovascular research: a survey of randomized trials. Ann Intern Med. 2008;149:612–7.

  15. Ibrahim F, Tom BDM, Scott DL, Prevost AT. A systematic review of randomised controlled trials in rheumatoid arthritis: the reporting and handling of missing data in composite outcomes. Trials. 2016;17:272. doi:10.1186/s13063-016-1402-5.

  16. Moustgaard H, Bello S, Miller FG, Hróbjartsson A. Subjective and objective outcomes in randomized clinical trials: definitions differed in methods publications and were often absent from trial reports. J Clin Epidemiol. 2014;67:1327–34. doi:10.1016/j.jclinepi.2014.06.020.

  17. Molnar FJ, Man-Son-Hing M, Fergusson D. Systematic review of measures of clinical significance employed in randomized controlled trials of drugs for dementia. J Am Geriatr Soc. 2009;57:536–46. doi:10.1111/j.1532-5415.2008.02122.x.

  18. Wood L, Egger M, Gluud LL, Schulz KF, Jüni P, Altman DG, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008;336:601–5. doi:10.1136/bmj.39465.451748.AD.

  19. Hróbjartsson A, Thomsen ASS, Emanuelsson F, Tendal B, Hilden J, Boutron I, et al. Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors. BMJ. 2012;344:e1119. doi:10.1136/bmj.e1119.

  20. Page MJ, Higgins JPT, Clayton G, Sterne JA, Hróbjartsson A, Savović J. Empirical evidence of study design biases in randomized trials: systematic review of meta-epidemiological studies. PLoS One. 2016;11:e0159267. doi:10.1371/journal.pone.0159267.

  21. Abraha I, Cherubini A, Cozzolino F, De Florio R, Luchetta ML, Rimland JM, et al. Deviation from intention to treat analysis in randomised trials and treatment effect estimates: meta-epidemiological study. BMJ. 2015;350:h2445. doi:10.1136/bmj.h2445.


  22. Marshall M, Lockwood A, Bradley C, Adams C, Joy C, Fenton M. Unpublished rating scales: a major source of bias in randomised controlled trials of treatments for schizophrenia. Br J Psychiatry. 2000;176:249–52. doi:10.1192/bjp.176.3.249.

  23. Gøtzsche PC. Methodology and overt and hidden bias in reports of 196 double-blind trials of nonsteroidal antiinflammatory drugs in rheumatoid arthritis. Control Clin Trials. 1989;10:31–56. https://www.ncbi.nlm.nih.gov/pubmed/2702836. A published erratum appears in Control Clin Trials. 1989;10:356.

  24. Hobart JC, Cano SJ, Zajicek JP, Thompson AJ. Rating scales as outcome measures for clinical trials in neurology: problems, solutions, and recommendations. Lancet Neurol. 2007;6:1094–105. doi:10.1016/S1474-4422(07)70290-9. A published erratum appears in Lancet Neurol. 2008;7:25.

  25. Rahimi K, Malhotra A, Banning AP, Jenkinson C. Outcome selection and role of patient reported outcomes in contemporary cardiovascular trials: systematic review. BMJ. 2010;341:c5707. doi:10.1136/bmj.c5707.

  26. Rothwell PM. Factors that can affect the external validity of randomised controlled trials. PLoS Clin Trials. 2006;1:e9. doi:10.1371/journal.pctr.0010009.

  27. Make B. How can we assess outcomes of clinical trials: the MCID approach. COPD. 2007;4:191–4.

  28. National Institute for Health and Care Excellence. https://cks.nice.org.uk/sore-pharynx-acute#!topicsummary. Accessed 11 Mar 2017.

  29. National Institute for Health and Care Excellence (NICE). Amantadine, oseltamivir and zanamivir for the treatment of influenza: technology appraisal guidance. NICE guideline TA168. London: NICE; 2009. https://www.nice.org.uk/Guidance/ta168. Accessed 28 December 2016.

  30. Heneghan CJ, Onakpoya I, Jones MA, Doshi P, Del Mar CB, Hama R, et al. Neuraminidase inhibitors for influenza: a systematic review and meta-analysis of regulatory and mortality data. Health Technol Assess. 2016;20(42). doi:10.3310/hta20420.

  31. Siontis GCM, Ioannidis JPA. Risk factors and interventions with statistically significant tiny effects. Int J Epidemiol. 2011;40:1292–307. doi:10.1093/ije/dyr099.

  32. Wolfe F, Michaud K, Strand V. Expanding the definition of clinical differences: from minimally clinically important differences to really important differences: analyses in 8931 patients with rheumatoid arthritis. J Rheumatol. 2005;32:583–9.

  33. Linde K, Kriston L, Rücker G, Jamil S, Schumann I, Meissner G, et al. Efficacy and acceptability of pharmacological treatments for depressive disorders in primary care: systematic review and network meta-analysis. Ann Fam Med. 2015;13:69–79. doi:10.1370/afm.1687.

  34. Guyatt GH, Briel M, Glasziou P, Bassler D, Montori VM. Problems of stopping trials early. BMJ. 2012;344:e3863. doi:10.1136/bmj.e3863.

  35. Akl EA, Briel M, You JJ, Sun X, Johnston BC, Busse JW, et al. Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review. BMJ. 2012;344:e2809. doi:10.1136/bmj.e2809.

  36. Kang H. The prevention and handling of the missing data. Korean J Anesthesiol. 2013;64:402–6. doi:10.4097/kjae.2013.64.5.402.

  37. Rothwell P, Warlow C. Is self-audit reliable? Lancet. 1995;346:1623. doi:10.1016/s0140-6736(95)91953-8.

  38. Schmucker C, Schell LK, Portalupi S, Oeller P, Cabrera L, Bassler D, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS One. 2014;9:e114023. doi:10.1371/journal.pone.0114023.

  39. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8). doi:10.3310/hta14080.

  40. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev. 2009;1:MR000006. doi:10.1002/14651858.MR000006.pub3.

  41. Laine C, De Angelis C, Delamothe T, Drazen JM, Frizelle FA, Haug C, et al. Clinical trial registration: looking back and moving ahead. Ann Intern Med. 2007;147:275–7. doi:10.7326/0003-4819-147-4-200708210-00166.

  42. AllTrials. About AllTrials. http://www.alltrials.net/find-out-more/about-alltrials/. Accessed 11 Mar 2017.

  43. Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ. 2010;340:c365. doi:10.1136/bmj.c365.

  44. Rodgers MA, Brown JVE, Heirs MK, Higgins JPT, Mannion RJ, Simmonds MC, et al. Reporting of industry funded study outcome data: comparison of confidential and published data on the safety and effectiveness of rhBMP-2 for spinal fusion. BMJ. 2013;346:f3981. doi:10.1136/bmj.f3981.

  45. Golder S, Loke YK, Wright K, Norman G. Reporting of adverse events in published and unpublished studies of health care interventions: a systematic review. PLoS Med. 2016;13:e1002127. doi:10.1371/journal.pmed.1002127.

  46. Wieseler B, Wolfram N, McGauran N, Kerekes MF, Vervölgyi V, Kohlepp P, et al. Completeness of reporting of patient-relevant clinical trial outcomes: comparison of unpublished clinical study reports with publicly available data. PLoS Med. 2013;10:e1001526. doi:10.1371/journal.pmed.1001526.

  47. Hart B, Lundh A, Bero Fifty. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012;344:d7202. doi:10.1136/bmj.d7202.

  48. Schroll JB, Penninga EI, Gøtzsche PC. Assessment of adverse events in protocols, clinical study reports, and published papers of trials of orlistat: a document analysis. PLoS Med. 2016;13(8):e1002101. doi:10.1371/journal.pmed.1002101.

  49. Vedula SS, Bero L, Scherer RW, Dickersin K, et al. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med. 2009;361:1963–71. doi:10.1056/nejmsa0906126.

  50. Saini P, Loke YK, Gamble C, Altman DG, Williamson PR, Kirkham JJ. Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews. BMJ. 2014;349:g6501. doi:10.1136/bmj.g6501.

  51. Zorzela L, Loke YK, Ioannidis JP, Golder S, Santaguida P, Altman DG, et al. PRISMA harms checklist: improving harms reporting in systematic reviews. BMJ. 2016;352:i157. doi:10.1136/bmj.i157. A published erratum appears in BMJ. 2016;353:i2229.

  52. Smyth RMD, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ. 2011;342:c7153. doi:10.1136/bmj.c7153.

  53. Dwan K, Altman DG, Clarke M, Gamble C, Higgins JPT, Sterne JA, et al. Evidence for the selective reporting of analyses and discrepancies in clinical trials: a systematic review of cohort studies of clinical trials. PLoS Med. 2014;11:e1001666. doi:10.1371/journal.pmed.1001666.

  54. Weston J, Dwan K, Altman D, Clarke M, Gamble C, Schroter S, et al. Feasibility study to examine discrepancy rates in prespecified and reported outcomes in articles submitted to The BMJ. BMJ Open. 2016;6:e010075. doi:10.1136/bmjopen-2015-010075.

  55. Goldacre B, Drysdale H, Powell-Smith A, Dale A, Milosevic I, Slade E, et al. The COMPare Trials Project. 2016. http://compare-trials.org/. Accessed 11 Mar 2017.

  56. King NB, Harper S, Young ME. Use of relative and absolute effect measures in reporting health inequalities: structured review. BMJ. 2012;345:e5774. doi:10.1136/bmj.e5774.

  57. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332. doi:10.1136/bmj.c332.

  58. Mahtani KR. 'Spin' in reports of clinical research. Evid Based Med. 2016;21:201–2. doi:10.1136/ebmed-2016-110570.

  59. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for chief outcomes. JAMA. 2010;303:2058–64. doi:10.1001/jama.2010.651.

  60. Boutron I, Altman DG, Hopewell S, Vera-Badillo F, Tannock I, Ravaud P. Impact of spin in the abstracts of articles reporting results of randomized controlled trials in the field of cancer: the SPIIN randomized controlled trial. J Clin Oncol. 2014;32:4120–6. doi:10.1200/jco.2014.56.7503.

  61. Lieb K, von der Osten-Sacken J, Stoffers-Winterling J, Reiss N, Barth J. Conflicts of interest and spin in reviews of psychological therapies: a systematic review. BMJ Open. 2016;6:e010606. doi:10.1136/bmjopen-2015-010606.

  62. Yavchitz A, Ravaud P, Altman DG, Moher D, Hróbjartsson A, Lasserson T, et al. A new classification of spin in systematic reviews and meta-analyses was developed and ranked according to the severity. J Clin Epidemiol. 2016;75:56–65. doi:10.1016/j.jclinepi.2016.01.020.

  63. European Agency for the Evaluation of Medicinal Products, Committee for Proprietary Medicinal Products. Points to consider on multiplicity issues in clinical trials. CPMP/EWP/908/99. 19 Sep 2002. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500003640.pdf. Accessed 11 Mar 2017.

  64. Sun X, Briel M, Busse JW, You JJ, Akl EA, Mejza F, et al. Credibility of claims of subgroup effects in randomised controlled trials: systematic review. BMJ. 2012;344:e1553. doi:10.1136/bmj.e1553.

  65. Williamson PR, Altman DG, Blazeby JM, Clarke M, Devane D, Gargon E, et al. Developing core outcome sets for clinical trials: issues to consider. Trials. 2012;13:132. doi:10.1186/1745-6215-13-132.

  66. Tugwell P, Boers M, Brooks P, Simon L, Strand V, Idzerda L. OMERACT: an international initiative to improve outcome measurement in rheumatology. Trials. 2007;8:38. doi:10.1186/1745-6215-8-38.

  67. Williamson P. The COMET Initiative [abstract]. Trials. 2013;14 Suppl 1:O65. doi:10.1186/1745-6215-14-s1-o65.

  68. Gargon E, Williamson PR, Altman DG, Blazeby JM, Clarke M. The COMET Initiative database: progress and activities update (2014). Trials. 2015;16:515. doi:10.1186/s13063-015-1038-x.

  69. Grading of Recommendations Assessment, Development and Evaluation (GRADE). http://www.gradeworkinggroup.org/. Accessed 11 Mar 2017.

  70. Kelly S, Lafortune L, Hart N, Cowan K, Fenton M, Brayne C; Dementia Priority Setting Partnership. Dementia priority setting partnership with the James Lind Alliance: using patient and public involvement and the evidence base to inform the research agenda. Age Ageing. 2015;44:985–93. doi:10.1093/ageing/afv143.

Acknowledgements

Not applicable.

Funding

No specific funding.

Availability of data and materials

Not applicable.

Authors' contributions

CH, BG and KRM equally participated in the design and coordination of this commentary and helped to draft the manuscript. All authors read and approved the final manuscript.

Authors' information

All authors work at the Centre for Evidence-Based Medicine at the Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK.

Competing interests

BG has received research funding from the Laura and John Arnold Foundation, the Wellcome Trust, the NHS National Institute for Health Research, the Health Foundation, and the WHO. He also receives personal income from speaking and writing for lay audiences on the misuse of science. KRM has received funding from the NHS National Institute for Health Research and the Royal College of General Practitioners for independent research projects. CH has received grant funding from the WHO, the NIHR and the NIHR School of Primary Care. He is also an advisor to the WHO International Clinical Trials Registry Platform (ICTRP). BG and CH are founders of the AllTrials campaign. The views expressed are those of the authors and not necessarily those of any of the funders or institutions mentioned above.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Writer information

Affiliations

Corresponding author

Correspondence to Carl Heneghan.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Reprints and Permissions

About this article

Cite this article

Heneghan, C., Goldacre, B. & Mahtani, K.R. Why clinical trial outcomes fail to translate into benefits for patients. Trials 18, 122 (2017). https://doi.org/10.1186/s13063-017-1870-2

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13063-017-1870-2

Keywords

  • Clinical outcomes
  • Surrogate outcomes
  • Composite outcomes
  • Publication bias
  • Reporting bias
  • Core outcome sets
