15  Institutional Academic–Industry Relationships
National survey, 2006, 125 allopathic medical schools
459 of 688 department chairs completed the survey (67%)
60% of department chairs had a personal relationship with industry:
- consultant (27%)
- member of a scientific advisory board (27%)
- paid speaker (14%)
- officer (7%), founder (9%)
- member of the board of directors (11%)
Two-thirds (67%) of departments as administrative units had relationships with industry.
No effect on professional activities (66%), BUT negative impact on conducting unbiased research (72%)

Context: Institutional academic–industry relationships have the potential of creating institutional conflicts of interest. To date there are no empirical data to support the establishment and evaluation of institutional policies and practices related to managing these relationships.
Objective: To conduct a national survey of department chairs about the nature, extent, and consequences of institutional academic–industry relationships for medical schools and teaching hospitals.
Design, Setting, and Participants: National survey of department chairs in the 125 accredited allopathic medical schools and the 15 largest independent teaching hospitals in the United States, administered between February 2006 and October 2006.
Main Outcome Measure: Types of relationships with industry.
Results: A total of 459 of 688 eligible department chairs completed the survey, yielding an overall response rate of 67%. Almost two-thirds (60%) of department chairs had some form of personal relationship with industry, including serving as a consultant (27%), a member of a scientific advisory board (27%), a paid speaker (14%), an officer (7%), a founder (9%), or a member of the board of directors (11%). Two-thirds (67%) of departments as administrative units had relationships with industry. Clinical departments were more likely than nonclinical departments to receive research equipment (17% vs 10%, P=.04), unrestricted funds (19% vs 3%, P<.001), residency or fellowship training support (37% vs 2%, P<.001), and continuing medical education support (65% vs 3%, P<.001). However, nonclinical departments were more likely to receive funding from intellectual property licensing (27% vs 16%, P=.01). More than two-thirds of chairs perceived that having a relationship with industry had no effect on their professional activities, but 72% viewed a chair's engaging in more than 1 industry-related activity (substantial role in a start-up company, consulting, or serving on a company's board) as having a negative impact on a department's ability to conduct independent unbiased research.
Conclusion: Overall, institutional academic–industry relationships are highly prevalent and underscore the need for their active disclosure and management.
Campbell EG et al. JAMA. 2007;298(15).

18  Discrepancies between large meta-analyses and subsequent large RCTs (>1000 patients)
Lelorier J and Gregoire G. NEJM 1997;337(8):536

BACKGROUND: Meta-analyses are now widely used to provide evidence to support clinical strategies. However, large randomized, controlled trials are considered the gold standard in evaluating the efficacy of clinical interventions.
METHODS: We compared the results of large randomized, controlled trials (involving 1000 patients or more) that were published in four journals (the New England Journal of Medicine, the Lancet, the Annals of Internal Medicine, and the Journal of the American Medical Association) with the results of meta-analyses published earlier on the same topics. Regarding the principal and secondary outcomes, we judged whether the findings of the randomized trials agreed with those of the corresponding meta-analyses, and we determined whether the study results were positive (indicating that treatment improved the outcome) or negative (indicating that the outcome with treatment was the same or worse than without it) at the conventional level of statistical significance (P<0.05).
RESULTS: We identified 12 large randomized, controlled trials and 19 meta-analyses addressing the same questions. For a total of 40 primary and secondary outcomes, agreement between the meta-analyses and the large clinical trials was only fair (kappa = 0.35; 95 percent confidence interval, 0.06 to 0.64). The positive predictive value of the meta-analyses was 68 percent, and the negative predictive value 67 percent. However, the difference in point estimates between the randomized trials and the meta-analyses was statistically significant for only 5 of the 40 comparisons (12 percent). Furthermore, in each case of disagreement a statistically significant effect of treatment was found by one method, whereas no statistically significant effect was found by the other.
CONCLUSIONS: The outcomes of the 12 large randomized, controlled trials that we studied were not predicted accurately 35 percent of the time by the meta-analyses published previously on the same topics.
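The agreement statistics above (kappa = 0.35, positive predictive value 68%, negative predictive value 67%) all derive from one 2×2 agreement table. A minimal Python sketch of the arithmetic, using hypothetical counts for the 40 outcomes chosen only to roughly reproduce the reported summary figures (the paper's actual per-outcome table is not given here):

```python
def cohens_kappa(both_pos, meta_pos_only, trial_pos_only, both_neg):
    """Cohen's kappa: chance-corrected agreement for a 2x2 table."""
    n = both_pos + meta_pos_only + trial_pos_only + both_neg
    p_obs = (both_pos + both_neg) / n  # observed agreement
    # expected agreement by chance, from the row and column marginals
    p_chance = ((both_pos + meta_pos_only) * (both_pos + trial_pos_only)
                + (both_neg + trial_pos_only) * (both_neg + meta_pos_only)) / n**2
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical counts (NOT from the paper, chosen for illustration):
# meta+/trial+ = 15, meta+/trial- = 7, meta-/trial+ = 6, meta-/trial- = 12
a, b, c, d = 15, 7, 6, 12
kappa = cohens_kappa(a, b, c, d)  # ~0.35, "fair" agreement
ppv = a / (a + b)                 # ~0.68: P(trial positive | meta positive)
npv = d / (c + d)                 # ~0.67: P(trial negative | meta negative)
```

The large RCT is treated here as the reference standard, so "predictive value" reads as the probability that the later trial confirms the meta-analysis verdict.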

26  AJRCCM 2003;167:1304
Reducing tidal volumes administered to patients with acute lung injury is the only intervention reported to decrease mortality resulting from this life-threatening condition. Whereas many medical advances are slowly brought into practice, clinicians in teaching hospitals are often assumed to be early adopters of new medical advances. Our objective was to examine trends in the ventilatory prescription for 398 patients with acute lung injury treated in three teaching hospitals beginning in 1994. There was no change in tidal volumes until mid to late 1998, when volumes started to slowly decline at the rate of 48.0 (95% confidence interval, 21.0 to 74.4) ml/year. In the 2 years after the results were released from a large trial that demonstrated the superiority of 6 ml/kg tidal volume therapy over 12 ml/kg, clinicians prescribed tidal volumes of 651 ± 128 ml. Tidal volumes after intubation were minimally reduced over the subsequent 2 days of mechanical ventilation (mean reduction, 33 ml). Hospital category, male sex, and disease onset before May 1999 were associated with higher volumes, whereas lung injury severity was inversely associated. We conclude that clinicians practicing at these teaching hospitals have not rapidly adopted low tidal volume ventilation that may reduce mortality.
Progressive decrease in tidal volumes, not influenced by the ARDSnet results and still well above 6 ml/kg…

27  Why is 6 ml/kg not accepted?
- Because tidal volume was 10.3 ± 2 ml/kg in 10 centers between 1996 and 1999, not 12! (Thompson, Chest 2001)
- These results were confirmed in two large epidemiological studies (Esteban, JAMA 2002; Brun-Buisson, ICM 2004)
- Because mortality of non-enrolled ARDS patients was similar to that of the low tidal volume group! (Hayden, AJRCCM 2000)

33  « CONCLUSIONS: A learning curve appeared to be present within the PROWESS trial … efficacy improved with increasing site experience… Investigational sites may need to require a minimum level of protocol-specific experience to appropriately implement a given trial. … This experience should be an important consideration in designing trials and analysis plans. … »

Sources of variability on the estimate of treatment effect in the PROWESS trial: implications for the design and conduct of future studies in severe sepsis. Macias WL, Vallet B, Bernard GR, Vincent JL, Laterre PF, Nelson DR, Derchak PA, Dhainaut JF. Lilly Research Laboratories, Eli Lilly and Company, Indianapolis, IN, USA.
OBJECTIVE: To elucidate sources of variability in the estimate of treatment effects in a successful phase 3 trial in severe sepsis and to assess their implications on the design of future clinical trials.
DESIGN: Retrospective evaluation of prospectively defined subgroups from a large phase 3, placebo-controlled clinical trial (PROWESS).
SETTING: The study involved 164 medical centers.
PATIENTS: 1,690 patients with severe sepsis.
INTERVENTIONS: Drotrecogin alfa (activated) (Xigris), 24 µg/kg/hr for 96 hrs, or placebo.
MEASUREMENTS AND MAIN RESULTS: All prospectively defined subgroups were examined to identify treatment effects that potentially differed across subgroup strata (assessed by Breslow-Day p < .10). Potential interactions were identified for subgroups defined by a) presence vs. absence of a significant protocol violation (p = .07); b) original vs. amended protocol (p = .08); and c) Acute Physiology and Chronic Health Evaluation (APACHE) II quartile at baseline (p = .09). No treatment benefit was observed in patients having a protocol violation, regardless of type. There appeared to be less treatment effect in patients enrolled under the original vs. amended protocol. The risk ratio exceeded 1.0 for patients in the lowest APACHE II score quartile. A highly significant correlation was observed between the sequence of enrollment at a site, the frequency of protocol violations, and the observed treatment effect. As enrollment increased, the frequency of protocol violations decreased (p < .0001) and the treatment effect improved. The correlation between the sequence of enrollment and improvement in treatment effect remained even after removal of patients with protocol violations. Removal of the first block of patients at each site from the analysis reduced the extent of interaction by protocol version and APACHE II score.
CONCLUSIONS: A learning curve appeared to be present within the PROWESS trial such that the ability to demonstrate efficacy improved with increasing site experience. This potential learning curve may have implications for design of future trials. Investigational sites may need to require a minimum level of protocol-specific experience to appropriately implement a given trial. This experience should be an important consideration in designing trials and analysis plans. Diligence by coordinating centers, site investigators, study coordinators, and sponsors is necessary to ensure that the protocol is executed as designed such that a treatment benefit, if present, will be evident.
Macias et al. Crit Care Med 2004;32:2385.

43  Which endpoint is best?
- Day 14 → more related to the disease itself… low noise
- Day 28 → a compromise?
- Day 90 → competing events?; probably more important from the patient's point of view
- 1 year → competing events; more important for the patient and from the societal point of view
- All of the endpoints → YES!! BUT multiple comparisons (↑ NNT, ↓ power)

44  Multiple comparisons
Probability to obtain one 5 on a single throw of a die:
P = 1 − (5/6) = 1/6 ≈ 0.17
Probability to obtain at least one 5 on 2 throws of a die:
P = 1 − (5/6)² = 11/36 ≈ 0.31
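The same arithmetic drives the familywise error rate when several endpoints are each tested at α = 0.05. A small Python sketch of the slide's calculation (the α = 0.05 extension is an illustration added here, not from the slide):

```python
def p_at_least_one(p_single: float, n: int) -> float:
    """Probability of at least one 'success' in n independent trials."""
    return 1 - (1 - p_single) ** n

p1 = p_at_least_one(1/6, 1)   # one throw of a die: 1/6, about 0.17
p2 = p_at_least_one(1/6, 2)   # two throws: 11/36, about 0.31
# Same formula for k endpoints each tested at alpha = 0.05:
p5 = p_at_least_one(0.05, 5)  # about 0.23 chance of >= 1 false positive
```

This is why testing all endpoints at once inflates the chance of a spurious "positive" trial unless the significance level is adjusted.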

49  JAMA 2005;294:2203
- 143 RCTs, 92 of them in the 5 "top impact" journals
- Mostly industry-sponsored; cardiology, oncology
- Sharply increasing over time (from 0.5% to 1.2% of published RCTs)
- 63% of the planned recruitment
- Median RR: 0.53
- 135/143 (94%) do not report: the planned sample size, the existence of interim analyses, or the stopping rules

CONTEXT: Randomized clinical trials (RCTs) that stop earlier than planned because of apparent benefit often receive great attention and affect clinical practice. Their prevalence, the magnitude and plausibility of their treatment effects, and the extent to which they report information about how investigators decided to stop early are, however, unknown.
OBJECTIVE: To evaluate the epidemiology and reporting quality of RCTs involving interventions stopped early for benefit.
DATA SOURCES: Systematic review up to November 2004 of MEDLINE, EMBASE, Current Contents, and full-text journal content databases to identify RCTs stopped early for benefit.
STUDY SELECTION: Randomized clinical trials of any intervention reported as having stopped early because of results favoring the intervention. There were no exclusion criteria.
DATA EXTRACTION: Twelve reviewers working independently and in duplicate abstracted data on content area and type of intervention tested, reporting of funding, type of end point driving study termination, treatment effect, length of follow-up, estimated sample size and total sample studied, role of a data and safety monitoring board in stopping the study, number of interim analyses planned and conducted, and existence and type of monitoring methods, statistical boundaries, and adjustment procedures for interim analyses and early stopping.
DATA SYNTHESIS: Of 143 RCTs stopped early for benefit, the majority (92) were published in 5 high-impact medical journals. Typically, these were industry-funded drug trials in cardiology, cancer, and human immunodeficiency virus/AIDS. The proportion of all RCTs published in high-impact journals that were stopped early for benefit increased from 0.5% to 1.2% (P<.001 for trend). On average, RCTs recruited 63% (SD, 25%) of the planned sample and stopped after a median of 13 (interquartile range [IQR], 3-25) months of follow-up, 1 interim analysis, and when a median of 66 patients had experienced the end point driving study termination (event). The median risk ratio among truncated RCTs was 0.53. One hundred thirty-five (94%) of the 143 RCTs did not report at least 1 of the following: the planned sample size (n = 28), the interim analysis after which the trial was stopped (n = 45), whether a stopping rule informed the decision (n = 48), or an adjusted analysis accounting for interim monitoring and truncation (n = 129). Trials with fewer events yielded greater treatment effects (odds ratio, 28; 95% confidence interval, 11-73).
CONCLUSIONS: RCTs stopped early for benefit are becoming more common, often fail to adequately report relevant information about the decision to stop early, and show implausibly large treatment effects, particularly when the number of events is small. These findings suggest clinicians should view the results of such trials with skepticism.
Implausibly large treatment effects, unclear stopping rules: results to be viewed with caution…
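The implausibly large effects in truncated trials can be shown by simulation: among trials that cross a naive z > 1.96 boundary at an interim look, the estimated effect systematically overshoots the true effect. A hedged Python sketch (the design choices, 4 looks of 50 observations and a true effect of 0.2 SD, are invented for illustration and are not from the JAMA review, which also notes that proper designs use stricter interim boundaries):

```python
import random
import statistics

def run_trial(true_effect=0.2, n_per_look=50, looks=4, z_crit=1.96):
    """One trial with interim looks; stop early if z exceeds z_crit."""
    diffs = []
    for look in range(1, looks + 1):
        diffs += [random.gauss(true_effect, 1.0) for _ in range(n_per_look)]
        mean = statistics.fmean(diffs)
        se = statistics.stdev(diffs) / len(diffs) ** 0.5
        if look < looks and mean / se > z_crit:
            return mean, True          # "stopped early for benefit"
    return statistics.fmean(diffs), False  # ran to completion

random.seed(42)
results = [run_trial() for _ in range(2000)]
early = [m for m, stopped in results if stopped]
full = [m for m, stopped in results if not stopped]
# Trials stopped early report effects well above the true 0.2;
# trials that ran to the end are selected for smaller estimates.
```

The selection effect, not fraud, produces the bias: conditioning on crossing a boundary early guarantees an estimate above that boundary.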

64  Validity
Internal validity: how well was the study done? Do the results reflect the truth?
External validity: can I apply these results to MY patients?

When we consider validity, we really need to look at two types. Internal validity is, basically, how well the study was performed methodologically. In other words, was the study design rigorous enough that we can be confident the researchers found what they think they found? We are asking how well the results reflect the truth of what actually happens in the world.
External validity asks whether the results found in the study apply to people not included in the study. In other words, can I apply these results to my patients? We need to determine the generalizability of the study to the population of patients we see. Rarely do we have a study that includes all types of patients, and there are usually many inclusion and exclusion criteria that frequently do not apply to our patients. It is important for us to determine whether these results really have a probability of similarly affecting our particular patients.
For more on generalizability, see: Altman DG, Bland JM. Generalisation and extrapolation. BMJ 1998;317:4