We have often observed that evidence can be a neutralizing force. For us, this means involving the patient in a meaningful way and finding ways to support decisions based on patients’ personal requirements. These personal “patient requirements” include health care needs and wants, along with recognition of individual circumstances, values and preferences.

To achieve this, we believe that patients should receive the same information as clinicians: what alternatives are available; a quantified assessment of the potential benefits and harms of each, including the strength of the evidence for each; and the potential consequences of various choices, including things like vitality and cost.

Decisions may differ between patients, and physicians may make incorrect assumptions about what matters most to patients; the literature offers many examples, such as those in the citations below.

Status

In a recent BMJ article, “Why we can’t trust clinical guidelines,” Jeanne Lenzer raises a number of concerns regarding clinical guidelines [1]. She begins by summarizing the conflict between 1990 guidelines recommending steroids for acute spinal injury and 2013 clinical recommendations against using steroids in acute spinal injury. She then asks, “Why do processes intended to prevent or reduce bias fail?”

Her proposed answers to this question include the following—

Many doctors follow guidelines, even if not convinced about the recommendations, because they fear professional censure and possible harm to their careers.

Supporting this, she cites a poll of over 1000 neurosurgeons which showed that—

Only 11% believed the treatment was safe and effective.

Only 6% thought it should be a standard of care.

Yet when asked if they would continue prescribing the treatment, 60% said that they would. Many cited a fear of malpractice if they failed to follow “a standard of care.” (Note: the standard of care changed in March 2013 when the Congress of Neurological Surgeons stated there was no high quality evidence to support the recommendation.)

The Cochrane reviewer for the 1990 guideline she references had strong ties to industry.

Delfini Comment

Fear-based Decision-making by Physicians

We believe this is a reality. In our work with administrative law judges, we have been told that if you “run with the pack,” you better be right, and if you “run outside the pack,” you really better be right. And what happens in court is not necessarily true or just. The solution is better recommendations constructed from individualized, thoughtful decisions based on valid, critically appraised evidence found to be clinically useful, patient preferences and other factors. The important starting place is effective critical appraisal of the evidence.

Financial Conflicts of Interest & Industry Influence

It is certainly true that money can sway decisions, whether it comes from industry support or the potential for income. However, we think that most doctors want to do their best for patients and try to make decisions or provide recommendations with the patient’s best interest in mind. Therefore, we think this latter issue may be more complex and strongly affected in both instances by the large number of physicians and others involved in health care decision-making who 1) do not understand that many research studies are not valid, or are not reported sufficiently to tell; and 2) lack the skills to differentiate reliable studies from those which may not be reliable.

When it comes to industry support, one of the variables traveling with money is greater exposure to information through data or through contacts with experts supporting that manufacturer’s products. We suspect that industry influence may be due less to financial incentives than to this exposure coupled with a lack of critical appraisal understanding. As such, we wrote a Letter to the Editor describing our theory that the major problem of low-quality guidelines might stem from physicians’ and others’ lack of competency in evaluating the quality of the evidence. Our response is reproduced here.

Delfini BMJ Rapid Response [2]:

We (Delfini) believe that we have some unique insight into how ties to industry may result in advocacy for a particular intervention, due to our extensive experience training health care professionals and students in critical appraisal of the medical literature. We think it is very possible that the outcomes Lenzer describes are due less to financial influence than to lack of knowledge. The vast majority of physicians and other health care professionals do not have even rudimentary skills in identifying science that is at high to medium risk of bias, or in recognizing when results may have a high likelihood of being due to chance. Having ties to industry would likely result in greater exposure to science supporting a particular intervention.

Without the ability to evaluate the quality of the science, we think it is likely that individuals would be swayed and/or convinced by that science. The remedy for this, and for other problems with the quality of clinical guidelines, is ensuring that all guideline development members have basic critical appraisal skills and that there is enough transparency in guidelines that appraisal of a guideline and the studies it utilizes can easily be accomplished.


Review of Endocrinology Guidelines

Decision-makers frequently rely on the body of pertinent research when making clinical management decisions. The goal is to critically appraise and synthesize the evidence before making recommendations, developing protocols and making other decisions. Serious attention is paid to the validity of the primary studies to determine reliability before accepting them into the review. Brito and colleagues have assessed the rigor of systematic reviews (SRs) cited from 2006 until January 2012 in support of the clinical practice guidelines put forth by the Endocrine Society, using the Assessment of Multiple Systematic Reviews (AMSTAR) tool [1].

The authors included 69 of 2817 studies. These 69 SRs had a mean AMSTAR score of 6.4 (standard deviation, 2.5) of a maximum score of 11, with scores improving over time. Thirty-five percent of the included SRs were of low quality (methodological AMSTAR score of 1 or 2 out of 5) and were cited in 24 different recommendations. These low-quality SRs were the main evidentiary support for five recommendations, of which only one acknowledged the quality of the SRs.

The authors conclude that few recommendations in the field of endocrinology are supported by reliable SRs, and that the quality of the endocrinology SRs is suboptimal and is currently not being addressed by guideline developers. SRs should reliably represent the body of relevant evidence. The authors urge researchers and journal editors to pay attention to bias and adequate reporting.

Delfini note: Once again we see a review of guideline work that suggests caution in accepting clinical recommendations without critically appraising the evidence and knowing the strength of the evidence supporting those recommendations.


Everything citing medical science should be appraised for validity and clinical usefulness. That includes clinical guidelines and other secondary sources. Our tool for evaluating these resources— the Delfini QI Project Appraisal Tool—has been updated and is available in the Delfini Tools & Educational Library at www.delfini.org. For quick access to the PDF version, go to—


Reliable Clinical Guidelines—Great Idea, Not-Such-A-Great Reality

Although clinical guideline recommendations about managing a given condition may differ, guidelines are, in general, considered to be important sources for individual clinical decision-making, protocol development, order sets, performance measures and insurance coverage. The Institute of Medicine (IOM) has created important recommendations that guideline developers should pay attention to—

Transparency;

Management of conflict of interest;

Guideline development group composition;

How the evidence review is used to inform clinical recommendations;

Establishing evidence foundations for making strength of recommendation ratings;

Clear articulation of recommendations;

External review; and,

Updating.

Investigators recently evaluated 114 randomly chosen guidelines against a selection of the IOM standards and found poor adherence [Kung 12]. The group found that the overall median number of IOM standards satisfied was only 8 of 18 (44.4%). They also found that subspecialty societies tended to satisfy fewer IOM methodological standards. This study shows that there has been no change in guideline quality over the past decade and a half since an earlier study found similar results [Shaneyfeld 99]. This finding, of course, is likely to leave end-users uncertain as to how best to incorporate clinical guidelines into clinical practice and care improvements. Further, Kung’s study found that few guideline groups included information scientists (individuals skilled in critical appraisal of the evidence to determine the reliability of the results), and even fewer included patients or patient representatives.

An editorialist suggests that there are currently five things we need [Ransohoff]. We need:

1. An agreed-upon transparent, trustworthy process for developing ways to evaluate clinical guidelines and their recommendations.

2. A reliable method to express the degree of adherence to each IOM or other agreed-upon standard and a method for creating a composite measure of adherence.

From these two steps, we must create a “total trustworthiness score” which reflects adherence to all standards.

3. To accept that our current processes for developing trustworthy measures are a work in progress. Therefore, stakeholders must actively participate in accomplishing these five tasks.

4. To identify an institutional home that can sustain the process of developing measures of trustworthiness.

5. To develop a marketplace for trustworthy guidelines. Ratings should be displayed alongside each recommendation.

At this time, we have to agree with Shaneyfeld who wrote an accompanying commentary to Kung’s study [Shaneyfeld 12]:

What will the next decade of guideline development be like? I am not optimistic that much will improve. No one seems interested in curtailing the out-of-control guideline industry. Guideline developers seem set in their ways. I agree with the IOM that the Agency for Healthcare Research and Quality (AHRQ) should require guidelines to indicate their adherence to development standards. I think a necessary next step is for the AHRQ to certify guidelines that meet these standards and allow only certified guidelines to be published in the National Guidelines Clearinghouse. Currently, readers cannot rely on the fact that a guideline is published in the National Guidelines Clearinghouse as evidence of its trustworthiness, as demonstrated by Kung et al. I hope efforts by the Guidelines International Network are successful, but until then, in guidelines we cannot trust.