Doug Owens

Henry J. Kaiser, Jr. Professor, Senior Fellow at the Freeman Spogli Institute for International Studies and Professor, by courtesy, of Health Research and Policy and of Management Science and Engineering

Medicine - Primary Care Outcomes Research

Bio

Douglas K. Owens is the Henry J. Kaiser, Jr. Professor and Director of the Center for Health Policy (CHP) in the Freeman Spogli Institute for International Studies (FSI) and of the Center for Primary Care and Outcomes Research (PCOR) in the Department of Medicine and School of Medicine at Stanford. He is a general internist and Associate Director of the Center for Innovation to Implementation, a health services research center of excellence, at the VA Palo Alto Health Care System. Owens is a Professor of Medicine and, by courtesy, Professor of Health Research and Policy, and Professor of Management Science and Engineering, at Stanford University; he is also a Senior Fellow at FSI.

Owens' research focuses on technology assessment, cost-effectiveness analysis, evidence synthesis, and methods for clinical decision making and guideline development. He studies the cost-effectiveness of preventive and therapeutic interventions for HIV/AIDS in several countries, of diagnostic and therapeutic interventions for cardiovascular disease, and of current and emerging therapies for hepatitis C virus infection; he also studies approaches to quality improvement and has developed methods for creating clinical practice guidelines tailored to specific patient populations. Owens chaired the Clinical Guidelines Committee of the American College of Physicians for four years; the committee develops clinical guidelines that are used widely and are published regularly in the Annals of Internal Medicine. He is a member of the U.S. Preventive Services Task Force, which develops national guidelines on preventive care, including guidelines for screening for breast, colorectal, prostate, and lung cancer.

Owens also directed the Stanford-UCSF Evidence-based Practice Center and the Program on Clinical Decision Making and Guideline Development at PCOR. He directs three training programs in health services research: the Fellowship Program in Health Research and Policy at Stanford, the VA Physician Fellowship in Health Services Research, and the VA Postdoctoral Informatics Fellowship Program.

Owens received a BS and an MS from Stanford University, and an MD from the University of California-San Francisco. He completed a residency in internal medicine at the University of Pennsylvania and a fellowship in health research and policy at Stanford. Owens is a past-President of the Society for Medical Decision Making. He received the VA Undersecretary’s Award for Outstanding Achievement in Health Services Research, and the Eisenberg Award for Leadership in Medical Decision Making from the Society for Medical Decision Making. He was elected to the American Society for Clinical Investigation (ASCI) and the Association of American Physicians (AAP).


Research & Scholarship

Current Research and Scholarly Interests

Our research concerns health policy, both domestic and international; clinical policy; and the development of analytic methods for evaluating policy questions. I am particularly interested in technology assessment and in the application of decision theory to clinical and health-policy problems. My group has a special interest in questions related to disease caused by the human immunodeficiency virus (HIV) and to cardiovascular disease. We also perform systematic reviews on a variety of topics and develop methods for producing normative, model-based practice and screening guidelines.

Abstract

Elevations in levels of total, low-density lipoprotein, and non-high-density lipoprotein cholesterol; lower levels of high-density lipoprotein cholesterol; and, to a lesser extent, elevated triglyceride levels are associated with risk of cardiovascular disease in adults. To update the 2007 US Preventive Services Task Force (USPSTF) recommendation on screening for lipid disorders in children, adolescents, and young adults, the USPSTF reviewed the evidence on screening for lipid disorders in children and adolescents 20 years or younger: one review focused on screening for heterozygous familial hypercholesterolemia, and one focused on screening for multifactorial dyslipidemia. Evidence on the quantitative difference in diagnostic yield between universal and selective screening approaches, the effectiveness and harms of long-term treatment and the harms of screening, and the association between changes in intermediate outcomes and improvements in adult cardiovascular health outcomes is limited. Therefore, the USPSTF concludes that the balance of benefits and harms cannot be determined. The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening for lipid disorders in children and adolescents 20 years or younger (I statement).

Abstract

Colorectal cancer is the second leading cause of cancer death in the United States. In 2016, an estimated 134,000 persons will be diagnosed with the disease, and about 49,000 will die from it. Colorectal cancer is most frequently diagnosed among adults aged 65 to 74 years; the median age at death from colorectal cancer is 68 years. To update the 2008 US Preventive Services Task Force (USPSTF) recommendation on screening for colorectal cancer, the USPSTF reviewed the evidence on the effectiveness of screening with colonoscopy, flexible sigmoidoscopy, computed tomography colonography, the guaiac-based fecal occult blood test, the fecal immunochemical test, the multitargeted stool DNA test, and the methylated SEPT9 DNA test in reducing the incidence of and mortality from colorectal cancer or all-cause mortality; the harms of these screening tests; and the test performance characteristics of these tests for detecting adenomatous polyps, advanced adenomas based on size, or both, as well as colorectal cancer. The USPSTF also commissioned a comparative modeling study to provide information on optimal starting and stopping ages and screening intervals across the different available screening methods. The USPSTF concludes with high certainty that screening for colorectal cancer in average-risk, asymptomatic adults aged 50 to 75 years is of substantial net benefit. Multiple screening strategies are available to choose from, with different levels of evidence to support their effectiveness, as well as unique advantages and limitations, although there are no empirical data to demonstrate that any of the reviewed strategies provides a greater net benefit. Screening for colorectal cancer is a substantially underused preventive health strategy in the United States. The USPSTF recommends screening for colorectal cancer starting at age 50 years and continuing until age 75 years (A recommendation). The decision to screen for colorectal cancer in adults aged 76 to 85 years should be an individual one, taking into account the patient's overall health and prior screening history (C recommendation).

Abstract

Randomized trials of left atrial appendage (LAA) closure with the Watchman device have shown varying results, and its cost effectiveness compared with anticoagulation has not been evaluated using all available contemporary trial data. We used a Markov decision model to estimate lifetime quality-adjusted survival, costs, and cost effectiveness of LAA closure with Watchman, compared directly with warfarin and indirectly with dabigatran, using data from the long-term (mean 3.8 year) follow-up of the Percutaneous Closure of the Left Atrial Appendage Versus Warfarin Therapy for Prevention of Stroke in Patients With Atrial Fibrillation (PROTECT AF) and Prospective Randomized Evaluation of the Watchman LAA Closure Device in Patients With Atrial Fibrillation (PREVAIL) randomized trials. Using data from PROTECT AF, the incremental cost-effectiveness ratios compared with warfarin and dabigatran were $20 486 and $23 422 per quality-adjusted life year, respectively. Using data from PREVAIL, LAA closure was dominated by warfarin and dabigatran, meaning that it was less effective (8.44, 8.54, and 8.59 quality-adjusted life years, respectively) and more costly. At a willingness-to-pay threshold of $50 000 per quality-adjusted life year, LAA closure was cost effective 90% and 9% of the time under PROTECT AF and PREVAIL assumptions, respectively. These results were sensitive to the rates of ischemic stroke and intracranial hemorrhage for LAA closure and medical anticoagulation. Using data from the PROTECT AF trial, LAA closure with the Watchman device was cost effective; using PREVAIL trial data, Watchman was more costly and less effective than warfarin and dabigatran. PROTECT AF enrolled more patients and has substantially longer follow-up time, allowing greater statistical certainty in the cost-effectiveness results. However, longer-term trial results and postmarketing surveillance of major adverse events will be vital to determining the value of the Watchman in clinical practice.
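The cost-effectiveness comparisons above reduce to incremental cost-effectiveness ratio (ICER) arithmetic: divide the difference in lifetime cost by the difference in quality-adjusted survival, and call a strategy "dominated" when it costs more while delivering fewer QALYs, as the PREVAIL data implied for LAA closure. A minimal sketch; the cost and QALY inputs below are hypothetical, since the trials' underlying totals are not given in the abstract:

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_cost >= 0 and d_qaly <= 0:
        return "dominated"   # costs more (or the same) for no added benefit
    if d_cost <= 0 and d_qaly > 0:
        return "dominant"    # cheaper and more effective
    return d_cost / d_qaly   # dollars per QALY gained

# Hypothetical inputs chosen only to illustrate the arithmetic:
favorable = icer(58_000, 38_000, 9.0, 8.0)      # 20000.0 per QALY gained
dominated = icer(60_000, 55_000, 8.44, 8.54)    # "dominated"
```

A decision maker then compares the ratio to a willingness-to-pay threshold (the abstract uses $50 000 per QALY).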

Abstract

This study aimed to evaluate the cost-effectiveness of the CardioMEMS (CardioMEMS Heart Failure System, St Jude Medical Inc, Atlanta, Georgia) device in patients with chronic heart failure. The CardioMEMS device, an implantable pulmonary artery pressure monitor, was shown to reduce hospitalizations for heart failure and improve quality of life in the CHAMPION (CardioMEMS Heart Sensor Allows Monitoring of Pressure to Improve Outcomes in NYHA Class III Heart Failure Patients) trial. We developed a Markov model to determine the hospitalization, survival, quality of life, cost, and incremental cost-effectiveness ratio of CardioMEMS implantation compared with usual care among a CHAMPION trial cohort of patients with heart failure. We obtained event rates and utilities from published trial data; we used costs from literature estimates and Medicare reimbursement data. We performed subgroup analyses of preserved and reduced ejection fraction and an exploratory analysis in a lower-risk cohort on the basis of the CHARM (Candesartan in Heart failure: Reduction in Mortality and Morbidity) trials. CardioMEMS reduced lifetime hospitalizations (2.18 vs. 3.12), increased quality-adjusted life-years (QALYs) (2.74 vs. 2.46), and increased costs ($176,648 vs. $156,569), thus yielding a cost of $71,462 per QALY gained and $48,054 per life-year gained. The cost per QALY gained was $82,301 in patients with reduced ejection fraction and $47,768 in those with preserved ejection fraction. In the lower-risk CHARM cohort, the device would need to reduce hospitalizations for heart failure by 41% to cost

Abstract

The Affordable Care Act (ACA) eliminated cost-sharing for evidence-based preventive services in an effort to encourage their use. To evaluate the use of colorectal cancer (CRC) screening in a national population-based sample before and after implementation of the ACA, we performed a repeated cross-sectional analysis of the Medical Expenditure Panel Survey (MEPS) between 2009 and 2012, comparing CRC screening rates before and after implementation of the ACA. Participants were adults 50-64 with private health insurance and adults 65-75 with Medicare; the outcome was self-reported receipt of screening colonoscopy, sigmoidoscopy, or fecal occult blood test (FOBT) within the past year among those eligible for screening. Our study included 8617 adults aged 50-64 and 3761 adults aged 65-75. MEPS response rates ranged from 58 to 63%. Among adults aged 50-64, 18.9-20.9% received a colonoscopy in the survey year, 0.59-2.1% received a sigmoidoscopy, and 7.9-10.4% received an FOBT. For adults aged 65-75, 23.6-27.7% received a colonoscopy, 1.3-3.2% a sigmoidoscopy, and 13.5-16.4% an FOBT. In adjusted analyses, among participants aged 50-64, there was no increase in yearly rates of colonoscopy (-0.28 percentage points, 95% CI -2.3 to 1.7, p = 0.78), sigmoidoscopy (-1.1%, 95% CI -1.7 to -0.46, p < 0.001), or FOBT (-1.6%, 95% CI -3.2 to -0.03, p = 0.046) post-ACA. For those aged 65-75, rates of colonoscopy (+2.3%, 95% CI -1.4 to 6.0, p = 0.22), sigmoidoscopy (+0.34%, 95% CI -0.88 to 1.6, p = 0.58), and FOBT (-0.65%, 95% CI -4.1 to 2.8, p = 0.72) did not increase. Among those aged 65-75 with Medicare and no additional insurance, the use of colonoscopy rose by 12.0% (95% CI 3.3 to 20.8, p = 0.007). Among participants with Medicare living in poverty, colonoscopy use also increased (+5.7%, 95% CI 0.18 to 11.3, p = 0.043). Eliminating cost-sharing for CRC screening has not resulted in changes in the use of CRC screening services for many Americans, although use may have increased in the post-ACA period among some Medicare beneficiaries.

Abstract

Update of the US Preventive Services Task Force (USPSTF) recommendation on screening for impaired visual acuity in older adults. The USPSTF reviewed the evidence on screening for visual acuity impairment associated with uncorrected refractive error, cataracts, and age-related macular degeneration among adults 65 years or older in the primary care setting; the benefits and harms of screening; the accuracy of screening; and the benefits and harms of treatment of early vision impairment due to uncorrected refractive error, cataracts, and age-related macular degeneration. This recommendation applies to asymptomatic adults 65 years or older who do not present to their primary care clinician with vision problems. The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening for impaired visual acuity in older adults (I statement).

Abstract

New US Preventive Services Task Force (USPSTF) recommendation on screening for autism spectrum disorder (ASD) in young children. The USPSTF reviewed the evidence on the accuracy, benefits, and potential harms of brief, formal screening instruments for ASD administered during routine primary care visits and the benefits and potential harms of early behavioral treatment for young children identified with ASD through screening. This recommendation applies to children aged 18 to 30 months who have not been diagnosed with ASD or developmental delay and for whom no concerns of ASD have been raised by parents, other caregivers, or health care professionals. The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening for ASD in young children for whom no concerns of ASD have been raised by their parents or a clinician (I statement).

Abstract

Background: Cardiac resynchronization therapy (CRT) reduces mortality and heart failure hospitalizations in patients with mild heart failure. Objective: To estimate the cost-effectiveness of adding CRT to an implantable cardioverter-defibrillator (CRT-D) compared with implantable cardioverter-defibrillator (ICD) alone among patients with left ventricular systolic dysfunction, prolonged intraventricular conduction, and mild heart failure. Design: Markov decision model. Data Sources: Clinical trials, clinical registries, claims data from the Centers for Medicare & Medicaid Services, and Centers for Disease Control and Prevention life tables. Target Population: Patients aged 65 years or older with a left ventricular ejection fraction (LVEF) of 30% or less, QRS duration of 120 milliseconds or more, and New York Heart Association (NYHA) class I or II symptoms. Time Horizon: Lifetime. Perspective: Societal. Intervention: CRT-D or ICD alone. Outcome Measures: Life-years, quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs). Results: Use of CRT-D increased life expectancy (9.8 years versus 8.8 years), QALYs (8.6 years versus 7.6 years), and costs ($286 500 versus $228 600), yielding a cost per QALY gained of $61 700. The cost-effectiveness of CRT-D was most dependent on the degree of mortality reduction: when the risk ratio for death was 0.95, the ICER increased to $119 600 per QALY. More expensive CRT-D devices, shorter CRT-D battery life, and older age also made the cost-effectiveness of CRT-D less favorable. Limitations: The estimated mortality reduction for CRT-D was largely based on a single trial, and data on patients with NYHA class I symptoms were limited; the cost-effectiveness of CRT-D in patients with NYHA class I symptoms remains uncertain. Conclusion: In patients with an LVEF of 30% or less, QRS duration of 120 milliseconds or more, and NYHA class II symptoms, CRT-D appears to be economically attractive relative to ICD alone when a reduction in mortality is expected. Funding: National Institutes of Health, University of Copenhagen, U.S. Department of Veterans Affairs.
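Several of the analyses in this list rest on Markov cohort models: a hypothetical cohort is stepped through health states in fixed cycles, accruing discounted costs and QALYs in each cycle. A minimal two-state (alive/dead) sketch, with purely illustrative parameters rather than the study's inputs, and the 3% discount rate conventional in U.S. cost-effectiveness analyses:

```python
def run_cohort(p_death, annual_cost, utility, cycles=40, disc=0.03):
    """Two-state Markov cohort: returns discounted (QALYs, costs) per patient."""
    alive = 1.0                            # fraction of cohort still alive
    qalys = costs = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + disc) ** t       # discount factor for cycle t
        qalys += alive * utility * df      # quality-weighted survival this cycle
        costs += alive * annual_cost * df  # costs accrue only while alive
        alive *= (1.0 - p_death)           # transition: part of the cohort dies
    return qalys, costs

# Hypothetical device arm (lower mortality, higher cost) versus control:
q_dev, c_dev = run_cohort(p_death=0.06, annual_cost=25_000, utility=0.80)
q_ctl, c_ctl = run_cohort(p_death=0.08, annual_cost=20_000, utility=0.78)
icer_est = (c_dev - c_ctl) / (q_dev - q_ctl)   # extra dollars per QALY gained
```

Real models like the one above add many more states (e.g. hospitalization, device replacement) and cycle-specific transition probabilities, but the accounting is the same.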

Abstract

To characterize the association of antiretroviral drug combinations with the risk of cardiovascular events. Certain antiretroviral medications for human immunodeficiency virus (HIV) have been implicated in increasing risk of cardiovascular disease. However, antiretroviral drugs are typically prescribed in combination. We characterized the association of current exposure to antiretroviral drug combinations with the risk of cardiovascular events, including myocardial infarction, stroke, percutaneous coronary intervention, and coronary artery bypass surgery. We used the Veterans Health Administration Clinical Case Registry to analyze data from 24 510 patients infected with HIV from January 1996 through December 2009. We assessed the association of current exposure to 15 antiretroviral drugs and 23 prespecified combinations of agents with the risk of a cardiovascular event by using marginal structural models and Cox models extended to accommodate time-dependent variables. Over 164 059 person-years of follow-up, 934 patients had a cardiovascular event. Current exposure to abacavir, efavirenz, lamivudine, and zidovudine was significantly associated with increased risk of a cardiovascular event, with odds ratios ranging from 1.40 to 1.53. Five combinations were significantly associated with increased risk of a cardiovascular event, all of which involved lamivudine. One of these (efavirenz, lamivudine, and zidovudine) was the second most commonly used combination and was associated with a risk of cardiovascular event 1.60 times that of patients not currently exposed to the combination (odds ratio 1.60; 95% confidence interval, 1.25-2.04). In the VA cohort, exposure to both individual drugs and drug combinations was associated with modestly increased risk of a cardiovascular event.

Abstract

New drug therapies have revolutionized the treatment of hepatitis C virus (HCV) infection. The objectives of this study were to evaluate the uptake and utilization of boceprevir and telaprevir in the Department of Veterans Affairs (VA). We evaluated whether therapies conformed to response-guided protocols, whether they replaced standard interferon plus ribavirin treatment, and whether IL-28B testing was used to guide treatment. We performed an administrative data-based analysis of all patients receiving pharmacologic treatment for HCV in the VA from October 2009 to July 2013. There were 12 737 new HCV prescriptions in the VA during this time: 5564 (44%) for boceprevir or telaprevir and 7173 (56%) for standard interferon plus ribavirin treatment. Prescriptions for the new treatments heavily favored boceprevir over telaprevir (83% vs 17%). Sixty-two percent of boceprevir-treated patients and 69.2% of telaprevir-treated patients completed their minimum-specified protocol. From October 2010 to July 2012, 4090 patients had an IL-28B test; less than 16% of these tests guided subsequent HCV prescriptions. Uptake of boceprevir and telaprevir was rapid; the number of patients initiating treatment approximately doubled in the period after their introduction. Although new prescriptions favor boceprevir or telaprevir over standard interferon plus ribavirin therapy, interferon plus ribavirin still appears to play a substantial role in treating HCV patients. This work can inform our understanding of how other new, effective HCV therapies will be used, their diffusion, and the timing of their diffusion in actual clinical practice.

Abstract

Many clinical practice guidelines (CPGs) are intended to provide evidence-based guidance to clinicians on a single disease, and are frequently considered inadequate when caring for patients with multiple chronic conditions (MCC), defined as two or more chronic conditions. It is unclear to what degree disease-specific CPGs provide guidance about MCC. In this study, we develop a method for extracting knowledge from single-disease chronic condition CPGs to determine how frequently they mention commonly co-occurring chronic diseases. We focus on 15 highly prevalent chronic conditions. We use publicly available resources, including a repository of guideline summaries from the National Guideline Clearinghouse to build a text corpus, a data dictionary of ICD-9 codes from the Medicare Chronic Conditions Data Warehouse (CCW) to construct an initial list of disease terms, and disease synonyms from the National Center for Biomedical Ontology to enhance the list of disease terms. First, for each disease guideline, we determined the frequency of comorbid condition mentions (a disease-comorbidity pair) by exactly matching disease synonyms in the text corpus. Then, we developed an annotated reference standard using a sample subset of guidelines. We used this reference standard to evaluate our approach. Then, we compared the co-prevalence of common pairs of chronic conditions from Medicare CCW data to the frequency of disease-comorbidity pairs in CPGs. Our results show that some disease-comorbidity pairs occur more frequently in CPGs than others. Sixty-one (29.0%) of 210 possible disease-comorbidity pairs occurred zero times; for example, no guideline on chronic kidney disease mentioned depression, while heart failure guidelines mentioned ischemic heart disease the most frequently. Our method adequately identifies comorbid chronic conditions in CPG recommendations with precision 0.82, recall 0.75, and F-measure 0.78.
Our work identifies knowledge currently embedded in the free text of clinical practice guideline recommendations and provides an initial view of the extent to which CPGs mention common comorbid conditions. Knowledge extracted from CPG text in this way may be useful to inform gaps in guideline recommendations regarding MCC and therefore identify potential opportunities for guideline improvement.
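The F-measure reported for this method is the harmonic mean of precision and recall; recomputing it from the reported values reproduces the 0.78:

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall (the F1 score).
    return 2 * precision * recall / (precision + recall)

f1 = round(f_measure(0.82, 0.75), 2)  # 0.78, matching the reported value
```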

Abstract

Clinical practice guidelines should be based on the best scientific evidence derived from systematic reviews of primary research. However, these studies often do not provide the evidence needed by guideline development groups to evaluate the tradeoffs between benefits and harms. In this article, the authors identify 4 areas where models can bridge the gaps between published evidence and the information needed for guideline development: applying new or updated information on disease risk, diagnostic test properties, and treatment efficacy; exploring a more complete array of alternative intervention strategies; assessing benefits and harms over a lifetime horizon; and projecting outcomes for the conditions for which the guideline is intended. The use of modeling as an approach to bridge these gaps (provided that the models are high-quality and adequately validated) is considered. Colorectal and breast cancer screening are used as examples to show the utility of models for these purposes. The authors propose that a modeling study is most useful when strong primary evidence is available to inform the model but critical gaps remain between the evidence and the questions that the guideline group must address. In these cases, model results have a place alongside the findings of systematic reviews to inform health care practice and policy.

Abstract

The American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the diagnosis of obstructive sleep apnea in adults. This guideline is based on published literature on this topic that was identified by using MEDLINE (1966 through May 2013), the Cochrane Central Register of Controlled Trials, and the Cochrane Database of Systematic Reviews. Searches were limited to English-language publications. The clinical outcomes evaluated for this guideline included all-cause mortality, cardiovascular mortality, nonfatal cardiovascular disease, stroke, hypertension, type 2 diabetes, postsurgical outcomes, and quality of life. Sensitivities, specificities, and likelihood ratios were also assessed as outcomes of diagnostic tests. This guideline grades the evidence and recommendations by using ACP's clinical practice guidelines grading system. ACP recommends a sleep study for patients with unexplained daytime sleepiness (Grade: weak recommendation, low-quality evidence). ACP recommends polysomnography for diagnostic testing in patients suspected of obstructive sleep apnea, and portable sleep monitors in patients without serious comorbidities as an alternative to polysomnography when polysomnography is not available for diagnostic testing (Grade: weak recommendation, moderate-quality evidence).

Abstract

Background: Vaccination for the 2009 pandemic did not occur until late in the outbreak, which limited its benefits. Influenza A (H7N9) is causing increasing morbidity and mortality in China, and researchers have modified the A (H5N1) virus to transmit via aerosol, which again heightens concerns about pandemic influenza preparedness. Objective: To determine how quickly vaccination should be completed to reduce infections, deaths, and health care costs in a pandemic with characteristics similar to influenza A (H7N9) and A (H5N1). Design: Dynamic transmission model to estimate health and economic consequences of a severe influenza pandemic in a large metropolitan city. Data Sources: Literature and expert opinion. Target Population: Residents of a U.S. metropolitan city with characteristics similar to New York City. Time Horizon: Lifetime. Perspective: Societal. Intervention: Vaccination of 30% of the population at 4 or 6 months. Outcome Measures: Infections and deaths averted and cost-effectiveness. Results: In 12 months, 48 254 persons would die. Vaccinating at 9 months would avert 2365 of these deaths. Vaccinating at 6 months would save 5775 additional lives and $51 million at a city level. Accelerating delivery to 4 months would save an additional 5633 lives and $50 million. If vaccination were delayed for 9 months, reducing contacts by 8% through nonpharmaceutical interventions would yield a similar reduction in infections and deaths as vaccination at 4 months. Limitation: The model is not designed to evaluate programs targeting specific populations, such as children or persons with comorbid conditions. Conclusion: Vaccination in an influenza A (H7N9) pandemic would need to be completed much faster than in 2009 to substantially reduce morbidity, mortality, and health care costs. Maximizing nonpharmaceutical interventions can substantially mitigate the pandemic until a matched vaccine becomes available. Funding: Agency for Healthcare Research and Quality, National Institutes of Health, and Department of Veterans Affairs.
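The timing effect this model quantifies can be seen even in a toy SIR transmission model: vaccinating a fixed share of susceptibles earlier removes them before more of the epidemic has run its course. This sketch uses daily Euler steps; the population size, R0, and coverage are illustrative assumptions, not the study's calibrated values:

```python
def total_infections(vacc_day, pop=8_000_000, r0=1.4, infectious_days=4.0,
                     vacc_frac=0.30, days=365):
    """Cumulative infections in a toy SIR model with one-shot vaccination."""
    beta, gamma = r0 / infectious_days, 1.0 / infectious_days
    s, i = pop - 10.0, 10.0       # susceptible, infectious (10 seed cases)
    cum = 10.0                    # cumulative infections
    for day in range(days):
        if day == vacc_day:
            s -= vacc_frac * s    # protect 30% of remaining susceptibles
        new_inf = beta * s * i / pop
        s -= new_inf
        i += new_inf - gamma * i
        cum += new_inf
    return cum

early = total_infections(vacc_day=120)   # completed at ~4 months
late = total_infections(vacc_day=270)    # completed at ~9 months
# Earlier completion yields fewer cumulative infections (early < late).
```

By 9 months most of the toy epidemic has already occurred, so late vaccination protects far fewer people, which is the qualitative point of the study's far richer model.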

Abstract

To determine the association between preexisting characteristics and current health and the cost of different types of advanced human immunodeficiency virus (HIV) care. Treatment-experienced patients failing highly active antiretroviral treatment (ART) in the United States, Canada, and the United Kingdom were factorial randomized to an antiretroviral-free period and ART intensification. Cost was estimated by multiplying patient-reported utilization by a unit cost. A total of 367 participants were followed for a mean of 15.3 quarters (range 1-26). Medication accounted for most (61.8%) of the $26,832 annual cost. Cost averaged $4147 per quarter for ART, $1981 for inpatient care, $580 for outpatient care, and $346 for other medications. Cost for inpatient stays, outpatient visits, and other medications was 171% higher when the CD4 count was less than 200 cells/μL. Some baseline characteristics, including low CD4 count, high viral load, and HIV from injection drug use with hepatitis C coinfection, had a sustained effect on cost. The association between health status and cost depended on the type of care. Indicators of poor health were associated with higher inpatient and concomitant medication costs and lower cost for ART medication. Although ART has supplanted hospitalization as the most important cost in HIV care, some patients continue to incur high hospitalization costs in periods when they are using less ART. The cost of interventions to improve the use of ART might be offset by the reduction of other costs.

Abstract

Pre-exposure prophylaxis with oral antiretroviral treatment (oral PrEP) for HIV-uninfected injection drug users (IDUs) is potentially useful in controlling HIV epidemics with a significant injection drug use component. We estimated the effectiveness and cost effectiveness of strategies for using oral PrEP in various combinations with methadone maintenance treatment (MMT) and antiretroviral treatment (ART) in Ukraine, a representative case for mixed HIV epidemics. We developed a dynamic compartmental model of the HIV epidemic in a population of non-IDUs, IDUs who inject opiates, and IDUs in MMT, adding an oral PrEP program (tenofovir/emtricitabine, 49% susceptibility reduction) for uninfected IDUs. We analyzed intervention portfolios consisting of oral PrEP (25% or 50% of uninfected IDUs), MMT (25% of IDUs), and ART (80% of all eligible patients). We measured health care costs, quality-adjusted life years (QALYs), HIV prevalence, HIV infections averted, and incremental cost effectiveness. A combination of PrEP for 50% of IDUs and MMT lowered HIV prevalence the most in both IDUs and the general population. ART combined with MMT and PrEP (50% access) averted the most infections (14,267). For a PrEP cost of $950, the most cost-effective strategy was MMT, at $520/QALY gained versus no intervention. The next most cost-effective strategy consisted of MMT and ART, costing $1,000/QALY gained compared to MMT alone. Further adding PrEP (25% access) was also cost effective by World Health Organization standards, at $1,700/QALY gained. PrEP alone became as cost effective as MMT at a cost of $650, and cost saving at $370 or less. Oral PrEP for IDUs can be part of an effective and cost-effective strategy to control HIV in regions where injection drug use is a significant driver of the epidemic. Where budgets are limited, focusing on MMT and ART access should be the priority, unless PrEP has low cost.

Abstract

Newer antiretroviral drugs provide substantial benefits but are expensive. We determined the cost-effectiveness of using antiretroviral drugs in combination for patients with multidrug-resistant HIV disease. A cohort state-transition model was built representing treatment-experienced patients with low CD4 counts, high viral load levels, and multidrug-resistant virus. The effectiveness of newer drugs (those approved in 2005 or later) was estimated from published randomized trials. Other parameters were estimated from a randomized trial and from the literature. The model had a lifetime time horizon and used the perspective of an ideal insurer in the United States. The interventions were combination antiretroviral therapy, consisting of 2 newer drugs and 1 conventional drug, compared with 3 conventional drugs. Outcome measures were life-years, quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness. Substituting newer antiretroviral drugs increased expected survival by 3.9 years in advanced HIV disease. The incremental cost-effectiveness ratio of newer, compared with conventional, antiretroviral drugs was $75,556/QALY gained. Sensitivity analyses showed that substituting only one newer antiretroviral drug cost $54,559 to $68,732/QALY, depending on assumptions about efficacy. Substituting 3 newer drugs cost $105,956 to $117,477/QALY. Cost-effectiveness ratios were higher if conventional drugs were not discontinued. In treatment-experienced patients with advanced HIV disease, use of newer antiretroviral agents can be cost-effective, given a cost-effectiveness threshold in the range of $50,000 to $75,000 per QALY gained. Newer antiretroviral agents should be used in carefully selected patients for whom less expensive options are clearly inferior.

Abstract

Update of the 2003 U.S. Preventive Services Task Force (USPSTF) recommendation on primary care interventions to prevent tobacco use in children and adolescents.The USPSTF reviewed the evidence on the effectiveness of primary care interventions on the rates of initiation or cessation of tobacco use in children and adolescents and on health outcomes, such as respiratory health, dental and oral health, and adult smoking. The USPSTF also reviewed the evidence on the potential harms of these interventions.This recommendation applies to school-aged children and adolescents. The USPSTF has issued a separate recommendation statement on tobacco use counseling in adults and pregnant women.The USPSTF recommends that primary care clinicians provide interventions, including education or brief counseling, to prevent initiation of tobacco use in school-aged children and adolescents.

Abstract

STUDY OBJECTIVE: We determine the minimum mortality reduction that helicopter emergency medical services (EMS) should provide relative to ground EMS for the scene transport of trauma victims to offset higher costs, inherent transport risks, and inevitable overtriage of patients with minor injury. METHODS: We developed a decision-analytic model to compare the costs and outcomes of helicopter versus ground EMS transport to a trauma center from a societal perspective during a patient's lifetime. We determined the mortality reduction needed to make helicopter transport cost less than $100,000 and $50,000 per quality-adjusted life-year gained compared with ground EMS. Model inputs were derived from the National Study on the Costs and Outcomes of Trauma, National Trauma Data Bank, Medicare reimbursements, and literature. We assessed robustness with probabilistic sensitivity analyses. RESULTS: Helicopter EMS must provide a minimum of a 17% relative risk reduction in mortality (1.6 lives saved/100 patients with the mean characteristics of the National Study on the Costs and Outcomes of Trauma cohort) to cost less than $100,000 per quality-adjusted life-year gained and a reduction of at least 33% (3.7 lives saved/100 patients) to cost less than $50,000 per quality-adjusted life-year. Helicopter EMS becomes more cost-effective with significant reductions in patients with minor injury who are triaged to air transport or if long-term disability outcomes are improved. CONCLUSION: Helicopter EMS needs to provide at least a 17% mortality reduction or a measurable improvement in long-term disability to compare favorably with other interventions considered cost-effective. Given current evidence, it is not clear that helicopter EMS achieves this mortality or disability reduction. Reducing overtriage of patients with minor injury to helicopter EMS would improve its cost-effectiveness.
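The break-even logic of the helicopter EMS study can be sketched as a simple threshold calculation: solve the cost-per-QALY inequality for the smallest relative mortality reduction. All parameter values below are hypothetical placeholders, not the study's inputs.

```python
# Sketch of the threshold ("break-even") analysis described above.
# Find the smallest relative mortality risk reduction for which the
# incremental cost per QALY of helicopter transport falls at or below a
# willingness-to-pay threshold. All inputs are hypothetical.

def min_mortality_reduction(incremental_cost, baseline_mortality,
                            qalys_per_survivor, threshold):
    # Incremental QALYs per patient = lives saved per patient * QALYs per survivor,
    # where lives saved per patient = baseline_mortality * relative_risk_reduction.
    # Solving incremental_cost / incremental_QALYs <= threshold for the reduction:
    return incremental_cost / (threshold * baseline_mortality * qalys_per_survivor)

rrr_100k = min_mortality_reduction(
    incremental_cost=6000.0,     # extra cost of helicopter vs. ground (hypothetical)
    baseline_mortality=0.094,    # death risk with ground transport (hypothetical)
    qalys_per_survivor=7.0,      # discounted QALYs per life saved (hypothetical)
    threshold=100000.0)
rrr_50k = min_mortality_reduction(6000.0, 0.094, 7.0, 50000.0)
```

Halving the willingness-to-pay threshold doubles the required mortality reduction, which is the pattern behind the study's 17% (at $100,000/QALY) versus 33% (at $50,000/QALY) results.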

Abstract

To assess the cost-effectiveness of diagnostic laparoscopy, computed tomography (CT), and magnetic resonance imaging (MRI) after indeterminate ultrasonography in pregnant women with suspected appendicitis. A decision-analytic model was developed to simulate appendicitis during pregnancy, taking into consideration the health outcomes for both the pregnant women and developing fetuses. Strategies included diagnostic laparoscopy, CT, and MRI. Outcomes included positive appendectomy, negative appendectomy, maternal perioperative complications, preterm delivery, fetal loss, childhood cancer, lifetime costs, discounted life expectancy, and incremental cost-effectiveness ratios. Magnetic resonance imaging is the most cost-effective strategy, costing $6,767 per quality-adjusted life-year gained relative to CT, well below the generally accepted $50,000 per quality-adjusted life-year threshold. In a setting where MRI is unavailable, CT is cost-effective even when considering the increased risk of radiation-associated childhood cancer ($560 per quality-adjusted life-year gained relative to diagnostic laparoscopy). Unless the negative appendectomy rate is less than 1%, imaging of any type is more cost-effective than proceeding directly to diagnostic laparoscopy. Depending on imaging costs and resource availability, both CT and MRI are potentially cost-effective. The risk of radiation-associated childhood cancer from CT has little effect on population-level outcomes or cost-effectiveness but is a concern for individual patients. For pregnant women with suspected appendicitis, an extremely high level of clinical diagnostic certainty must be reached before proceeding to operation without preoperative imaging.

Abstract

Optimal solutions for reducing diversion without worsening emergency department (ED) crowding are unclear. We performed a systematic review of published simulation studies to identify: 1) the tradeoff between ambulance diversion and ED wait times; 2) the predicted impact of patient flow interventions on reducing diversion; and 3) the optimal regional strategy for reducing diversion. We searched for articles using MEDLINE, Inspec, and Scopus; additional studies were identified through bibliography review, Google Scholar, and scientific conference proceedings. Only simulations modeling ambulance diversion as a result of ED crowding or inpatient capacity problems were included. Data were extracted independently by two authors using predefined data fields. We identified 5,116 potentially relevant records; 10 studies met inclusion criteria. In models that quantified the relationship between ED throughput times and diversion, diversion was found to only minimally improve ED waiting room times. Adding holding units for inpatient boarders and ED-based fast tracks, improving lab turnaround times, and smoothing elective surgery caseloads were found to reduce diversion considerably. While two models found that a cooperative agreement between hospitals is necessary to prevent defensive diversion behavior by a hospital when a nearby hospital goes on diversion, one model found there may be more optimal solutions for reducing region-wide wait times than a regional ban on diversion. Smoothing elective surgery caseloads, adding ED fast tracks and holding units for inpatient boarders, improving ED lab turnaround times, and implementing regional cooperative agreements among hospitals are promising avenues for reducing diversion.

Abstract

Background: Transcatheter aortic valve replacement (TAVR) seems to improve the survival and quality of life of patients with aortic stenosis ineligible for surgical aortic valve replacement. Methods and Results: We used a decision-analytic Markov model to estimate lifetime costs and benefits in a hypothetical cohort of patients with severe, symptomatic aortic stenosis who were ineligible for surgical aortic valve replacement. The model compared transfemoral TAVR with medical management and was calibrated to the Placement of Aortic Transcatheter Valves (PARTNER) trial. TAVR increased life expectancy from 2.08 to 2.93 years and quality-adjusted life expectancy from 1.19 to 1.93 years. TAVR also reduced subsequent hospitalizations by 1.40 but increased complications, particularly stroke (from 1% to 11% lifetime risk), and also increased lifetime costs from $83,600 to $169,100. The incremental cost-effectiveness of TAVR was $116,500 per quality-adjusted life-year gained ($99,900 per life-year gained). Results were robust to reasonable changes in individual variables but were sensitive to the level of annual healthcare costs caused by noncardiac diseases and to the projected life expectancy of medically treated patients. Conclusions: TAVR seems to be an effective but somewhat expensive alternative to medical management among patients with symptomatic aortic stenosis ineligible for surgery. TAVR is more cost-effective for patients with a lower burden of noncardiac disease.

Abstract

Prostate cancer is an important health problem in men. It rarely causes death in men younger than 50 years; most deaths associated with it occur in men older than 75 years. The benefits of screening with the prostate-specific antigen (PSA) test are outweighed by the harms for most men. In a considerable proportion of men with prostate cancer detected with the PSA test, the cancer never becomes clinically significant in the patient's lifetime; these men receive no benefit and are subject to substantial harms from the treatment of prostate cancer. The American College of Physicians (ACP) developed this guidance statement for clinicians by assessing current prostate cancer screening guidelines developed by other organizations. ACP believes that it is more valuable to provide clinicians with a rigorous review of available guidelines rather than develop a new guideline when several guidelines are available on a topic or when existing guidelines conflict. The purpose of this guidance statement is to critically review available guidelines to help guide internists and other clinicians in making decisions about screening for prostate cancer. The target patient population for this guidance statement is all adult men. This guidance statement is derived from an appraisal of available guidelines on screening for prostate cancer. The authors searched the National Guideline Clearinghouse to identify prostate cancer screening guidelines in the United States and selected 4 developed by the American College of Preventive Medicine, American Cancer Society, American Urological Association, and U.S. Preventive Services Task Force. The AGREE II (Appraisal of Guidelines, Research and Evaluation in Europe) instrument was used to evaluate the guidelines. GUIDANCE STATEMENT 1: ACP recommends that clinicians inform men between the age of 50 and 69 years about the limited potential benefits and substantial harms of screening for prostate cancer. ACP recommends that clinicians base the decision to screen for prostate cancer using the prostate-specific antigen test on the risk for prostate cancer, a discussion of the benefits and harms of screening, the patient's general health and life expectancy, and patient preferences. ACP recommends that clinicians should not screen for prostate cancer using the prostate-specific antigen test in patients who do not express a clear preference for screening. GUIDANCE STATEMENT 2: ACP recommends that clinicians should not screen for prostate cancer using the prostate-specific antigen test in average-risk men under the age of 50 years, men over the age of 69 years, or men with a life expectancy of less than 10 to 15 years.

Abstract

The authors sought to evaluate the cost-effectiveness of statins for primary prevention of myocardial infarction (MI) and stroke in patients with chronic kidney disease (CKD). Patients with CKD have an elevated risk of MI and stroke. Although HMG-CoA reductase inhibitors (“statins”) may prevent cardiovascular events in patients with non-dialysis-requiring CKD, adverse drug effects and competing risks could materially influence net effects and clinical decision-making. We developed a decision-analytic model of CKD and cardiovascular disease (CVD) to determine the cost-effectiveness of low-cost generic statins for primary CVD prevention in men and women with hypertension and mild-to-moderate CKD. Outcomes included MI and stroke rates, discounted quality-adjusted life years (QALYs) and lifetime costs (2010 USD), and incremental cost-effectiveness ratios. For 65-year-old men with moderate hypertension and mild-to-moderate CKD, statins reduced the combined rate of MI and stroke, yielded 0.10 QALYs, and increased costs by $1,800 ($18,000 per QALY gained). For patients with lower baseline cardiovascular risks, health and economic benefits were smaller; for 65-year-old women, statins yielded 0.06 QALYs and increased costs by $1,900 ($33,400 per QALY gained). Results were sensitive to rates of rhabdomyolysis and drug costs. Statins are less cost-effective when obtained at average retail prices, particularly in patients at lower CVD risk. Although statins reduce absolute CVD risk in patients with CKD, the increased risk of rhabdomyolysis and the competing risks associated with progressive CKD partly offset these gains. Low-cost generic statins appear cost-effective for primary prevention of CVD in patients with mild-to-moderate CKD and hypertension.

Abstract

Recent studies suggest certain antiretroviral therapy (ART) drugs are associated with increases in cardiovascular disease. We performed a systematic review and meta-analysis to summarize the available evidence, with the goal of elucidating whether specific ART drugs are associated with an increased risk of myocardial infarction (MI). We searched Medline, Web of Science, the Cochrane Library, and abstract archives from the Conference on Retroviruses and Opportunistic Infections and the International AIDS Society up to June 2011 to identify published articles and abstracts. Eligible studies were comparative and included MI, strokes, or other cardiovascular events as outcomes. Eligibility screening, data extraction, and quality assessment were performed independently by two investigators. Random effects methods and Fisher's combined probability test were used to summarize evidence. Twenty-seven studies met inclusion criteria, with 8 contributing to a formal meta-analysis. Findings based on two observational studies indicated an increase in risk of MI for patients recently exposed (usually defined as within the last 6 months) to abacavir (RR 1.92, 95% CI 1.51-2.42) and protease inhibitors (PIs) (RR 2.13, 95% CI 1.06-4.28). Our analysis also suggested an increased risk associated with each additional year of exposure to indinavir (RR 1.11, 95% CI 1.05-1.17) and lopinavir (RR 1.22, 95% CI 1.01-1.47). Our findings of increased cardiovascular risk from abacavir and PIs were in contrast to four published meta-analyses based on secondary analyses of randomized controlled trials, which found no increased risk of cardiovascular disease. Although observational studies implicated specific drugs, the evidence is mixed, and meta-analyses of randomized trials did not find increased risk from abacavir and PIs. Our findings that implicate specific ART drugs in the observational setting provide sufficient evidence to warrant further investigation of this relationship in studies designed for that purpose.
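The random-effects pooling named in the abstract can be sketched with the DerSimonian-Laird estimator: pool study effects on the log scale, estimate between-study variance from Cochran's Q, then reweight. The two RR/CI pairs below are made-up illustrative inputs, not data from the review.

```python
# Sketch of DerSimonian-Laird random-effects pooling of relative risks,
# the general approach used in meta-analyses like the one above.
# The (RR, CI) pairs below are hypothetical, not results from the review.
import math

def random_effects_pool(rrs, cis):
    y = [math.log(r) for r in rrs]                        # log relative risks
    # derive each study's standard error from its 95% confidence interval
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in cis]
    w = [1 / s**2 for s in se]                            # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = [1 / (s**2 + tau2) for s in se]                # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return math.exp(pooled)

pooled_rr = random_effects_pool([1.8, 2.1], [(1.4, 2.3), (1.5, 2.9)])
```

The pooled estimate always lies between the individual study estimates; the extra `tau2` term widens confidence intervals when studies disagree more than chance would predict.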

Abstract

The American College of Physicians (ACP) developed this guideline in collaboration with the American College of Cardiology Foundation (ACCF), American Heart Association (AHA), American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, and Society of Thoracic Surgeons to help clinicians diagnose known or suspected stable ischemic heart disease. Literature on this topic published before November 2011 was identified by using MEDLINE, Embase, Cochrane CENTRAL, PsycINFO, AMED, and SCOPUS. Searches were limited to human studies published in English. This guideline grades the evidence and recommendations according to a translation of the ACCF/AHA grading system into ACP's clinical practice guidelines grading system. This guideline includes 28 recommendations that address the following issues: the initial diagnosis of the patient who might have stable ischemic heart disease, cardiac stress testing to assess the risk for death or myocardial infarction in patients diagnosed with stable ischemic heart disease, and coronary angiography for risk assessment.

Abstract

The American College of Physicians (ACP) developed this guideline with the American College of Cardiology Foundation (ACCF), American Heart Association (AHA), American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, and Society of Thoracic Surgeons to present the available evidence on the management of stable known or suspected ischemic heart disease. Literature on this topic published before November 2011 was identified by using MEDLINE, Embase, Cochrane CENTRAL, PsycINFO, AMED, and SCOPUS. Searches were limited to human studies published in English. This guideline grades the evidence and recommendations according to a translation of the ACCF/AHA grading system into ACP's clinical practice guidelines grading system. The guideline includes 48 specific recommendations that address the following issues: patient education, management of proven risk factors (dyslipidemia, hypertension, diabetes, physical activity, body weight, and smoking), risk factor reduction strategies of unproven benefit, medical therapy to prevent myocardial infarction and death and to relieve symptoms, alternative therapy, revascularization to improve survival and symptoms, and patient follow-up.

Abstract

We developed a mathematical model to identify the timing of antiretroviral therapy (ART) initiation that optimizes patient outcomes as a function of patient CD4 count, age, cardiac mortality risk, sex, and personal preferences. Our goal was to find the conditions that maximize patient quality-adjusted life expectancy (QALE) in the context of our model. Under the assumption that ART confers disease progression and mortality benefits at any CD4 count, immediate treatment initiation yields the greatest remaining QALE for young patients under most circumstances. The timing of ART initiation depends on the magnitude of benefit from ART at high CD4 counts, the magnitude of increases in cardiac risk, and patients' preferences. If ART reduces HIV progression at high CD4 counts, immediate ART is preferable for most newly infected individuals <35 years even if ART doubles age- and sex-specific cardiac risk.

Abstract

Prisons of the former Soviet Union (FSU) have high rates of multidrug-resistant tuberculosis (MDR-TB) and are thought to drive general population tuberculosis (TB) epidemics. Effective prison case detection, though employing more expensive technologies, may reduce long-term treatment costs and slow MDR-TB transmission. We developed a dynamic transmission model of TB and drug resistance matched to the epidemiology and costs in FSU prisons. We evaluated eight strategies for TB screening and diagnosis involving, alone or in combination, self-referral, symptom screening, mass miniature radiography (MMR), and sputum PCR with probes for rifampin resistance (Xpert MTB/RIF). Over a 10-y horizon, we projected costs, quality-adjusted life years (QALYs), and TB and MDR-TB prevalence. Using sputum PCR as an annual primary screening tool among the general prison population most effectively reduced overall TB prevalence (from 2.78% to 2.31%) and MDR-TB prevalence (from 0.74% to 0.63%), and cost US$543 per additional QALY gained compared to MMR screening with sputum PCR reserved for rapid detection of MDR-TB. Adding sputum PCR to the currently used strategy of annual MMR screening was cost-saving over 10 y compared to MMR screening alone, but produced only a modest reduction in MDR-TB prevalence (from 0.74% to 0.69%) and had minimal effect on overall TB prevalence (from 2.78% to 2.74%). Strategies based on symptom screening alone were less effective and more expensive than MMR-based strategies. Study limitations included scarce primary TB time-series data in FSU prisons and uncertainties regarding screening test characteristics. In prisons of the FSU, annual screening of the general inmate population with sputum PCR most effectively reduces TB and MDR-TB prevalence, and does so cost-effectively. If this approach is not feasible, the current strategy of annual MMR is both more effective and less expensive than strategies using self-referral or symptom screening alone, and the addition of sputum PCR for rapid MDR-TB detection may be cost-saving over time.

Abstract

Objective: To estimate the cost, effectiveness, and cost-effectiveness of HIV and HCV screening of injection drug users (IDUs) in opioid replacement therapy (ORT). Design: Dynamic compartmental model of HIV and HCV in a population of IDUs and non-IDUs for a representative U.S. urban center with 2.5 million adults (age 15-59). Methods: We considered strategies of screening individuals in ORT for HIV, HCV, or both infections by antibody or antibody and viral RNA testing. We evaluated one-time and repeat screening at intervals from annually to once every 3 months. We calculated the number of HIV and HCV infections, quality-adjusted life years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs). Results: Adding HIV and HCV viral RNA testing to antibody testing averts 14.8-30.3 HIV and 3.7-7.7 HCV infections in a screened population of 26,100 IDUs entering ORT over 20 years, depending on screening frequency. Screening for HIV antibodies every 6 months costs $30,700/QALY gained. Screening for HIV antibodies and viral RNA every 6 months has an ICER of $65,900/QALY gained. Strategies including HCV testing have ICERs exceeding $100,000/QALY gained unless awareness of HCV-infection status results in a substantial reduction in needle-sharing behavior. Conclusions: Although annual screening for antibodies to HIV and HCV is modestly cost-effective compared to no screening, more frequent screening for HIV provides additional benefit at less cost. Screening individuals in ORT every 3-6 months for HIV infection using both antibody and viral RNA technologies and initiating ART for acute HIV infection appears cost-effective.
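The ICERs reported in analyses like this are computed by sorting strategies by cost and comparing each with the next-cheapest strategy that is not dominated. A minimal sketch, with made-up strategy names, costs, and QALYs, follows.

```python
# Sketch of incremental cost-effectiveness ratio (ICER) computation across
# mutually exclusive strategies. The strategy names, costs, and QALYs below
# are illustrative placeholders, not values from the study.

def icers(strategies):
    """strategies: list of (name, cost, qalys) tuples.
    Returns {name: ICER vs. next-cheapest non-dominated strategy},
    with None for the cheapest (reference) strategy and infinity for
    strategies that cost more without adding QALYs (simple dominance)."""
    s = sorted(strategies, key=lambda x: x[1])   # ascending cost
    out = {}
    base_name, base_cost, base_q = s[0]
    out[base_name] = None                        # reference strategy
    for name, cost, q in s[1:]:
        if q <= base_q:
            out[name] = float('inf')             # dominated: pricier, no gain
            continue
        out[name] = (cost - base_cost) / (q - base_q)
        base_cost, base_q = cost, q              # becomes the new comparator
    return out

result = icers([("no screening", 0.0, 10.00),
                ("antibody q12m", 2000.0, 10.05),
                ("antibody+RNA q6m", 5000.0, 10.09)])
```

A fuller implementation would also remove strategies ruled out by extended dominance (a mix of two other strategies achieving the same QALYs for less) before computing the ratios.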

Abstract

State-transition modeling (STM) is an intuitive, flexible, and transparent approach of computer-based decision-analytic modeling, including both Markov model cohort simulation and individual-based (first-order Monte Carlo) microsimulation. Conceptualizing a decision problem in terms of a set of (health) states and transitions among these states, STM is one of the most widespread modeling techniques in clinical decision analysis, health technology assessment, and health-economic evaluation. STMs have been used in many different populations and diseases, and their applications range from personalized health care strategies to public health programs. Most frequently, state-transition models are used in the evaluation of risk factor interventions, screening, diagnostic procedures, treatment strategies, and disease management programs.

Abstract

State-transition modeling is an intuitive, flexible, and transparent approach of computer-based decision-analytic modeling including both Markov model cohort simulation and individual-based (first-order Monte Carlo) microsimulation. Conceptualizing a decision problem in terms of a set of (health) states and transitions among these states, state-transition modeling is one of the most widespread modeling techniques in clinical decision analysis, health technology assessment, and health-economic evaluation. State-transition models have been used in many different populations and diseases, and their applications range from personalized health care strategies to public health programs. Most frequently, state-transition models are used in the evaluation of risk factor interventions, screening, diagnostic procedures, treatment strategies, and disease management programs. The goal of this article was to provide consensus-based guidelines for the application of state-transition models in the context of health care. We structured the best practice recommendations in the following sections: choice of model type (cohort vs. individual-level model), model structure, model parameters, analysis, reporting, and communication. In each of these sections, we give a brief description, address the issues that are of particular relevance to the application of state-transition models, give specific examples from the literature, and provide best practice recommendations for state-transition modeling. These recommendations are directed both to modelers and to users of modeling results such as clinicians, clinical guideline developers, manufacturers, or policymakers.
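The two state-transition flavors distinguished above can be contrasted directly: a deterministic cohort simulation tracks fractions of a cohort across states, while an individual-level (first-order Monte Carlo) microsimulation draws each person's path at random and converges to the same expectation. The three-state model and its probabilities below are hypothetical.

```python
# Cohort simulation vs. first-order Monte Carlo microsimulation of the
# same hypothetical three-state model (well, sick, dead).
import random

P = {"well": {"well": 0.90, "sick": 0.08, "dead": 0.02},
     "sick": {"sick": 0.75, "dead": 0.25},
     "dead": {"dead": 1.0}}

def cohort_life_years(cycles=40):
    dist = {"well": 1.0, "sick": 0.0, "dead": 0.0}   # cohort proportions
    ly = 0.0
    for _ in range(cycles):
        ly += dist["well"] + dist["sick"]            # expected years alive
        nxt = {"well": 0.0, "sick": 0.0, "dead": 0.0}
        for state, frac in dist.items():
            for target, p in P[state].items():
                nxt[target] += frac * p
        dist = nxt
    return ly

def microsim_life_years(n=20000, cycles=40, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(n):                               # one random path per person
        state = "well"
        for _ in range(cycles):
            if state != "dead":
                total += 1
            states, probs = zip(*P[state].items())
            state = rng.choices(states, weights=probs)[0]
    return total / n

ly_cohort = cohort_life_years()
ly_micro = microsim_life_years()
```

The cohort run is exact and fast; the microsimulation approximates it with sampling noise, but unlike the cohort version it can carry individual history (prior events, time in state) that a memoryless Markov cohort cannot.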

The Cost-Effectiveness of Preexposure Prophylaxis for HIV Prevention in the United States in Men Who Have Sex With Men. Annals of Internal Medicine. Juusola, J. L., Brandeau, M. L., Owens, D. K., Bendavid, E. 2012; 156 (8): 541-U144

Abstract

Background: A recent randomized, controlled trial showed that daily oral preexposure chemoprophylaxis (PrEP) was effective for HIV prevention in men who have sex with men (MSM). The Centers for Disease Control and Prevention recently provided interim guidance for PrEP in MSM at high risk for HIV. Previous studies did not reach a consistent estimate of its cost-effectiveness. Objective: To estimate the effectiveness and cost-effectiveness of PrEP in MSM in the United States. Design: Dynamic model of HIV transmission and progression combined with a detailed economic analysis. Data Sources: Published literature. Target Population: MSM aged 13 to 64 years in the United States. Time Horizon: Lifetime. Perspective: Societal. Intervention: PrEP was evaluated in both the general MSM population and in high-risk MSM and was assumed to reduce infection risk by 44% on the basis of clinical trial results. Outcome Measures: New HIV infections, discounted quality-adjusted life-years (QALYs) and costs, and incremental cost-effectiveness ratios. Results: Initiating PrEP in 20% of MSM in the United States would reduce new HIV infections by an estimated 13% and result in a gain of 550,166 QALYs over 20 years at a cost of $172,091 per QALY gained. Initiating PrEP in a larger proportion of MSM would prevent more infections but at an increasing cost per QALY gained (up to $216,480 if all MSM receive PrEP). Preexposure chemoprophylaxis in only high-risk MSM can improve cost-effectiveness. For MSM with an average of 5 partners per year, PrEP costs approximately $50,000 per QALY gained. Providing PrEP to all high-risk MSM for 20 years would cost $75 billion more in health care-related costs than the status quo and $600,000 per HIV infection prevented, compared with incremental costs of $95 billion and $2 million per infection prevented for 20% coverage of all MSM. Results of Sensitivity Analysis: PrEP in the general MSM population would cost less than $100,000 per QALY gained if the daily cost of antiretroviral drugs for PrEP was less than $15 or if PrEP efficacy was greater than 75%. Limitation: When examining PrEP in high-risk MSM, the investigators did not model a mix of low- and high-risk MSM because of lack of data on mixing patterns. Conclusion: PrEP in the general MSM population could prevent a substantial number of HIV infections, but it is expensive. Use in high-risk MSM compares favorably with other interventions that are considered cost-effective but could result in annual PrEP expenditures of more than $4 billion. Primary Funding Source: National Institute on Drug Abuse, Department of Veterans Affairs, and National Institute of Allergy and Infectious Diseases.

Abstract

Background: Chronic hepatitis C virus is difficult to treat and affects approximately 3 million Americans. Protease inhibitors increase the effectiveness of standard therapy, but they are costly. A genetic assay may identify patients most likely to benefit from this treatment advance. Objective: To assess the cost-effectiveness of new protease inhibitors and an interleukin (IL)-28B genotyping assay for treating chronic hepatitis C virus. Design: Decision-analytic Markov model. Data Sources: Published literature and expert opinion. Target Population: Treatment-naive patients with chronic, genotype 1 hepatitis C virus monoinfection. Time Horizon: Lifetime. Perspective: Societal. Intervention: Strategies are defined by the use of IL-28B genotyping and type of treatment (standard therapy [pegylated interferon with ribavirin]; triple therapy [standard therapy and a protease inhibitor]). Interleukin-28B-guided triple therapy stratifies patients with CC genotypes to standard therapy and those with non-CC types to triple therapy. Outcome Measures: Discounted costs (in 2010 U.S. dollars) and quality-adjusted life-years (QALYs); incremental cost-effectiveness ratios. Results: For patients with mild and advanced fibrosis, universal triple therapy reduced the lifetime risk for hepatocellular carcinoma by 38% and 28%, respectively, and increased quality-adjusted life expectancy by 3% and 8%, respectively, compared with standard therapy. Gains from IL-28B-guided triple therapy were smaller. If the protease inhibitor costs $1100 per week, universal triple therapy costs $102,600 per QALY (mild fibrosis) or $51,500 per QALY (advanced fibrosis) compared with IL-28B-guided triple therapy, and $70,100 per QALY (mild fibrosis) and $36,300 per QALY (advanced fibrosis) compared with standard therapy. Results of Sensitivity Analysis: Results were sensitive to the cost of protease inhibitors and treatment adherence rates. Limitation: Data on the long-term comparative effectiveness of the new protease inhibitors are lacking. Conclusion: Both universal triple therapy and IL-28B-guided triple therapy are cost-effective when the least-expensive protease inhibitor is used for patients with advanced fibrosis. Primary Funding Source: Stanford University.

Abstract

Background: The effect of antiretroviral therapy (ART) interruption or intensification on health-related quality of life (HRQoL) in advanced HIV patients is unknown. Objective: To assess the impact of temporary treatment interruption and intensification of ART on HRQoL. Design: A 2 x 2 factorial open-label randomized controlled trial. Setting: Hospitals in the United States, Canada, and the United Kingdom. Patients: Multidrug-resistant (MDR) HIV patients. Interventions: Patients were randomized to receive a 12-wk interruption or not, and ART intensification or standard ART. Measurements: The Health Utilities Index (HUI3), EQ-5D, standard gamble (SG), time tradeoff (TTO), visual analog scale (VAS), and the Medical Outcomes Study HIV Health Survey (MOS-HIV). Results: There were no significant differences in HRQoL among the four groups during follow-up; however, there was a temporary significant decline in HRQoL on some measures within the interruption group during interruption (HUI3 -0.05, P = 0.03; VAS -5.9, P = 0.002; physical health summary -2.9, P = 0.001; mental health summary -1.9, P = 0.02). Scores declined slightly overall during follow-up. Multivariate analysis showed significantly lower HRQoL associated with some clinical events. Limitations: The results may not apply to HIV patients who have not experienced multiple treatment failures or who have not developed MDR HIV. Conclusions: Temporary ART interruption and ART intensification provided neither superior nor inferior HRQoL compared with no interruption and standard ART. Among surviving patients, HRQoL scores declined only slightly over years of follow-up in this advanced HIV cohort; however, approximately one-third of patients died during the trial follow-up. Lower HRQoL was associated with adverse clinical events.

The cost-effectiveness of symptom-based testing and routine screening for acute HIV infection in men who have sex with men in the USA. AIDS. Juusola, J. L., Brandeau, M. L., Long, E. F., Owens, D. K., Bendavid, E. 2011; 25 (14): 1779-1787

Abstract

Background: Acute HIV infection often causes influenza-like illness (ILI) and is associated with high infectivity. We estimated the effectiveness and cost-effectiveness of strategies to identify and treat acute HIV infection in men who have sex with men (MSM) in the USA. Design: Dynamic model of HIV transmission and progression. Methods: We evaluated three testing approaches: viral load testing for individuals with ILI, expanded screening with antibody testing, and expanded screening with antibody and viral load testing. We included treatment with antiretroviral therapy for individuals identified as acutely infected. Main Outcome Measures: New HIV infections, discounted quality-adjusted life years (QALYs) and costs, and incremental cost-effectiveness ratios. Results: At the present rate of HIV-antibody testing, we estimated that 538,000 new infections will occur among MSM over the next 20 years. Expanding antibody screening coverage to 90% of MSM annually reduces new infections by 2.8% and costs US$ 12,582 per QALY gained. Symptom-based viral load testing with ILI is more expensive than expanded antibody screening, but is more effective and costs US$ 22,786 per QALY gained. Combining expanded antibody screening with symptom-based viral load testing prevents twice as many infections compared to expanded antibody screening alone, and costs US$ 29,923 per QALY gained. Adding viral load testing to all annual HIV tests costs more than US$ 100,000 per QALY gained. Conclusion: Use of HIV viral load testing in MSM with ILI prevents more infections than does expanded annual antibody screening alone and is inexpensive relative to other screening interventions. Clinicians should consider symptom-based viral load testing in MSM, in addition to encouraging annual antibody screening.

Abstract

The effect of adherence, treatment failure, and comorbidities on the cost of HIV care is not well understood. To characterize the cost of HIV care, including combination antiretroviral treatment (ART), we conducted an observational study of administrative data on 1896 randomly selected HIV-infected patients and 288 trial participants with multidrug-resistant HIV seen at the US Veterans Health Administration (VHA), using comorbidity, cost, pharmacy, and laboratory data. Many (24.5%) of the HIV-infected patients in the random sample did not receive ART. Outpatient pharmacy accounted for 62.8% of the costs of patients highly adherent with ART, 32.2% of the cost of those with lower adherence, and 6.2% of the cost of those not receiving ART. Compared with patients not receiving ART, high adherence was associated with lower hospital cost but no greater total cost. Individuals with a low CD4 count (<50 cells/mm³) incurred 1.9 times the cost of patients with counts >500 cells/mm³. Most patients had medical, psychiatric, or substance abuse comorbidities; these conditions were associated with greater cost. Trial participants were less likely to have psychiatric and substance abuse comorbidities than the random sample of VHA patients with HIV. Patients receiving combination ART had higher medication costs but lower acute hospital cost. Poor control of HIV was associated with higher cost. The cost of psychiatric, substance abuse, rehabilitation, and long-term care, and of medications other than ART, often overlooked in HIV studies, was substantial.

The cost-effectiveness of a modestly effective HIV vaccine in the United States. Vaccine. Long, E. F., Owens, D. K. 2011; 29 (36): 6113-6124

Abstract

The recent RV144 clinical trial showed that an ALVAC/AIDSVAX prime-boost HIV vaccine regimen may confer partial immunity in recipients and reduce transmission by 31%. Trial data suggest that efficacy may initially exceed 70% but decline over the following 3.5 years. Estimating the potential health benefits associated with a one-time vaccination campaign, as well as the projected benefits of repeat booster vaccination, may inform future HIV vaccine research and licensing decisions. We developed a mathematical model to project the future course of the HIV epidemic in the United States under varying HIV vaccine scenarios. The model accounts for disease progression, infection transmission, antiretroviral therapy, and HIV-related morbidity and mortality. We projected HIV prevalence and incidence over time in multiple risk groups, and we estimated quality-adjusted life years (QALYs) and costs over a 10-year time horizon. We assumed an exponentially declining efficacy curve fit to trial data, and that subsequent vaccine boosters confer similar immunity. Variations in vaccine parameters were examined in sensitivity analysis. Under existing HIV prevention and treatment efforts, an estimated 590,000 HIV infections occur over 10 years. One-time vaccination achieving 60% coverage of adults could prevent 9.8% of projected new infections over 10 years (and prevent 34% of new infections in the first year) and cost approximately $91,000/QALY gained relative to the status quo, assuming $500 per vaccination series. Targeted vaccination strategies result in net cost savings for vaccines costing less than $750. One-time vaccination of 60% of all adults coupled with three-year boosters only for men who have sex with men and people who inject drugs could prevent 21% of infections for $81,000/QALY gained relative to vaccination of higher-risk subpopulations only. A program attaining 90% vaccination coverage prevents 15% of new HIV cases over 10 years (and approximately 50% of infections in the first year). A partially effective HIV vaccine with effectiveness similar to that observed in the RV144 trial would provide large health benefits in the United States and could meet conventionally accepted cost-effectiveness thresholds. Strategies that prioritize key populations are most efficient, but broader strategies provide greater total population health benefit.
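The exponentially declining efficacy curve this model assumes can be sketched in a few lines; the initial efficacy and half-life below are illustrative placeholders, not the values actually fitted to the RV144 data.

```python
import math

def vaccine_efficacy(t_years, e0=0.70, half_life=1.0):
    """Exponentially waning efficacy: starts at e0 and halves every half_life years.
    Both parameters are hypothetical, chosen only to illustrate the shape."""
    decay_rate = math.log(2) / half_life
    return e0 * math.exp(-decay_rate * t_years)

# Efficacy starts high and wanes over the ~3.5-year window described in the trial
print(round(vaccine_efficacy(0.0), 2))  # 0.7
print(round(vaccine_efficacy(3.5), 2))  # 0.06
```

In a model of this kind, a booster simply resets the curve to its initial efficacy at the booster time.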

Abstract

The prime-boost HIV vaccine regimen used in the recent RV144 trial resulted in modest efficacy of 31% over 3.5 years, but efficacy was substantially higher in the first year post-vaccination. We sought to explore the potential impact of a vaccine with rapidly waning efficacy in a South African population. We explored two strategies using a dynamic compartmental epidemic model for heterosexual transmission of HIV: (1) vaccination of a single cohort (30%, 60%, or 90% of the initial population), with exponentially waning efficacy but booster vaccinations at 5- or 2-year intervals, and (2) continuous vaccination of the unvaccinated population at the same coverage levels (30%, 60%, or 90%) but with a constant-efficacy vaccine of short duration. We also examined potential changes in post-vaccination condom use. The single-cohort vaccination strategies did not have a substantial impact on HIV prevalence, although without boosters they still prevented 2-6% of the expected infections at 20 years, depending on the population coverage. The 5-year and 2-year booster strategies prevented 8-24% and 17-45% of the expected infections, respectively. Continuous vaccination to maintain population coverage levels resulted in more substantial reductions in population HIV prevalence and greater numbers of infections prevented: HIV prevalence at 20 years was reduced from 23% to 8-14%, and the number of expected infections was decreased by 34-59%, depending on the population coverage level. Moderate changes in post-vaccination condom use did not substantially affect these outcomes. An HIV vaccine with partial efficacy and declining protection similar to the RV144 vaccine could prevent a substantial proportion of HIV infections if booster vaccinations were effective and available.
Our estimates of the population impact of vaccination would be improved by further understanding of the duration of protection, the effectiveness of booster vaccination, and whether the vaccine efficacy varies between subpopulations at higher and lower risk of exposure.

Abstract

Circumcision significantly reduces female-to-male transmission of HIV infection, but changes in behavior may influence the overall impact on transmission. We sought to explore these effects, particularly for societies where women have less power to negotiate safe sex. We developed a compartmental epidemic model to simulate the population-level impact of various circumcision programs on heterosexual HIV transmission in Soweto. We incorporated gender-specific negotiation of condom use in sexual partnerships and explored post-circumcision changes in condom use. A 5-year prevention program in which only an additional 10% of uncircumcised males undergo circumcision each year, for example, would prevent 13% of the expected new HIV infections over 20 years. Outcomes were sensitive to potential changes in behavior and differed by gender. For Southern Africa, even modest programs offering circumcision would result in significant benefits. Because decreases in male condom use could diminish these benefits, particularly for women, circumcision programs should emphasize risk-reduction counseling.
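Compartmental epidemic models of the kind used here track population shares moving between susceptible and infected states each time step. The following is a deliberately minimal susceptible-infected (SI) sketch with made-up parameters; it is not the calibrated Soweto model.

```python
def simulate_prevalence(s0=0.77, i0=0.23, beta=0.05, years=20):
    """Toy mass-action SI model; beta, s0, and i0 are illustrative only."""
    s, i = s0, i0
    prevalence = []
    for _ in range(years):
        new_infections = beta * s * i  # annual force of infection
        s -= new_infections
        i += new_infections
        prevalence.append(i)
    return prevalence

trajectory = simulate_prevalence()
# prevalence rises monotonically toward saturation in this simplified model
```

Interventions such as circumcision or increased condom use enter models like this by scaling the transmission parameter (here `beta`) for the affected compartments.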

Abstract

Recombinant factor VIIa (rFVIIa), a hemostatic agent approved for hemophilia, is increasingly used for off-label indications. To evaluate the benefits and harms of rFVIIa use for 5 off-label, in-hospital indications (intracranial hemorrhage, cardiac surgery, trauma, liver transplantation, and prostatectomy), 10 databases (including PubMed, EMBASE, and the Cochrane Library) were queried from inception through December 2010; articles published in English were analyzed. Two reviewers independently screened titles and abstracts to identify clinical use of rFVIIa for the selected indications and identified all randomized, controlled trials (RCTs) and observational studies for full-text review. Two reviewers independently assessed study characteristics and rated study quality and indication-wide strength of evidence. 16 RCTs, 26 comparative observational studies, and 22 noncomparative observational studies met inclusion criteria. Identified comparators were limited to placebo (RCTs) or usual care (observational studies). For intracranial hemorrhage, mortality was not improved with rFVIIa use across a range of doses. Arterial thromboembolism was increased with medium-dose rFVIIa use (risk difference [RD], 0.03 [95% CI, 0.01 to 0.06]) and high-dose rFVIIa use (RD, 0.06 [CI, 0.01 to 0.11]). For adult cardiac surgery, there was no mortality difference, but there was an increased risk for thromboembolism (RD, 0.05 [CI, 0.01 to 0.10]) with rFVIIa. For body trauma, there were no differences in mortality or thromboembolism, but there was a reduced risk for the acute respiratory distress syndrome (RD, -0.05 [CI, -0.08 to -0.02]). Mortality was higher in observational studies than in RCTs. The amount and strength of evidence were low for most outcomes and indications, and publication bias could not be excluded. Limited available evidence for 5 off-label indications suggests no mortality reduction with rFVIIa use; for some indications, rFVIIa increases thromboembolism.

Abstract

Guidance is needed on best medical management for advanced HIV disease with multidrug resistance (MDR) and limited retreatment options. We assessed two novel antiretroviral (ARV) treatment approaches in this setting. We conducted a 2×2 factorial randomized open-label controlled trial in patients with a CD4 count ≤300 cells/µl who had ARV treatment (ART) failure requiring retreatment, randomizing to two options: (a) re-treatment with either standard (≤4 ARVs) or intensive (≥5 ARVs) ART, and (b) treatment starting either immediately or after a 12-week monitored ART interruption. The primary outcome was time to a first AIDS-defining event (ADE) or death from any cause. Analysis was by intention to treat. From 2001 to 2006, 368 patients were randomized. At baseline, mean age was 48 years, 2% were women, median CD4 count was 106/µl, mean viral load was 4.74 log(10) copies/ml, and 59% had a prior AIDS diagnosis. Median follow-up was 4.0 years, in 1249 person-years of observation. There were no statistically significant differences in the primary composite outcome of ADE or death between re-treatment options of standard versus intensive ART (hazard ratio 1.17; CI 0.86-1.59), or between immediate retreatment initiation versus interruption before re-treatment (hazard ratio 0.93; CI 0.68-1.30), or in the rate of non-HIV-associated serious adverse events between re-treatment options. We did not observe clinical benefit or harm, assessed by the primary outcome, in this largest and longest trial exploring both ART interruption and intensification in advanced MDR HIV infection with poor retreatment options. Clinicaltrials.gov NCT00050089.

Abstract

Injection drug use (IDU) and heterosexual virus transmission both contribute to the growing mixed HIV epidemics in Eastern Europe and Central Asia. In Ukraine, chosen in this study as a representative country, IDU-related risk behaviors cause half of new infections, but few injection drug users (IDUs) receive methadone substitution therapy, and only 10% of eligible individuals receive antiretroviral therapy (ART). The appropriate resource allocation between these programs has not been studied. We estimated the effectiveness and cost-effectiveness of strategies for expanding methadone substitution therapy programs and ART in mixed HIV epidemics, using Ukraine as a case study. We developed a dynamic compartmental model of the HIV epidemic in a population of non-IDUs, IDUs using opiates, and IDUs on methadone substitution therapy, stratified by HIV status, and populated it with data from Ukraine. We considered interventions expanding methadone substitution therapy, increasing access to ART, or both. We measured health care costs, quality-adjusted life years (QALYs), HIV prevalence, infections averted, and incremental cost-effectiveness. Without incremental interventions, HIV prevalence reached 67.2% (IDUs) and 0.88% (non-IDUs) after 20 years. Offering methadone substitution therapy to 25% of IDUs reduced prevalence most effectively (to 53.1% IDUs, 0.80% non-IDUs) and was most cost-effective, averting 4,700 infections and adding 76,000 QALYs compared with no intervention, at US$530/QALY gained. Expanding both ART (80% coverage of those eligible for ART according to WHO criteria) and methadone substitution therapy (25% coverage) was the next most cost-effective strategy, adding 105,000 QALYs at US$1,120/QALY gained versus the methadone-substitution-therapy-only strategy and averting 8,300 infections versus no intervention. Expanding only ART (80% coverage) added 38,000 QALYs at US$2,240/QALY gained versus the methadone-substitution-therapy-only strategy, and averted 4,080 infections versus no intervention. Offering ART to 80% of non-IDUs eligible for treatment by WHO criteria, but only 10% of IDUs, averted only 1,800 infections versus no intervention and was not cost-effective. Methadone substitution therapy is a highly cost-effective option for the growing mixed HIV epidemic in Ukraine. A strategy that expands both methadone substitution therapy and ART to high levels is the most effective intervention and is very cost-effective by WHO criteria. When expanding ART, access to methadone substitution therapy provides additional benefit in infections averted. Our findings are potentially relevant to other settings with mixed HIV epidemics.

Abstract

Health care costs in the United States are increasing unsustainably, and further efforts to control costs are inevitable and essential. Efforts to control expenditures should focus on the value, in addition to the costs, of health care interventions. Whether an intervention provides high value depends on assessing whether its health benefits justify its costs. High-cost interventions may provide good value because they are highly beneficial; conversely, low-cost interventions may have little or no value if they provide little benefit. Thus, the challenge becomes determining how to slow the rate of increase in costs while preserving high-value, high-quality care. A first step is to decrease or eliminate care that provides no benefit and may even be harmful. A second step is to provide medical interventions that provide good value: medical benefits that are commensurate with their costs. This article discusses 3 key concepts for understanding how to assess the value of health care interventions. First, assessing the benefits, harms, and costs of an intervention is essential to understand whether it provides good value. Second, assessing the cost of an intervention should include not only the cost of the intervention itself but also any downstream costs that occur because the intervention was performed. Third, the incremental cost-effectiveness ratio estimates the additional cost required to obtain additional health benefits and provides a key measure of the value of a health care intervention.
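The incremental cost-effectiveness ratio described in the third point reduces to simple arithmetic; the figures below are invented for illustration only.

```python
def icer(cost_new, cost_ref, qalys_new, qalys_ref):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_ref) / (qalys_new - qalys_ref)

# Hypothetical intervention vs. usual care (numbers are illustrative)
ratio = icer(cost_new=60_000, cost_ref=50_000, qalys_new=10.5, qalys_ref=10.0)
print(f"${ratio:,.0f} per QALY gained")  # $20,000 per QALY gained
```

Note that the comparison is always against the next-best alternative, which is why downstream costs triggered by an intervention belong in `cost_new`.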

Abstract

Diagnostic imaging is indicated for patients with low back pain only if they have severe or progressive neurologic deficits or signs or symptoms that suggest a serious or specific underlying condition. In other patients, evidence indicates that routine imaging is not associated with clinically meaningful benefits but can lead to harms. Addressing inefficiencies in diagnostic testing could minimize potential harms to patients and have a large effect on use of resources by reducing both direct and downstream costs. In this area, more testing does not equate to better care. Implementing a selective approach to low back imaging, as suggested by the American College of Physicians and American Pain Society guideline on low back pain, would provide better care to patients, improve outcomes, and reduce costs.

Abstract

To investigate the cost-effectiveness of elective induction of labor at 41 weeks in nulliparous women, we designed a decision-analytic model comparing induction of labor at 41 weeks vs expectant management with antenatal testing until 42 weeks in nulliparas. Baseline assumptions were derived from the literature as well as from analysis of the National Birth Cohort dataset, and included an intrauterine fetal demise rate of 0.12% in the 41st week and a cesarean rate of 27% in women induced at 41 weeks. One-way and multiway sensitivity analyses were conducted to examine the robustness of the findings. Compared with expectant management, induction of labor is cost-effective, with an incremental cost of $10,945 per quality-adjusted life year gained. Induction of labor at 41 weeks also resulted in a lower rate of adverse obstetric outcomes, including neonatal demise, shoulder dystocia, meconium aspiration syndrome, and severe perineal lacerations. Elective induction of labor at 41 weeks is cost-effective and improves outcomes.

Abstract

Warfarin reduces the risk for ischemic stroke in patients with atrial fibrillation (AF) but increases the risk for hemorrhage. Dabigatran is a fixed-dose, oral direct thrombin inhibitor with similar or reduced rates of ischemic stroke and intracranial hemorrhage in patients with AF compared with those of warfarin. To estimate the quality-adjusted survival, costs, and cost-effectiveness of dabigatran compared with adjusted-dose warfarin for preventing ischemic stroke in patients 65 years or older with nonvalvular AF, we used a Markov decision model with data from the RE-LY (Randomized Evaluation of Long-Term Anticoagulation Therapy) trial and other published studies of anticoagulation; the cost of dabigatran was estimated on the basis of pricing in the United Kingdom. The target population was patients aged 65 years or older with nonvalvular AF and risk factors for stroke (CHADS2 score ≥1 or equivalent) and no contraindications to anticoagulation, analyzed over a lifetime horizon and from a societal perspective. The strategies compared were warfarin anticoagulation (target international normalized ratio, 2.0 to 3.0); dabigatran, 110 mg twice daily (low dose); and dabigatran, 150 mg twice daily (high dose). Outcome measures were quality-adjusted life-years (QALYs), costs (in 2008 U.S. dollars), and incremental cost-effectiveness ratios. The quality-adjusted life expectancy was 10.28 QALYs with warfarin, 10.70 QALYs with low-dose dabigatran, and 10.84 QALYs with high-dose dabigatran. Total costs were $143,193 for warfarin, $164,576 for low-dose dabigatran, and $168,398 for high-dose dabigatran. The incremental cost-effectiveness ratios compared with warfarin were $51,229 per QALY for low-dose dabigatran and $45,372 per QALY for high-dose dabigatran. The model was sensitive to the cost of dabigatran but was relatively insensitive to other model inputs. The incremental cost-effectiveness ratio increased to $50,000 per QALY at a cost of $13.70 per day for high-dose dabigatran but remained less than $85,000 per QALY over the full range of model inputs evaluated. The cost-effectiveness of high-dose dabigatran improved with increasing risk for stroke and intracranial hemorrhage. Event rates were largely derived from a single randomized clinical trial and extrapolated to a 35-year time frame from clinical trials with approximately 2-year follow-up. In patients aged 65 years or older with nonvalvular AF at increased risk for stroke (CHADS2 score ≥1 or equivalent), dabigatran may be a cost-effective alternative to warfarin, depending on pricing in the United States. Funding was provided by the American Heart Association and the Veterans Affairs Health Services Research & Development Service.
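The ratios reported in this abstract follow directly from its cost and QALY figures. Recomputing from the rounded published inputs lands close to, but not exactly on, the reported values; the gap reflects rounding in the abstract's numbers.

```python
def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio from cost and QALY differences."""
    return delta_cost / delta_qalys

# Totals as reported in the abstract above
warfarin = {"cost": 143_193, "qalys": 10.28}
high_dose_dabigatran = {"cost": 168_398, "qalys": 10.84}

ratio = icer(high_dose_dabigatran["cost"] - warfarin["cost"],
             high_dose_dabigatran["qalys"] - warfarin["qalys"])
# about $45,009/QALY from the rounded inputs; the abstract reports $45,372
```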

Abstract

Family members of patients with established long-QT syndrome (LQTS) often lack definitive clinical findings, yet may have inherited an LQTS mutation and be at risk of sudden death. Genetic testing can identify mutations in 75% of patients with LQTS, but genetic testing of family members remains controversial. We used a Markov model to assess the cost-effectiveness of 3 strategies for treating an asymptomatic 10-year-old, first-degree relative of a patient with clinically evident LQTS. In the genetic testing strategy, relatives undergo genetic testing only for the mutation identified in the index patient, and relatives who test positive for the mutation are treated with β-blockers. This strategy was compared with (1) empirical treatment of relatives with β-blockers and (2) watchful waiting, with treatment only after development of symptoms. The genetic testing strategy resulted in better survival and quality-adjusted life years at higher cost, with a cost-effectiveness ratio of $67,400 per quality-adjusted life year gained compared with watchful waiting. The cost-effectiveness of the genetic testing strategy improved to less than $50,000 per quality-adjusted life year gained when applied selectively to (1) relatives with higher clinical suspicion of LQTS (pretest probability 65% to 81%), (2) families with a higher-than-average risk of sudden death, or (3) larger families (2 or more first-degree relatives tested). Genetic testing of young first-degree relatives of patients with definite LQTS is moderately expensive, but can reach acceptable thresholds of cost-effectiveness when applied to selected patients.

Abstract

The World Health Organization (WHO) recently changed its first-line antiretroviral treatment guidelines in resource-limited settings; the cost-effectiveness of the new guidelines is unknown. In a comparative effectiveness and cost-effectiveness analysis using a simulation of HIV disease and treatment in South Africa, we compared the life expectancy, quality-adjusted life expectancy, lifetime costs, and cost-effectiveness of five initial regimens. Four are currently recommended by the WHO: tenofovir/lamivudine/efavirenz; tenofovir/lamivudine/nevirapine; zidovudine/lamivudine/efavirenz; and zidovudine/lamivudine/nevirapine. The fifth is the most common regimen in current use: stavudine/lamivudine/nevirapine. Virologic suppression and toxicities determine regimen effectiveness and cost-effectiveness. Choice of first-line regimen is associated with a difference of nearly 12 months of quality-adjusted life expectancy, from 135.2 months (tenofovir/lamivudine/efavirenz) to 123.7 months (stavudine/lamivudine/nevirapine). Stavudine/lamivudine/nevirapine is more costly and less effective than zidovudine/lamivudine/nevirapine. Initiating treatment with a regimen containing tenofovir/lamivudine/nevirapine is associated with an incremental cost-effectiveness ratio of $1045 per quality-adjusted life year compared with zidovudine/lamivudine/nevirapine. Using tenofovir/lamivudine/efavirenz was associated with the highest survival, fewest opportunistic diseases, lowest rate of regimen substitution, and an incremental cost-effectiveness ratio of $5949 per quality-adjusted life year gained compared with tenofovir/lamivudine/nevirapine. Zidovudine/lamivudine/efavirenz was more costly and less effective than tenofovir/lamivudine/nevirapine. Results were sensitive to the rates of toxicities and the disutility associated with each toxicity. Among the options recommended by WHO, we estimate only three should be considered under normal circumstances; choice among those depends on available resources and willingness to pay. Stavudine/lamivudine/nevirapine is associated with the poorest quality-adjusted survival and higher costs than zidovudine/lamivudine/nevirapine.

The Cost-Effectiveness and Population Outcomes of Expanded HIV Screening and Antiretroviral Treatment in the United States. Annals of Internal Medicine. Long, E. F., Brandeau, M. L., Owens, D. K. 2010; 153 (12): 778-?

Abstract

Although recent guidelines call for expanded routine screening for HIV, resources for antiretroviral therapy (ART) are limited, and all eligible persons are not currently receiving treatment. To evaluate the effects on the U.S. HIV epidemic of expanded ART, HIV screening, or interventions to reduce risk behavior, we used a dynamic mathematical model of HIV transmission and disease progression with cost-effectiveness analysis, drawing data from the published literature. The target population comprised high-risk (injection drug users and men who have sex with men) and low-risk persons aged 15 to 64 years in the United States, analyzed over 20-year and lifetime horizons (costs and quality-adjusted life-years [QALYs]) from a societal perspective. Interventions were expanded HIV screening and counseling, treatment with ART, or both; outcome measures were new HIV infections, discounted costs and QALYs, and incremental cost-effectiveness ratios. One-time HIV screening of low-risk persons coupled with annual screening of high-risk persons could prevent 6.7% of a projected 1.23 million new infections and cost $22,382 per QALY gained, assuming a 20% reduction in sexual activity after screening. Expanding ART utilization to 75% of eligible persons prevents 10.3% of infections and costs $20,300 per QALY gained. A combination strategy prevents 17.3% of infections and costs $21,580 per QALY gained. With no reduction in sexual activity, expanded screening prevents 3.7% of infections. Earlier ART initiation when a CD4 count is greater than 0.350 × 10⁹ cells/L prevents 20% to 28% of infections. Additional efforts to halve high-risk behavior could reduce infections by 65%. The model of disease progression and treatment was simplified, and acute HIV screening was excluded. Expanding HIV screening and treatment simultaneously offers the greatest health benefit and is cost-effective. However, even substantial expansion of HIV screening and treatment programs is not sufficient to markedly reduce the U.S. HIV epidemic without substantial reductions in risk behavior. Funding was provided by the National Institute on Drug Abuse, the National Institutes of Health, and the Department of Veterans Affairs.

Abstract

Many myocardial infarctions and strokes occur in individuals with low-density lipoprotein cholesterol levels below recommended treatment thresholds. High-sensitivity C-reactive protein (hs-CRP) testing has been advocated to identify low- and intermediate-risk individuals who may benefit from statin therapy. A decision-analytic Markov model was used to follow hypothetical cohorts of individuals with normal lipid levels but without coronary artery disease, peripheral arterial disease, or diabetes mellitus. The model compared current Adult Treatment Panel III practice guidelines; a strategy of hs-CRP screening in those without an indication for statin treatment by current practice guidelines, followed by treatment only in those with elevated hs-CRP levels; and a strategy of statin therapy at specified predicted risk thresholds without hs-CRP testing. Risk-based treatment without hs-CRP testing was the most cost-effective strategy, assuming that statins were equally effective regardless of hs-CRP status. However, if normal hs-CRP levels identified a subgroup with little or no benefit from statin therapy (<20% relative risk reduction), then hs-CRP screening would be the optimal strategy. If harms from statin use were greater than generally recognized, then use of current clinical guidelines would be the optimal strategy. Risk-based statin treatment without hs-CRP testing is more cost-effective than hs-CRP screening, assuming that statins have good long-term safety and provide benefits among low-risk people with normal hs-CRP.

Abstract

Universal testing and treatment holds promise for reducing the burden of human immunodeficiency virus (HIV) in sub-Saharan Africa, but linkage from testing to treatment sites and retention in care are inadequate. We developed a simulation of the HIV epidemic and HIV disease progression in South Africa to compare the outcomes of the present HIV treatment campaign (status quo) with 4 HIV testing-and-treating strategies that increase access to antiretroviral therapy: (1) universal testing and treatment without changes in linkage to care and loss to follow-up; (2) universal testing and treatment with improved linkage to care; (3) universal testing and treatment with reduced loss to follow-up; and (4) comprehensive HIV care with universal testing and treatment, improved linkage to care, and reduced loss to follow-up. The main outcome measures were survival benefits, new HIV infections, and HIV prevalence. Compared with the status quo strategy, universal testing and treatment (1) was associated with a mean (95% uncertainty bounds) life expectancy gain of 12.0 months (11.3-12.2 months) and 35.3% (32.7%-37.5%) fewer HIV infections over a 10-year time horizon. Improved linkage to care (2), prevention of loss to follow-up (3), and comprehensive HIV care (4) provided substantial additional benefits: life expectancy gains compared with the status quo strategy were 16.1, 18.6, and 22.2 months, and new infections were 55.5%, 51.4%, and 73.2% lower, respectively. In sensitivity analysis, comprehensive HIV care reduced new infections by 69.7% to 76.7% under a broad set of assumptions. Universal testing and treatment with current levels of linkage to care and loss to follow-up could substantially reduce the HIV death toll and new HIV infections. However, increasing linkage to care and preventing loss to follow-up provides nearly twice the benefits of universal testing and treatment alone.

The Development of Clinical Practice Guidelines and Guidance Statements of the American College of Physicians: Summary of Methods. Annals of Internal Medicine. Qaseem, A., Snow, V., Owens, D. K., Shekelle, P. 2010; 153 (3): 194-U95

Abstract

The American College of Physicians (ACP) established its evidence-based clinical practice guidelines program in 1981. The ACP's Guidelines Committee and the staff of the Clinical Programs and Quality of Care Department develop the clinical recommendations. The ACP develops 2 different types of clinical recommendations: clinical practice guidelines and clinical guidance statements. The ACP clinical practice guidelines and guidance statements follow a multistep development process that includes a systematic review of the evidence, deliberation of the evidence by the committee, summary recommendations, and evidence and recommendation grading. All ACP clinical practice guidelines and clinical guidance statements, if not updated, are considered automatically withdrawn or invalid 5 years after publication or once an update has been issued.

Abstract

The CDC recommends routine voluntary HIV testing of all patients 13-64 years of age. Despite this recommendation, HIV testing rates are low even among those at identifiable risk, and many patients do not return to receive their results. To examine the costs and benefits of strategies to improve HIV testing and receipt of results, we performed a cost-effectiveness analysis based on a Markov model. Acceptance of testing, return rates, and related costs were derived from a randomized trial of 251 patients; long-term costs and health outcomes were derived from the literature. The target population was primary-care patients with unknown HIV status. We compared three intervention models for HIV counseling and testing: Model A = traditional HIV counseling and testing; Model B = nurse-initiated routine screening with traditional HIV testing and counseling; Model C = nurse-initiated routine screening with rapid HIV testing and streamlined counseling. Outcome measures were life-years, quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness. Without consideration of the benefit from reduced HIV transmission, Model A resulted in per-patient lifetime discounted costs of $48,650 and benefits of 16.271 QALYs. Model B increased lifetime costs by $53 and benefits by 0.0013 QALYs (corresponding to 0.48 quality-adjusted life days). Model C cost $66 more than Model A, with an increase of 0.0018 QALYs (0.66 quality-adjusted life days) and an incremental cost-effectiveness of $36,390/QALY. When we included the benefit from reduced HIV transmission, Model C cost $10,660/QALY relative to Model A. The cost-effectiveness of Model C was robust in sensitivity analyses. In a primary-care population, nurse-initiated routine screening with rapid HIV testing and streamlined counseling increased rates of testing and receipt of test results and was cost-effective compared with traditional HIV testing strategies.

Abstract

OBJECTIVE: To establish guidance on grading strength of evidence for the Evidence-based Practice Center (EPC) program of the U.S. Agency for Healthcare Research and Quality. STUDY DESIGN AND SETTING: Authors reviewed authoritative systems for grading strength of evidence, identified domains and methods that should be considered when grading bodies of evidence in systematic reviews, considered public comments on an earlier draft, and discussed the approach with representatives of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group. RESULTS: The EPC approach is conceptually similar to the GRADE system of evidence rating; it requires assessment of four domains: risk of bias, consistency, directness, and precision. Additional domains to be used when appropriate include dose-response association, presence of confounders that would diminish an observed effect, strength of association, and publication bias. Strength of evidence receives a single grade: high, moderate, low, or insufficient. We give definitions, examples, mechanisms for scoring domains, and an approach for assigning strength of evidence. CONCLUSION: EPCs should grade strength of evidence separately for each major outcome and, for comparative effectiveness reviews, all major comparisons. We will collaborate with the GRADE group to address ongoing challenges in assessing the strength of evidence.

Abstract

BACKGROUND: Sodium consumption raises blood pressure, increasing the risk for heart attack and stroke. Several countries, including the United States, are considering strategies to decrease population sodium intake. OBJECTIVE: To assess the cost-effectiveness of 2 population strategies to reduce sodium intake: government collaboration with food manufacturers to voluntarily cut sodium in processed foods, modeled on the United Kingdom experience, and a sodium tax. DESIGN: A Markov model was constructed with 4 health states: well, acute myocardial infarction (MI), acute stroke, and history of MI or stroke. DATA SOURCES: Medical Expenditure Panel Survey (2006), Framingham Heart Study (1980 to 2003), Dietary Approaches to Stop Hypertension trial, and other published data. TARGET POPULATION: U.S. adults aged 40 to 85 years. TIME HORIZON: Lifetime. PERSPECTIVE: Societal. OUTCOME MEASURES: Incremental costs (2008 U.S. dollars), quality-adjusted life-years (QALYs), and MIs and strokes averted. RESULTS OF BASE-CASE ANALYSIS: Collaboration with industry that decreases mean population sodium intake by 9.5% averts 513 885 strokes and 480 358 MIs over the lifetime of adults aged 40 to 85 years who are alive today compared with the status quo, increasing QALYs by 2.1 million and saving $32.1 billion in medical costs. A tax on sodium that decreases population sodium intake by 6% increases QALYs by 1.3 million and saves $22.4 billion over the same period. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumption that consumers have no disutility with modest reductions in sodium intake. LIMITATION: Efforts to reduce population sodium intake could result in other dietary changes that are difficult to predict. CONCLUSION: Strategies to reduce sodium intake on a population level in the United States are likely to substantially reduce stroke and MI incidence, which would save billions of dollars in medical expenses. PRIMARY FUNDING SOURCE: Department of Veterans Affairs, Stanford University, and National Science Foundation.
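Several of the analyses above rest on Markov cohort models like the four-state model this abstract describes. As an illustration of the mechanics only, the sketch below runs a cohort through hypothetical well / acute MI / acute stroke / post-event states; every transition probability, utility weight, and the discount rate are invented placeholders, not the paper's calibrated inputs.

```python
# Hypothetical 4-state Markov cohort model (state names from the abstract:
# well, acute MI, acute stroke, history of MI or stroke). All numbers below
# are illustrative placeholders only.
P = {  # annual transition probabilities; each row sums to 1
    "well":    {"well": 0.97, "mi": 0.01, "stroke": 0.01, "history": 0.01},
    "mi":      {"history": 1.0},
    "stroke":  {"history": 1.0},
    "history": {"mi": 0.02, "stroke": 0.02, "history": 0.96},
}
UTILITY = {"well": 1.0, "mi": 0.7, "stroke": 0.6, "history": 0.8}  # QALY weights

def cohort_qalys(years=45, discount=0.03):
    """Advance the cohort one annual cycle at a time, summing discounted QALYs."""
    dist = {"well": 1.0, "mi": 0.0, "stroke": 0.0, "history": 0.0}
    total = 0.0
    for t in range(years):
        total += sum(dist[s] * UTILITY[s] for s in dist) / (1 + discount) ** t
        nxt = {s: 0.0 for s in dist}
        for s, frac in dist.items():
            for s2, p in P[s].items():
                nxt[s2] += frac * p
        dist = nxt
    return total

print(round(cohort_qalys(), 2))
```

A cost-effectiveness model pairs a run like this with per-state costs, then compares two runs (status quo versus intervention) to obtain the incremental cost per QALY.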

Abstract

The cost-effectiveness of alternative approaches to the diagnosis of small-bowel Crohn's disease is unknown. This study evaluates whether computed tomographic enterography (CTE) is a cost-effective alternative to small-bowel follow-through (SBFT) and whether capsule endoscopy is a cost-effective third test in patients in whom a high suspicion of disease remains after 2 previous negative tests. A decision-analytic model was developed to compare the lifetime costs and benefits of each diagnostic strategy. Patients were considered with low (20%) and high (75%) pretest probability of small-bowel Crohn's disease. Effectiveness was measured in quality-adjusted life-years (QALYs) gained. Parameter assumptions were tested with sensitivity analyses. With a moderate to high pretest probability of small-bowel Crohn's disease, and a higher likelihood of isolated jejunal disease, follow-up evaluation with CTE has an incremental cost-effectiveness ratio of less than $54,000/QALY gained compared with SBFT. The addition of capsule endoscopy after ileocolonoscopy and negative CTE or SBFT costs greater than $500,000 per QALY gained in all scenarios. Results were not sensitive to costs of tests or complications but were sensitive to test accuracies. The cost-effectiveness of these strategies depends critically on the pretest probability of Crohn's disease and on whether the terminal ileum is examined at ileocolonoscopy. CTE is a cost-effective alternative to SBFT in patients with moderate to high suspicion of small-bowel Crohn's disease. The addition of capsule endoscopy as a third test is not cost-effective, even in patients with high pretest probability of disease.

Abstract

The optimal community-level approach to control pandemic influenza is unknown. We estimated the health outcomes and costs of combinations of 4 social distancing strategies and 2 antiviral medication strategies to mitigate an influenza pandemic for a demographically typical US community. We used a social network, agent-based model to estimate strategy effectiveness and an economic model to estimate health resource use and costs. We used data from the literature to estimate clinical outcomes and health care utilization. At 1% influenza mortality, moderate infectivity (R0 of 2.1 or greater), and 60% population compliance, the preferred strategy is adult and child social distancing, school closure, and antiviral treatment and prophylaxis. This strategy reduces the prevalence of cases in the population from 35% to 10%, averts 2480 cases per 10,000 population, costs $2700 per case averted, and costs $31,300 per quality-adjusted life-year gained, compared with the same strategy without school closure. The addition of school closure to adult and child social distancing and antiviral treatment and prophylaxis, if available, is not cost-effective for viral strains with low infectivity (R0 of 1.6 and below) and low case fatality rates (below 1%). High population compliance lowers costs to society substantially when the pandemic strain is severe (R0 of 2.1 or greater). Multilayered mitigation strategies that include adult and child social distancing, use of antivirals, and school closure are cost-effective for a moderate to severe pandemic. Choice of strategy should be driven by the severity of the pandemic, as defined by the case fatality rate and infectivity.

Abstract

BACKGROUND: Decisions on the timing and extent of vaccination against pandemic (H1N1) 2009 virus are complex. OBJECTIVE: To estimate the effectiveness and cost-effectiveness of pandemic influenza (H1N1) vaccination under different scenarios in October or November 2009. DESIGN: Compartmental epidemic model in conjunction with a Markov model of disease progression. DATA SOURCES: Literature and expert opinion. TARGET POPULATION: Residents of a major U.S. metropolitan city with a population of 8.3 million. TIME HORIZON: Lifetime. PERSPECTIVE: Societal. INTERVENTION: Vaccination in mid-October or mid-November 2009. OUTCOME MEASURES: Infections and deaths averted, costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Assuming each primary infection causes 1.5 secondary infections, vaccinating 40% of the population in October or November would be cost-saving. Vaccination in October would avert 2051 deaths, gain 69 679 QALYs, and save $469 million compared with no vaccination; vaccination in November would avert 1468 deaths, gain 49 422 QALYs, and save $302 million. RESULTS OF SENSITIVITY ANALYSIS: Vaccination is even more cost-saving if longer incubation periods, lower rates of infectiousness, or increased implementation of nonpharmaceutical interventions delay time to the peak of the pandemic. Vaccination saves fewer lives and is less cost-effective if the epidemic peaks earlier than mid-October. LIMITATIONS: The model assumed homogeneous mixing of case-patients and contacts; heterogeneous mixing would result in faster initial spread, followed by slower spread. Additional costs and savings not included in the model would make vaccination more cost-saving. CONCLUSION: Earlier vaccination against pandemic (H1N1) 2009 prevents more deaths and is more cost-saving. Complete population coverage is not necessary to reduce the viral reproductive rate sufficiently to help shorten the pandemic. PRIMARY FUNDING SOURCE: Agency for Healthcare Research and Quality and National Institute on Drug Abuse.
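The compartmental epidemic model named in the design can be illustrated with a minimal SIR system. The toy below is not the authors' calibrated model: R0, the infectious period, and the coverage values are hypothetical, and vaccinated people are simply moved to the immune compartment at t=0. It shows the mechanism behind the conclusion that complete coverage is unnecessary: partial coverage can push the effective reproductive number below 1.

```python
# Minimal SIR compartmental sketch with a one-time vaccination pulse.
# All parameter values are illustrative assumptions, not published inputs.
def sir_attack_rate(r0=1.5, infectious_days=4.0, coverage=0.0, days=365):
    """Fraction of the population ever infected, with `coverage` vaccinated
    (moved straight to the recovered/immune compartment) before the epidemic."""
    beta, gamma = r0 / infectious_days, 1.0 / infectious_days
    s, i, r = 1.0 - coverage - 1e-4, 1e-4, coverage  # seed: 0.01% infected
    dt = 0.1  # forward-Euler step, in days
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        s, i, r = s - new_inf, i + new_inf - gamma * i * dt, r + gamma * i * dt
    return 1.0 - coverage - s  # everyone who left S via infection

print(round(sir_attack_rate(coverage=0.0), 2))  # sizable epidemic
print(round(sir_attack_rate(coverage=0.4), 2))  # R0*0.6 < 1: outbreak fizzles
```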

Abstract

BACKGROUND: The pandemic potential of influenza A (H5N1) virus is a prominent public health concern of the 21st century. OBJECTIVE: To estimate the effectiveness and cost-effectiveness of alternative pandemic (H5N1) mitigation and response strategies. DESIGN: Compartmental epidemic model in conjunction with a Markov model of disease progression. DATA SOURCES: Literature and expert opinion. TARGET POPULATION: Residents of a U.S. metropolitan city with a population of 8.3 million. TIME HORIZON: Lifetime. PERSPECTIVE: Societal. INTERVENTION: 3 scenarios: 1) vaccination and antiviral pharmacotherapy in quantities similar to those currently available in the U.S. stockpile (stockpiled strategy); 2) the stockpiled strategy with expanded distribution of antiviral agents (expanded prophylaxis strategy); and 3) the stockpiled strategy with adjuvanted vaccine (expanded vaccination strategy). All scenarios assumed standard nonpharmaceutical interventions. OUTCOME MEASURES: Infections and deaths averted, costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Expanded vaccination was the most effective and cost-effective of the 3 strategies, averting 68% of infections and deaths and gaining 404 030 QALYs at $10 844 per QALY gained relative to the stockpiled strategy. RESULTS OF SENSITIVITY ANALYSIS: Expanded vaccination remained incrementally cost-effective over a wide range of assumptions. LIMITATIONS: The model assumed homogeneous mixing of cases and contacts; heterogeneous mixing would result in faster initial spread, followed by slower spread. We did not model interventions for children or older adults; the model is not designed to target interventions to specific groups. CONCLUSION: Expanded adjuvanted vaccination is an effective and cost-effective mitigation strategy for an influenza A (H5N1) pandemic. Expanded antiviral prophylaxis can help delay the pandemic while additional strategies are implemented. PRIMARY FUNDING SOURCE: National Institutes of Health and Agency for Healthcare Research and Quality.

Abstract

The American College of Physicians developed this guideline to present the available evidence on hormonal testing in and pharmacologic management of erectile dysfunction. Current pharmacologic therapies include phosphodiesterase-5 (PDE-5) inhibitors, such as sildenafil, vardenafil, tadalafil, mirodenafil, and udenafil, and hormonal treatment. Published literature on this topic was identified by using MEDLINE (1966 to May 2007), EMBASE (1980 to week 22 of 2007), Cochrane Central Register of Controlled Trials (second quarter of 2007), PsycINFO (1985 to June 2007), AMED (1985 to June 2007), and SCOPUS (2006). The literature search was updated by searching for articles in MEDLINE and EMBASE published between May 2007 and April 2009. Searches were limited to English-language publications. This guideline grades the evidence and recommendations by using the American College of Physicians' clinical practice guidelines grading system. RECOMMENDATION 1: The American College of Physicians recommends that clinicians initiate therapy with a PDE-5 inhibitor in men who seek treatment for erectile dysfunction and who do not have a contraindication to PDE-5 inhibitor use (Grade: strong recommendation; high-quality evidence). RECOMMENDATION 2: The American College of Physicians recommends that clinicians base the choice of a specific PDE-5 inhibitor on the individual preferences of men with erectile dysfunction, including ease of use, cost of medication, and adverse effects profile (Grade: weak recommendation; low-quality evidence). RECOMMENDATION 3: The American College of Physicians does not recommend for or against routine use of hormonal blood tests or hormonal treatment in the management of patients with erectile dysfunction (Grade: insufficient evidence to determine net benefits and harms).

Abstract

Little research has studied experimentally whether an opt-out policy will increase testing rates or whether this strategy is especially effective in the case of stigmatized diseases such as HIV. In Study 1, a 2 x 2 factorial design asked participants to make moral judgments about a person's decision to test for stigmatized diseases under an opt-in versus an opt-out policy. In Study 2, a 2 x 2 factorial design measuring testing rates explored whether opt-out methods reduce stigma and increase testing for stigmatized diseases. Study 1 results suggest that getting tested draws suspicion regarding moral conduct in an opt-in system, whereas not getting tested draws suspicion in an opt-out system. Study 2 results suggest that an opt-out policy may increase testing rates for stigmatized diseases and lessen the effects of stigma in people's reluctance to test. A social psychological approach to health services can be used to show how testing policies can influence both the stigmatization associated with testing and participation rates. An understanding of how testing policies may affect patient decision making and behavior is imperative for creating effective testing policies.

Abstract

Helicobacter pylori vaccines are under development to prevent infection. We quantified the cost-effectiveness of such a vaccine in the United States, using a dynamic transmission model. We compartmentalized the population by age, infection status, and clinical disease state and measured effectiveness in quality-adjusted life years (QALYs). We simulated no intervention, vaccination of infants, and vaccination of school-age children. Variables included costs of vaccine, vaccine administration, and gastric cancer treatment (in 2007 US dollars), vaccine efficacy, quality adjustment due to gastric cancer, and discount rate. We evaluated possible outcomes for periods of 10-75 years. H. pylori vaccination of infants would cost $2.9 billion over 10 years; savings from cancer prevention would be realized decades later. Over a long time horizon (75 years), incremental costs of H. pylori vaccination would be $1.8 billion, and incremental QALYs would be 0.5 million, yielding a cost-effectiveness ratio of $3871/QALY. With school-age vaccination, the cost-effectiveness ratio would be $22,137/QALY. With time limited to <40 years, the cost-effectiveness ratio exceeded $50,000/QALY. When evaluated with a time horizon beyond 40 years, the use of a prophylactic H. pylori vaccine was cost-effective in the United States, especially with infant vaccination.

Abstract

Current World Health Organization (WHO) guidelines for treatment of HIV in resource-limited settings call for 2 antiretroviral regimens. The effectiveness and cost-effectiveness of increasing the number of antiretroviral regimens is unknown. Using a simulation model, we compared the survival and costs of current WHO regimens with two 3-regimen strategies: an initial regimen of 3 nucleoside reverse transcriptase inhibitors followed by the WHO regimens, and the WHO regimens followed by a regimen with a second-generation boosted protease inhibitor (2bPI). We evaluated monitoring with CD4 counts only and with both CD4 counts and viral load. We used cost and effectiveness data from Cape Town and tested all assumptions in sensitivity analyses. Over the lifetime of the cohort, 25.6% of individuals failed both WHO regimens by virologic criteria. However, when patients were monitored using CD4 counts alone, only 6.5% were prescribed additional highly active antiretroviral therapy because of missed and delayed detection of failure. The life expectancy gain for individuals who took a 2bPI was 6.7-8.9 months, depending on the monitoring strategy. When CD4 monitoring alone was available, adding a regimen with a 2bPI was associated with an incremental cost-effectiveness ratio of $2581 per year of life gained; when viral load monitoring was available, the ratio was $6519 per year of life gained. Strategies with triple-nucleoside reverse transcriptase inhibitor regimens in initial therapy were dominated. Results were sensitive to the price of 2bPIs. About 1 in 4 individuals who start highly active antiretroviral therapy in sub-Saharan Africa will fail currently recommended regimens. At current prices, adding a regimen with a 2bPI is cost-effective for South Africa and other middle-income countries by WHO standards.

Abstract

To assess the concurrent validity and responsiveness of the Health Utilities Index 3 (HUI3) in patients with advanced HIV/AIDS, and to determine the responsiveness of this measure, the MOS-HIV, and the EQ-5D to HIV-related clinical events, data from the OPTIMA (OPTions In Management with Antiretrovirals) trial were analyzed. Two aspects of the validity of the HUI3 were considered: concurrent validity was evaluated using Spearman correlations with MOS-HIV component and summary scores, and responsiveness to AIDS-defining events (ADEs) and all adverse events (our external change criterion) was assessed using area under the receiver operating characteristic (AUROC) curves. The study enrolled 368 patients (mean follow-up: 3.66 years); 82% had at least one severe adverse event and 27% had at least one ADE. The HUI3 scale and items showed good concurrent validity, with 85% of the expected relationships with the MOS-HIV subscales verified. The HUI3 was responsive to both adverse events (AUROC [95% CI]: 0.68 [0.57, 0.80]) and ADEs (0.62 [0.51, 0.74]). The EQ-5D was responsive to ADEs (0.66 [0.56, 0.76]) but not to adverse events (0.56 [0.46, 0.68]). The HUI3 is a valid and responsive measure of change in HRQoL associated with clinical events in an advanced HIV/AIDS population.

Abstract

Estimating the potential health benefits and expenditures of a partially effective HIV vaccine is an important consideration in the debate about whether HIV vaccine research should continue. We developed an epidemic model to estimate HIV prevalence, new infections, and the cost-effectiveness of vaccination strategies in the U.S. Vaccines with modest efficacy could prevent 300,000-700,000 HIV infections and save $30 billion in healthcare expenditures over 20 years. Targeted vaccination of high-risk individuals is economically efficient, but difficulty in reaching these groups may mitigate these benefits. Universal vaccination is cost-effective for vaccines with 50% efficacy and price similar to other infectious disease vaccines.

Abstract

BACKGROUND: The rates of induction of labor and elective induction of labor are increasing. Whether elective induction of labor improves outcomes or simply leads to greater complications and health care costs is commonly debated in the literature. PURPOSE: To compare the benefits and harms of elective induction of labor and expectant management of pregnancy. DATA SOURCES: MEDLINE (through February 2009), Web of Science, CINAHL, Cochrane Central Register of Controlled Trials (through March 2009), bibliographies of included studies, and previous systematic reviews. STUDY SELECTION: Experimental and observational studies of elective induction of labor reported in English. DATA EXTRACTION: Two authors abstracted study design; patient characteristics; quality criteria; and outcomes, including cesarean delivery and maternal and neonatal morbidity. DATA SYNTHESIS: Of 6117 potentially relevant articles, 36 met inclusion criteria: 11 randomized, controlled trials (RCTs) and 25 observational studies. Overall, expectant management of pregnancy was associated with higher odds of cesarean delivery than was elective induction of labor (odds ratio [OR], 1.22 [95% CI, 1.07 to 1.39]; absolute risk difference, 1.9 percentage points [CI, 0.2 to 3.7 percentage points]) in 9 RCTs. Women at or beyond 41 completed weeks of gestation who were managed expectantly had a higher risk for cesarean delivery (OR, 1.21 [CI, 1.01 to 1.46]), but this difference was not statistically significant in women at less than 41 completed weeks of gestation (OR, 1.73 [CI, 0.67 to 4.5]). Women who were expectantly managed were more likely to have meconium-stained amniotic fluid than those who were electively induced (OR, 2.04 [CI, 1.34 to 3.09]). LIMITATIONS: There were no recent RCTs of elective induction of labor at less than 41 weeks of gestation. The 2 studies conducted at less than 41 weeks of gestation were of poor quality and were not generalizable to current practice. CONCLUSION: RCTs suggest that elective induction of labor at 41 weeks of gestation and beyond is associated with a decreased risk for cesarean delivery and meconium-stained amniotic fluid. There are concerns about the translation of these findings into actual practice; thus, future studies should examine elective induction of labor in settings where most obstetric care is provided.

Abstract

Despite recommendations for voluntary HIV screening, few medical centres have implemented screening programmes. The objective of the study was to determine whether an intervention with computer-based reminders and feedback would increase screening for HIV in a Department of Veterans Affairs (VA) health-care system. The design of the study was a randomized controlled trial at five primary care clinics at the VA Palo Alto Health Care System. All primary care providers were eligible to participate in the study. The study intervention was computer-based reminders either to assess HIV risk behaviours or to offer HIV testing; feedback on adherence to reminders was provided. The main outcome measure was the difference in HIV testing rates between intervention and control group providers. The control group providers tested 1.0% (n = 67) and 1.4% (n = 106) of patients in the preintervention and intervention periods, respectively; intervention providers tested 1.8% (n = 98) and 1.9% (n = 114), respectively (P = 0.75). In our random sample of 753 untested patients, 204 (27%) had documented risk behaviours. Providers were more likely to adhere to reminders to test than to reminders to perform risk assessment (11% versus 5%, P < 0.01). Sixty-one percent of providers felt that lack of time prevented risk assessment. In conclusion, in primary care clinics in our setting, HIV testing rates were low. Providers were unaware of the high rates of risky behaviour in their patient population and perceived important barriers to testing. Low-intensity clinical reminders and feedback did not increase rates of screening.

Abstract

OBJECTIVE: To evaluate the evidence that quality improvement (QI) strategies can improve the processes and outcomes of outpatient pediatric asthma care. DATA SOURCES: Cochrane Effective Practice and Organisation of Care Group database (January 1966 to April 2006), MEDLINE (January 1966 to April 2006), Cochrane Consumers and Communication Group database (January 1966 to May 2006), and bibliographies of retrieved articles. STUDY SELECTION: Randomized controlled trials, controlled before-after trials, or interrupted time series trials of English-language QI evaluations that included 1 or more QI strategies for the outpatient management of children with asthma. MAIN OUTCOME MEASURES: Clinical status (eg, spirometric measures); functional status (eg, days lost from school); and health services use (eg, hospital admissions). RESULTS: Seventy-nine studies met inclusion criteria: 69 included at least some component of patient education, self-monitoring, or self-management; 13 included some component of organizational change; and 7 included provider education. Self-management interventions increased symptom-free days by approximately 10 days/y (P = .02) and reduced school absenteeism by about 0.1 day/mo (P = .03). Interventions of provider education and those that incorporated organizational changes were likely to report improvements in medication use. Quality improvement interventions that provided multiple educational sessions, had longer durations, and used combinations of instructional modalities were more likely to result in improvements for patients than interventions lacking these characteristics. CONCLUSIONS: A variety of QI interventions improve the outcomes and processes of care for children with asthma. Use of similar outcome measures and thorough descriptions of interventions would advance the study of QI for pediatric asthma care.

Abstract

STUDY DESIGN: Clinical practice guideline. OBJECTIVE: To develop evidence-based recommendations on use of interventional diagnostic tests and therapies, surgeries, and interdisciplinary rehabilitation for low back pain of any duration, with or without leg pain. SUMMARY OF BACKGROUND DATA: Management of patients with persistent and disabling low back pain remains a clinical challenge. A number of interventional diagnostic tests and therapies and surgery are available, and their use is increasing, but in some cases their utility remains uncertain or controversial. Interdisciplinary rehabilitation has also been proposed as a potentially effective noninvasive intervention for persistent and disabling low back pain. METHODS: A multidisciplinary panel was convened by the American Pain Society. Its recommendations were based on a systematic review that focused on evidence from randomized controlled trials. Recommendations were graded using methods adapted from the US Preventive Services Task Force and the Grading of Recommendations, Assessment, Development, and Evaluation Working Group. RESULTS: Investigators reviewed 3348 abstracts. A total of 161 randomized trials were deemed relevant to the recommendations in this guideline. The panel developed a total of 8 recommendations. CONCLUSION: Recommendations on use of interventional diagnostic tests and therapies, surgery, and interdisciplinary rehabilitation are presented. Because of important trade-offs between potential benefits, harms, costs, and burdens of alternative therapies, shared decision-making is an important component of a number of the recommendations.

Abstract

Coronary artery bypass graft (CABG) and percutaneous coronary intervention (PCI) are alternative treatments for multivessel coronary disease. Although the procedures have been compared in several randomised trials, their long-term effects on mortality in key clinical subgroups are uncertain. We undertook a collaborative analysis of data from randomised trials to assess whether the effects of the procedures on mortality are modified by patient characteristics. We pooled individual patient data from ten randomised trials to compare the effectiveness of CABG with PCI according to patients' baseline clinical characteristics. We used stratified, random effects Cox proportional hazards models to test the effect on all-cause mortality of randomised treatment assignment and its interaction with clinical characteristics. All analyses were by intention to treat. Ten participating trials provided data on 7812 patients. PCI was done with balloon angioplasty in six trials and with bare-metal stents in four trials. Over a median follow-up of 5.9 years (IQR 5.0-10.0), 575 (15%) of 3889 patients assigned to CABG died compared with 628 (16%) of 3923 patients assigned to PCI (hazard ratio [HR] 0.91, 95% CI 0.82-1.02; p=0.12). In patients with diabetes (CABG, n=615; PCI, n=618), mortality was substantially lower in the CABG group than in the PCI group (HR 0.70, 0.56-0.87); however, mortality was similar between groups in patients without diabetes (HR 0.98, 0.86-1.12; p=0.014 for interaction). Patient age modified the effect of treatment on mortality, with hazard ratios of 1.25 (0.94-1.66) in patients younger than 55 years, 0.90 (0.75-1.09) in patients aged 55-64 years, and 0.82 (0.70-0.97) in patients 65 years and older (p=0.002 for interaction). Treatment effect was not modified by the number of diseased vessels or other baseline characteristics. Long-term mortality is similar after CABG and PCI in most patient subgroups with multivessel coronary artery disease, so choice of treatment should depend on patient preferences for other outcomes. CABG might be a better option for patients with diabetes and patients aged 65 years or older because we found mortality to be lower in these subgroups.

Abstract

Induction of labor is on the rise in the U.S., increasing from 9.5 percent in 1990 to 22.1 percent in 2004. Although it is not entirely clear what proportion of these inductions are elective (i.e., without a medical indication), the overall rate of induction of labor is rising faster than the rate of pregnancy complications that would lead to a medically indicated induction. However, the maternal and neonatal effects of induction of labor are unclear. Many studies compare women with induction of labor to those in spontaneous labor. This is problematic because, at any point in the management of a woman with a term gestation, the clinician has the choice between induction of labor and expectant management, not spontaneous labor. Expectant management of the pregnancy involves nonintervention at any particular point in time, allowing the pregnancy to progress to a future gestational age. Thus, women undergoing expectant management may go into spontaneous labor or may require indicated induction of labor at a future gestational age. The Stanford-UCSF Evidence-Based Practice Center examined the evidence regarding four Key Questions: What evidence describes the maternal risks of elective induction versus expectant management? What evidence describes the fetal/neonatal risks of elective induction versus expectant management? What is the evidence that certain physical conditions/patient characteristics are predictive of a successful induction of labor? How is a failed induction defined? We performed a systematic review to answer the Key Questions. We searched MEDLINE (1966-2007) and bibliographies of prior systematic reviews and the included studies for English-language studies of maternal and fetal outcomes after elective induction of labor. We evaluated the quality of included studies. When possible, we synthesized study data using random effects models.
We also evaluated the potential clinical outcomes and cost-effectiveness of elective induction of labor versus expectant management of pregnancy at 41, 40, and 39 weeks' gestation using decision-analytic models. Our searches identified 3,722 potentially relevant articles, of which 76 articles met inclusion criteria. Nine RCTs compared expectant management with elective induction of labor. We found that, overall, expectant management of pregnancy was associated with approximately 22 percent higher odds of cesarean delivery than elective induction of labor (OR 1.22, 95 percent CI 1.07-1.39; absolute risk difference 1.9, 95 percent CI: 0.2-3.7 percent). The majority of these studies were in women at or beyond 41 weeks of gestation (OR 1.21, 95 percent CI 1.01-1.46). In studies of women at or beyond 41 weeks of gestation, the evidence was rated as moderate because of the size and number of studies and the consistency of the findings. Among women less than 41 weeks of gestation, there were three trials that reported no difference in risk of cesarean delivery among women who were induced compared with those who were expectantly managed (OR 1.73; 95 percent CI: 0.67-4.5, P=0.26), but all of these trials were small, non-U.S., older, and of poor quality. When we stratified the analysis by country, we found that the odds of cesarean delivery were higher in women who were expectantly managed compared with elective induction of labor in studies conducted outside the U.S. (OR 1.22; 95 percent CI 1.05-1.40) but were not statistically different in studies conducted in the U.S. (OR 1.28; 95 percent CI 0.65-2.49). Women who were expectantly managed were also more likely to have meconium-stained amniotic fluid than those who were electively induced (OR 2.04; 95 percent CI: 1.34-3.09).
Observational studies reported a consistently lower risk of cesarean delivery among women who underwent spontaneous labor (6 percent) compared with women who had an elective induction of labor (8 percent) with a statistically significant decrease when combined (OR 0.63; 95 percent CI: 0.49-0.79), but again utilized the wrong control group and did not appropriately adjust for gestational age. We found moderate to high quality evidence that increased parity, a more favorable cervical status as assessed by a higher Bishop score, and decreased gestational age were associated with successful labor induction (58 percent of the included studies defined success as achieving a vaginal delivery anytime after the onset of the induction of labor; in these instances, induction was considered a failure when it led to a cesarean delivery). In the decision analytic model, we utilized a baseline assumption of no difference in cesarean delivery between the two arms as there was no statistically significant difference in the U.S. studies or in women prior to 41 0/7 weeks of gestation. In each of the models, women who were electively induced had better overall outcomes among both mothers and neonates as estimated by total quality-adjusted life years (QALYs) as well as by reduction in specific perinatal outcomes such as shoulder dystocia, meconium aspiration syndrome, and preeclampsia. Additionally, induction of labor was cost-effective at $10,789 per QALY with elective induction of labor at 41 weeks of gestation, $9,932 per QALY at 40 weeks of gestation, and $20,222 per QALY at 39 weeks of gestation utilizing a cost-effectiveness threshold of $50,000 per QALY. At 41 weeks of gestation, these results were generally robust to variations in the assumed ranges in univariate and multi-way sensitivity analyses. However, the findings of cost-effectiveness at 40 and 39 weeks of gestation were not robust to the ranges of the assumptions. 
In addition, the strength of evidence for some model inputs was low; therefore, our analyses are exploratory rather than definitive. Randomized controlled trials suggest that elective induction of labor at 41 weeks of gestation and beyond may be associated with a decrease in both the risk of cesarean delivery and of meconium-stained amniotic fluid. The evidence regarding elective induction of labor prior to 41 weeks of gestation is insufficient to draw any conclusion. There is a paucity of information from prospective RCTs examining other maternal or neonatal outcomes in the setting of elective induction of labor. Observational studies found higher rates of cesarean delivery with elective induction of labor, but they compared women undergoing induction of labor with women in spontaneous labor and were subject to potential confounding bias, particularly from gestational age. Such studies do not inform the question of how elective induction of labor affects maternal or neonatal outcomes. Elective induction of labor at 41 weeks of gestation, and potentially earlier, also appears to be a cost-effective intervention, but because further data are needed to populate these models, our analyses are not definitive. Despite the evidence from the prospective RCTs reported above, there are concerns about translating these findings into actual practice; there is thus a great need to study the translation of this research into the settings where the majority of obstetric care is provided.
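The dollars-per-QALY figures above are incremental cost-effectiveness ratios: the extra cost of one strategy over another divided by the extra QALYs it yields, judged against a willingness-to-pay threshold. A minimal sketch of that arithmetic, with cost and QALY inputs invented purely for illustration (the abstract reports only the resulting ratios):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(ratio, threshold=50_000):
    """Judge an ICER against a willingness-to-pay threshold in $/QALY."""
    return ratio <= threshold

# Hypothetical inputs chosen only to illustrate the arithmetic.
ratio = icer(cost_new=11_000, qaly_new=10.50, cost_old=9_000, qaly_old=10.30)
print(round(ratio), cost_effective(ratio))
```

The same two-line calculation underlies each of the per-QALY figures quoted in the abstract, with the costs and QALYs coming out of the decision-analytic model rather than being set by hand.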

Abstract

The American College of Physicians (ACP) developed this guidance statement to present the available evidence on screening for HIV in health care settings. This guidance statement is derived from an appraisal of available guidelines on screening for HIV. The authors searched the National Guideline Clearinghouse to identify guidelines on screening for HIV in the United States and used the AGREE (Appraisal of Guidelines Research and Evaluation) instrument to evaluate guidelines from the U.S. Preventive Services Task Force and the Centers for Disease Control and Prevention. GUIDANCE STATEMENT 1: ACP recommends that clinicians adopt routine screening for HIV and encourage patients to be tested. GUIDANCE STATEMENT 2: ACP recommends that clinicians determine the need for repeat screening on an individual basis.

Abstract

Russia has one of the world's fastest growing HIV epidemics, and HIV screening has been widespread. Whether such screening is an effective use of resources is unclear. We used epidemiologic and economic data from Russia to develop a Markov model to estimate costs, quality of life and survival associated with a voluntary HIV screening programme compared with no screening in Russia. We measured discounted lifetime health-care costs and quality-adjusted life years (QALYs) gained. We varied our inputs in sensitivity analysis. Early identification of HIV through screening provided a substantial benefit to persons with HIV, increasing life expectancy by 2.1 years and 1.7 QALYs. At a base-case prevalence of 1.2%, once-per-lifetime screening cost $13,396 per QALY gained, exclusive of benefit from reduced transmission. Cost-effectiveness of screening remained favourable until prevalence dropped below 0.04%. When HIV-transmission-related costs and benefits were included, once-per-lifetime screening cost $6,910 per QALY gained and screening every two years cost $27,696 per QALY gained. An important determinant of the cost-effectiveness of screening was effectiveness of counselling about risk reduction. Early identification of HIV infection through screening in Russia is effective and cost-effective in all but the lowest prevalence groups.
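A Markov model of the kind described advances a cohort through discrete health states in fixed cycles, accumulating discounted costs and QALYs along the way. The toy three-state sketch below shows only the mechanics; every transition probability, cost, and utility is an invented placeholder, not an input from the Russian analysis:

```python
# States: 0 = HIV+ undiagnosed, 1 = HIV+ diagnosed and treated, 2 = dead.
# Annual transition probabilities (each row sums to 1); all values invented.
P = [
    [0.85, 0.05, 0.10],  # undiagnosed: may be diagnosed or die
    [0.00, 0.95, 0.05],  # treated: lower mortality
    [0.00, 0.00, 1.00],  # dead is absorbing
]
COST = [500.0, 3000.0, 0.0]     # annual cost by state (hypothetical)
UTILITY = [0.70, 0.85, 0.0]     # quality weight by state (hypothetical)

def run_markov(start, years=40, disc=0.03):
    """Discounted lifetime cost and QALYs for a starting cohort distribution."""
    dist, total_cost, total_qaly = list(start), 0.0, 0.0
    for t in range(years):
        d = 1.0 / (1 + disc) ** t            # discount factor for cycle t
        total_cost += d * sum(p * c for p, c in zip(dist, COST))
        total_qaly += d * sum(p * u for p, u in zip(dist, UTILITY))
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

c, q = run_markov([1.0, 0.0, 0.0])
print(f"cost ${c:,.0f}, QALYs {q:.2f}")
```

Running the same machinery for a "screening" and a "no screening" cohort, with transition probabilities reflecting earlier diagnosis, is what produces the paired cost and QALY estimates that feed an ICER.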

Abstract

The American College of Physicians developed this guideline to present the available evidence on the pharmacologic management of the acute, continuation, and maintenance phases of major depressive disorder; dysthymia; subsyndromal depression; and accompanying symptoms, such as anxiety, insomnia, or neurovegetative symptoms, by using second-generation antidepressants. Published literature on this topic was identified by using MEDLINE, EMBASE, PsychLit, the Cochrane Central Register of Controlled Trials, and International Pharmaceutical Abstracts from 1980 to April 2007. Searches were limited to English-language studies in adults older than 19 years of age. Keywords for the search included terms for depressive disorders and 12 specific second-generation antidepressants (bupropion, citalopram, duloxetine, escitalopram, fluoxetine, fluvoxamine, mirtazapine, nefazodone, paroxetine, sertraline, trazodone, and venlafaxine) and their specific trade names. This guideline grades the evidence and recommendations by using the American College of Physicians clinical practice guidelines grading system. RECOMMENDATION 1: The American College of Physicians recommends that when clinicians choose pharmacologic therapy to treat patients with acute major depression, they select second-generation antidepressants on the basis of adverse effect profiles, cost, and patient preferences (Grade: strong recommendation; moderate-quality evidence). RECOMMENDATION 2: The American College of Physicians recommends that clinicians assess patient status, therapeutic response, and adverse effects of antidepressant therapy on a regular basis beginning within 1 to 2 weeks of initiation of therapy (Grade: strong recommendation; moderate-quality evidence). 
RECOMMENDATION 3: The American College of Physicians recommends that clinicians modify treatment if the patient does not have an adequate response to pharmacotherapy within 6 to 8 weeks of the initiation of therapy for major depressive disorder (Grade: strong recommendation; moderate-quality evidence). RECOMMENDATION 4: The American College of Physicians recommends that clinicians continue treatment for 4 to 9 months after a satisfactory response in patients with a first episode of major depressive disorder. For patients who have had 2 or more episodes of depression, an even longer duration of therapy may be beneficial (Grade: strong recommendation; moderate-quality evidence).

Abstract

This study sought to systematically compare the effectiveness of percutaneous coronary intervention and coronary artery bypass surgery in patients with single-vessel disease of the proximal left anterior descending (LAD) coronary artery. It is uncertain whether percutaneous coronary interventions (PCI) or coronary artery bypass grafting (CABG) surgery provides better clinical outcomes among patients with single-vessel disease of the proximal LAD. We searched relevant databases (MEDLINE, EMBASE, and Cochrane from 1966 to 2006) to identify randomized controlled trials that compared outcomes for patients with single-vessel proximal LAD assigned to either PCI or CABG. We identified 9 randomized controlled trials that enrolled a total of 1,210 patients (633 received PCI and 577 received CABG). There were no differences in survival at 30 days, 1 year, or 5 years, nor were there differences in the rates of procedural strokes or myocardial infarctions, whereas the rate of repeat revascularization was significantly less after CABG than after PCI (at 1 year: 7.3% vs. 19.5%; at 5 years: 7.3% vs. 33.5%). Angina relief was significantly greater after CABG than after PCI (at 1 year: 95.5% vs. 84.6%; at 5 years: 84.2% vs. 75.6%). Patients undergoing CABG spent 3.2 more days in the hospital than those receiving PCI (95% confidence interval: 2.3 to 4.1 days, p < 0.0001), required more transfusions, and were more likely to have arrhythmias immediately post-procedure. In patients with single-vessel, proximal LAD disease, survival was similar in CABG-assigned and PCI-assigned patients; CABG was significantly more effective in relieving angina and led to fewer repeat revascularizations.
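Pooled estimates like those above are commonly obtained by combining per-trial log odds ratios weighted by inverse variance. A minimal fixed-effect sketch; the 2x2 tables below are hypothetical, not the nine trials in this review:

```python
import math

def pool_odds_ratios(trials):
    """Fixed-effect inverse-variance pooling of 2x2 tables.

    Each trial is (events_a, total_a, events_b, total_b). Returns the
    pooled OR and a 95% CI from the standard log-OR variance formula
    var = 1/a + 1/b + 1/c + 1/d.
    """
    wsum = wlog = 0.0
    for ea, na, eb, nb in trials:
        a, b, c, d = ea, na - ea, eb, nb - eb
        log_or = math.log((a * d) / (b * c))
        w = 1.0 / (1/a + 1/b + 1/c + 1/d)   # inverse-variance weight
        wsum += w
        wlog += w * log_or
    est = wlog / wsum
    se = math.sqrt(1 / wsum)
    return tuple(math.exp(x) for x in (est, est - 1.96 * se, est + 1.96 * se))

# Hypothetical trials: (events, n) in each arm.
or_, lo, hi = pool_odds_ratios([(20, 100, 12, 100), (30, 150, 20, 150)])
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A random-effects model, which many published meta-analyses use when trials are heterogeneous, adds a between-trial variance term to each weight but follows the same weighted-average structure.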

Abstract

Although the number of infected persons receiving highly active antiretroviral therapy (HAART) in low- and middle-income countries has increased dramatically, optimal disease management is not well defined. We developed a model to compare the costs and benefits of 3 types of human immunodeficiency virus monitoring strategies: symptom-based strategies, CD4-based strategies, and CD4 counts plus viral load strategies for starting, switching, and stopping HAART. We used clinical and cost data from southern Africa and performed a cost-effectiveness analysis. All assumptions were tested in sensitivity analyses. Compared with the symptom-based approaches, monitoring CD4 counts every 6 months and starting treatment at a threshold of 200/µL was associated with a gain in life expectancy of 6.5 months (61.9 months vs 68.4 months) and a discounted lifetime cost savings of US $464 per person (US $4069 vs US $3605, discounted 2007 dollars). The CD4-based strategies in which treatment was started at the higher threshold of 350/µL provided an additional gain in life expectancy of 5.3 months at a cost-effectiveness of US $107 per life-year gained compared with a threshold of 200/µL. Monitoring viral load with CD4 was more expensive than monitoring CD4 counts alone, added 2.0 months of life, and had an incremental cost-effectiveness ratio of US $5414 per life-year gained relative to monitoring of CD4 counts. In sensitivity analyses, the cost savings from CD4 count monitoring compared with the symptom-based approaches was sensitive to the cost of inpatient care, and the cost-effectiveness of viral load monitoring was influenced by the per-test costs and rates of virologic failure. Use of CD4 monitoring and early initiation of HAART in southern Africa provides large health benefits relative to symptom-based approaches for HAART management. In southern African countries with relatively high costs of hospitalization, CD4 monitoring would likely reduce total health care expenditures. 
The cost-effectiveness of viral load monitoring depends on test prices and rates of virologic failure.
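Comparing three or more strategies, as in this analysis, means ranking them by cost, discarding dominated options (more costly yet no more effective), and computing each surviving strategy's ICER against the previous survivor. A sketch of that incremental analysis; the cost and life-expectancy numbers are invented, loosely echoing the strategies above:

```python
def incremental_analysis(strategies):
    """Order strategies by cost, drop strongly dominated ones, and
    compute the ICER of each survivor versus the previous survivor.

    strategies: list of (name, cost, effectiveness) tuples.
    Returns [(name, icer_vs_previous_or_None), ...].
    """
    ranked = sorted(strategies, key=lambda s: s[1])
    frontier = []
    for name, cost, eff in ranked:
        # Strong dominance: costlier but no more effective than a survivor.
        if frontier and eff <= frontier[-1][2]:
            continue
        frontier.append((name, cost, eff))
    results, prev = [], None
    for name, cost, eff in frontier:
        icer = None if prev is None else (cost - prev[1]) / (eff - prev[2])
        results.append((name, icer))
        prev = (name, cost, eff)
    return results

# Hypothetical strategies: (name, lifetime cost, life expectancy in months).
demo = [
    ("symptom-based", 4000, 62.0),
    ("CD4 q6mo", 3600, 68.0),          # cheaper and better: dominates above
    ("CD4 + viral load", 4600, 70.0),
]
for name, icer in incremental_analysis(demo):
    print(name, "-" if icer is None else f"${icer:,.0f}/life-month")
```

In the toy data, CD4 monitoring dominates the symptom-based strategy (it costs less and gains more life expectancy), mirroring the cost-saving finding in the abstract, while adding viral load is judged by its ICER against CD4 monitoring alone.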

Abstract

The American College of Physicians (ACP) developed this guideline to present the available evidence on various pharmacologic treatments to prevent fractures in men and women with low bone density or osteoporosis. Published literature on this topic was identified by using MEDLINE (1966 to December 2006), the ACP Journal Club database, the Cochrane Central Register of Controlled Trials (no date limits), the Cochrane Database of Systematic Reviews (no date limits), Web sites of the United Kingdom National Institute for Health and Clinical Excellence (no date limits), and the United Kingdom Health Technology Assessment Program (January 1998 to December 2006). Searches were limited to English-language publications and human studies. Keywords for the search included terms for osteoporosis, osteopenia, low bone density, and the drugs listed in the key questions. This guideline grades the evidence and recommendations according to the ACP's clinical practice guidelines grading system. RECOMMENDATION 1: ACP recommends that clinicians offer pharmacologic treatment to men and women who have known osteoporosis and to those who have experienced fragility fractures (Grade: strong recommendation; high-quality evidence). RECOMMENDATION 2: ACP recommends that clinicians consider pharmacologic treatment for men and women who are at risk for developing osteoporosis (Grade: weak recommendation; moderate-quality evidence). RECOMMENDATION 3: ACP recommends that clinicians choose among pharmacologic treatment options for osteoporosis in men and women on the basis of an assessment of risk and benefits in individual patients (Grade: strong recommendation; moderate-quality evidence). RECOMMENDATION 4: ACP recommends further research to evaluate treatment of osteoporosis in men and women.

Abstract

Although evidence suggests that a higher hemodialysis dose and/or frequency may be associated with improved outcomes, the cost-effectiveness of a daily hemodialysis strategy for critically ill patients with acute kidney injury (AKI) is unknown. We developed a Markov model of the cost, quality of life, survival, and incremental cost-effectiveness of daily hemodialysis, compared with alternate-day hemodialysis, for patients with AKI in the intensive care unit (ICU). We employed a societal perspective with a lifetime analytic time horizon. We modeled the efficacy of daily hemodialysis as a reduction in the relative risk of death on the basis of data reported in the 2004 clinical trial published by Schiffl et al. We performed 1- and 2-way sensitivity analyses across cost, efficacy, and utility input variables. The main outcome measure was cost per quality-adjusted life-year (QALY). In the base case for a 60-year-old man, daily hemodialysis was projected to add 2.14 QALYs and $10,924 in cost. We found that the cost-effectiveness of daily hemodialysis compared with alternate-day hemodialysis was $5084 per QALY gained. The incremental cost-effectiveness ratio became less favorable (>$50,000 per QALY gained) when the maintenance hemodialysis rate of the daily hemodialysis group was varied to more than 27% and when the difference in 14-day postdischarge mortality between the alternatives was varied to less than 0.5%. Daily hemodialysis is a cost-effective strategy compared with alternate-day hemodialysis for patients with severe AKI in the ICU.
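A one-way sensitivity analysis of the sort reported sweeps a single input across a plausible range and records where the ICER crosses the willingness-to-pay threshold. A generic sketch: the `toy_icer` relationship and its crossing point are invented for illustration, not the 27% threshold found in the study:

```python
def one_way_sensitivity(model, values, threshold=50_000):
    """Sweep one input through `values`; return all (value, icer) pairs
    and the first value at which the ICER exceeds the threshold."""
    rows = [(v, model(v)) for v in values]
    crossing = next((v for v, icer in rows if icer > threshold), None)
    return rows, crossing

def toy_icer(maint_rate):
    # Invented linear relationship: higher long-term dialysis uptake
    # inflates incremental cost relative to QALYs gained.
    return 5000 + 200_000 * maint_rate

rates = [i / 100 for i in range(0, 41, 5)]   # sweep 0% to 40% in 5% steps
rows, crossing = one_way_sensitivity(toy_icer, rates)
print(crossing)
```

A two-way analysis, also used in this study, is the same idea over a grid of two inputs at once, reporting the region of the grid where the conclusion flips.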

Abstract

Although HIV infection is more prevalent in people younger than age 45 years, a substantial number of infections occur in older persons. Recent guidelines recommend HIV screening in patients age 13 to 64 years. The cost-effectiveness of HIV screening in patients age 55 to 75 years is uncertain. Objective: To examine the costs and benefits of HIV screening in patients age 55 to 75 years. Design: Markov model. Data sources: Derived from the literature. Target population: Patients age 55 to 75 years with unknown HIV status. Time horizon: Lifetime. Perspective: Societal. Intervention: HIV screening program for patients age 55 to 75 years compared with current practice. Outcome measures: Life-years, quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness. Results: For a 65-year-old patient, HIV screening using traditional counseling costs $55,440 per QALY compared with current practice when the prevalence of HIV was 0.5% and the patient did not have a sexual partner at risk. In sexually active patients, the incremental cost-effectiveness ratio was $30,020 per QALY. At a prevalence of 0.1%, HIV screening cost less than $60,000 per QALY for patients younger than age 75 years with a partner at risk if less costly streamlined counseling is used. Cost-effectiveness of HIV screening depended on HIV prevalence, age of the patient, counseling costs, and whether the patient was sexually active. Sensitivity analyses with other variables did not change the results substantially. Limitations: The effects of age on the toxicity and efficacy of highly active antiretroviral therapy and death from AIDS were uncertain. Sensitivity analyses exploring these variables did not qualitatively affect the results. Conclusions: If the tested population has an HIV prevalence of 0.1% or greater, HIV screening in persons from age 55 to 75 years reaches conventional levels of cost-effectiveness when counseling is streamlined and if the screened patient has a partner at risk. Screening patients with advanced age for HIV is economically attractive in many circumstances.

Abstract

The American College of Physicians developed this guideline to present the available evidence on risk factors and screening tests for osteoporosis in men. Published literature on this topic was identified by using MEDLINE (1990 to July 2007). Reference mining was done on the retrieved articles, references of previous reviews, and solicited articles from experts. Studies were included if they measured risk factors for low bone mineral density or osteoporotic fracture in men or compared 2 different methods of assessment for the presence of osteoporosis in men. This guideline grades the evidence and recommendations by using the American College of Physicians' clinical practice guidelines grading system. RECOMMENDATION 1: The American College of Physicians recommends that clinicians periodically perform individualized assessment of risk factors for osteoporosis in older men (Grade: strong recommendation; moderate-quality evidence). RECOMMENDATION 2: The American College of Physicians recommends that clinicians obtain dual-energy x-ray absorptiometry for men who are at increased risk for osteoporosis and are candidates for drug therapy (Grade: strong recommendation; moderate-quality evidence). RECOMMENDATION 3: The American College of Physicians recommends further research to evaluate osteoporosis screening tests in men.

Abstract

A bioterrorism attack with an agent such as anthrax will require rapid deployment of medical and pharmaceutical supplies to exposed individuals. How should such a logistical system be organized? How much capacity should be built into each element of the bioterrorism response supply chain? The authors developed a compartmental model to evaluate the costs and benefits of various strategies for preattack stockpiling and postattack distribution and dispensing of medical and pharmaceutical supplies, as well as the benefits of rapid attack detection. The authors show how the model can be used to address a broad range of logistical questions as well as related, nonlogistical questions (e.g., the cost-effectiveness of strategies to improve patient adherence to antibiotic regimens). They generate several key insights about appropriate strategies for local communities. First, stockpiling large local inventories of medical and pharmaceutical supplies is unlikely to be the most effective means of reducing mortality from an attack, given the availability of national and regional supplies. Instead, communities should create sufficient capacity for dispensing prophylactic antibiotics in the event of a large-scale bioterror attack. Second, improved surveillance systems can significantly reduce deaths from such an attack but only if the local community has sufficient antibiotic-dispensing capacity. Third, mortality from such an attack is significantly affected by the number of unexposed individuals seeking prophylaxis and treatment. Fourth, full adherence to treatment regimens is critical for reducing expected mortality. Effective preparation for response to potential bioterror attacks can avert deaths in the event of an attack. Models such as this one can help communities more effectively prepare for response to potential bioterror attacks.

Abstract

Inadequate adherence to highly active antiretroviral therapy (HAART) may lead to poor health outcomes and the development of HIV strains that are resistant to HAART. The authors developed a model to evaluate the cost-effectiveness of counseling interventions to improve adherence to HAART among men who have sex with men (MSM). The authors developed a dynamic compartmental model that incorporates HIV treatment, adherence to treatment, and infection transmission and progression. All data estimates were obtained from secondary sources. The authors evaluated a counseling intervention given prior to initiation of HAART and before all changes in drug regimens, combined with phone-in support while on HAART. They considered a moderate-prevalence and a high-prevalence population of MSM. If the impact of HIV transmission is ignored, the counseling intervention has a cost-effectiveness ratio of $25,500 per quality-adjusted life year (QALY) gained. When HIV transmission is included, the cost-effectiveness ratio is much lower: $7400 and $8700 per QALY gained in the moderate- and high-prevalence populations, respectively. When the intervention is twice as costly per counseling session and half as effective as estimated in the base case (in terms of the number of individuals who become highly adherent, and who remain highly adherent), then the intervention costs $17,100 and $19,600 per QALY gained in the 2 populations, respectively. Counseling to improve adherence to HAART increased length of life, modestly reduced HIV transmission, and cost substantially less than $50,000 per QALY gained over a wide range of assumptions but did not reduce the proportion of drug-resistant strains. Such counseling provides only modest benefit as a tool for HIV prevention but can provide significant benefit for individual patients at an affordable cost.

Abstract

Many quality improvement strategies have focused on improving blood pressure control, and these strategies can target the patient, the provider, and/or the system. Strategies that seem to have the biggest effect on blood pressure outcomes are team change, patient education, facilitated relay of clinical information, and promotion of self-management. Barriers to effective blood pressure control can affect the patient, the physician, the system, and/or "cues to action." We review the barriers to achieving blood pressure control and describe current and potential creative strategies for optimizing blood pressure control. These include home-based disease management, combined patient and provider education, and automatic decision support systems. Future research must address which components of quality improvement interventions are most successful in achieving blood pressure control.

Abstract

Effective strategies for managing patients with solitary pulmonary nodules (SPNs) depend critically on the pre-test probability of malignancy. We aimed to validate two previously developed models that estimate the probability that an indeterminate SPN is malignant, based on clinical characteristics and radiographic findings. Data on age, smoking and cancer history, and nodule size, location, and spiculation were collected retrospectively from the medical records of 151 veterans (145 men, 6 women; age range 39-87 years) with an SPN measuring 7-30 mm (inclusive) and a final diagnosis established by histopathology or 2-year follow-up. Each patient's final diagnosis was compared with the probability of malignancy predicted by two models: one developed by investigators at the Mayo Clinic and the other developed from patients enrolled in a VA Cooperative Study. The accuracy of each model was assessed by calculating areas under the receiver operating characteristic (ROC) curve, and the models were calibrated by comparing predicted and observed rates of malignancy. The area under the ROC curve for the Mayo Clinic model (0.80; 95% CI 0.72 to 0.88) was higher than that of the VA model (0.73; 95% CI 0.64 to 0.82), but this difference was not statistically significant (Delta = 0.07; 95% CI -0.03 to 0.16). Calibration curves showed that the probability of malignancy was underestimated by the Mayo Clinic model and overestimated by the VA model. Two existing prediction models are sufficiently accurate to guide decisions about the selection and interpretation of subsequent diagnostic tests in patients with SPNs, although clinicians should also consider the prevalence of malignancy in their practice setting when choosing a model.
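The area under the ROC curve used to compare the two models equals the probability that a randomly chosen malignant nodule receives a higher predicted risk than a randomly chosen benign one, so it can be computed directly from predictions and final diagnoses. A small sketch with made-up data, not the study's patients:

```python
def roc_auc(y_true, y_score):
    """AUC via the rank (Mann-Whitney) formulation; ties count as 0.5.

    y_true: 1 for malignant, 0 for benign; y_score: predicted probability.
    """
    pairs = concordant = 0.0
    for yi, si in zip(y_true, y_score):
        for yj, sj in zip(y_true, y_score):
            if yi == 1 and yj == 0:          # one malignant, one benign
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs

# Hypothetical predicted probabilities of malignancy and final diagnoses.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(roc_auc(labels, scores))
```

Calibration, the other property examined in the study, is a separate check: it compares predicted probabilities against observed malignancy rates in risk strata, which a high AUC alone does not guarantee.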

Abstract

The American College of Physicians and American Academy of Family Physicians developed this guideline to present the available evidence on current pharmacologic treatment of dementia. The targeted literature search included evidence related to the effectiveness of 5 U.S. Food and Drug Administration-approved pharmacologic therapies for dementia for outcomes in the domains of cognition, global function, behavior/mood, and quality of life/activities of daily living. RECOMMENDATION 1: Clinicians should base the decision to initiate a trial of therapy with a cholinesterase inhibitor or memantine on individualized assessment. (Grade: weak recommendation, moderate-quality evidence.) RECOMMENDATION 2: Clinicians should base the choice of pharmacologic agents on tolerability, adverse effect profile, ease of use, and cost of medication. The evidence is insufficient to compare the effectiveness of different pharmacologic agents for the treatment of dementia. (Grade: weak recommendation, low-quality evidence.) RECOMMENDATION 3: There is an urgent need for further research on the clinical effectiveness of pharmacologic management of dementia.

Evidence-based interventions to improve the palliative care of pain, dyspnea, and depression at the end of life: A clinical practice guideline from the American College of Physicians. Annals of Internal Medicine. Qaseem, A., Snow, V., Shekelle, P., Casey, D. E., Cross, J. T., Owens, D. K. 2008; 148 (2): 141-146

Abstract

RECOMMENDATION 1: In patients with serious illness at the end of life, clinicians should regularly assess patients for pain, dyspnea, and depression. (Grade: strong recommendation, moderate quality of evidence.) RECOMMENDATION 2: In patients with serious illness at the end of life, clinicians should use therapies of proven effectiveness to manage pain. For patients with cancer, this includes nonsteroidal anti-inflammatory drugs, opioids, and bisphosphonates. (Grade: strong recommendation, moderate quality of evidence.) RECOMMENDATION 3: In patients with serious illness at the end of life, clinicians should use therapies of proven effectiveness to manage dyspnea, which include opioids in patients with unrelieved dyspnea and oxygen for short-term relief of hypoxemia. (Grade: strong recommendation, moderate quality of evidence.) RECOMMENDATION 4: In patients with serious illness at the end of life, clinicians should use therapies of proven effectiveness to manage depression. For patients with cancer, this includes tricyclic antidepressants, selective serotonin reuptake inhibitors, or psychosocial intervention. (Grade: strong recommendation, moderate quality of evidence.) RECOMMENDATION 5: Clinicians should ensure that advance care planning, including completion of advance directives, occurs for all patients with serious illness. (Grade: strong recommendation, low quality of evidence.)

Abstract

We sought to determine the prevalence of HIV in both inpatient and outpatient settings in 6 Department of Veterans Affairs (VA) health care sites. We collected demographic data and data on comorbid conditions and then conducted blinded, anonymous HIV testing. We conducted a multivariate analysis to determine predictors of HIV infection. We tested 4500 outpatient blood specimens and 4205 inpatient blood specimens; 326 (3.7%) patients tested positive for HIV. Inpatient HIV prevalence ranged from 1.2% to 6.9%; outpatient HIV prevalence ranged from 0.9% to 8.9%. Having a history of hepatitis B or C infection, a sexually transmitted disease, or pneumonia also predicted HIV infection. The prevalence of previously undocumented HIV infection varied from 0.1% to 2.8% among outpatients and from 0.0% to 1.7% among inpatients. The prevalence of undocumented HIV infection was sufficiently high for routine voluntary screening to be cost effective in each of the 6 sites we evaluated. Many VA health care systems should consider expanded routine voluntary HIV screening.

Abstract

The comparative effectiveness of coronary artery bypass graft (CABG) surgery and percutaneous coronary intervention (PCI) for patients in whom both procedures are feasible remains poorly understood. Purpose: To compare the effectiveness of PCI and CABG in patients for whom coronary revascularization is clinically indicated. Data sources: MEDLINE, EMBASE, and Cochrane databases (1966-2006); conference proceedings; and bibliographies of retrieved articles. Study selection: Randomized, controlled trials (RCTs) reported in any language that compared clinical outcomes of PCI with those of CABG, and selected observational studies. Data extraction: Information was extracted on study design, sample characteristics, interventions, and clinical outcomes. Data synthesis: The authors identified 23 RCTs in which 5019 patients were randomly assigned to PCI and 4944 patients were randomly assigned to CABG. The difference in survival after PCI or CABG was less than 1% over 10 years of follow-up. Survival did not differ between PCI and CABG for patients with diabetes in the 6 trials that reported on this subgroup. Procedure-related strokes were more common after CABG than after PCI (1.2% vs. 0.6%; risk difference, 0.6%; P = 0.002). Angina relief was greater after CABG than after PCI, with risk differences ranging from 5% to 8% at 1 to 5 years (P < 0.001). The absolute rates of angina relief at 5 years were 79% after PCI and 84% after CABG. Repeated revascularization was more common after PCI than after CABG (risk difference, 24% at 1 year and 33% at 5 years; P < 0.001); the absolute rates at 5 years were 46.1% after balloon angioplasty, 40.1% after PCI with stents, and 9.8% after CABG. In the observational studies, the CABG-PCI hazard ratio for death favored PCI among patients with the least severe disease and CABG among those with the most severe disease. Limitations: The RCTs were conducted in leading centers in selected patients. The authors could not assess whether comparative outcomes vary according to clinical factors, such as extent of coronary disease, ejection fraction, or previous procedures. Only 1 small trial used drug-eluting stents. Conclusions: Compared with PCI, CABG was more effective in relieving angina and led to fewer repeated revascularizations but had a higher risk for procedural stroke. Survival to 10 years was similar for both procedures.

Abstract

RECOMMENDATION 1: Clinicians should conduct a focused history and physical examination to help place patients with low back pain into 1 of 3 broad categories: nonspecific low back pain, back pain potentially associated with radiculopathy or spinal stenosis, or back pain potentially associated with another specific spinal cause. The history should include assessment of psychosocial risk factors, which predict risk for chronic disabling back pain (strong recommendation, moderate-quality evidence). RECOMMENDATION 2: Clinicians should not routinely obtain imaging or other diagnostic tests in patients with nonspecific low back pain (strong recommendation, moderate-quality evidence). RECOMMENDATION 3: Clinicians should perform diagnostic imaging and testing for patients with low back pain when severe or progressive neurologic deficits are present or when serious underlying conditions are suspected on the basis of history and physical examination (strong recommendation, moderate-quality evidence). RECOMMENDATION 4: Clinicians should evaluate patients with persistent low back pain and signs or symptoms of radiculopathy or spinal stenosis with magnetic resonance imaging (preferred) or computed tomography only if they are potential candidates for surgery or epidural steroid injection (for suspected radiculopathy) (strong recommendation, moderate-quality evidence). RECOMMENDATION 5: Clinicians should provide patients with evidence-based information on low back pain with regard to their expected course, advise patients to remain active, and provide information about effective self-care options (strong recommendation, moderate-quality evidence). RECOMMENDATION 6: For patients with low back pain, clinicians should consider the use of medications with proven benefits in conjunction with back care information and self-care. 
Clinicians should assess severity of baseline pain and functional deficits, potential benefits, risks, and relative lack of long-term efficacy and safety data before initiating therapy (strong recommendation, moderate-quality evidence). For most patients, first-line medication options are acetaminophen or nonsteroidal anti-inflammatory drugs. RECOMMENDATION 7: For patients who do not improve with self-care options, clinicians should consider the addition of nonpharmacologic therapy with proven benefits: for acute low back pain, spinal manipulation; for chronic or subacute low back pain, intensive interdisciplinary rehabilitation, exercise therapy, acupuncture, massage therapy, spinal manipulation, yoga, cognitive-behavioral therapy, or progressive relaxation (weak recommendation, moderate-quality evidence).

Abstract

This guidance statement is derived from other organizations' guidelines and is based on an evaluation of the strengths and weaknesses of the available guidelines. We used the Appraisal of Guidelines, Research and Evaluation in Europe (AGREE) appraisal instrument to evaluate the guidelines from various organizations. On the basis of the review of the available guidelines, we recommend: STATEMENT 1: To prevent microvascular complications of diabetes, the goal for glycemic control should be as low as is feasible without undue risk for adverse events or an unacceptable burden on patients. Treatment goals should be based on a discussion of the benefits and harms of specific levels of glycemic control with the patient. A hemoglobin A1c level less than 7% based on individualized assessment is a reasonable goal for many but not all patients. STATEMENT 2: The goal for hemoglobin A1c level should be based on individualized assessment of risk for complications from diabetes, comorbidity, life expectancy, and patient preferences. STATEMENT 3: We recommend further research to assess the optimal level of glycemic control, particularly in the presence of comorbid conditions.

Abstract

We developed a mathematical model to simulate the impact of various partially effective preventive HIV vaccination scenarios in a population at high risk for heterosexually transmitted HIV. We considered an adult population defined by gender (male/female), disease stage (HIV-negative, HIV-positive, AIDS, and death), and vaccination status (unvaccinated/vaccinated) in Soweto, South Africa. Input data included initial HIV prevalence of 20% (women) and 12% (men), vaccination coverage of 75%, and exclusive male negotiation of condom use. We explored how changes in vaccine efficacy and postvaccination condom use would affect HIV prevalence and total HIV infections prevented over a 10-year period. In the base-case scenario, a 40% effective HIV vaccine would avert 61,000 infections and reduce future HIV prevalence from 20% to 13%. A 25% increase (or decrease) in condom use among vaccinated individuals would instead avert 75,000 (or only 46,000) infections and reduce the HIV prevalence to 12% (or only 15%). Furthermore, certain combinations of increased risk behavior and vaccines with <43% efficacy could worsen the epidemic. Even modestly effective HIV vaccines can confer enormous benefits in terms of HIV infections averted and decreased HIV prevalence. However, programs to reduce risk behavior may be important components of successful vaccination campaigns.
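The tradeoff the abstract describes, vaccine protection versus increased risk behavior among the vaccinated, can be illustrated with a deliberately simplified discrete-time susceptible-infected sketch. This is not the authors' model: the population size, initial prevalence, and transmission parameters below are hypothetical, and only the qualitative behavior is meaningful.

```python
def simulate(years, pop, prev0, coverage, efficacy, beta_unvacc, beta_vacc):
    """Toy susceptible-infected model with an imperfect vaccine.

    beta_*: annual infection risk per susceptible, scaled by current
    prevalence; a vaccine with the given efficacy reduces that risk
    for the vaccinated fraction. Returns cumulative new infections.
    """
    sus_u = pop * (1 - prev0) * (1 - coverage)   # unvaccinated susceptibles
    sus_v = pop * (1 - prev0) * coverage         # vaccinated susceptibles
    inf = pop * prev0
    new_total = 0.0
    for _ in range(years):
        prev = inf / pop
        new_u = sus_u * beta_unvacc * prev
        new_v = sus_v * beta_vacc * prev * (1 - efficacy)
        sus_u -= new_u
        sus_v -= new_v
        inf += new_u + new_v
        new_total += new_u + new_v
    return new_total

# Hypothetical inputs loosely echoing the abstract's setting
base = simulate(10, 1_000_000, 0.16, 0.75, 0.0, 0.25, 0.25)   # no vaccine
vacc = simulate(10, 1_000_000, 0.16, 0.75, 0.4, 0.25, 0.25)   # 40% efficacy
riskier = simulate(10, 1_000_000, 0.16, 0.75, 0.4, 0.25, 0.32)  # vaccinated take more risk
```

Comparing `base` and `vacc` gives the infections averted by a 40% effective vaccine; raising the vaccinated group's transmission parameter (`riskier`) erodes that benefit, mirroring the abstract's warning that risk compensation can offset, or even reverse, the gains from a partially effective vaccine.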

Abstract

Responses to bioterrorism require rapid procurement and distribution of medical and pharmaceutical supplies, trained personnel, and information. Thus, they present significant logistical challenges. On the basis of a review of the manufacturing and service supply chain literature, the authors identified five supply chain strategies that can potentially increase the speed of response to a bioterrorism attack, reduce inventories, and save money: effective supply chain network design; effective inventory management; postponement of product customization and modularization of component parts; coordination of supply chain stakeholders and appropriate use of incentives; and effective information management. The authors describe how concepts learned from published evaluations of manufacturing and service supply chains, as well as lessons learned from responses to natural disasters, naturally occurring outbreaks, and the 2001 US anthrax attacks, can be applied to design, evaluate, and improve the bioterrorism response supply chain. Such lessons could also be applied to the response supply chains for disease outbreaks and natural and manmade disasters.

Abstract

Our objective was to systematically review all published case reports of children with anthrax to evaluate the predictors of disease progression and mortality. We searched fourteen selected journal indexes (1900-1966), MEDLINE (1966-2005), and the bibliographies of all retrieved articles for case reports (in any language) of anthrax in persons younger than 18 years published between January 1, 1900, and December 31, 2005. We included cases with symptoms and culture, Gram stain, or autopsy evidence of anthrax infection, and assessed disease progression, treatment responses, and mortality. Of 2499 potentially relevant articles, 73 case reports of pediatric anthrax (5 inhalational cases, 22 gastrointestinal cases, 37 cutaneous cases, 6 cases of primary meningoencephalitis, and 3 atypical cases) met the inclusion criteria. Only 10% of the patients were younger than 2 years, and 24% were girls. Of the few children with inhalational anthrax, none had nonheadache neurologic symptoms, a key finding that distinguishes adult inhalational anthrax from more common illnesses, such as influenza. Overall, observed mortality was 60% (3 of 5) for inhalational anthrax, 65% (13 of 20) for gastrointestinal anthrax, 14% (5 of 37) for cutaneous anthrax, and 100% (6 of 6) for primary meningoencephalitis. Nineteen of the 30 children (63%) who received penicillin-based antibiotics survived, and 9 of the 11 children (82%) who received anthrax antiserum survived. The clinical presentation of children with anthrax is varied. The mortality rate is high in children with inhalational anthrax, gastrointestinal anthrax, and anthrax meningoencephalitis. Rapid diagnosis and effective treatment of anthrax in children require recognition of the broad spectrum of clinical presentations of pediatric anthrax.

Abstract

Breast cancer is one of the most common causes of death for women in their 40s in the United States. Individualized risk assessment plays an important role when making decisions about screening mammography, especially for women 49 years of age or younger. The purpose of this guideline is to present the available evidence for screening mammography in women 40 to 49 years of age and to increase clinicians' understanding of the benefits and risks of screening mammography.

Abstract

This guideline summarizes the current approaches for the diagnosis of venous thromboembolism. The importance of early diagnosis to prevent mortality and morbidity associated with venous thromboembolism cannot be overstressed. This field is highly dynamic, however, and new evidence is emerging periodically that may change the recommendations. The purpose of this guideline is to present recommendations based on current evidence to clinicians to aid in the diagnosis of lower extremity deep venous thrombosis and pulmonary embolism.

Abstract

Early identification of HIV infection is critical for patients to receive life-prolonging treatment and risk-reduction counseling. Understanding HIV screening practices and barriers to HIV testing is an important prelude to designing successful HIV screening programs. Our objective was to evaluate current practice patterns for identification of HIV. We used a retrospective cohort analysis of 13,991 at-risk patients seen at 4 large Department of Veterans Affairs (VA) health-care systems. We also reviewed 1,100 medical records of tested patients. We assessed HIV testing rates among at-risk patients, the rationale for HIV testing, and predictors of HIV testing and of HIV infection. Of the 13,991 patients at risk for HIV, only 36% had been HIV-tested. The prevalence of HIV ranged from 1% to 20% among tested patients at the 4 sites. Approximately 90% of patients who were tested had a documented reason for testing. One-half to two-thirds of patients at risk for HIV had not been tested within our selected VA sites. Among tested patients, the rationale for HIV testing was well documented. Further testing of at-risk patients could clearly benefit patients who have unidentified HIV infection by providing earlier access to life-prolonging therapy.

Abstract

Venous thromboembolism is a common condition affecting 7.1 persons per 10,000 person-years among community residents. Incidence rates for venous thromboembolism are higher in men and African Americans and increase substantially with age. It is critical to treat deep venous thrombosis at an early stage to avoid development of further complications, such as pulmonary embolism or recurrent deep venous thrombosis. The target audience for this guideline is all clinicians caring for patients who have been given a diagnosis of deep venous thrombosis or pulmonary embolism. The target patient population is patients receiving a diagnosis of pulmonary embolism or lower-extremity deep venous thrombosis.

Abstract

Venous thromboembolism is a common condition affecting 7.1 persons per 10,000 person-years among community residents. Incidence rates for venous thromboembolism are higher in men and African Americans and increase substantially with age. It is critical to treat deep venous thrombosis at an early stage to avoid development of further complications, such as pulmonary embolism or recurrent deep venous thrombosis. The target audience for this guideline is all clinicians caring for patients who have been given a diagnosis of deep venous thrombosis or pulmonary embolism. The target patient population is patients receiving a diagnosis of pulmonary embolism or lower-extremity deep venous thrombosis.

Abstract

This guideline summarizes the current approaches for the diagnosis of venous thromboembolism. The importance of early diagnosis to prevent mortality and morbidity associated with venous thromboembolism cannot be overstressed. This field is highly dynamic, however, and new evidence is emerging periodically that may change the recommendations. The purpose of this guideline is to present recommendations based on current evidence to clinicians to aid in the diagnosis of lower extremity deep venous thrombosis and pulmonary embolism.

Abstract

Timely detection of an inhalational anthrax outbreak is critical for clinical and public health management. Syndromic surveillance has received considerable investment, but little is known about how it will perform relative to routine clinical case finding for detection of an inhalational anthrax outbreak. We conducted a simulation study to compare clinical case finding with syndromic surveillance for detection of an outbreak of inhalational anthrax. After simulated release of 1 kg of anthrax spores, the proportion of outbreaks detected first by syndromic surveillance was 0.59 at a specificity of 0.9 and 0.28 at a specificity of 0.975. The mean detection benefit of syndromic surveillance was 1.0 day at a specificity of 0.9 and 0.32 days at a specificity of 0.975. When syndromic surveillance was sufficiently sensitive to detect a substantial proportion of outbreaks before clinical case finding, it generated frequent false alarms.
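The false-alarm burden implied by those specificities can be gauged with back-of-the-envelope arithmetic (an illustration, not part of the study's simulation). If the system is evaluated once per day with no outbreak present, and daily evaluations are treated as independent, a specificity of s implies a false alarm on a fraction 1 - s of days:

```python
def false_alarms_per_year(specificity, evaluations_per_year=365):
    """Expected false alarms, assuming one independent daily test
    with no outbreak present (a simplifying assumption)."""
    return (1 - specificity) * evaluations_per_year

lo_spec = false_alarms_per_year(0.9)    # ~36.5 alarms per year
hi_spec = false_alarms_per_year(0.975)  # ~9.1 alarms per year
```

At a specificity of 0.9, that is roughly 36 false alarms per year, which illustrates the abstract's conclusion that sensitivity sufficient to beat clinical case finding comes at the cost of frequent false alarms.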

Abstract

Our objective was to assess the effectiveness and cost-effectiveness of treating HIV-infected injection drug users (IDUs) and non-IDUs in Russia with highly active antiretroviral therapy (HAART). A dynamic HIV epidemic model was developed for a population of IDUs and non-IDUs. The location for the study was St. Petersburg, Russia. The adult population aged 15 to 49 years was subdivided on the basis of injection drug use and HIV status. HIV treatment targeted to IDUs, treatment targeted to non-IDUs, and untargeted treatment interventions were considered. Health care costs and quality-adjusted life years (QALYs) experienced in the population were measured, and HIV prevalence, HIV infections averted, and incremental cost-effectiveness ratios of different HAART strategies were calculated. With no incremental HAART programs, HIV prevalence reached 64% among IDUs and 1.7% among non-IDUs after 20 years. If treatment were targeted to IDUs, over 40,000 infections would be prevented (75% among non-IDUs), adding 650,000 QALYs at a cost of USD 1501 per QALY gained. If treatment were targeted to non-IDUs, fewer than 10,000 infections would be prevented, adding 400,000 QALYs at a cost of USD 2572 per QALY gained. Untargeted strategies prevented the most infections, adding 950,000 QALYs at a cost of USD 1827 per QALY gained. Our results were sensitive to HIV transmission parameters. Expanded use of antiretroviral therapy in St. Petersburg, Russia would generate enormous population-wide health benefits and be economically efficient. Exclusively treating non-IDUs provided the least health benefit and was the least economically efficient. Our findings highlight the urgency of initiating HAART for both IDUs and non-IDUs in Russia.

Abstract

Our objective was to systematically review the literature about children with anthrax to describe their clinical course, treatment responses, and the predictors of disease progression and mortality. We searched MEDLINE (1966-2005), 14 selected journal indexes (1900-1966), and the bibliographies of all retrieved articles for case reports of pediatric anthrax published between 1900 and 2005 that met predefined criteria. We abstracted three types of data from the English-language reports: (1) patient information (e.g., age, gender, nationality); (2) symptom and disease progression information (e.g., whether the patient developed meningitis); and (3) treatment information (e.g., treatments received, year of treatment). We compared the clinical symptoms and disease progression variables for the pediatric cases with data on adult anthrax cases reviewed previously. We identified 246 titles of potentially relevant articles from our MEDLINE search and 2253 additional references from our manual search of the bibliographies of retrieved articles and the indexes of the 14 selected journals. We included 62 case reports of pediatric anthrax: two inhalational cases, 20 gastrointestinal cases, 37 cutaneous cases, and three atypical cases. Anthrax is a relatively common and historically well-recognized disease, yet it is rarely reported among children, suggesting the possibility of significant underdiagnosis, underreporting, and/or publication bias. Children with anthrax present with a wide range of clinical signs and symptoms, which differ somewhat from the presenting features of adults with anthrax. Like adults, children with gastrointestinal anthrax have two distinct clinical presentations: upper tract disease, characterized by dysphagia and oropharyngeal findings, and lower tract disease, characterized by fever, abdominal pain, and nausea and vomiting. Additionally, children with inhalational disease may have "atypical" presentations, including primary meningoencephalitis.
Children with inhalational anthrax have abnormal chest roentgenograms; however, children with other forms of anthrax usually have normal roentgenograms. Nineteen of the 30 children (63%) who received penicillin-based antibiotics survived, whereas nine of 11 children (82%) who received anthrax antiserum survived. There is a broad spectrum of clinical signs and symptoms associated with pediatric anthrax. The limited data available regarding disease progression and treatment responses for children infected with anthrax suggest some differences from adult populations. Preparedness planning efforts should specifically address the needs of pediatric victims.

Abstract

There have been numerous reports of interventions designed to improve the care of patients with diabetes, but the effectiveness of such interventions is unclear. Our objective was to assess the impact on glycemic control of 11 distinct strategies for quality improvement (QI) in adults with type 2 diabetes. We searched MEDLINE (1966-April 2006) and the Cochrane Collaboration's Effective Practice and Organisation of Care Group database, which covers multiple bibliographic databases. Eligible studies included randomized or quasi-randomized controlled trials and controlled before-after studies that evaluated a QI intervention targeting some aspect of clinician behavior or organizational change and reported changes in glycosylated hemoglobin (HbA1c) values. Postintervention differences in HbA1c values were estimated using a meta-regression model that included baseline glycemic control and other key intervention and study features as predictors. Fifty randomized controlled trials, 3 quasi-randomized trials, and 13 controlled before-after trials met all inclusion criteria. Across these 66 trials, interventions reduced HbA1c values by a mean of 0.42% (95% confidence interval [CI], 0.29%-0.54%) over a median of 13 months of follow-up. Trials with fewer patients than the median for all included trials reported significantly greater effects than did larger trials (0.61% vs 0.27%, P = .004), strongly suggesting publication bias. Trials with mean baseline HbA1c values of 8.0% or greater also reported significantly larger effects (0.54% vs 0.20%, P = .005). Adjusting for these effects, 2 of the 11 categories of QI strategies were associated with reductions in HbA1c values of at least 0.50%: team changes (0.67%; 95% CI, 0.43%-0.91%; n = 26 trials) and case management (0.52%; 95% CI, 0.31%-0.73%; n = 26 trials); these also represented the only 2 strategies conferring significant incremental reductions in HbA1c values.
Interventions involving team changes reduced values by 0.33% more (95% CI, 0.12%-0.54%; P = .004) than those without this strategy, and those involving case management reduced values by 0.22% more (95% CI, 0.00%-0.44%; P = .04) than those without case management. Interventions in which nurse or pharmacist case managers could make medication adjustments without awaiting physician authorization reduced values by 0.80% (95% CI, 0.51%-1.10%), vs only 0.32% (95% CI, 0.14%-0.49%) for all other interventions (P = .002). Most QI strategies produced small to modest improvements in glycemic control. Team changes and case management showed more robust improvements, especially for interventions in which case managers could adjust medications without awaiting physician approval. Estimates of the effectiveness of other specific QI strategies may have been limited by difficulty in classifying complex interventions, insufficient numbers of studies, and publication bias.

Abstract

Care remains suboptimal for many patients with hypertension. The purpose of this study was to assess the effectiveness of quality improvement (QI) strategies in lowering blood pressure. We searched MEDLINE, the Cochrane databases, and article bibliographies, and included trials, controlled before-after studies, and interrupted time series that evaluated QI interventions targeting hypertension control and reported blood pressure outcomes. Two reviewers abstracted data and classified QI strategies into categories: provider education, provider reminders, facilitated relay of clinical information, patient education, self-management, patient reminders, audit and feedback, team change, or financial incentives. Forty-four articles reporting 57 comparisons underwent quantitative analysis. Patients in the intervention groups experienced median reductions in systolic blood pressure (SBP) and diastolic blood pressure (DBP) that were 4.5 mm Hg (interquartile range [IQR]: 1.5 to 11.0) and 2.1 mm Hg (IQR: -0.2 to 5.0) greater than those observed for control patients. Median increases in the percentage of individuals achieving target goals for SBP and DBP were 16.2% (IQR: 10.3 to 32.2) and 6.0% (IQR: 1.5 to 17.5). Interventions that included team change as a QI strategy were associated with the largest reductions in blood pressure outcomes. All team change studies included assignment of some responsibilities to a health professional other than the patient's physician. Not all QI strategies have been assessed equally, which limits the power to compare differences in effects between strategies. QI strategies are associated with improved hypertension control. A focus on hypertension by someone in addition to the patient's physician was associated with substantial improvement. Future research should examine the contributions of individual QI strategies and their relative costs.

Abstract

Teriparatide is a promising new agent for the treatment of osteoporosis. The objective of this study was to evaluate the cost-effectiveness of teriparatide-based strategies compared with alendronate sodium for the first-line treatment of high-risk osteoporotic women. We developed a microsimulation model with a societal perspective. Key data sources include the Study of Osteoporotic Fractures, the Fracture Intervention Trial, and the Fracture Prevention Trial. We evaluated postmenopausal white women with low bone density and prevalent vertebral fracture. The interventions were usual care (UC) (calcium or vitamin D supplementation) compared with 3 strategies: 5 years of alendronate therapy, 2 years of teriparatide therapy, and 2 years of teriparatide therapy followed by 5 years of alendronate therapy (sequential teriparatide/alendronate). The main outcome measure was cost per quality-adjusted life-year (QALY). For the base-case analysis, the cost of alendronate treatment was 11,600 dollars per QALY compared with UC. The cost of sequential teriparatide/alendronate therapy was 156,500 dollars per QALY compared with alendronate. Teriparatide treatment alone was more expensive and produced a smaller increase in QALYs than alendronate. In sensitivity analysis, teriparatide alone was less cost-effective than alendronate even if its efficacy lasted 15 years after treatment cessation. Sequential teriparatide/alendronate therapy was less cost-effective than alendronate even if fractures were eliminated during the alendronate phase, although its cost-effectiveness was less than 50,000 dollars per QALY if the price of teriparatide decreased by 60%, if the therapy was used in elderly women with T scores of -4.0 or less, or if 6 months of teriparatide therapy had efficacy comparable to 2 years of treatment. Alendronate compares favorably to interventions accepted as cost-effective. Therapy with teriparatide alone is more expensive and produces a smaller increase in QALYs than therapy with alendronate.
Sequential teriparatide/alendronate therapy appears expensive but could become more cost-effective with reductions in teriparatide price, with restriction to use in exceptionally high-risk women, or if short courses of treatment have efficacy comparable to that observed in clinical trials.

Abstract

There is increased interest in quantitative ultrasound for osteoporosis screening because it predicts fracture risk, is portable, and is relatively inexpensive. However, there is no consensus regarding its accuracy for identifying patients with osteoporosis. Our objective was to determine the sensitivity and specificity of calcaneal quantitative ultrasound for identifying patients who meet the World Health Organization's diagnostic criteria for osteoporosis, with dual-energy x-ray absorptiometry (DXA) as the reference standard. We searched MEDLINE (1966 to October 2005), EMBASE (1993 to May 2004), the Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews (1952 to March 2004), and the Science Citation Index (1945 to April 2004) for English-language articles that evaluated the sensitivity and specificity of calcaneal quantitative ultrasound for identifying adults with DXA T-scores of -2.5 or less at the hip or spine. Two authors independently reviewed articles and abstracted data. The authors identified 1908 potentially relevant articles, of which 25 met the inclusion criteria, and calculated the sensitivity and specificity of quantitative ultrasound over a range of thresholds. For the quantitative ultrasound index parameter T-score cutoff threshold of -1, sensitivity was 79% (95% CI, 69% to 86%) and specificity was 58% (CI, 44% to 70%) for identifying individuals with DXA T-scores of -2.5 or less at the hip or spine. For a T-score threshold of 0, sensitivity improved to 93% (CI, 87% to 97%) but specificity decreased to 24% (CI, 10% to 47%). At a pretest probability of 22% (for example, a 65-year-old white woman at average risk), the post-test probability of DXA-determined osteoporosis was 34% (CI, 26% to 41%) after a positive result and 10% (CI, 5% to 12%) after a negative result when using a T-score cutoff threshold of -1.
Analysis of other quantitative ultrasound parameters (for example, broadband ultrasound attenuation) revealed similar estimates of accuracy. The relatively small number of included studies limited the authors' ability to evaluate the effects of heterogeneous study characteristics on the diagnostic accuracy of quantitative ultrasound. The currently available literature suggests that results of calcaneal quantitative ultrasound at commonly used cutoff thresholds do not definitively exclude or confirm DXA-determined osteoporosis. Additional research is needed before use of this test can be recommended in evidence-based screening programs for osteoporosis.
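The post-test probabilities quoted in the abstract are an application of Bayes' theorem to the pooled sensitivity and specificity. A minimal sketch using the abstract's point estimates (22% pretest probability, 79% sensitivity, 58% specificity) comes out close to the reported 34% and 10%; small differences reflect rounding of the inputs:

```python
def post_test_probability(pretest, sensitivity, specificity, result_positive):
    """Probability of disease after a test result, via Bayes' theorem."""
    if result_positive:
        true_pos = sensitivity * pretest
        false_pos = (1 - specificity) * (1 - pretest)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * pretest
    true_neg = specificity * (1 - pretest)
    return false_neg / (false_neg + true_neg)

# Abstract's scenario: 22% pretest probability, T-score cutoff of -1
p_pos = post_test_probability(0.22, 0.79, 0.58, True)   # ~0.35, vs 34% reported
p_neg = post_test_probability(0.22, 0.79, 0.58, False)  # ~0.09, vs 10% reported
```

The modest shift from 22% pretest to roughly 35% post-test after a positive result is the quantitative basis for the conclusion that the test neither confirms nor excludes DXA-determined osteoporosis.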

Abstract

Postoperative pulmonary complications are an important component of risk for patients undergoing noncardiothoracic surgery. They are as prevalent as cardiac complications and contribute similarly to morbidity, mortality, and length of stay; pulmonary complications may even be more likely than cardiac complications to predict long-term mortality after surgery. The purpose of this guideline is to provide guidance to clinicians on clinical and laboratory predictors of perioperative pulmonary risk before noncardiothoracic surgery. It also evaluates strategies to reduce perioperative pulmonary risk, focusing on atelectasis, pneumonia, and respiratory failure. The target audience for this guideline is general internists and other clinicians involved in the perioperative management of surgical patients. The target patient population is all adults undergoing noncardiothoracic surgery.

Abstract

Mortality from inhalational anthrax during the 2001 U.S. attack was substantially lower than that reported historically. Our objective was to systematically review all published inhalational anthrax case reports to evaluate the predictors of disease progression and mortality. We searched MEDLINE (1966-2005), 14 selected journal indexes (1900-1966), and the bibliographies of all retrieved articles for case reports (in any language) between 1900 and 2005 that met predefined criteria. Two authors (1 author for non-English-language reports) independently abstracted patient data. The authors found 106 reports of 82 cases of inhalational anthrax. Mortality was statistically significantly lower for patients receiving antibiotics or anthrax antiserum during the prodromal phase of disease, multidrug antibiotic regimens, or pleural fluid drainage. Patients in the 2001 U.S. attack were less likely to die than historical anthrax case-patients (45% vs. 92%; P < 0.001) and were more likely to receive antibiotics during the prodromal phase (64% vs. 13%; P < 0.001), multidrug regimens (91% vs. 50%; P = 0.027), or pleural fluid drainage (73% vs. 11%; P < 0.001). Patients who progressed to the fulminant phase had a mortality rate of 97%, regardless of the treatment they received, and all patients with anthrax meningoencephalitis died. This analysis was limited by its retrospective review of previously published heterogeneous case reports. Despite advances in supportive care, fulminant-phase inhalational anthrax is usually fatal. Initiation of antibiotic or anthrax antiserum therapy during the prodromal phase is associated with markedly improved survival, although other aspects of care, differences in clinical circumstances, or unreported factors may contribute to this observed reduction in mortality. Efforts to improve early diagnosis and timely initiation of appropriate antibiotics are critical to reducing mortality.

Abstract

We sought to understand how diagnosis with HIV affects health-related quality of life. We assessed health-related quality of life using utility-based measures in a Department of Veterans Affairs (VA) clinic and a university-based clinic. Respondents assessed health-related quality of life for their current health and retrospectively assessed their health 1 month before and 2 months after diagnosis with HIV infection. Sixty-six patients completed the study. The overall mean utilities for health 1 month before and 2 months after diagnosis were 0.87 (standard error, 0.037) and 0.80 (0.043) (p < 0.005 by signed-rank test), but the effect of diagnosis differed between the two clinics, with a substantial decrease in the university clinic and a small, nonsignificant decrease in the VA clinic. The overall mean utility for current health was 0.85 (0.034), assessed on average 7.5 years after diagnosis. When asked directly whether diagnosis of HIV decreased health-related quality of life, 47% agreed, but 35% stated that HIV diagnosis positively affected health-related quality of life. Diagnosis with HIV decreased health-related quality of life at 2 months on average, but this effect diminished over time and differed among patient populations. Years after diagnosis, although half of the patients believed that diagnosis reduced health-related quality of life, one-third reported improved health-related quality of life.

Abstract

Most panels that develop clinical practice guidelines are poorly equipped to address resource allocation or cost issues associated with management options. This risks neglect, arbitrariness, lack of transparency, and methodological flaws in consideration of resource allocation. We provide recommendations for guideline panels to promote greater transparency and rigor. We suggest focusing on resource allocation issues for only a limited number of recommendations and provide criteria for selecting those in which economic considerations are likely to influence the direction or strength of the recommendation. Panels should involve a health economist to assist with the systematic review and critical interpretation of relevant economic analyses. They should carefully define the intended audience and may consider issuing alternative recommendations when available resources vary widely across target clinical settings. Targeting a limited number of recommendations for the consideration of resource allocation issues, and ensuring methodologically high-quality review, will best serve guideline panels, and the health-care providers and patients they hope to assist.

Abstract

Economic outcomes are now included in many contemporary randomized trials and provide an additional dimension to the assessment of interventions. Economic data collection and analysis pose several methodologic challenges, however. This paper reviews methods of incorporating economic outcomes in clinical trials. Data on medical resource utilization and cost can readily be collected along with data on clinical outcomes. The cost of planned interventions can be measured with reasonable accuracy, but costs due to unplanned clinical events are more difficult to measure reliably. The total cost depends critically on these relatively infrequent, yet costly, adverse outcomes, which may partially, or even completely, offset any difference between the planned costs of the randomized therapies. Newer therapies are typically more expensive than older therapies, so the most important question is whether patient outcomes are improved sufficiently to justify the added expense. Cost-effectiveness analysis helps gauge the value provided by a new therapy. The cost-effectiveness of an intervention compared with an alternative is defined as the ratio of the incremental costs to the incremental clinical benefits, measured as dollars per quality-adjusted life-year added. The follow-up period in most clinical trials is generally long enough to measure the added cost of therapy but may not capture the full benefits of treatment. The limited time horizon of clinical trials makes it necessary to use a model to extrapolate the observed effect of treatment and project the increase in life expectancy. The resulting cost-effectiveness ratio is sensitive to assumptions about the long-term efficacy of treatment, particularly whether the treatment effect will continue or dissipate over time. Economic outcomes can be measured alongside clinical outcomes in randomized trials.
While the use of cost-effectiveness models falls outside the strictly empirical, within-trial analysis framework embraced by most clinical trialists, it offers an explicit approach to assessing whether the intervention under study yields a clinically meaningful improvement in outcome that is worth the added cost.
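The ratio defined in this abstract, incremental cost over incremental benefit, reduces to one line of arithmetic. The therapy costs and QALY values below are hypothetical, chosen only to show the computation:

```python
def icer(cost_new, cost_std, qalys_new, qalys_std):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_std) / (qalys_new - qalys_std)

# Hypothetical comparison: the new therapy costs 30,000 dollars more
# and adds 0.5 QALYs relative to standard care.
ratio = icer(70_000, 40_000, 4.5, 4.0)  # 60,000 dollars per QALY gained
```

Extending the time horizon by model-based extrapolation typically enlarges the QALY denominator while most of the added cost is already incurred, which is why the long-term efficacy assumptions the abstract describes can move the ratio substantially.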

Abstract

A critical question in planning a response to bioterrorism is how antibiotics and medical supplies should be stockpiled and dispensed. The objective of this work was to evaluate the costs and benefits of alternative strategies for maintaining and dispensing local and regional inventories of antibiotics and medical supplies for responses to anthrax bioterrorism. We modeled the regional and local supply chain for antibiotics and medical supplies as well as local dispensing capacity. We found that mortality was highly dependent on the local dispensing capacity, the number of individuals requiring prophylaxis, adherence to prophylactic antibiotics, and delays in attack detection. For an attack exposing 250,000 people and requiring the prophylaxis of 5 million people, expected mortality fell from 243,000 to 145,000 as the dispensing capacity increased from 14,000 to 420,000 individuals per day. At low dispensing capacities (<14,000 individuals per day), nearly all exposed individuals died, regardless of the rate of adherence to prophylaxis, delays in attack detection, or availability of local inventories. No benefit was achieved by doubling local inventories at low dispensing capacities; however, at higher dispensing capacities, the cost-effectiveness of doubling local inventories fell from 100,000 US dollars to 20,000 US dollars per life-year gained as the annual probability of an attack increased from 0.0002 to 0.001. We conclude that because of the reportedly rapid availability of regional inventories, the critical determinant of mortality following anthrax bioterrorism is local dispensing capacity. Bioterrorism preparedness efforts directed at improving local dispensing capacity are required before benefits can be reaped from enhancing local inventories.

Abstract

Eight randomized trials have evaluated whether the prophylactic use of an implantable cardioverter-defibrillator (ICD) improves survival among patients who are at risk for sudden death due to left ventricular systolic dysfunction but who have not had a life-threatening ventricular arrhythmia. We assessed the cost-effectiveness of the ICD in the populations represented in these primary-prevention trials. We developed a Markov model of the cost, quality of life, survival, and incremental cost-effectiveness of the prophylactic implantation of an ICD, as compared with control therapy, among patients with survival and mortality rates similar to those in each of the clinical trials. We modeled the efficacy of the ICD as a reduction in the relative risk of death on the basis of the hazard ratios reported in the individual clinical trials. Use of the ICD increased lifetime costs in every trial. Two trials--the Coronary Artery Bypass Graft (CABG) Patch Trial and the Defibrillator in Acute Myocardial Infarction Trial (DINAMIT)--found that the prophylactic implantation of an ICD did not reduce the risk of death and thus was both more expensive and less effective than control therapy. For the other six trials--the Multicenter Automatic Defibrillator Implantation Trial (MADIT) I, MADIT II, the Multicenter Unsustained Tachycardia Trial (MUSTT), the Defibrillators in Non-Ischemic Cardiomyopathy Treatment Evaluation (DEFINITE) trial, the Comparison of Medical Therapy, Pacing, and Defibrillation in Heart Failure (COMPANION) trial, and the Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT)--the use of an ICD was projected to add between 1.01 and 2.99 quality-adjusted life-years (QALYs) and between 68,300 dollars and 101,500 dollars in cost. Using base-case assumptions, we found that the cost-effectiveness of the ICD as compared with control therapy in these six populations ranged from 34,000 dollars to 70,200 dollars per QALY gained. Sensitivity analyses showed that this cost-effectiveness ratio would remain below 100,000 dollars per QALY as long as the ICD reduced mortality for seven or more years. Prophylactic implantation of an ICD has a cost-effectiveness ratio below 100,000 dollars per QALY gained in populations in which a significant device-related reduction in mortality has been demonstrated.
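
The headline figures here are incremental cost-effectiveness ratios: the extra lifetime cost of the ICD divided by the extra QALYs it adds relative to control therapy. A minimal sketch, using illustrative values inside the ranges the abstract reports rather than any trial-specific result:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    if delta_qaly <= 0:
        # The comparator is at least as effective; a ratio is not meaningful.
        raise ValueError("comparator dominates: no QALYs gained")
    return delta_cost / delta_qaly

# Illustrative numbers inside the ranges the abstract reports
# (incremental cost $68,300-$101,500; incremental benefit 1.01-2.99 QALYs):
print(round(icer(68_300, 2.0)))  # 34150
```

Trials like CABG Patch and DINAMIT, where the ICD was costlier and no more effective, correspond to the dominated case in which no ratio is reported.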

Abstract

Hereditary hemochromatosis is a genetic disorder of iron metabolism. Diagnosis of hereditary hemochromatosis is usually based on a combination of various genetic or phenotypic criteria. Decisions regarding screening are difficult because of the variable penetrance of mutations of the HFE gene and the absence of any definitive trials addressing the benefits and risks of therapeutic phlebotomy in asymptomatic patients or those with only laboratory abnormalities. The purpose of this guideline is to increase physician awareness of hereditary hemochromatosis, particularly the variable penetrance of genetic mutations; aid in case finding; and explain the role of genetic testing. This guideline provides recommendations based on a review of evidence in the accompanying background paper by Schmitt and colleagues. The target audience for this guideline is internists and other primary care physicians. The target patient population is all persons at risk for or susceptible to developing hereditary hemochromatosis, including the relatives of individuals who already have the disease.

Abstract

Syndromic surveillance offers the potential to rapidly detect outbreaks resulting from terrorism. Despite considerable experience with implementing syndromic surveillance, limited evidence exists to describe the performance of syndromic surveillance systems in detecting outbreaks. Our objective was to describe a model for simulating cases that might result from exposure to inhalational anthrax and then to use the model to evaluate the ability of syndromic surveillance to detect an outbreak of inhalational anthrax after an aerosol release. Disease progression and health-care use were simulated for persons infected with anthrax. Simulated cases were then superimposed on authentic surveillance data to create test data sets. A temporal outbreak detection algorithm was applied to each test data set, and the sensitivity and timeliness of outbreak detection were calculated. The earliest detection using a temporal algorithm was 2 days after a release. Earlier detection tended to occur when more persons were infected, and performance worsened as the proportion of persons seeking care in the prodromal disease state declined. A shorter median incubation state led to earlier detection, as soon as 1 day after release when the median incubation state was 5 days or less. Syndromic surveillance of a respiratory syndrome using a temporal detection algorithm tended to detect an anthrax attack within 3-4 days after exposure if more than 10,000 persons were infected. The performance of surveillance (i.e., timeliness and sensitivity) worsened as the number of persons infected decreased.
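
A temporal aberration detector of the general kind described can be sketched as a trailing-window z-score rule over daily syndrome counts. This is a generic illustration; the study's actual algorithm and thresholds may differ:

```python
import statistics

def detect_outbreak(daily_counts, window=7, z_threshold=3.0):
    """Return the index of the first day whose count exceeds the trailing
    window's mean by more than z_threshold standard deviations, or None.
    (A generic temporal aberration detector, not the study's algorithm.)"""
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline) or 1.0  # guard against a flat baseline
        if (daily_counts[day] - mean) / sd > z_threshold:
            return day
    return None

# Baseline respiratory visits of ~100/day with simulated anthrax cases
# superimposed starting at index 8 (all numbers invented):
counts = [101, 99, 100, 102, 98, 100, 101, 100, 140, 220]
print(detect_outbreak(counts))  # 8
```

As in the study, detection timeliness depends on how sharply the superimposed cases rise above the authentic baseline.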

Abstract

Some important health policy topics, such as those related to the delivery, organization, and financing of health care, present substantial challenges to established methods for evidence synthesis. For example, such reviews may ask: What is the effect of for-profit versus not-for-profit delivery of care on patient outcomes? Or, which strategies are the most effective for promoting preventive care? This paper describes innovative methods for synthesizing evidence related to the delivery, organization, and financing of health care. We found 13 systematic reviews on these topics that described novel methodologic approaches. Several of these syntheses used 3 approaches: conceptual frameworks to inform problem formulation, systematic searches that included nontraditional literature sources, and hybrid synthesis methods that included simulations to address key gaps in the literature. As the primary literature on these topics expands, so will opportunities to develop additional novel methods for performing high-quality comprehensive syntheses.

Abstract

Weaponized Bacillus anthracis is one of the few biological agents that can cause death and disease in sufficient numbers to devastate an urban setting. Our objective was to evaluate the cost-effectiveness of strategies for prophylaxis and treatment after an aerosolized B. anthracis bioterror attack, using a decision-analytic model with a societal perspective and a patient-lifetime time horizon. We derived probabilities of anthrax exposure, vaccine and treatment characteristics, and their costs and associated clinical outcomes from the medical literature and bioterrorism-preparedness experts. The target population was persons living and working in a large metropolitan U.S. city. We evaluated 4 postattack strategies (no prophylaxis, vaccination alone, antibiotic prophylaxis alone, or vaccination and antibiotic prophylaxis), as well as preattack vaccination versus no vaccination; outcome measures were costs, quality-adjusted life-years, life-years, and incremental cost-effectiveness. If an aerosolized B. anthracis bioweapon attack occurs, postexposure prophylactic vaccination and antibiotic therapy for those potentially exposed is the most effective (0.33 life-year gained per person) and least costly (355 dollars saved per person) strategy, as compared with vaccination alone. At low baseline probabilities of attack and exposure, mass previous vaccination of a metropolitan population is more costly (815 million dollars for a city of 5 million people) and not more effective than no vaccination. If prophylactic antibiotics cannot be promptly distributed after exposure, previous vaccination may become cost-effective. A limitation is that the probability of exposure and disease critically depends on the probability and mechanism of bioweapon release. In the event of an aerosolized B. anthracis bioweapon attack over an unvaccinated metropolitan U.S. population, postattack prophylactic vaccination and antibiotic therapy is the most effective and least expensive strategy.

Abstract

The costs, benefits, and cost-effectiveness of screening for human immunodeficiency virus (HIV) in health care settings during the era of highly active antiretroviral therapy (HAART) have not been determined. We developed a Markov model of costs, quality of life, and survival associated with an HIV-screening program as compared with current practice. In both strategies, symptomatic patients were identified through symptom-based case finding. Identified patients started treatment when their CD4 count dropped to 350 cells per cubic millimeter. Disease progression was defined on the basis of CD4 levels and viral load. The likelihood of sexual transmission was based on viral load, knowledge of HIV status, and efficacy of counseling. Given a 1 percent prevalence of unidentified HIV infection, screening increased life expectancy by 5.48 days, or 4.70 quality-adjusted days, at an estimated cost of 194 dollars per screened patient, for a cost-effectiveness ratio of 15,078 dollars per quality-adjusted life-year. Screening cost less than 50,000 dollars per quality-adjusted life-year if the prevalence of unidentified HIV infection exceeded 0.05 percent. Excluding HIV transmission, the cost-effectiveness of screening was 41,736 dollars per quality-adjusted life-year. Screening every five years, as compared with a one-time screening program, cost 57,138 dollars per quality-adjusted life-year, but was more attractive in settings with a high incidence of infection. Our results were sensitive to the efficacy of behavior modification, the benefit of early identification and therapy, and the prevalence and incidence of HIV infection. The cost-effectiveness of routine HIV screening in health care settings, even in relatively low-prevalence populations, is similar to that of commonly accepted interventions, and such programs should be expanded.
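
Markov models like the one described track a cohort through health states cycle by cycle, accumulating discounted QALYs and costs in each state. A minimal sketch with made-up states and parameters, not the published model's CD4/viral-load structure:

```python
def markov_cohort(transitions, qaly_weights, costs, cycles, discount=0.03):
    """Run a cohort through a Markov model and return total discounted
    QALYs and costs per person. States, probabilities, utilities, and
    costs here are illustrative, not the published model's inputs."""
    dist = [1.0] + [0.0] * (len(qaly_weights) - 1)  # everyone starts in state 0
    total_qaly = total_cost = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t  # annual discount factor
        total_qaly += d * sum(p * q for p, q in zip(dist, qaly_weights))
        total_cost += d * sum(p * c for p, c in zip(dist, costs))
        # Advance the cohort one cycle: dist' = dist * transition matrix.
        dist = [sum(dist[i] * transitions[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return total_qaly, total_cost

# Hypothetical states: asymptomatic HIV, symptomatic HIV, dead (absorbing).
T = [[0.90, 0.08, 0.02],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]
qalys, cost = markov_cohort(T, [0.94, 0.70, 0.0], [3_000, 15_000, 0], cycles=30)
```

Comparing two such runs (screening versus current practice) yields exactly the incremental cost-per-QALY ratios the abstract reports.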

Abstract

The cost-effectiveness of cardiopulmonary resuscitation (CPR) and defibrillation training for laypersons unselected for risk of encountering cases of cardiac arrest is not known. We compared the costs and health benefits of alternative resuscitation training strategies for adults without professional first-responder duties who are at average risk of encountering cases of out-of-hospital cardiac arrest. We constructed a cost-effectiveness analytic model. Data on cardiac arrest epidemiology and the effectiveness of CPR/defibrillation training were obtained from the medical literature. Instructional costs were determined from a survey of training programs. Downstream cardiac arrest survivor quality-adjusted life expectancy and long-term health care costs were derived from prior studies. We compared three strategies for training unselected laypersons: CPR/defibrillation training alone, training combined with home defibrillator purchase, and no training. The main outcome measures were total instructional costs for trainees combined with health care costs for additional cardiac arrest survivors, and quality-adjusted survival for additional patients resuscitated by trainees. CPR/defibrillation training yielded 2.7 quality-adjusted hours of life at a cost of 62 US dollars per trainee (202,400 US dollars per quality-adjusted life-year [QALY] gained). Training laypersons in CPR/defibrillation with subsequent defibrillator purchase cost 2,489,700 US dollars per QALY. In contrast, CPR/defibrillation training cost less than 75,000 US dollars per QALY if trainees lived with persons older than 75 years or with persons who had cardiac disease, or if total training costs were less than 10 US dollars. Training unselected laypersons in CPR/defibrillation is costly compared with other public health initiatives. Conversely, training laypersons selected by occupation, by low training costs, or by having high-risk household companions is substantially more efficient.
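
The per-trainee arithmetic behind the headline ratio is worth making explicit: 2.7 quality-adjusted hours is a tiny fraction of a QALY, so even a modest 62-dollar training cost implies a large cost per QALY. This back-of-envelope calculation reproduces the reported figure to within rounding:

```python
HOURS_PER_YEAR = 8766  # average year, including leap days

cost_per_trainee = 62   # US dollars, from the abstract
benefit_hours = 2.7     # quality-adjusted hours of life per trainee

benefit_qaly = benefit_hours / HOURS_PER_YEAR
cost_per_qaly = cost_per_trainee / benefit_qaly
# ~201,300 US dollars per QALY -- close to the reported 202,400, which
# presumably reflects unrounded inputs.
print(round(cost_per_qaly))
```

The same arithmetic shows why selecting trainees matters: multiplying the expected benefit per trainee by even a few-fold (high-risk household companions) or cutting training cost to 10 dollars drops the ratio by an order of magnitude.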

Abstract

The implantable cardioverter defibrillator (ICD) is a costly new treatment for patients at high risk of sudden cardiac death. Randomized trials of the ICD showed it to be effective in some groups of patients but not in others. While new trials testing the ICD were ongoing to clarify the evidence, policymakers faced immediate decisions about providing insurance coverage for the device. The high cost of ICDs, the large population of patients potentially eligible to receive them, the potential to reduce preventable deaths, and the unsettled state of the medical evidence provided a challenge to evidence-based medicine and to policymakers.

Abstract

Photodynamic therapy appears to be effective in ablating high-grade dysplasia in Barrett's esophagus. Our aim was to identify the most effective and cost-effective strategy for managing high-grade dysplasia in Barrett's esophagus without associated endoscopically visible abnormalities. Using decision analysis, we estimated the lifetime costs and benefits of 4 strategies for which long-term data exist: esophagectomy; endoscopic surveillance; photodynamic therapy followed by esophagectomy for residual high-grade dysplasia; and photodynamic therapy followed by endoscopic surveillance for residual high-grade dysplasia. We assumed a 30% prevalence of cancer in patients with high-grade dysplasia and a 77% efficacy of photodynamic therapy for high-grade dysplasia and early cancer. Esophagectomy cost 24,045 dollars, with a life expectancy of 11.82 quality-adjusted life-years. In comparison, photodynamic therapy followed by surveillance for residual high-grade dysplasia was the most effective strategy, with a quality-adjusted life expectancy of 12.31 quality-adjusted life-years, but it also incurred the greatest lifetime cost (47,310 dollars), for an incremental cost-effectiveness ratio of 47,410 dollars per quality-adjusted life-year. The results were sensitive to postsurgical quality of life and survival, and to cancer prevalence if photodynamic therapy efficacy for cancer was less than 50%. Photodynamic therapy followed by endoscopic surveillance for residual high-grade dysplasia appears to be cost-effective compared with esophagectomy for patients diagnosed with high-grade dysplasia in Barrett's esophagus. Clinical trials directly comparing these strategies are warranted.

Abstract

In 1999, the American College of Physicians (ACP), then the American College of Physicians-American Society of Internal Medicine, and the American College of Cardiology/American Heart Association (ACC/AHA) developed joint guidelines on the management of patients with chronic stable angina. The ACC/AHA then published an updated guideline in 2002, which ACP recognized as a scientifically valid review of the evidence and background paper. This ACP guideline summarizes the recommendations of the 2002 ACC/AHA updated guideline and underscores the recommendations most likely to be important to physicians seeing patients in the primary care setting. This guideline is the second of 2 that provide guidance on the management of patients with chronic stable angina. This document covers treatment and follow-up of symptomatic patients who have not had an acute myocardial infarction or revascularization procedure in the previous 6 months. Sections addressing asymptomatic patients are also included. Asymptomatic refers to patients with known or suspected coronary disease based on a history or electrocardiographic evidence of previous myocardial infarction, coronary angiography, or abnormal results on noninvasive tests. A previous guideline covered diagnosis and risk stratification for the same symptomatic and asymptomatic patient populations.

Abstract

In 1999, the American College of Physicians (ACP), then the American College of Physicians-American Society of Internal Medicine, and the American College of Cardiology/American Heart Association (ACC/AHA) developed joint guidelines on the management of patients with chronic stable angina. The ACC/AHA then published an updated guideline in 2002, which the ACP recognized as a scientifically valid review of the evidence and background paper. This ACP guideline summarizes the recommendations of the 2002 ACC/AHA updated guideline and underscores the recommendations most likely to be important to physicians seeing patients in the primary care setting. This guideline is the first of 2 that will provide guidance on the management of patients with chronic stable angina. This document will cover diagnosis and risk stratification for symptomatic patients who have not had an acute myocardial infarction or revascularization procedure in the previous 6 months. Sections addressing asymptomatic patients are also included. Asymptomatic refers to patients with known or suspected coronary disease based on history or on electrocardiographic evidence of previous myocardial infarction, coronary angiography, or abnormal results on noninvasive tests. A future guideline will cover pharmacologic therapy and follow-up.

Abstract

Clopidogrel is more effective than aspirin in preventing recurrent vascular events, but concerns about its cost-effectiveness have limited its use. We evaluated the cost-effectiveness of clopidogrel and aspirin as secondary prevention in patients with a prior myocardial infarction, a prior stroke, or peripheral arterial disease. We constructed Markov models assuming a societal perspective, and based analyses on the lifetime treatment of a 63-year-old patient facing event probabilities derived from the Clopidogrel versus Aspirin in Patients at Risk of Ischemic Events (CAPRIE) trial as the base case. Outcome measures included costs, life expectancy in quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and events averted. In patients with peripheral arterial disease, clopidogrel increased life expectancy by 0.55 QALYs at an incremental cost-effectiveness ratio of $25,100 per QALY, as compared with aspirin. In poststroke patients, clopidogrel increased life expectancy by 0.17 QALYs at a cost of $31,200 per QALY. Aspirin was both less expensive and more effective than clopidogrel in post-myocardial infarction patients. In probabilistic sensitivity analyses, our evaluation for patients with peripheral vascular disease was robust. Evaluations of stroke and myocardial infarction patients were sensitive predominantly to the cost and efficacy of clopidogrel, with aspirin therapy more effective and less expensive in 153 of 1000 simulations (15.3%) in poststroke patients and clopidogrel more effective in 119 of 1000 simulations (11.9%) in the myocardial infarction sample. Clopidogrel provides a substantial increase in quality-adjusted life expectancy at a cost that is within traditional societal limits for patients with either peripheral arterial disease or a recent stroke. Current evidence does not support increased efficacy with clopidogrel relative to aspirin in patients following myocardial infarction.
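
The probabilistic sensitivity analysis reported here reruns the model many times with inputs drawn from distributions and counts how often each conclusion holds. A toy version; the distributions below are invented for illustration and are not the CAPRIE-derived inputs:

```python
import random

def psa_dominance_fraction(n_sims=1000, seed=0):
    """Toy probabilistic sensitivity analysis: sample the incremental cost
    and incremental QALYs of drug A vs. drug B from hypothetical normal
    distributions and count how often the comparator dominates (is both
    cheaper and more effective)."""
    rng = random.Random(seed)
    dominated = 0
    for _ in range(n_sims):
        d_cost = rng.gauss(5_000, 2_000)  # A minus B, dollars (hypothetical)
        d_qaly = rng.gauss(0.17, 0.15)    # A minus B, QALYs (hypothetical)
        if d_cost > 0 and d_qaly < 0:     # B cheaper AND more effective
            dominated += 1
    return dominated / n_sims

frac = psa_dominance_fraction()
```

Counting simulations this way is how figures like "aspirin more effective and less expensive in 153 of 1000 simulations" are produced.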

Abstract

Given the threat of bioterrorism and the increasing availability of electronic data for surveillance, surveillance systems for the early detection of illnesses and syndromes potentially related to bioterrorism have proliferated. Our objective was to critically evaluate the potential utility of existing surveillance systems for illnesses and syndromes related to bioterrorism. Data sources were databases of peer-reviewed articles (for example, MEDLINE for articles published from January 1985 to April 2002) and Web sites of relevant government and nongovernment agencies. We selected reports that described or evaluated systems for collecting, analyzing, or presenting surveillance data for bioterrorism-related illnesses or syndromes. From each included article, the authors abstracted information about the type of surveillance data collected; the method of collection, analysis, and presentation of surveillance data; and the outcomes of evaluations of the system. 17,510 article citations and 8088 government and nongovernmental Web sites were reviewed. From these, the authors included 115 systems that collect various surveillance reports, including 9 syndromic surveillance systems, 20 systems collecting bioterrorism detector data, 13 systems collecting influenza-related data, and 23 systems collecting laboratory and antimicrobial resistance data. Only the systems collecting syndromic surveillance data and detection system data were designed, at least in part, for bioterrorism preparedness applications. Syndromic surveillance systems have been deployed for both event-based and continuous bioterrorism surveillance. Few surveillance systems have been comprehensively evaluated. Only 3 systems have had both sensitivity and specificity evaluated. A limitation is that data from some existing surveillance systems (particularly those developed by the military) may not be publicly available. Few surveillance systems have been specifically designed for collecting and analyzing data for the early detection of a bioterrorist event. Because current evaluations of surveillance systems for detecting bioterrorism and emerging infections are insufficient to characterize their timeliness, sensitivity, or specificity, clinical and public health decision making based on these systems may be compromised.

Abstract

The authors sought to develop a conceptual framework for evaluating whether existing information technologies and decision support systems (IT/DSSs) would assist the key decisions faced by clinicians and public health officials preparing for and responding to bioterrorism. They reviewed reports of natural and bioterrorism-related infectious outbreaks, bioterrorism preparedness exercises, and advice from experts to identify the key decisions, tasks, and information needs of clinicians and public health officials during a bioterrorism response. The authors used task decomposition to identify the subtasks and data requirements of IT/DSSs designed to facilitate a bioterrorism response. They used the results of the task decomposition to develop evaluation criteria for IT/DSSs for bioterrorism preparedness. They then applied these evaluation criteria to 341 reports of 217 existing IT/DSSs that could be used to support a bioterrorism response. In response to bioterrorism, clinicians must make decisions in 4 critical domains (diagnosis, management, prevention, and reporting to public health), and public health officials must make decisions in 4 other domains (interpretation of bioterrorism surveillance data, outbreak investigation, outbreak control, and communication). The time horizons and utility functions for these decisions differ. From the task decomposition, the authors identified critical subtasks for each of the 8 decisions. For example, interpretation of diagnostic tests is an important subtask of diagnostic decision making that requires an understanding of the tests' sensitivity and specificity. Therefore, an evaluation criterion applied to reports of diagnostic IT/DSSs for bioterrorism asked whether the reports described the systems' sensitivity and specificity. 
Of the 217 existing IT/DSSs that could be used to respond to bioterrorism, 79 studies evaluated 58 systems for at least 1 performance metric. The authors identified 8 key decisions that clinicians and public health officials must make in response to bioterrorism. When applying the evaluation system to 217 currently available IT/DSSs that could potentially support the decisions of clinicians and public health officials, the authors found that the literature provides little information about the accuracy of these systems.

Abstract

We evaluated the usefulness of detection systems and diagnostic decision support systems for bioterrorism response. We performed a systematic review by searching relevant databases (e.g., MEDLINE) and Web sites for reports of detection systems and diagnostic decision support systems that could be used during bioterrorism responses. We reviewed over 24,000 citations and identified 55 detection systems and 23 diagnostic decision support systems. Only 35 systems have been evaluated: 4 reported both sensitivity and specificity, 13 were compared to a reference standard, and 31 were evaluated for their timeliness. Most evaluations of detection systems and some evaluations of diagnostic systems for bioterrorism responses are critically deficient. Because false-positive and false-negative rates are unknown for most systems, decision making on the basis of these systems is seriously compromised. We describe a framework for the design of future evaluations of such systems.

Abstract

The Joint Panel of the American Academy of Family Physicians and the American College of Physicians, in collaboration with the Johns Hopkins Evidence-based Practice Center, systematically reviewed the available evidence on the management of newly detected atrial fibrillation and developed recommendations for adult patients with first-detected atrial fibrillation. The recommendations do not apply to patients with postoperative or post-myocardial infarction atrial fibrillation, patients with class IV heart failure, patients already taking antiarrhythmic drugs, or patients with valvular disease. The target physician audience is internists and family physicians dedicated to primary care. The recommendations are as follows: RECOMMENDATION 1: Rate control with chronic anticoagulation is the recommended strategy for the majority of patients with atrial fibrillation. Rhythm control has not been shown to be superior to rate control (with chronic anticoagulation) in reducing morbidity and mortality and may be inferior to rate control in some patient subgroups. Rhythm control is appropriate when based on other special considerations, such as patient symptoms, exercise tolerance, and patient preference. Grade: 2A. RECOMMENDATION 2: Patients with atrial fibrillation should receive chronic anticoagulation with adjusted-dose warfarin, unless they are at low risk of stroke or have a specific contraindication to the use of warfarin (thrombocytopenia, recent trauma or surgery, alcoholism). Grade: 1A. RECOMMENDATION 3: For patients with atrial fibrillation, the following drugs are recommended for their demonstrated efficacy in rate control during exercise and while at rest: atenolol, metoprolol, diltiazem, and verapamil (drugs listed alphabetically by class). Digoxin is effective for rate control only at rest and therefore should be used only as a second-line agent for rate control in atrial fibrillation. Grade: 1B. 
RECOMMENDATION 4: For those patients who elect to undergo acute cardioversion to achieve sinus rhythm in atrial fibrillation, both direct-current cardioversion (Grade: 1C+) and pharmacological conversion (Grade: 2A) are appropriate options. RECOMMENDATION 5: Transesophageal echocardiography with short-term prior anticoagulation followed by early acute cardioversion (in the absence of intracardiac thrombus) with postcardioversion anticoagulation, and delayed cardioversion with pre- and postcardioversion anticoagulation, are both appropriate management strategies for those patients who elect to undergo cardioversion. Grade: 2A. RECOMMENDATION 6: Most patients converted to sinus rhythm from atrial fibrillation should not be placed on rhythm maintenance therapy, since the risks outweigh the benefits. In a selected group of patients whose quality of life is compromised by atrial fibrillation, the recommended pharmacologic agents for rhythm maintenance are amiodarone, disopyramide, propafenone, and sotalol (drugs listed in alphabetical order). The choice of agent depends predominantly on the specific risk of side effects based on patient characteristics. Grade: 2A.

Abstract

Many clinicians and policymakers are concerned about whether use of the implantable cardioverter defibrillator (ICD) is justified in view of its high cost. Three randomized trials of the ICD have reported economic outcomes. Each trial found a large difference in cost between patients assigned to an ICD and patients assigned to conventional therapy that persisted over three to six years of follow-up. Each trial also found better survival among ICD patients, and calculated ICD cost-effectiveness ratios between 27,000 dollars and 139,000 dollars per life-year added. The variability in the cost-effectiveness ratios among trials is mainly due to variability in the years of life added by the ICD among the trials and, by extension, among patient subgroups. A rough rule of thumb is that the ICD will be economically attractive when it prolongs mean survival by six months or more, which is attainable in higher-risk patient subgroups.

Abstract

To compare the cost-effectiveness of surgical and angioplasty-based coronary artery revascularization techniques, in particular angioplasty with primary stenting, we used data from the Study of Economics and Quality of Life, a substudy of the Bypass Angioplasty Revascularization Investigation (BARI), to measure the outcomes and costs of angioplasty and bypass surgery in patients with multivessel coronary artery disease who had not undergone prior coronary artery revascularization. Using a Markov decision model, we updated the outcomes and costs to reflect technology changes since the time of enrollment in BARI, and projected the lifetime costs and quality-adjusted life-years (QALYs) for the two procedures from the time of initial treatment through death. We accounted for the effects of improved procedural safety and efficiency, and prolonged therapeutic effects of both surgery and stenting. This study was conducted from a societal perspective. Surgical revascularization was less costly and resulted in better outcomes than catheter-based intervention including stenting. It remained the preferred strategy after adjusting the stent outcomes to eliminate the costs and events associated with target lesion restenosis. Among angioplasty-based strategies, primary stent use cost an additional 189,000 US dollars per QALY gained compared with a strategy that reserved stent use for treatment of suboptimal balloon angioplasty results. Bypass surgery results in better outcomes than angioplasty in patients with multivessel disease, and at a lower cost.

Abstract

Background: Positron emission tomography (PET) with 18-fluorodeoxyglucose (FDG) is a potentially useful but expensive test to diagnose solitary pulmonary nodules. Objective: To evaluate the cost-effectiveness of strategies for pulmonary nodule diagnosis and to specifically compare strategies that did and did not include FDG-PET. Design: Decision model. Data Sources: Accuracy and complications of diagnostic tests were estimated by using meta-analysis and literature review. Modeled survival was based on data from a large tumor registry. Cost estimates were derived from Medicare reimbursement and other sources. Target Population: All adult patients with a new, noncalcified pulmonary nodule seen on chest radiograph. Time Horizon: Patient lifetime. Perspective: Societal. Interventions: 40 clinically plausible combinations of 5 diagnostic interventions, including computed tomography, FDG-PET, transthoracic needle biopsy, surgery, and watchful waiting. Outcome Measures: Costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. Results: The cost-effectiveness of strategies depended critically on the pretest probability of malignancy. For patients with low pretest probability (26%), strategies that used FDG-PET selectively when computed tomography results were possibly malignant cost as little as $20,000 per QALY gained. For patients with high pretest probability (79%), strategies that used FDG-PET selectively when computed tomography results were benign cost as little as $16,000 per QALY gained. For patients with intermediate pretest probability (55%), FDG-PET strategies cost more than $220,000 per QALY gained because they were more costly but only marginally more effective than computed tomography-based strategies. The choice of strategy also depended on the risk for surgical complications, the probability of nondiagnostic needle biopsy, the sensitivity of computed tomography, and patient preferences for time spent in watchful waiting. In probabilistic sensitivity analysis, FDG-PET strategies were cost saving or cost less than $100,000 per QALY gained in 76.7%, 24.4%, and 99.9% of computer simulations for patients with low, intermediate, and high pretest probability, respectively. Conclusions: FDG-PET should be used selectively when pretest probability and computed tomography findings are discordant or in patients with intermediate pretest probability who are at high risk for surgical complications. In most other circumstances, computed tomography-based strategies result in similar quality-adjusted life-years and lower costs.
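The pretest-probability dependence described above follows from Bayes' theorem: the value of FDG-PET lies in how far a positive or negative result shifts the probability of malignancy. A sketch of the standard likelihood-ratio update (the accuracy values in the test below are placeholders, not the model's inputs):

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Probability of disease after a test result, via likelihood ratios.

    Converts the pretest probability to odds, multiplies by the positive
    or negative likelihood ratio, and converts back to a probability.
    """
    if positive:
        lr = sensitivity / (1 - specificity)   # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity   # negative likelihood ratio
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)
```

A test shifts management only when it moves the probability across a decision threshold, which is why the same scan can be cost-effective at one pretest probability and wasteful at another.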

Abstract

Implantable cardioverter defibrillators (ICDs) effectively prevent sudden cardiac death, but selection of appropriate patients for implantation is complex. We evaluated whether risk stratification based on risk of sudden cardiac death alone was sufficient to predict the effectiveness and cost-effectiveness of the ICD. We developed a Markov model to evaluate the cost-effectiveness of ICD implantation compared with empiric amiodarone treatment. The model incorporated mortality rates from sudden and nonsudden cardiac death, noncardiac death, and costs for each treatment strategy. We based our model inputs on data from randomized clinical trials, registries, and meta-analyses. We assumed that the ICD reduced total mortality rates by 25%, relative to use of amiodarone. The relationship between cost-effectiveness of the ICD and the total annual cardiac mortality rate is U-shaped; cost-effectiveness becomes unfavorable at both low and high total cardiac mortality rates. If the annual total cardiac mortality rate is 12%, the cost-effectiveness of the ICD varies from $36,000 per quality-adjusted life-year (QALY) gained when the ratio of sudden cardiac death to nonsudden cardiac death is 4 to $116,000 per QALY gained when the ratio is 0.25. The cost-effectiveness of ICD use relative to amiodarone depends on total cardiac mortality rates as well as the ratio of sudden to nonsudden cardiac death. Studies of candidate diagnostic tests for risk stratification should distinguish patients who die suddenly from those who die nonsuddenly, not just patients who die suddenly from those who live.
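A Markov model of this kind can be sketched as an annual-cycle cohort simulation. The version below collapses everything to a single alive/dead transition with hypothetical mortality rates, costs, and utility; the published model additionally distinguished sudden, nonsudden, and noncardiac death, which is what produces the U-shaped result:

```python
def markov_cohort(annual_mortality, annual_cost, utility=0.8,
                  discount=0.03, years=30):
    """Discounted QALYs and costs for a one-state (alive/dead) cohort.

    All inputs are hypothetical sketch values, not the published data.
    """
    alive, qalys, costs = 1.0, 0.0, 0.0
    for year in range(years):
        df = 1.0 / (1.0 + discount) ** year   # discount factor for this cycle
        qalys += alive * utility * df
        costs += alive * annual_cost * df
        alive *= 1.0 - annual_mortality       # deaths during the cycle
    return qalys, costs

# Hypothetical comparison: the ICD lowers annual mortality but raises cost.
q_icd, c_icd = markov_cohort(annual_mortality=0.09, annual_cost=12_000)
q_amio, c_amio = markov_cohort(annual_mortality=0.12, annual_cost=4_000)
icer = (c_icd - c_amio) / (q_icd - q_amio)    # dollars per QALY gained
```

In the full model, the ICD averts only sudden deaths, so as the sudden-to-nonsudden ratio falls, the QALY denominator shrinks and the ICER climbs.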

Abstract

Routine vaccination for Streptococcus pneumoniae has been recommended as a cost-effective measure for elderly and immunocompromised patients, yet no analysis has been performed for healthy younger adults in America. The authors evaluated the cost-effectiveness of the pneumococcal vaccine and determined the net health benefits conferred for the healthy young adult population. The authors developed a decision model to compare the health and economic outcomes of vaccination versus no vaccination for S. pneumoniae. Vaccinating patients for S. pneumoniae generates benefits that are dependent on incidence rates and the efficacy of the vaccine. In the 22-year-old patient with a pneumonia incidence of 0.3/1000, the vaccine would need to be more than 71 percent effective for the vaccination strategy to cost less than $50,000/QALY gained. At an incidence of 0.4/1000, the threshold efficacy is 53 percent, whereas at 0.5/1000 it is 43 percent. In the 35-year-old patient, where the incidence of pneumococcal pneumonia is higher (0.85/1000), the vaccine would be cost-effective with an efficacy as low as 30 percent. Use of the S. pneumoniae vaccine in young adults would provide modest reductions in pneumonia-associated morbidity and mortality. Vaccination of young adults is moderately expensive unless vaccine efficacy is above 50% to 60%. In 35-year-old adults, use of the vaccine is cost-effective even with moderate efficacy.
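Threshold results of this kind ("the vaccine must be more than 71 percent effective at an incidence of 0.3/1000") come from solving the cost-effectiveness equation for efficacy. A deliberately simplified one-period version, with hypothetical cost and QALY inputs rather than the study's:

```python
def threshold_efficacy(incidence, vaccine_cost, cost_per_case,
                       qalys_lost_per_case, wtp=50_000):
    """Minimum vaccine efficacy at which cost per QALY gained <= wtp.

    One-period simplification (not the published model): per person
    vaccinated, averted cases = incidence * efficacy, so
        ICER = (vaccine_cost - averted * cost_per_case)
               / (averted * qalys_lost_per_case).
    Setting ICER = wtp and solving for efficacy gives the closed form
    below. All example parameter values are hypothetical.
    """
    return vaccine_cost / (incidence * (cost_per_case
                                        + wtp * qalys_lost_per_case))
```

As in the abstract, the required efficacy falls as incidence rises; in this simplified form, doubling the incidence halves the threshold.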

Abstract

Helicobacter pylori vaccines, which have been suggested as promising interventions to control infection, are under development. We sought to quantify the potential population impact of a prophylactic H. pylori vaccine. We developed a mathematical model that compartmentalized the population according to age, infection status, and clinical state. A proportion of individuals was assumed to acquire infection and develop gastritis, duodenal ulcer (DU), chronic atrophic gastritis, and gastric cancer (GC). We first simulated the model without vaccine intervention, to obtain estimates of H. pylori prevalence and of GC and DU incidences based on intrinsic dynamics. We then incorporated a prophylactic vaccine (80% efficacy, lifetime protection, 80% coverage) targeting all infants. We tested vaccination programs over unlimited as well as limited time spans. Analyses were performed for the US, Japan, and a prototypical developing country. In the US, our model predicted a decrease in H. pylori prevalence from 12.0% in 2010 to 4.2% in 2100 without intervention. With 10 years of vaccination beginning in 2010, prevalence would decrease to 0.7% by year 2100. In the same period, incidence of H. pylori-attributable GC would decrease from 4.5 to 0.4 per 100,000 with vaccine (compared to 1.3 per 100,000 without vaccine). Incidence of H. pylori-attributable DU would decrease from 33.3 to 2.5 per 100,000 with vaccine (compared to 12.2 per 100,000 without vaccine). In Japan, incidence of H. pylori-attributable GC would decrease from 17.6 to 1.0 per 100,000 after 10 years of vaccination (compared to 3.0 per 100,000 without vaccine). In a prototypical developing country, after 10 years of vaccination, H. pylori-attributable GC would decrease from 31.8 to 22.5 per 100,000 by 2090, returning to the original level by the mid-2100s. Under continuous vaccination, it would decrease to 5.8 per 100,000 by 2100. In the US and Japan, a 10-year vaccination program would confer almost the same reduction in H. pylori and associated diseases as a vaccination effort that extends beyond 10 years. In developing countries, a continuous vaccination effort would be required to eliminate the pathogen and its associated diseases.
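A compartmental model of this kind can be sketched as a discrete-time system in which a fraction of each birth cohort is routed to a vaccinated compartment. The sketch below omits the age structure and clinical states (gastritis, DU, chronic atrophic gastritis, GC) of the published model, and every parameter value is hypothetical:

```python
def simulate(beta, years, coverage=0.0, efficacy=0.8,
             birth_rate=0.013, death_rate=0.013, s0=0.9, i0=0.1):
    """Susceptible (S) / infected (I) / vaccinated (V) compartments.

    Population is normalized to 1; coverage * efficacy of each birth
    cohort enters V and is protected for life. Annual time steps.
    """
    S, I, V = s0, i0, 0.0
    for _ in range(years):
        new_inf = beta * S * I                 # new infections this year
        S += birth_rate * (1 - coverage * efficacy) - new_inf - death_rate * S
        I += new_inf - death_rate * I
        V += birth_rate * coverage * efficacy - death_rate * V
    return S, I, V

# Prevalence after 50 years, with and without infant vaccination.
_, i_vacc, _ = simulate(0.5, 50, coverage=0.8)
_, i_none, _ = simulate(0.5, 50, coverage=0.0)
```

Because vaccination diverts new susceptibles at birth, infection prevalence declines over decades rather than immediately, which is why the abstract's projections run to 2100.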

Abstract

Background: Clinical trials have shown that implantable cardioverter defibrillators (ICDs) improve survival in patients with sustained ventricular arrhythmias. Objective: To determine the efficacy necessary to make prophylactic ICD or amiodarone therapy cost-effective in patients with myocardial infarction. Design: Markov model-based cost-utility analysis. Data Sources: Survival, cardiac death, and inpatient costs were estimated on the basis of the Myocardial Infarction Triage and Intervention registry. Other data were derived from the literature. Target Population: Patients with past myocardial infarction who did not have sustained ventricular arrhythmia. Time Horizon: Lifetime. Perspective: Societal. Interventions: ICD or amiodarone compared with no treatment. Outcome Measures: Life-years, quality-adjusted life-years (QALYs), costs, number needed to treat, and incremental cost-effectiveness. Results: Compared with no treatment, ICD use led to the greatest QALYs and the highest expenditures. Amiodarone use resulted in intermediate QALYs and costs. To obtain acceptable cost-effectiveness thresholds (≤$75,000/QALY), ICDs had to reduce arrhythmic death by 50% and amiodarone had to reduce total death by 7% in patients with depressed ejection fraction. For moderate efficacies, in patients with ejection fractions less than or equal to 0.3, 0.31 to 0.4, and greater than 0.4, the cost-effectiveness of amiodarone compared with no therapy was $43,100/QALY, $66,500/QALY, and $132,500/QALY, respectively, and the cost-effectiveness of ICD compared with amiodarone was $71,800/QALY, $195,700/QALY, and $557,900/QALY, respectively. Conclusions: Use of ICD or amiodarone in patients with past myocardial infarction and severely depressed left ventricular function may provide substantial clinical benefit at an acceptable cost. These results highlight the importance of clinical trials of ICDs in patients with low ejection fractions who have had myocardial infarction.
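One of the outcome measures above, number needed to treat, has a simple closed form: the reciprocal of the absolute risk reduction. A sketch with illustrative event rates (not figures from the study):

```python
def number_needed_to_treat(control_event_rate, treated_event_rate):
    """Patients who must be treated to prevent one additional event.

    NNT = 1 / absolute risk reduction. Rates are illustrative.
    """
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("treatment confers no absolute risk reduction")
    return 1.0 / arr
```

For example, if a therapy cut an event rate from 50% to 25%, four patients would need treatment to prevent one event; NNT rises steeply as the baseline risk falls, which is one reason cost-effectiveness worsens in lower-risk patients.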

Abstract

Coccidioidomycosis, a systemic fungal infection, affects Americans living in the Southwest. We evaluated the cost-effectiveness of a potential vaccine against Coccidioides immitis. Using a decision model we developed, we estimate that among children, vaccination would save 1.9 quality-adjusted life days (QALD) and $33 per person. Among adults, screening followed by vaccination would save 0.5 QALD per person and cost $62,000 per quality-adjusted life-year gained over no vaccination. If the birth cohort in highly endemic counties of California and Arizona were immunized in 2001, 11 deaths would be averted and $3 million would be saved (in net present value) over the lifetime of these infants. Vaccination of adults to prevent disseminated coccidioidomycosis would provide a modest health benefit similar in magnitude to other vaccines but would increase net expenditures. Vaccination of children in highly endemic regions would provide a larger health benefit and would reduce total health care expenditures.

Abstract

Commonly used methods for guideline development and dissemination do not enable developers to tailor guidelines systematically to specific patient populations and update guidelines easily. We developed a web-based system, ALCHEMIST, that uses decision models and automatically creates evidence-based guidelines that can be disseminated, tailored and updated over the web. Our objective was to demonstrate the use of this system with clinical scenarios that provide challenges for guideline development. We used the ALCHEMIST system to develop guidelines for three clinical scenarios: (1) Chlamydia screening for adolescent women; (2) antiarrhythmic therapy for the prevention of sudden cardiac death; and (3) genetic testing for the BRCA breast-cancer mutation. ALCHEMIST uses information extracted directly from the decision model, combined with the additional information from the author of the decision model, to generate global guidelines. ALCHEMIST generated electronic web-based guidelines for each of the three scenarios. Using ALCHEMIST, we demonstrate that tailoring a guideline for a population at high-risk for Chlamydia changes the recommended policy for control of Chlamydia from contact tracing of reported cases to a population-based screening programme. We used ALCHEMIST to incorporate new evidence about the effectiveness of implantable cardioverter defibrillators (ICD) and demonstrate that the cost-effectiveness of use of ICDs improves from $74,400 per quality-adjusted life-year (QALY) gained to $34,500 per QALY gained. Finally, we demonstrate how a clinician could use ALCHEMIST to incorporate a woman's utilities for relevant health states and thereby develop patient-specific recommendations for BRCA testing; the patient-specific recommendation improved quality-adjusted life expectancy by 37 days. The ALCHEMIST system enables guideline developers to publish both a guideline and an interactive decision model on the web. 
This web-based tool enables guideline developers to tailor guidelines systematically, to update guidelines easily, and to make the underlying evidence and analysis transparent for users.

Abstract

The purpose of this study was to evaluate patterns of employment and alcohol use among liver transplant recipients with alcoholic (ALD) and nonalcoholic liver disease (non-ALD). MEDLINE, EMBASE, and bibliographic searches identified 5,505 potentially relevant articles published between January 1966 and October 1998. Eighty-two studies reporting data on 5,020 transplant recipients met our inclusion criteria. Before orthotopic liver transplantation (OLT), 29% of transplant recipients with ALD and 59% of those with non-ALD worked, versus 33% and 80% at 3 years for transplant recipients with ALD and non-ALD, respectively (P

Abstract

Focal pulmonary lesions are commonly encountered in clinical practice, and positron emission tomography (PET) with the glucose analog 18-fluorodeoxyglucose (FDG) may be an accurate test for identifying malignant lesions. Objective: To estimate the diagnostic accuracy of FDG-PET for malignant focal pulmonary lesions. Data Sources: Studies published between January 1966 and September 2000 in the MEDLINE and CANCERLIT databases; reference lists of identified studies; abstracts from recent conference proceedings; and direct contact with investigators. Study Selection: Studies that examined FDG-PET or FDG with a modified gamma camera in coincidence mode for diagnosis of focal pulmonary lesions; enrolled at least 10 participants with pulmonary nodules or masses, including at least 5 participants with malignant lesions; and presented sufficient data to permit calculation of sensitivity and specificity were included in the analysis. Data Extraction: Two reviewers independently assessed study quality and abstracted data regarding prevalence of malignancy and sensitivity and specificity of the imaging test. Disagreements were resolved by discussion. Data Synthesis: We used a meta-analytic method to construct summary receiver operating characteristic curves. Forty studies met inclusion criteria. Study methodological quality was fair. Sample sizes were small and blinding was often incomplete. For 1474 focal pulmonary lesions of any size, the maximum joint sensitivity and specificity (the upper left point on the receiver operating characteristic curve at which sensitivity and specificity are equal) of FDG-PET was 91.2% (95% confidence interval, 89.1%-92.9%). In current practice, FDG-PET operates at a point on the summary receiver operating characteristic curve that corresponds approximately to a sensitivity and specificity of 96.8% and 77.8%, respectively. There was no difference in diagnostic accuracy for pulmonary nodules compared with lesions of any size (P = .43), for semiquantitative methods of image interpretation compared with qualitative methods (P = .52), or for FDG-PET compared with FDG imaging with a modified gamma camera in coincidence mode (P = .19). Conclusions: Positron emission tomography with 18-fluorodeoxyglucose is an accurate noninvasive imaging test for diagnosis of pulmonary nodules and larger mass lesions, although few data exist for nodules smaller than 1 cm in diameter. In current practice, FDG-PET has high sensitivity and intermediate specificity for malignancy.
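The per-study sensitivity/specificity pairs that feed a summary ROC analysis are computed from 2x2 counts, and likelihood ratios follow directly. A sketch with a hypothetical table (not data from any included study):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table.

    tp/fn: malignant lesions with positive/negative scans
    fp/tn: benign lesions with positive/negative scans
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),   # how much a positive scan raises the odds
        "LR-": (1 - sens) / spec,   # how much a negative scan lowers the odds
    }

# Hypothetical counts: 90 true positives, 20 false positives,
# 10 false negatives, 80 true negatives.
result = diagnostic_accuracy(90, 20, 10, 80)
```

The summary ROC method then pools such points across studies, trading sensitivity against specificity along a single fitted curve rather than averaging them naively.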

Abstract

Objective: To evaluate the cost-effectiveness of voluntary prenatal and routine postnatal HIV screening in the cohort of pregnant women and newborns in the United States. Design: Cost-effectiveness analysis. We developed a decision model to analyze the cost-effectiveness of enhanced prenatal screening and routine newborn screening for HIV. We also analyzed the incremental cost-effectiveness of routine newborn screening when improved voluntary prenatal screening is already in place. Patients: The cohort of pregnant women and newborns in the United States. Interventions: Enhanced prenatal screening, or routine newborn screening for HIV. Measurements: Infections averted, life expectancy, costs, and incremental cost-effectiveness. Results: Improved participation in voluntary prenatal HIV screening would result in an additional 1.1 million women being screened annually, would identify an additional 527 HIV-infected mothers annually, would avert 150 infections in newborns, and would cost $8,900 U.S. per life-year gained. Routine newborn HIV screening would test 3.9 million infants annually, would identify 1061 HIV-infected mothers, would avert 266 infections in newborns, and would cost $7,000 U.S. per life-year gained. If improved voluntary prenatal screening is already in place, routine newborn screening would avert an additional 135 infections in newborns, at an incremental cost of $10,600 U.S. per life-year gained. The screening programs are likely to be cost-effective over a wide range of assumptions regarding key factors in the analysis. Conclusions: Improved voluntary prenatal HIV screening of women and routine screening of newborns are cost-effective. Routine newborn screening becomes less attractive as the rate of voluntary prenatal screening increases. Improved participation in voluntary prenatal screening has the added benefit that mothers maintain their right to determine whether they are tested for HIV.

Abstract

Background: Radiofrequency ablation is an established but expensive treatment option for many forms of supraventricular tachycardia. Most cases of supraventricular tachycardia are not life-threatening; the goal of therapy is therefore to improve the patient's quality of life. Objective: To compare the cost-effectiveness of radiofrequency ablation with that of medical management of supraventricular tachycardia. Design: Markov model. Data Sources: Costs were estimated from a major academic hospital and the literature, and treatment efficacy was estimated from reports from clinical studies at major medical centers. Probabilities of clinical outcomes were estimated from the literature. To account for the effect of radiofrequency ablation on quality of life, assessments by patients who had undergone the procedure were used. Target Population: Cohort of symptomatic patients who experienced 4.6 unscheduled visits per year to an emergency department or a physician's office while receiving long-term drug therapy for supraventricular tachycardia. Time Horizon: Patient lifetime. Perspective: Societal. Interventions: Initial radiofrequency ablation, long-term antiarrhythmic drug therapy, and treatment of acute episodes of arrhythmia with antiarrhythmic drugs. Outcome Measures: Costs, quality-adjusted life-years, life-years, and marginal cost-effectiveness ratios. Results: Among patients who have monthly episodes of supraventricular tachycardia, radiofrequency ablation was the most effective and least expensive therapy and therefore dominated the drug therapy options. Radiofrequency ablation improved quality-adjusted life expectancy by 3.10 quality-adjusted life-years and reduced lifetime medical expenditures by $27,900 compared with long-term drug therapy. Long-term drug therapy was more effective and had lower costs than episodic drug therapy. The findings were highly robust over substantial variations in assumptions about the efficacy and complication rate of radiofrequency ablation, including analyses in which the complication rate was tripled and efficacy was decreased substantially. Conclusions: Radiofrequency ablation substantially improves quality of life and reduces costs when it is used to treat highly symptomatic patients. Although the benefit of radiofrequency ablation has not been studied in less symptomatic patients, a small improvement in quality of life is sufficient to give preference to radiofrequency ablation over drug therapy.

Abstract

Randomized clinical trial (RCT) results are often difficult to find, interpret, or apply to clinical care. The authors propose that RCTs be reported into electronic knowledge bases (trial banks) in addition to being reported in text. What information should these trial-bank reports contain? Using the competency decomposition method, the authors specified the ideal trial-bank contents as the information necessary and sufficient for completing the task of systematic reviewing. They decomposed the systematic reviewing task into four top-level tasks and 62 subtasks. A total of 162 types of trial information were necessary and sufficient for completing these subtasks. These items relate to a trial's design, execution, administration, and results. Trial-bank publishing of these 162 items would capture into computer-understandable form all the trial information needed for critically appraising and synthesizing trial results. Decision-support systems that access shared, up-to-date trial banks could help clinicians manage, synthesize, and apply RCT evidence more effectively.

Abstract

Stents are now used in the majority of percutaneous coronary revascularization procedures. It is not clear whether the higher initial cost of stenting is later repaid by reducing costly complications and repeat revascularization procedures, especially for patients with multivessel disease. To project the long-term costs of using coronary stents, angioplasty, or bypass surgery to treat patients with multivessel coronary artery disease, we developed a decision model based on the outcomes documented in the Bypass Angioplasty Revascularization Investigation (BARI) randomized trial of coronary artery bypass grafting (CABG) and percutaneous transluminal coronary angioplasty (PTCA). We studied 2 clinical strategies: provisional stenting of suboptimal PTCA results and primary stenting of all angiographically eligible lesions. The cost of CABG was also updated to reflect contemporary practice. Provisional stenting had lower projected costs over a 4-year period than either traditional PTCA (-$1742, or -3.4%) or contemporary CABG (-$832, or -1.7%), mostly because of reductions in emergency CABG after PTCA. In contrast, primary stenting had higher projected costs over a 4-year period than either PTCA (+$333, or +0.7%) or contemporary CABG (+$1243, or +2.5%), mainly because of the higher initial procedure costs. These results were not substantially altered when we systematically varied the key parameters of the models in 1-way and 2-way sensitivity analyses. A primary stenting strategy in patients with multivessel disease has higher projected long-term costs than CABG. In contrast, a provisional stenting strategy in multivessel disease has lower projected costs than either PTCA or CABG.

Abstract

In December 1997, the American Society of Gastrointestinal Endoscopy (ASGE) issued guidelines regarding periendoscopic management of patients who take anticoagulants. They recommended that physicians substitute heparin for warfarin in their patients who have highly thrombotic conditions (e.g., a mechanical valve in the mitral position) and who will undergo high-risk procedures (e.g., polypectomy). The purpose of this study was to assess whether patient outcomes and anticoagulant management changed after the publication of the 1997 guidelines. We collected utilization data on all 104 patients at the Veterans Affairs Palo Alto Health Care System who were taking chronic warfarin therapy and who underwent endoscopic procedures during the study period (1996-1999). These patients underwent 99 colonoscopies, 63 upper endoscopies, and nine endoscopic retrograde cholangiopancreatographies. According to the ASGE guidelines, 18 of these patients had highly thrombotic conditions, whereas the remaining 86 patients had relatively low thrombotic conditions. We calculated their costs for intravenous or subcutaneous heparin therapy from the perspective of society. We followed up all patients for 3 months to determine the incidence of thrombotic and hemorrhagic outcomes. No patient suffered a thromboembolism or a hemorrhage; thus, the adverse-event rate (95% confidence interval) was 0% (0-3%). As recommended by the ASGE guidelines, all five (100%) patients who had highly thrombotic conditions had heparin substituted for warfarin before undergoing high-risk procedures. This strategy was also followed in 44 (27%) of the 166 procedures in other patients: 16 high-risk procedures in low-risk patients, and 28 low-risk procedures (in 20 low-thrombotic patients and in eight high-thrombotic patients). There was no significant difference between the management of any patients before and after the publication of the guidelines. The average cost per course of heparin therapy (typically 2 days of intravenous heparin preprocedure and 3 days of heparin administered subcutaneously postendoscopy) was $1684. In all, 44 (90%) of 49 courses of heparin substituted for warfarin therapy were not recommended by the guidelines. Patients treated by the ASGE guidelines had the same 0% rate of thrombosis as patients who received periendoscopic heparin outside of the guidelines. Following the ASGE guidelines in all patients would have reduced the use of heparin therapy by 90%, for a net savings of $74,100.

Abstract

The Veterans Health Administration (VHA) sees approximately 17,000 human immunodeficiency virus (HIV)-infected patients each year, which makes it the largest provider of HIV care in the United States. HIV causes chronic progressive disease that leads to early death. Newer combination antiretroviral treatments are effective but expensive and difficult to use. The HIV Quality Enhancement Research Initiative (HIV-QUERI) uses the QUERI process to identify high-risk and high-volume populations (step 1), which include those already under VHA care for HIV, those who do not know of their infection, and those at risk for HIV. In identifying best practices (step 2), the HIV-QUERI will benefit greatly from existing guidelines for the care of established HIV infection, but gaps in knowledge regarding adherence to medication regimens and cost-effective screening are large. To identify existing practice patterns (step 3), the HIV-QUERI will develop a clean analytic data set based on Immunology Case Registry files and expand it through a survey of veterans. Interventions to improve care (step 4) will include national, regional, and site-specific feedback on performance relative to quality standards, as well as patient-level and provider-level interventions to improve adherence and support medical decision making. To document that best practices improve outcomes and quality of life (steps 5 and 6), HIV-QUERI will track indicators on an ongoing basis by use of the Immunology Case Registry database and possible future waves of the survey. In addition, we will require that these issues be addressed in evaluations of HIV-QUERI interventions. In the present article, we present these steps within a framework and plan.

Abstract

To assess the benefits of intervention programs against Helicobacter pylori infection, we estimated the baseline curves of its incidence and prevalence. We developed a mathematical (compartmental) model of the intrinsic dynamics of H. pylori, which represents the natural history of infection and disease progression. Our model divided the population according to age, infection status, and clinical state. Case-patients were followed from birth to death. A proportion of the population acquired H. pylori infection and became ill with gastritis, duodenal ulcer, chronic atrophic gastritis, or gastric cancer. We simulated the change in transmissibility consistent with the incidence of gastric cancer and duodenal ulcer over time, as well as current H. pylori prevalence. In the United States, transmissibility of H. pylori has decreased to values so low that, should this trend continue, the organism will disappear from the population without targeted intervention; this process, however, will take more than a century.

Abstract

Local tailoring of clinical practice guidelines (CPGs) requires experts in medicine and evidence synthesis unavailable in many practice settings. The authors' computer-based system enables developers and users to create, disseminate, and tailor CPGs using normative decision models (DMs). ALCHEMIST, a web-based system, analyzes a DM, creates a CPG in the form of an annotated algorithm, and displays for the guideline user the optimal strategy. ALCHEMIST's interface enables remote users to tailor the guideline by changing underlying input variables and observing the new annotated algorithm that is developed automatically. In a pilot evaluation of the system, a DM was used to evaluate strategies for staging non-small-cell lung cancer. Subjects (n = 15) compared the automatically created CPG with published guidelines for this staging and critiqued both using a previously developed instrument to rate the CPGs' usability, accountability, and accuracy on a scale of 0 (worst) to 2 (best), with higher scores reflecting higher quality. The mean overall score for the ALCHEMIST CPG was 1.502, compared with the published-CPG score of 0.987 (p = 0.002). The ALCHEMIST CPG scores for usability, accountability, and accuracy were 1.683, 1.393, and 1.430, respectively; the published CPG scores were 1.192, 0.941, and 0.830 (each comparison p < 0.05). On a scale of 1 (worst) to 5 (best), users' mean ratings of ALCHEMIST's ease of use, usefulness of content, and presentation format were 4.76, 3.98, and 4.64, respectively. The results demonstrate the feasibility of a web-based system that automatically analyzes a DM and creates a CPG as an annotated algorithm, enabling remote users to develop site-specific CPGs. In the pilot evaluation, the ALCHEMIST guidelines met established criteria for quality and compared favorably with national CPGs. The high usability and usefulness ratings suggest that such systems can be a good tool for guideline development.

Cost-effectiveness of the pneumococcal vaccine in the United States Navy and Marine Corps. CLINICAL INFECTIOUS DISEASES. Pepper, P. V., Owens, D. K. 2000; 30 (1): 157-164

Abstract

Vaccination for Streptococcus pneumoniae has been recommended for its efficacy and cost-effectiveness in elderly and immunocompromised populations. However, its use in active-duty military personnel has not been analyzed. We developed a Markov model to evaluate health and economic outcomes of vaccinating or not vaccinating all members of the active-duty cohort, measuring quality-adjusted life years (QALYs) gained, costs, and marginal cost-effectiveness. Pneumococcal pneumonia vaccination increased each person's life expectancy by 0.03 days and decreased costs by $9.88 per person. The magnitude of the benefit of immunization is moderately sensitive to the rate of serious side effects caused by the vaccine, the incidence of pneumonia, the length of protection, and the efficacy of the vaccine. Vaccinating all 575,000 active-duty US Navy and Marine Corps members could save $5.7 million during the time the members are alive and on active duty and could provide a total gain of 54 QALYs. On the basis of these results, the military should consider expanding current guidelines to include pneumococcal vaccine immunization for all active-duty members of the military.

Abstract

Prophylactic vaccination has been suggested as a better strategy than antibiotics to control Helicobacter pylori infection. We evaluated the cost-effectiveness (CE) of H. pylori vaccine development and use in the United States and developing countries, using a method developed by the Institute of Medicine (IOM). The IOM model includes costs of vaccine development, the vaccination program, and averted medical treatments; morbidity and mortality prevented; expected efficacy and use; and the proportion of disease that is vaccine-preventable. The model employs infant mortality equivalence (IME) to estimate disease burden; with IME, the societal cost of infection-related morbidity is expressed as equivalent to a specific rate of infant deaths. We tested model assumptions by univariate sensitivity analyses. In the United States, H. pylori vaccine would save 1,176 IME and would cost $58.71 million (1997 dollars) annually, yielding a CE ratio of $49,932 per IME; the health benefits would exceed those of all IOM-studied vaccines, even when efficacy dropped to 55%. H. pylori vaccine could be cost-saving if priced at less than $60 per course. In developing countries, H. pylori vaccine would rank unfavorably both in terms of health benefits (33,518 IME) and costs ($5,254 million). None of the changes in assumptions significantly improved the H. pylori vaccine's ranking relative to other IOM-studied vaccines. Compared to other vaccines evaluated in the IOM study, H. pylori vaccine warrants public resource allocation for accelerated development and use in the United States but not for use in developing countries.

Abstract

We sought to determine the appropriate use of echocardiography for patients with suspected endocarditis. We constructed a decision tree and Markov model using published data to simulate the outcomes and costs of care for patients with suspected endocarditis. Transesophageal imaging was optimal for patients who had a prior probability of endocarditis that is observed commonly in clinical practice (4% to 60%). In our base-case analysis (a 45-year-old man with a prior probability of endocarditis of 20%), use of transesophageal imaging improved quality-adjusted life expectancy by 9 days and reduced costs by $18 per person compared with the use of transthoracic echocardiography. Sequential test strategies that reserved the use of transesophageal echocardiography for patients who had an inadequate transthoracic study provided similar quality-adjusted life-years compared with the use of transesophageal echocardiography alone, but cost $230 to $250 more. For patients with prior probabilities of endocarditis greater than 60%, the optimal strategy is to treat for endocarditis without reliance on echocardiography for diagnosis. Patients with a prior probability of less than 2% should receive treatment for bacteremia without imaging. Transthoracic imaging was optimal for only a narrow range of prior probabilities (2% or 3%) of endocarditis. The appropriate use of echocardiography depends on the prior probability of endocarditis. For patients whose prior probability of endocarditis is 4% to 60%, initial use of transesophageal echocardiography provides the greatest quality-adjusted survival at a cost that is within the range for commonly accepted health interventions.
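The probability bands above (treat empirically above roughly 60%, forgo imaging below about 2%) reflect classic threshold reasoning: treat without further testing once the prior probability exceeds the point where expected benefit outweighs expected harm. A sketch of the basic treatment-threshold formula, with illustrative utilities:

```python
def treatment_threshold(benefit, harm):
    """Pauker-Kassirer treatment threshold.

    benefit: net utility gained by treating a patient who has the disease
    harm:    net utility lost by treating a patient who does not

    Treating beats withholding treatment when p * benefit > (1 - p) * harm,
    i.e. when p > harm / (harm + benefit). Utility values are illustrative.
    """
    return harm / (harm + benefit)
```

Adding an imperfect test such as echocardiography widens this single cutoff into a testing band: test when the prior probability lies inside the band, treat empirically above it, and withhold disease-specific treatment below it.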

Abstract

The goal of this study is to assess health-related quality of life (HRQL) after orthotopic liver transplantation (OLT). Structured MEDLINE and Embase literature searches identified 5473 potentially relevant articles. Thirty-two additional references were collected from the bibliographies. Of the 5505 identified articles, 49 studies reporting data on 3576 transplant recipients met our inclusion criteria, which were an assessment of quality of life (QOL) in adult patients reported as either pretransplantation and posttransplantation data or with a comparison group and written in English. We combined posttransplantation QOL scores from 15 studies that reported data from the same QOL scales to assess the magnitude of the effect of OLT on QOL scales. We also performed a sign test on the 49 studies to evaluate the direction (positive or negative) of the effect of transplantation on QOL. Transplantation resulted in an improvement of 32% in Karnofsky scores, 11% in Sickness Impact Profile scores, and 20% to 50% in the domains of the Nottingham Health Profile. The sign test showed significant improvement in posttransplantation physical health (P

Abstract

Strains of Helicobacter pylori that express the CagA protein are associated with a threefold increased gastric cancer risk as compared to H. pylori strains that do not express CagA. Screening and treatment only for CagA antibodies should target those individuals at highest gastric cancer risk while reducing the number of patients requiring antibiotics. We compared the costs and benefits of screening asymptomatic 50-year-old individuals for CagA, screening for all H. pylori strains, and no screening, both in the United States and abroad. We employed Markov cost-effectiveness analysis using data from randomized, case-control, and cohort studies. In the United States, CagA screening would result in 1.5 million fewer antibiotic treatments but would prevent 1,400 fewer gastric cancers than would screening for all H. pylori. The incremental cost-effectiveness of CagA screening is $23,900 per life-year gained; for H. pylori screening, it is $25,100. Screening in countries with epidemiological characteristics similar to those of Colombia, Finland, and Japan costs less than $5,000 per life-year gained, and the difference between CagA and H. pylori screening is smaller than that in the United States. Screening only for CagA-positive H. pylori is not substantially better than screening for all H. pylori, either in the United States or abroad. Screening is substantially more cost-effective outside the United States. Whether population screening is justified, however, is uncertain pending conclusive data regarding the reduction in gastric cancer risk from antibiotics.

Abstract

Although decision models can provide a formal foundation for guideline development and clinical decision support, their widespread use is often limited by the lack of platform-independent software that geographically dispersed users can access and use easily without extensive training. To address these limitations, the authors developed a World Wide Web-based interface for previously developed decision models. They describe the use and functionality of the interface using a decision model that evaluates the cost-effectiveness of strategies for preventing sudden cardiac death. The system allows an analyst to use a web browser to interact with the decision model and to change the values of input variables within pre-specified ranges, to specify sensitivity or threshold analyses, to evaluate the decision model, and to view the results generated dynamically. The web site also provides linkages to an explanation of the model and evidence tables for input variables. The system demonstrates a method for providing distributed decision support to remote users such as guideline developers, decision analysts, and potentially practicing physicians. The web interface provides platform-independent and almost universal access to a decision model. This approach can make distributed decision support both practical and economical, and has the potential to increase the usefulness of decision models by enabling a broader audience to incorporate systematic analyses into both policy and clinical decisions.

Abstract

Millions of dollars are spent annually to prevent infection with human immunodeficiency virus (HIV) without a thorough understanding of the most effective way to allocate these resources. The authors' objective was to determine the allocation of new resources among prevention programs targeted to a population of injection drug users (IDUs) and a population of non-injection drug users (non-IDUs) that would minimize the total number of incident cases of HIV infection over a given time horizon. They developed a dynamic model of HIV transmission in IDUs and non-IDUs and estimated the relationship between prevention program expenditures and reductions in HIV transmission. They evaluated three prevention programs: HIV testing with routine counseling, HIV testing with intensive counseling, and HIV testing and counseling linked to methadone maintenance programs. They modeled a low-risk IDU population (5% HIV prevalence) and a moderate-risk IDU population (10% HIV prevalence). For different available budgets, they determined the allocation of resources among the prevention programs and populations that would minimize the number of new cases of HIV infection over a five-year period, as well as the incremental value of additional prevention funds. The study framework provides a quantitative, systematic approach to funding programs to prevent HIV infection that accounts for HIV transmission dynamics, population size, and the costs and effectiveness of the interventions in reducing HIV transmission. The approach is general and can be used to evaluate a broader group of prevention programs and risk populations. This framework thus could enable policy makers and clinicians to identify a portfolio of programs that provide, collectively, the most benefit for a given budget.
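The allocation problem described here can be illustrated, in a much simplified static form (the authors' actual model is dynamic and tracks HIV transmission), as spending a budget in increments on whichever program currently offers the largest marginal benefit per dollar. All program names, curves, and numbers below are hypothetical:

```python
# Greedy marginal allocation: repeatedly fund the program with the best
# marginal return. Diminishing-returns curves are hypothetical.
import math

def infections_averted(spend, scale, ceiling):
    # Concave benefit curve: diminishing returns as spending grows.
    return ceiling * (1 - math.exp(-spend / scale))

programs = {  # (scale, ceiling) -- illustrative, not the paper's estimates
    "testing_routine":   (2e6, 300),
    "testing_intensive": (3e6, 500),
    "methadone_linked":  (5e6, 900),
}

budget, step = 10e6, 0.1e6
spend = {name: 0.0 for name in programs}
while budget > 0:
    # Marginal benefit of one more increment to each program.
    gains = {
        name: infections_averted(spend[name] + step, *params)
              - infections_averted(spend[name], *params)
        for name, params in programs.items()
    }
    best = max(gains, key=gains.get)
    spend[best] += step
    budget -= step

print({k: round(v / 1e6, 1) for k, v in spend.items()})  # $M per program
```

With concave benefit curves, this greedy scheme equalizes marginal returns across programs, which is the intuition behind the portfolio framing in the abstract.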

Abstract

Many studies have now confirmed the association between inheritance of the epsilon 4 allele of the apolipoprotein E (APOE) gene and Alzheimer disease (AD). However, although the medical community holds the near-unanimous opinion that APOE genotyping should not be used for prediction in asymptomatic individuals, controversy remains about whether it should be used for diagnosis in patients who show signs of dementia. We assessed critically the recent clinical studies, on the basis of four criteria recommended to ensure safety and effectiveness of genetic tests. We also developed a formal framework for evaluating the usefulness of APOE genotyping using decision-theoretic principles. We conclude that neither the presence nor absence of an epsilon 4 allele provides diagnostic certainty, and the proper interpretation of either result in heterogeneous populations requires further investigation. The appropriate role of APOE genotyping among elements of a traditional assessment for AD has not been determined. Whether APOE genotyping provides sufficient information to change patient management decisions has not been determined. APOE genotyping presents foreseeable, significant psychosocial consequences for family members that must be weighed against any psychosocial benefits. Therefore, the diagnostic use of APOE genotyping outside research settings is premature until such testing is shown to be of practical value.

Abstract

To evaluate the population effects of potential preventive and therapeutic vaccines in early- and late-stage epidemics in a population of homosexual men. An epidemic model was used that simulated the course of the epidemic for a population of homosexual men in San Francisco, California. Vaccine programs were evaluated by the number of cases of HIV averted, the effect on the prevalence of HIV, and by the gain in quality-adjusted life years (QALY) for the total population. In the model, a preventive vaccine prevented 3877 cases of HIV infection during a 20-year period, reduced the projected prevalence of HIV infection from 12 to 7% in a late-stage epidemic, and gained 15,908 QALY. A therapeutic vaccine that did not affect the infectivity of vaccine recipients increased the number of cases of HIV infection by 210, resulted in a slight increase in the prevalence of HIV infection from 12 to 15% in a late-stage epidemic, and gained 8854 QALY. If therapeutic vaccines reduced infectivity, their use could produce net gains of QALY in the population that were similar to gains from the use of preventive vaccines. In an early-stage epidemic, the advantage of a preventive vaccine program relative to a therapeutic vaccine program was markedly enhanced. Both preventive and therapeutic vaccine programs provided substantial benefit, but their relative merit depended on which outcome measures were assessed. Evaluation of HIV vaccine programs based solely on cases averted or on prevalence of HIV in the population underestimates the benefit associated with therapeutic vaccine programs. The effect of a therapeutic HIV vaccine on the epidemic outcomes depended markedly on whether the therapeutic vaccine reduced the infectivity of the vaccine recipient. The relative merits of preventive and therapeutic vaccines depend on the stage of the epidemic. Field vaccine trials should evaluate correlates of infectivity, such as HIV viral load. HIV vaccine implementation strategies should be tailored to the dynamics of the epidemic in specific populations.

Abstract

Recent atrial fibrillation guidelines recommend the incorporation of patient preferences into the selection of antithrombotic therapy. However, no trial has examined how incorporating such preferences would affect quality-adjusted survival or medical expenditure. We compared 10-year projections of quality-adjusted survival and medical expenditure associated with two atrial fibrillation treatment strategies: warfarin-for-all therapy versus preference-based therapy. The preference-based strategy prescribed whichever antithrombotic therapy, warfarin or aspirin, had the greater projected quality-adjusted survival. We used decision analysis stratified by the number of stroke risk factors (history of stroke, transient ischemic attack, hypertension, diabetes, or heart disease). The base case focused on compliant 65-year-old patients who had nonvalvular atrial fibrillation and no contraindications to antithrombotic therapy. In patients whose only risk factor for stroke was atrial fibrillation, preference-based therapy improved projected quality-adjusted survival by 0.05 quality-adjusted life year (QALY) and saved $670. For patients who had atrial fibrillation and one additional risk factor for stroke, preference-based therapy improved quality-adjusted survival by 0.02 QALY and saved $90. In patients who had atrial fibrillation and multiple additional risk factors for stroke, preference-based therapy increased medical expenditures and did not improve quality-adjusted survival substantially. The benefits of preference-flexible therapy arose from the minority of patients who would have had a longer quality-adjusted survival if they had been prescribed aspirin rather than warfarin. As do risks of stroke and of hemorrhage, patients' preferences help to determine which antithrombotic therapy is optimal. Preference-based treatment should improve quality-adjusted survival and reduce medical expenditure in patients who have nonvalvular atrial fibrillation and not more than one additional risk factor for stroke.

Owens, D. K. Patient preferences and the development of practice guidelines. Spine. 1998; 23 (9): 1073-1079

Abstract

One shortcoming of clinical practice guidelines is that generic, one-for-all guideline recommendations do not account for differences among patients' views about the desirability (or undesirability) of specific health outcomes, such as low back pain. Because differences in patients' preferences may lead to differences in the preferred therapy, a clinical practice guideline that does not consider patients' preferences may provide recommendations that are not optimal. Recently developed methodologic approaches enable guideline developers to assess the role of patients' preferences in clinical decisions and guideline recommendations, and to develop preference-based guidelines. Preference-based guidelines are more likely to meet criteria for high-quality guidelines than are guidelines developed without consideration of the role of patients' preferences. Guideline developers should identify decisions in which patient preferences are important and note these decisions clearly in the written guideline; indicate the specific health states for which preferences are important; and, if possible, provide recommendations about options for preference assessment. These options range from informal discussions with patients to computer-based utility assessments. Patients' preferences are an important factor in clinical decisions regarding management of low-back pain, particularly in decisions about surgical management and symptom control. Although further research is needed to define the role of techniques for assessing patients' preferences in routine clinical practice, guideline developers can determine when patients' preferences should play a prominent role in guideline recommendations.

Abstract

Clinical practice guidelines have enormous potential to improve the quality of and accountability in health care. Making the most of this potential should become easier as guideline developers integrate guidelines within information systems and electronic medical records. A major barrier to such integration is the lack of computing infrastructure in many clinical settings. To successfully implement guidelines in information systems, developers must create more specific recommendations than those that have been required for traditional guidelines. Using reusable software components to create guidelines can make the development of protocols faster and less expensive. In addition, using decision models to produce guidelines enables developers to structure guideline problems systematically, to prioritize information acquisition, to develop site-specific guidelines, and to evaluate the cost-effectiveness of the explicit incorporation of patient preferences into guideline recommendations. Ongoing research provides a foundation for the use of guideline development tools that can help developers tailor guidelines appropriately to their practice settings. This article explores how medical informatics can help clinicians find, use, and create practice guidelines.

Abstract

We developed a decision-support system for evaluation of treatment alternatives for supraventricular and ventricular arrhythmias. The system uses independent decision models that evaluate the costs and benefits of treatment for recurrent atrioventricular-node reentrant tachycardia (AVNRT), and of therapies to prevent sudden cardiac death (SCD) in patients at risk for life-threatening ventricular arrhythmias. Each of the decision models is accessible through a web-based interface that enables remote users to browse the model's underlying evidence and to perform analyses of effectiveness, cost effectiveness, and sensitivity to input variables. Because the web-based interface is independent of the models, we can extend the functionality of the system by adding decision models. This system illustrates that the use of a library of web-accessible decision models provides decision support economically to widely dispersed users.

Abstract

Isoniazid chemoprophylaxis effectively prevents the development of active infectious tuberculosis. Current guidelines recommend withholding this prophylaxis for low-risk tuberculin reactors older than 35 years of age because of the risk for fatal isoniazid-induced hepatitis. However, recent studies have shown that monitoring for hepatotoxicity can significantly reduce the risk for isoniazid-related death. To evaluate the effectiveness and cost-effectiveness of monitored isoniazid prophylaxis for low-risk tuberculin reactors older than 35 years of age. A Markov model was used to compare the health and economic outcomes of prescribing or withholding a course of prophylaxis for low-risk reactors 35, 50, or 70 years of age. Subsequent analyses evaluated costs and benefits when the effect of transmission of Mycobacterium tuberculosis to contacts was included. Probability of survival at 1 year, number needed to treat, life expectancy, and cost per year of life gained for individual persons and total population. Isoniazid prophylaxis increased the probability of survival at 1 year and for all subsequent years. For 35-year-old, 50-year-old, and 70-year-old tuberculin reactors, life expectancy increased by 4.9 days, 4.7 days, and 3.1 days, respectively, and costs per person decreased by $101, $69, and $11, respectively. When the effect of secondary transmission to contacts was included, the gains in life expectancy per person receiving prophylaxis were 10.0 days for 35-year-old reactors, 9.0 days for 50-year-old reactors, and 6.0 days for 70-year-old reactors. Costs per person for these cohorts decreased by $259, $203, and $100, respectively. The magnitude of the benefit of isoniazid prophylaxis is moderately sensitive to the effect of isoniazid on quality of life. The hypothetical provision of isoniazid prophylaxis for all low-risk reactors older than 35 years of age in the U.S. population could prevent 35,176 deaths and save $2.11 billion. Monitored isoniazid prophylaxis reduces mortality rates and health care costs for low-risk tuberculin reactors older than 35 years of age, although reductions for individual patients are small. For the U.S. population, however, the potential health benefits and economic savings resulting from wider use of monitored isoniazid prophylaxis are substantial. We should consider expanding current recommendations to include prophylaxis for tuberculin reactors of all ages with no contraindications.
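A Markov model of the kind used here can be sketched as a cohort moving through health states under a transition matrix, with life expectancy accumulated as expected time alive per cycle. The states and transition probabilities below are hypothetical, not the study's:

```python
# Minimal Markov cohort model: states Well, TB (active tuberculosis), Dead.
# Annual transition probabilities are illustrative only.
states = ["well", "tb", "dead"]
transition = {
    "well": {"well": 0.988, "tb": 0.002, "dead": 0.010},
    "tb":   {"well": 0.700, "tb": 0.200, "dead": 0.100},
    "dead": {"well": 0.0,   "tb": 0.0,   "dead": 1.0},
}

cohort = {"well": 1.0, "tb": 0.0, "dead": 0.0}  # everyone starts well
life_years = 0.0
for _ in range(60):  # 60 annual cycles
    life_years += cohort["well"] + cohort["tb"]  # expected years alive this cycle
    cohort = {
        s: sum(cohort[f] * transition[f][s] for f in states) for s in states
    }
print(round(life_years, 1))
```

Comparing the accumulated life-years (and costs) between a "prophylaxis" and a "no prophylaxis" transition matrix is what yields the per-person differences reported in the abstract.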

Abstract

A central problem in practice guideline development is how to develop guidelines that appropriately account for variations in clinical populations and practice settings. Despite recognition of this problem, there is no formal mechanism for assessing what the need is for flexibility in guidelines, or for deciding how to incorporate such flexibility into recommendations. This research sought to provide a formal basis to determine when clinical circumstances vary sufficiently that guideline recommendations should differ, how recommendations should be tailored for a specific clinical setting, and whether the benefit associated with such site-specific guidelines justifies the expense of their development. The authors describe an approach for estimating the maximum health benefit that developers can obtain by eliminating uncertainty about differences in the patient populations and practice settings in which a guideline will be used. This estimate, the expected value of customization, provides a mechanism to evaluate the cost-effectiveness of the development of site-specific guidelines that account explicitly for variation in clinical circumstances. Application of this method to the development of screening guidelines for human immunodeficiency virus (HIV) infection indicates that the development of site-specific guidelines potentially is cost-effective. Site-specific guidelines either improve, or leave unchanged, the efficiency of HIV screening; whether they increase or decrease total expenditures and health benefits depends on the choice of a cost-effectiveness threshold, and the clinical problem. Development of guideline recommendations based on decision models provides a normative approach for evaluating the need for and the cost-effectiveness of site-specific guidelines that have been tailored to specific practice settings. Such site-specific guidelines can improve substantially the expected health benefit and the economic efficiency of practice guidelines.

Abstract

Influence diagrams are compact representations of decision problems that are mathematically equivalent to decision trees. The authors present five important principles for structuring a decision as an influence diagram: 1) start at the value node and work back to the decision nodes; 2) draw the arcs in the direction that makes the probabilities easiest to assess; 3) use informational arcs to specify which events will have been observed at the time each decision is made; 4) ensure that missing arcs reflect intentional assertions about conditional independence and the timing of observations; and 5) ensure that there are no cycles in the influence diagram. They then build an influence diagram for the problem of staging non-small-cell lung cancer as an illustration. Influence diagrams offer several strengths for structuring medical decisions. They represent graphically and compactly the probabilistic relationships between parameters in the model. Influence diagrams also allow the model to be structured in a fashion that eases the necessary probability assessments, regardless of whether the assessments are based on available evidence or on expert judgment. Influence diagrams provide an important complement to decision trees, especially for representing probabilistic relationships among variables in a decision model.

Abstract

Influence diagrams are a powerful graphic representation for decision models, complementary to decision trees. Influence diagrams and decision trees are different graphic representations for the same underlying mathematical model and operations. This article describes the elements of an influence diagram, and shows several familiar decision problems represented as decision trees and as influence diagrams. The authors also contrast the information highlighted in each graphic representation, demonstrate how to calculate the expected utilities of decision alternatives modeled with an influence diagram, provide an overview of the conceptual basis of the solution algorithms that have been developed for influence diagrams, discuss the strengths and limitations of influence diagrams relative to decision trees, and describe the mathematical operations that are used to evaluate both decision trees and influence diagrams. They use clinical examples to illustrate the mathematical operations of the influence-diagram-evaluation algorithm; these operations are arc reversal, chance node removal by averaging, and decision node removal by policy determination. Influence diagrams may be helpful when problems have a high degree of conditional independence, when large models are needed, when communication of the probabilistic relationships is important, or when the analysis requires extensive Bayesian updating. The choice of graphic representation should be governed by convenience, and will depend on the problem being analyzed, on the experience of the analyst, and on the background of the consumers of the analysis.
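Two of the operations named here, chance-node removal by averaging and decision-node removal by policy determination, can be sketched for a one-chance-node, one-decision problem. All probabilities and utilities below are hypothetical:

```python
# Evaluate a tiny decision model: the chance node (disease) is removed by
# probability-weighted averaging, then the decision node is removed by
# keeping the expected-utility-maximizing policy. Numbers are illustrative.
p_disease = 0.3
utility = {  # utility[decision][disease_state] -- hypothetical
    "treat":    {"disease": 0.85, "no_disease": 0.95},
    "no_treat": {"disease": 0.40, "no_disease": 1.00},
}

# Chance-node removal: average out the disease node for each decision.
expected = {
    d: p_disease * u["disease"] + (1 - p_disease) * u["no_disease"]
    for d, u in utility.items()
}

# Decision-node removal: keep the maximizing policy.
policy = max(expected, key=expected.get)
print(policy, round(expected[policy], 3))
```

The third operation, arc reversal, is an application of Bayes' rule that reorients a probability arc so that a chance node becomes removable; it is omitted here for brevity.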

Abstract

Implantable cardioverter defibrillators (ICDs) are remarkably effective in terminating ventricular arrhythmias, but they are expensive and the extent to which they extend life is unknown. The marginal cost-effectiveness of ICDs relative to amiodarone has not been clearly established. To compare the cost-effectiveness of a third-generation implantable ICD with that of empirical amiodarone treatment for preventing sudden cardiac death in patients at high or intermediate risk. A Markov model was used to evaluate health and economic outcomes of patients who received an ICD, amiodarone, or a sequential regimen that reserved ICD for patients who had an arrhythmia during amiodarone treatment. Life-years gained, quality-adjusted life-years gained, costs, and marginal cost-effectiveness. For the base-case analysis, it was assumed that treatment with an ICD would reduce the total mortality rate by 20% to 40% at 1 year compared with amiodarone and that the ICD generator would be replaced every 4 years. In high-risk patients, if an ICD reduces total mortality by 20%, patients who receive an ICD live for 4.18 quality-adjusted life-years and have a lifetime expenditure of $88,400. Patients receiving amiodarone live for 3.68 quality-adjusted life-years and have a lifetime expenditure of $51,000. Marginal cost-effectiveness of an ICD relative to amiodarone is $74,400 per quality-adjusted life-year saved. If an ICD reduces mortality by 40%, the cost-effectiveness of ICD use is $37,300 per quality-adjusted life-year saved. Both choice of therapy (an ICD or amiodarone) and the cost-effectiveness ratio are sensitive to assumptions about quality of life. Use of an ICD will cost more than $50,000 per quality-adjusted life-year gained unless it reduces all-cause mortality by 30% or more relative to amiodarone. Current evidence does not definitively support or exclude a benefit of this magnitude, but ongoing randomized trials have sufficient statistical power to do so.
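The marginal (incremental) cost-effectiveness ratio reported here is the difference in lifetime cost divided by the difference in quality-adjusted life-years. With the rounded base-case figures quoted in the abstract (the reported $74,400 differs slightly because the study's underlying inputs are less rounded):

```python
# Incremental cost-effectiveness ratio (ICER) of ICD vs amiodarone,
# using the rounded base-case figures quoted in the abstract.
cost_icd, qaly_icd = 88_400, 4.18
cost_amio, qaly_amio = 51_000, 3.68

icer = (cost_icd - cost_amio) / (qaly_icd - qaly_amio)
print(f"${icer:,.0f} per QALY")  # ~$74,800 with these rounded inputs
```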

Abstract

An understanding of quality of life (QOL) with human immunodeficiency virus (HIV) is important because the merits of prevention and treatment alternatives may depend substantially on how these interventions affect QOL. Physicians' views about QOL are important, because they influence the therapeutic options that physicians consider or offer, the recommendations that physicians make, and because they are important for the analysis of certain policy questions. We assessed physicians' utilities of health states associated with HIV infection, and hepatitis B virus (HBV) infection; assessment of utilities for HBV was included to provide a comparison with HIV utilities. We surveyed 200 housestaff and staff physicians in an academic medical centre by anonymous paper-based questionnaire and used the time-tradeoff method to assess physicians' utilities of the health states. On a scale in which 0 was equivalent to death, and 1 was equivalent to good health, the median utilities for asymptomatic HIV infection, symptomatic HIV infection, and AIDS were 0.833, 0.417, and 0.167, respectively (p < 0.01 for each two-way comparison). Median utilities for asymptomatic HBV infection, mildly symptomatic HBV infection, and severely symptomatic HBV infection were 0.917, 0.667, and 0.167, respectively (p < 0.01 for each two-way comparison). Although physicians varied substantially in the ratings of health states, they assessed the utility of life with HIV disease, including asymptomatic infection, as severely reduced. Studies of the effectiveness and cost-effectiveness of preventive and therapeutic interventions for HIV should evaluate the effect of the intervention on utility-based assessments of QOL. Studies that do not assess such effects may significantly underestimate or overestimate the value of these interventions, depending on the intervention's effect on QOL.

Abstract

Because most strokes cause neurological impairment rather than death, stroke prophylaxis may improve quality of life more than length of life. Thus, an understanding of how stroke and stroke prophylaxis affect quality of life is central to clinical decision making for many patients. We elicited quality-of-life estimates, known as utilities, for 3 degrees of severity of anticipated stroke (mild, moderate, and major) and for stroke prophylaxis with either warfarin sodium or aspirin therapy. We used the time tradeoff and standard gamble methods to elicit these utilities from 83 patients who had atrial fibrillation. Seventy patients completed the interview successfully. Their utilities for stroke ranged from worse than death (< 0) to as good as current health (1.0). The median utilities for mild, moderate, and major stroke were 0.94, 0.07, and 0.0, respectively. Although the median utilities decreased with increasing severity of stroke (P < .001), there was high interpatient variability within each degree of stroke severity. For example, 7 subjects (10%) rated a major stroke above 0.5, while 58 subjects (83%) rated it as equal to or worse than death. In contrast to the stroke utilities, the median utilities for warfarin and aspirin therapy were high: 0.997 and 1.0, respectively. However, the interpatient variability for warfarin therapy was also important: 11 patients (16%) with atrial fibrillation rated the utility of warfarin therapy so low that their quality-adjusted life expectancy would be greater with aspirin. Patients' utilities for stroke prophylaxis and anticipated stroke vary substantially. Many patients view the quality of life with major stroke as tantamount to or worse than death. These findings highlight the relevance of incorporating patient preferences when choosing stroke prophylaxis.
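The time-tradeoff utilities elicited here have a simple arithmetic form: if a respondent would accept x years in full health as equivalent to t years in the health state, the utility is x/t. A minimal sketch with hypothetical responses (chosen so the results mirror the abstract's median utilities):

```python
# Time-tradeoff utility: years in full health the respondent accepts (x)
# divided by years offered in the health state (t). Responses hypothetical.
def time_tradeoff_utility(full_health_years, state_years):
    return full_health_years / state_years

responses = {  # (x, t) pairs -- illustrative, not actual interview data
    "mild stroke":  (9.4, 10.0),
    "major stroke": (0.0, 10.0),
    "warfarin":     (9.97, 10.0),
}
utilities = {state: time_tradeoff_utility(*r) for state, r in responses.items()}
print(utilities)
```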

Abstract

A research prototype Physician Workstation (PWS) incorporating a graphical user interface and a drug ordering module was compared with the existing hospital information system in an academic Veterans Administration General Medical Clinic. Physicians in the intervention group received recommendations for drug substitutions to reduce costs and were alerted to potential drug interactions. The objective was to evaluate the effect of the PWS on user satisfaction, on health-related outcomes, and on costs. A one-year, two-period, randomized controlled trial with 37 subjects. Differences in the reliance on noncomputer sources of information, in user satisfaction, in the cost of prescribed medications, and in the rate of clinically relevant drug interactions were assessed. The study subjects logged onto the workstation an average of 6.53 times per provider and used it to generate 2.8% of prescriptions during the intervention period. On a five-point scale (5 = very satisfied, 1 = very dissatisfied), user satisfaction declined in the PWS group (3.44 to 2.98, p = 0.008), and increased in the control group (3.23 to 3.72, p < 0.0001). The intervention physicians did not use the PWS frequently enough to influence information-seeking behavior, health outcomes, or cost. The study design did not determine whether the poor usage resulted from satisfaction with the control system, problems using the PWS intervention, or the functions provided by the PWS intervention. Evaluative studies should include provisions to improve the chance of successful implementation as well as to yield maximum information if a negative study occurs.

Abstract

It is unknown whether eradication of Helicobacter pylori infection prevents development of gastric adenocarcinoma. To determine whether screening and treatment trials are warranted, we conducted a cost-effectiveness analysis to estimate the costs and benefits associated with screening for H pylori at age 50 and treating those individuals infected with antibiotics. We compared two interventions: (1) screen for H pylori and treat those with a positive test, and (2) do not screen and do not treat. Estimates of risks and costs were obtained by review of published reports. Since the efficacy of H pylori therapy in cancer prevention is unknown, we did sensitivity analyses, varying this estimate widely. In our base-case analysis, we assumed that H pylori treatment prevented 30% of attributable gastric cancers. In the base-case analysis, 11,646,000 persons in the US would be screened and 4,658,400 treated, at a cost of $996 million. Cost-effectiveness was $25,000 per year of life saved. Cost-effectiveness was sensitive to the efficacy of the cancer prevention strategy. At low efficacy rates (< 10%), the screening programme was more expensive (> $75,000 per year of life saved). In a high-risk group such as Japanese-Americans, however, screening and treatment required less than $50,000 per year of life saved, even at 5% treatment efficacy. Screening and treatment for H pylori infection is potentially cost-effective in the prevention of gastric cancer, particularly in high-risk populations. Cancer prevention trials are strongly recommended.

Abstract

To do a meta-analysis of studies that have evaluated the sensitivity and specificity of polymerase chain reaction (PCR) assay for the diagnosis of human immunodeficiency virus (HIV) infection in adults. Evaluating the performance of PCR is difficult because in certain clinical situations, the sensitivity or specificity of PCR may exceed those of the current reference standard tests (enzyme immunoassay followed by confirmatory Western blot analysis). Therefore, an additional goal was to develop recommendations for 1) the design of future evaluative studies of PCR and 2) the use of PCR in persons with suspected HIV infection. Studies published between 1988 and 1994 that were identified in a search of 17 computer databases, including MEDLINE, and abstracts identified from conference proceedings were used. Studies were included if DNA amplification by PCR was done on peripheral blood mononuclear cells from adults. Ninety-six studies met the inclusion criteria. Data were extracted independently by two reviewers. Study design was assessed independently by two investigators blinded to study results. Reported sensitivities for PCR range from 10% to 100%, and specificities range from 40% to 100%. A summary receiver-operating characteristic curve based on all 96 studies has a maximum joint sensitivity and specificity (upper left point on the curve, where sensitivity equals specificity) of 97.0% to 98.1%. If the threshold value that defines a positive PCR result is chosen so that sensitivity is higher than 98.1%, specificity will decrease to less than 98.1%. Conversely, if the threshold value that defines a positive PCR result is chosen so that specificity is greater than 98.1%, sensitivity will decrease to less than 98.1%. If sensitivity and specificity are chosen to be equal, the corresponding false-positive rate is 1.9% to 3.0%. 
At the maximum joint sensitivity and specificity, the positive predictive value of PCR ranges from 34% to 85% as the prevalence of HIV increases from 1.0% to 10%. We identified seven areas in which study design could be modified to 1) reduce susceptibility to bias in estimates of the sensitivity and specificity of PCR and 2) increase the generalizability of the study results. These modifications will also help to overcome methodologic problems created by the lack of a reference standard test. The PCR assay is not sufficiently accurate to be used for the diagnosis of HIV infection without confirmation. Use of PCR for the diagnosis of HIV in adults should be limited to situations in which antibody tests are known to be insufficient. Future studies of PCR performance should be sufficiently large and should use adequate reference standard tests and standardized methods for the performance of PCR. Specimens should be evaluated by persons blinded to clinical status and to the results of other diagnostic tests for HIV infection.
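The predictive values above follow directly from Bayes' rule applied to the summary sensitivity and specificity. As a minimal sketch (not the authors' code), a short Python function reproduces the reported 34%-85% range when sensitivity and specificity are both set to the 98.1% joint estimate:

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Bayes' rule: probability of infection given a positive test result."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# At the maximum joint sensitivity and specificity (98.1%), the PPV rises
# from roughly 34% to roughly 85% as prevalence goes from 1% to 10%.
low = positive_predictive_value(0.981, 0.981, prevalence=0.01)
high = positive_predictive_value(0.981, 0.981, prevalence=0.10)
print(round(low, 2), round(high, 2))
```

The sharp dependence of the PPV on prevalence is exactly why the abstract restricts PCR to settings where antibody testing is insufficient.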

Abstract

To evaluate the sensitivity and specificity of the polymerase chain reaction (PCR) for the diagnosis of infection with human immunodeficiency virus (HIV) in infants. We used studies published between 1988 and 1994 identified in a literature search of 17 databases, including MEDLINE. Studies were included if DNA amplification by PCR was performed on peripheral blood mononuclear cells from infants or children. Two investigators independently extracted data. The study design was assessed independently by 2 investigators who were blinded to study results. Thirty-two studies met the inclusion criteria and were analyzed. The median reported sensitivity was 91.6% (range, 31%-100%), and the median specificity was 100% (range, 50%-100%). A summary receiver operating characteristic curve based on all 32 studies indicated that PCR has a maximum joint sensitivity and specificity between 93.2% and 94.9%. Subgroup analysis indicated that the joint sensitivity and specificity was significantly (P = .04) higher in older infants (98.2%) than in neonates (aged < or = 30 days; 93.3%). For infants at low risk of perinatal transmission (probability of transmission, 8.3%), the positive predictive value for PCR is 55.8% in neonates and 83.2% in older infants. A negative PCR result reduces the probability of HIV infection to less than 3%. No studies met all criteria for study design. Although PCR is one of the best available tests for diagnosis of HIV infection in neonates and infants, it is not definitive. Therefore, PCR should be interpreted with the aid of careful clinical follow-up examinations. The sensitivity and specificity of PCR in neonates is lower than in older infants, which results in a low positive predictive value; however, negative tests are informative. Delaying the use of PCR until after the neonatal period or repeating PCR on independent samples obtained 30 to 60 days later will reduce test errors.

Abstract

Although screening inpatients for human immunodeficiency virus (HIV) in acute care hospital settings has been recommended, the cost-effectiveness of screening is not known. To estimate the cost-effectiveness of a voluntary screening program in acute care hospitals and associated clinics. During the first year, an HIV screening program implemented in acute care hospital settings in which the seroprevalence of HIV infection is 1% or more would result in the identification of approximately 110,000 undetected cases of HIV infection. The program would result in expenditures of approximately $171 million for testing and counseling, and expenditures of approximately $2 billion for incremental medical care for the patients identified as having HIV infection during the first year of screening. When the seroprevalence of HIV is 1%, the cost-effectiveness of screening is $47,200 per year of life saved. When the effect of early identification of HIV infection on the patient's quality of life also is considered, screening is less cost-effective. Screening-induced reductions in risk behavior improve the cost-effectiveness of screening by preventing the transmission of HIV.

Abstract

We demonstrated the use of the World Wide Web for the presentation and explanation of a medical decision model. We put on the web a treatment model developed as part of the Cardiac Arrhythmia and Risk of Death Patient Outcomes Research Team (CARD PORT). To demonstrate the advantages of our web-based presentation, we critiqued both the conventional paper-based and the web-based formats of this decision-model presentation with reference to an accepted published guide to understanding clinical decision models. A web-based presentation provides a useful supplement to paper-based publications by allowing authors to present their model in greater detail, to link model inputs to the primary evidence, and to disseminate the model to peer investigators for critique and collaborative modeling.

Abstract

To examine the cost-effectiveness of prescribing warfarin sodium in patients who have nonvalvular atrial fibrillation (NVAF) with or without additional stroke risk factors (a prior stroke or transient ischemic attack, diabetes, hypertension, or heart disease). Decision and cost-effectiveness analyses. The probabilities for stroke, hemorrhage, and death were obtained from published randomized controlled trials. The quality-of-life estimates were obtained by interviewing 74 patients with atrial fibrillation. Costs were estimated from literature review, phone survey, and Medicare reimbursement. In the base case, the patients were 65 years of age and good candidates for warfarin therapy. Treatment with warfarin, aspirin, or no therapy in the decision analytic model. Quality-adjusted survival and marginal cost-effectiveness of warfarin as compared with aspirin or no therapy. For patients with NVAF and additional risk factors for stroke, warfarin therapy led to a greater quality-adjusted survival and to cost savings. For patients with NVAF and one additional risk factor, warfarin therapy cost $8000 per quality-adjusted life-year saved. For 65-year-old patients with NVAF alone, warfarin cost about $370,000 per quality-adjusted life-year saved, as compared with aspirin therapy. However, for 75-year-old patients with NVAF alone, prescribing warfarin cost $110,000 per quality-adjusted life-year saved. For patients who were not prescribed warfarin, aspirin was preferred to no therapy on the basis of both quality-adjusted survival and cost in all patients, regardless of the number of risk factors present. Treatment with warfarin is cost-effective in patients with NVAF and one or more additional risk factors for stroke. In 65-year-old patients with NVAF but no other risk factors for stroke, prescribing warfarin instead of aspirin would affect quality-adjusted survival minimally but increase costs significantly.
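The dollars-per-QALY figures reported here come from a marginal (incremental) cost-effectiveness ratio: the extra cost of one strategy over another, divided by the extra quality-adjusted survival it buys. A minimal sketch follows; the $400 and 0.05-QALY increments are purely illustrative inputs, not values from the analysis:

```python
def icer(cost_new: float, cost_old: float,
         qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical illustration: if a therapy adds $400 in lifetime cost and
# 0.05 quality-adjusted life-years over its comparator, its marginal
# cost-effectiveness is about $8,000 per QALY gained.
print(round(icer(cost_new=10_400, cost_old=10_000,
                 qaly_new=10.05, qaly_old=10.00)))
```

The same ratio explains the abstract's age dependence: as the comparator's stroke risk rises (older patients, more risk factors), the QALY denominator grows and the ratio falls.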

Abstract

To determine whether preoperative coronary angiography and revascularization improve short-term outcomes in patients undergoing noncardiac vascular surgery. Decision analysis. Patients undergoing elective vascular surgery who had either no angina or mild angina and a positive dipyridamole-thallium scan result. Three strategies were compared. The first strategy was to proceed directly to vascular surgery. The second was to perform coronary angiography, followed by selective coronary revascularization, before proceeding to vascular surgery, and to cancel vascular surgery in patients with severe inoperable coronary artery disease (CAD). The third was to perform coronary angiography, followed by selective coronary revascularization, before proceeding to vascular surgery, and to perform vascular surgery in patients with inoperable CAD. Mortality, nonfatal myocardial infarction, stroke, uncorrected vascular disease, and cost were assessed; all outcomes were assessed within 3 months. Proceeding directly to vascular surgery led to lower morbidity and cost in the base-case analysis. The coronary angiography strategy led to higher mortality if vascular surgery proceeded in patients with inoperable CAD, but led to slightly lower mortality if vascular surgery was canceled in patients with inoperable CAD. The coronary angiography strategy also led to lower mortality when vascular surgery was particularly risky. Decision analysis indicates that vascular surgery without preoperative coronary angiography generally leads to better outcomes. Preoperative coronary angiography should be reserved for patients whose estimated mortality from vascular surgery is substantially higher than average.

Abstract

To determine the cost-effectiveness of a policy to screen surgeons for human immunodeficiency virus (HIV) infection to prevent transmission of HIV to patients having invasive procedures. Cost-effectiveness analysis. A one-time national screening program would identify approximately 137 surgeons with HIV infection (range, 28 to 423 surgeons) and would prevent approximately 4.3 infections (range, 1.9 to 21.3 infections) in patients treated by infected surgeons and 0.9 infections (range, 0 to 12.9 infections) in sexual partners of infected surgeons, at a direct cost of $8.1 million and an induced cost of approximately $44 million. It would result in expenditures of $458,000 per year of life saved (range, $147,000 to $687,000 per year of life saved), whereas an annual screening program would result in expenditures of approximately $1.1 million per year of life saved (range, $338,000 to $1,886,000 per year of life saved). If the prevalence of HIV infection in surgeons is estimated to be three times our base-case estimate (an increase from 0.1% to 0.3%), annual screening would result in expenditures of approximately $741,000 per year of life saved. If the probability of seroconversion after a patient is exposed to a contaminated instrument is increased to 5.0% from our base-case estimate of 0.29%, an annual screening program would still cost more than $228,000 per year of life saved. Screening surgeons for HIV to prevent transmission of HIV to patients having invasive procedures requires expenditures per year of life saved that are considerably in excess of those of most accepted health interventions. Surveillance studies of patients treated by surgeons infected with HIV should be continued to confirm that transmission of HIV to patients having invasive procedures is rare.

Abstract

As part of the Cardiac Arrhythmia and Risk of Death Patient Outcomes Research Team (CARD PORT) study, we are developing a comprehensive decision model to help physicians identify preferred strategies for preventing sudden cardiac death. The model integrates three components: a screening model, a treatment model, and a value model. Ultimately this model will use the CARD PORT's collective findings to produce policy recommendations and will support patient-specific clinical decision making. Our initial modeling suggests the importance of patient-specific value models in an analysis of treatment options. Although our model is specific to sudden cardiac death, other medical domains that exhibit similar characteristics--the importance of patient preferences and the uncertainty regarding the benefits of strategies for risk stratification and treatment--can use a conceptual framework similar to the approach we used to represent strategies to prevent sudden cardiac death.

Abstract

We are performing a randomized, controlled trial of a Physician's Workstation (PWS), an ambulatory care information system, developed for use in the General Medical Clinic (GMC) of the Palo Alto VA. Goals for the project include selecting appropriate outcome variables and developing a statistically powerful experimental design with a limited number of subjects. As PWS provides real-time drug-ordering advice, we retrospectively examined drug costs and drug-drug interactions in order to select outcome variables sensitive to our short-term intervention as well as to estimate the statistical efficiency of alternative design possibilities. Drug cost data revealed that the mean daily cost per physician per patient was 99.3 cents +/- 13.4 cents, with a range from $0.77 to $1.37. The rate of major interactions per prescription for each physician was 2.9% +/- 1%, with a range from 1.5% to 4.8%. Based on these baseline analyses, we selected a two-period parallel design for the evaluation, which maximized statistical power while minimizing sources of bias.

Abstract

Many clinical practice guidelines fail to account for the preferences of the individual patient. Approaches that seek to include the preferences of the individual patient in the decision-making process (e.g., interactive videodisks for patient education), however, may incur substantial incremental costs. Developers of clinical practice guidelines must therefore determine whether it is appropriate to make their guidelines flexible with regard to patient preferences. The authors present a formal method for determining the cost-effectiveness of incorporating the preferences of individual patients into clinical practice guidelines. Based on utilities assessed from 37 patients, they apply the method in the setting of mild hypertension. In this example, they estimate that the cost-effectiveness ratio for individualized utility assessment is $48,565 per quality-adjusted year of life, a ratio that compares favorably with other health interventions that are promoted actively. This approach, which can be applied to any clinical domain, offers a formal method for determining whether the incorporation of individual patient preferences is important clinically and is justified economically.

Abstract

We developed a computer-based utility assessment tool to assess the preferences of patients toward HIV-related health states and to identify risk behaviors (both sexual and drug related) of the patient being interviewed. The reliability of the computer-based interview was assessed through comparison with person-to-person interviews. Our pilot study included 22 patients. Twelve of these patients were also interviewed by the research assistants in person-to-person interviews. The agreement between the person-to-person and computer-based interviews was excellent (3 discrepancies among 180 compared answers), and the majority of the patients preferred to use the computer to disclose sensitive information regarding risk behaviors. Our study suggests that assessment of patient preferences and risk factors can be performed reliably through a computer-based interview.

Abstract

The growth in guideline development projects has focused attention on the methods used to develop guidelines. For a guideline to be sound, it should be linked, on the basis of scientific evidence, to the health outcomes that the guideline is designed to promote. Structuring a health intervention as an influence diagram (a type of decision model) (1) allows for the identification of the relevant benefits, harms, and costs that may result from an intervention; (2) provides an explicit link between the intervention and these outcomes, a crucial prerequisite for the development of an outcome-based guideline; and (3) identifies the evidence that must be synthesized to predict the effect of the intervention on the health outcomes. EXAMPLE: In the development of a guideline related to prevention of opportunistic infections in HIV-infected persons, we would define the interventions (for example, use of medication to prevent Pneumocystis carinii pneumonia [PCP]), the intended health outcome (a potential reduction in the number of opportunistic infections), and the evidence that demonstrates that the intervention produces the desired outcome. If PCP prophylaxis is delayed, the HIV-infected person is exposed to an undue risk of PCP, with its attendant morbidity and mortality. If it is initiated too early, the person incurs excess monetary costs and may experience additional side-effect-associated morbidity. EXAMPLE: The intervention in question is screening for HIV infection, and the outcomes of interest are the medical benefits and harms associated with screening and the financial costs (and savings) that a screening program would incur. 
Screening for HIV infection differs from many clinical questions because it has potential benefit both to the persons screened and to public health, if the screened person reduces risk behaviors that might transmit HIV infection. Structuring a problem with an influence diagram: delineates an explicit link between interventions and outcomes; focuses the questions to be addressed (a series of more sharply defined questions, each of which we may be able to answer on the basis of direct evidence, replaces a much broader question [should we screen for HIV?], which cannot be answered directly); and highlights the importance of a clear, unambiguous statement of whose benefits and costs are under consideration.

Abstract

In light of the increasing problem of perinatal human immunodeficiency virus (HIV) transmission, the issue of screening women for HIV is receiving considerable attention. We analyzed the costs and benefits of screening women of childbearing age for HIV. The analysis was based on a dynamic model of the HIV epidemic that incorporated disease transmission and progression, behavioral changes, and effects of screening and counseling. We found that the primary benefit of screening programs targeted to women of childbearing age lies not in the prevention of HIV infection in their newborns but in the prevention of infection in their adult contacts. Because of this benefit, screening medium- and high-risk women is likely to be cost-beneficial over a wide range of assumptions about program cost and behavioral changes in response to screening.
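The dynamic-model idea here (screening and counseling change transmission behavior, and the effect compounds through the epidemic over time) can be illustrated with a toy compartmental model. This is a deliberately minimal sketch, not the published model: the function and parameter names are invented for illustration, and screening is reduced to a single proportional cut in the transmission rate.

```python
def epidemic_trajectory(beta: float, steps: int, s0: float, i0: float,
                        risk_reduction: float = 0.0) -> list:
    """Toy discrete-time susceptible-infected model.

    Screening/counseling is represented only as a proportional reduction
    in the per-step transmission rate beta (an illustrative assumption).
    Returns the infected fraction at each step.
    """
    s, i = s0, i0
    beta_eff = beta * (1.0 - risk_reduction)
    history = []
    for _ in range(steps):
        new_infections = beta_eff * s * i / (s + i)
        s -= new_infections
        i += new_infections
        history.append(i)
    return history

# A screening-induced 20% cut in transmission lowers the infected
# fraction at every step relative to the no-screening baseline, and the
# gap widens over time -- the compounding benefit to adult contacts.
base = epidemic_trajectory(0.05, 120, s0=0.99, i0=0.01)
screened = epidemic_trajectory(0.05, 120, s0=0.99, i0=0.01, risk_reduction=0.2)
```

The point of the sketch is qualitative: benefits of screening women of childbearing age accrue largely through averted onward transmission, which a static (per-birth) analysis would miss.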

Abstract

This study was designed to test a dissemination model for providing clinical preventive medicine (CPM) training to general internal medicine faculty across the United States. The model incorporated direct instruction of a few faculty as seminar facilitators who, in turn, taught a CPM curriculum to their faculty colleagues, who then could teach it to housestaff and students. The CPM curriculum consisted of six seminars that focused primarily on the risk factors for chronic diseases and on behavior-change methods for modifying smoking, diet, and exercise. Faculty who participated in the seminars had significant pre- to post-test increases in knowledge and reported self-efficacy to implement CPM strategies with patients, as well as changes in CPM clinical practices. These faculty, in turn, successfully disseminated CPM information to their housestaff, who also had increases in self-efficacy and changed clinical practices regarding CPM topics. The successful implementation of the dissemination model attests to its viability as a mechanism for disseminating CPM curricula and increasing the emphasis on CPM issues in both clinical teaching and clinical encounters with patients.

Abstract

To estimate the occupational risk from infection with the human immunodeficiency virus (HIV) in terms of loss of (quality-adjusted) life expectancy, and to compare that risk to those posed by other hazards faced by health care workers. Decision-analytic model. For a 30-year-old female health care worker (unvaccinated for hepatitis B virus [HBV]), the loss of life expectancy from a needlestick from a symptomatic HIV-positive (HIV+) patient is 39 days (range, 17 to 93 days), as compared with a loss of 17 days from a needlestick from a patient who is hepatitis-B-surface-antigen-positive (HBsAg+), and 38 days from a needlestick from a patient who is hepatitis-B-e-antigen-positive (HBeAg+). When morbidity is included in the analysis of risk (through calculation of the quality-adjusted loss of life expectancy), the risk from both HBV and HIV increases. The quality-adjusted loss of life expectancy due to a needlestick exposure from a symptomatic HIV+ patient is 45 days (range, 20 to 108 days), as compared with a quality-adjusted loss of life expectancy of 48 days from a needlestick from an HBsAg+ patient, and 109 days from a needlestick from a patient who is known to be HBeAg+. By comparison, a cross-country automobile trip is associated with a loss of life expectancy of approximately 1 day. The 45- to 50-day loss of quality-adjusted life expectancy from percutaneous exposures to HIV and HBV is approximately the same magnitude as the gain in life expectancy from 10 years of annual screening for breast cancer with mammography and physical examination. The risk associated with percutaneous exposures to symptomatic HIV+ patients is comparable to other risks that health care workers have faced knowingly and have accepted in the recent past. However, the loss of quality-adjusted life expectancy associated with a needlestick exposure is significant. 
Identification of cost-effective methods that increase the safety of medical personnel but also ensure full access to high-quality care for HIV+ patients should be a high priority.
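The day-count losses quoted above are, at their core, expected values: the per-exposure probability of seroconversion multiplied by the life expectancy lost if infection occurs. A minimal sketch under assumed inputs (a roughly 0.3% per-needlestick transmission risk and roughly 36 years lost if a 30-year-old worker is infected; both numbers are illustrative, not the model's actual parameters):

```python
def expected_loss_days(p_seroconversion: float,
                       years_lost_if_infected: float) -> float:
    """Expected loss of life expectancy per exposure, in days.

    Expected value: probability of infection times the (deterministic)
    years lost if infection occurs, converted to days.
    """
    return p_seroconversion * years_lost_if_infected * 365.0

# Illustrative inputs: ~0.3% transmission risk per needlestick and ~36
# years lost if infected yield an expected loss on the order of 40 days,
# the same magnitude as the abstract's 39-day estimate.
print(round(expected_loss_days(0.003, 36.0), 1))
```

Framing the risk as an expected life-expectancy loss is what makes the comparisons in the abstract (automobile travel, mammography screening) possible on a common scale.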

Abstract

Clinical problems represented by decision trees can be analyzed in terms of the probability threshold model, which provides management recommendations based on the prior probability of disease, the test threshold, and the test-treatment threshold. As originally proposed, the threshold model assumes that diagnostic tests provide information about a single event that is relevant to the decision. For some problems, however, a diagnostic test may provide information about more than one such event (e.g., a computed tomography [CT] scan gives information about both mediastinal and hilar metastases in lung cancer). The authors extend the probability threshold model to cases in which a single test provides information about two events that are relevant to the decision. They derive four thresholds that determine the best strategy for any combination of test results. The approach is illustrated for the decision to use a CT scan to stage lung cancer. The analysis reveals that: 1) the range of prior probabilities for which testing is optimal increases; 2) for some prior probabilities only test results about one event are important; 3) for some prior probabilities test results about both events are important; and 4) failure to account fully for information provided by a test can lead to erroneous test and treatment recommendations.

Abstract

The vitreal space of the intact eye of albino rats was perfused in vivo. The concentration of several endogenous amino acids in the vitreal effluent was measured by the [3H]microdansylation procedure. GABA was never detected despite a sensitivity of the method of 0.5 pmol. In contrast to previous results obtained in pigmented rats, photic stimulation with flashing white light did not alter the release of glycine or any of the other amino acids. Potassium (60 mM) and ouabain (0.1 mM) evoked a specific release of glycine. The potassium-evoked release was blocked by magnesium suggesting a neuronal site of origin of glycine. Ouabain-evoked release was not blocked by magnesium. The results were contrasted with experiments on radiolabeled amino acid release from retinas preloaded and superfused in vitro, a condition in which glial localization of exogenous amino acids predominates.