A pediatrician gives vaccine advice to presidential candidates

First, I’d like to thank you for taking the time to read this; I know you’re busy fund-raising and campaigning, so I’ll try to keep this brief. It’s recently become quite apparent that several of you have some misconceptions about our immunization program. That’s unfortunate for people who are seeking such a prominent position. I know science can be complicated, but public health is a pretty important topic. (It’s especially disappointing that the physicians among you don’t seem to fully understand this issue, but I suppose immunizations are outside your specific fields.)

Anyway, the following are a few brief facts about vaccines that I hope you will find useful in your next debate.

1. Vaccines do not cause autism. Numerous studies have demonstrated this, and a huge meta-analysis involving over 1.2 million children demonstrated that pretty clearly. Evidence doesn’t get any better than that.

2. The guy who started this whole autism/vaccine thing lost his license because of his fraudulent study, which has since been retracted.

3. “Too many, too soon” is not a thing. Children encounter many viruses and bacteria every day, and their immune systems are not overwhelmed. (And they don’t develop autism.)

4. Although a popular book about alternative vaccine schedules has been quite a hit, the guy who wrote it didn’t bother to prove that his schedule was effective, or any safer than the schedule developed by the most knowledgeable infectious disease experts in our great nation. He just made it up.

5. Spreading out immunizations has been shown not to reduce the risk of complications from vaccines. All it does is extend the time period during which children are at risk for these infections. And since the most significant risk of immunizations is driving to the office to get them, it creates some indirect risks as well.

6. While we obviously disagree about some of those points, I support your assertion that we shouldn’t bother immunizing against insignificant diseases. So I’ve narrowed the list down to the diseases that cause “death or crippling.” (The links are from the CDC, a government organization made up of people who know more than you do about infectious diseases. You should get to know them; they will work for one of you some day.)

7. Since you’re probably not familiar with the CDC vaccine schedule that you think people should avoid, I just listed every one of the vaccines it recommends. All of those diseases kill people. Fortunately, they don’t kill very many people anymore. (Because of vaccines.)

8. And since I know your world isn’t all about saving lives, vaccines save money, too. That might be a good talking point.

I could go into more detail, and I’d be happy to speak to you personally if you’d like to hear more. In fact, there’s a huge network of pediatricians who would be happy to field the vaccine questions while you tend to your more important affairs. (We were actually going to talk to these families anyway, because their children are our patients.) But hopefully, this basic information has been enough to allow you to speak a little more intelligently about the topic, especially since one of you will be running our country.

But in the future, if you’re unsure about similarly complicated topics, please feel free to admit your lack of knowledge and defer to the experts. That’s what real leaders do.

Chad Hayes is a pediatrician who blogs at his self-titled site, Chad Hayes, MD.

I’ve been struck recently by how little we (or at least I) seem to know about variations in the use of health services across the world, and what drives them. Do people in, say, India or Mali use doctors “a lot” or “a little”? Even harder: do they “overuse” or “underuse” doctors? At least we could say whether doctor utilization rates in these countries are low or high compared with the rate for the developing world as a whole. But typically we don’t actually make such comparisons – we don’t have the numbers at our fingertips. Or at least I don’t.

I’m also struck by how strongly people feel about the factors that shape people’s use of services and what the consequences are. Some argue that the health problems in the developing world stem from people not getting care, and that people don’t get care because of shortages of doctors and infrastructure. Others argue that doctors are in fact quite plentiful, at least in principle; the problem is that in practice doctors are often absent from their clinics, so people don’t get care at the right moment. Still others argue that doctors are plentiful even in practice and people do get care; the problem is that the quality of that care is shockingly bad. Who’s right?

WHS to the rescue – again

As in a recent post of mine on Let’s Talk Development, I thought the World Health Survey might shed some light on these issues. The WHS was fielded in the early 2000s in 70 countries – spanning the World Bank’s lower-, middle- and high-income categories. The WHS enumerators asked a randomly selected adult in each household about his or her use of inpatient and outpatient care; in the numbers that follow I’ve focused on use in the last 12 months. As I said in the earlier blog post, the WHS does have some drawbacks: it covers some regions fairly fully, others much less fully; it’s 10 years old; and all we can tell is whether inpatient or outpatient care was received, not the number of contacts. But despite these problems, the WHS gets us quite a long way.

A lot of variation – but not necessarily what you’d expect

The maps below show the inpatient admission and outpatient visit rate – actually the fraction of people who had at least one admission or visit in the last 12 months. Green countries are above the developing-country average; red countries are below it.
For IP admissions, most of the OECD countries are above the developing-country average (6.98%). Brazil, Namibia and the European and central Asian countries are also above it. African and Asian countries are mostly below or close to the developing-country average.

The picture is different for outpatient visits. Several OECD countries are actually below the developing-country average (27.52%). And for the most part, the countries below the developing-country average are in Africa: many are considerably below it (Mali stands out dramatically); only a few are above it (Kenya and Zambia stand out). By contrast, several countries in Asia are above the developing-country average: India and Pakistan are dramatically above it, but China and Vietnam are also above it; a few Asian countries are below it – Laos and Myanmar are considerably below it, Malaysia and the Philippines less so.

Do variations in doctor numbers and infrastructure explain variations in utilization?

The maps below show data on doctors and hospital beds per 1,000 persons. I got the data from the World Development Indicators, and took the country averages for the first half of the 2000s. As before, green countries are above the developing-country average; red countries are below it. The countries above the developing-country averages are mostly those in the OECD and Europe and central Asia, though in the case of doctors per 1,000 some are also in Latin America and the Caribbean. Except for China, most Asian countries fall below the developing-country average.

Correlating the WHS utilization data with the WDI doctor and beds data shows that doctors and beds per 1,000 persons are positively associated with outpatient visit and inpatient admission rates. A lack of doctors and beds looks like it could indeed be part of the explanation for low utilization rates, though of course we haven’t established causality.

But a lack of doctors and hospital beds is only part of the story. Together they “explain” only 60% of the cross-country variation in inpatient admission rates, while doctors “explain” an even smaller 20% of the cross-country variation in outpatient visit rates.
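To make the “explain” language concrete: in a simple one-predictor regression, the share of cross-country variation explained is R², the squared Pearson correlation between the predictor and the outcome. Here is a minimal sketch of that calculation, using made-up country figures rather than the actual WHS/WDI data:

```python
# Illustrative sketch with hypothetical numbers (NOT the WHS/WDI data):
# for a one-predictor regression of a utilization rate on doctors per
# 1,000 persons, R^2 equals the squared Pearson correlation.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical country-level figures: doctors per 1,000 persons, and the
# percentage of adults with at least one outpatient visit in 12 months.
doctors = [0.1, 0.3, 0.5, 1.2, 2.4, 3.1]
visit_rate = [12.0, 35.0, 20.0, 30.0, 38.0, 45.0]

r = pearson_r(doctors, visit_rate)
r_squared = r ** 2  # fraction of cross-country variation "explained"
print(round(r_squared, 2))
```

With real data (and with two predictors, as in the doctors-plus-beds case for inpatient admissions) one would run a multiple regression, but the interpretation of R² as the share of variation “explained” is the same.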

Some countries – India and Pakistan are examples – are below the developing-country average on doctors per 1,000 persons, but above the developing-country average on the outpatient visit rate. Doctors and hospitals in these countries treat far more patients than one would expect given the number of doctors and hospital beds in these countries. In these countries, it doesn’t look like accessibility is the pressing issue; as research by my colleague Jishnu Das confirms, at least in India, poor quality is the bigger problem.

By contrast, much – but not all – of Africa is in the opposite camp: these countries have inpatient admission and outpatient visit rates that are below what would be expected on the basis of their doctor and beds per 1,000 figures. So it’s not just that these countries lack doctors and beds; it’s also that people are not getting the level of contacts you’d expect from the existing staff and infrastructure. Here it looks like absenteeism could well be part of the story; recent research from my colleague Markus Goldstein confirms it – pregnant women whose first clinic visit coincided with a nurse’s attendance were found to be 46 percent more likely to deliver their baby in a hospital.

Two take away messages

Message #1 is that countries differ considerably in their utilization rates. Much of Asia visits doctors more often than the developing-world average, and indeed the world average; India’s consultation rate is a third higher than the global average. Africa stands out as the continent where outpatient visits and inpatient admissions lag behind the rest of the world.

Message #2 is that these variations are partly explained by differences in doctors and hospital beds per capita, but only partly. The problem goes deeper than hiring more doctors and building more hospitals. Africa has lower outpatient visit rates than its doctors per 1,000 figures would suggest, while the opposite is true of India and Pakistan. In Africa, it looks like the binding constraint may well be absenteeism, while in South Asia it looks like the first-order problem is the poor quality of care that’s actually delivered.

Summary: New technologies take time to mature, but Gartner’s annual hype cycle diagram provides a guide to whether they are being overhyped and how close they are to becoming productive. http://zd.net/1c2wvEb

The 2013 edition of Gartner’s long-running Hype Cycle for Emerging Technologies focuses on “the evolving relationship between humans and machines … due to the increased hype around smart machines, cognitive computing and the Internet of Things.”

Gartner fellow Jackie Fenn, who came up with the hype cycle idea in 1995, says “there are actually three main trends at work. These are augmenting humans with technology — for example, an employee with a wearable computing device; machines replacing humans — for example, a cognitive virtual assistant acting as an automated customer representative; and humans and machines working alongside each other — for example, a mobile robot working with a warehouse employee to move many boxes.”

Fenn’s collaborator Hung LeHong says these trends have been made possible because machines are becoming better at understanding humans and humans are becoming better at understanding machines. “At the same time, machines and humans are getting smarter by working together.”

2. Machines replacing humans

Robots have been used on the factory floor for decades, but improvements in technology mean there is still plenty of scope for automating both physical and mental procedures. Gartner says: “Organizations should look to some of these representative technologies for sources of innovation on how machines can take over human tasks: volumetric and holographic displays, autonomous vehicles, mobile robots and virtual assistants.”

3. Humans and machines working alongside each other

Gartner says: “The main benefits of having machines working alongside humans are the ability to access the best of both worlds (that is, productivity and speed from machines, emotional intelligence and the ability to handle the unknown from humans). Technologies that represent and support this trend include autonomous vehicles, mobile robots, natural language question and answering, and virtual assistants.” One example is IBM’s Watson working alongside doctors and providing natural-language question answering (NLQA).

The point of the Hype Cycle is to give enterprises some idea of how far various technologies are from the “plateau of productivity”, where they can be more easily adopted. The cycle has five stages, for which Gartner uses terminology reminiscent of John Bunyan’s Pilgrim’s Progress. It starts with a Technology Trigger: a new invention or innovation. That gets the attention of the media, analysts, conference organizers etc., which drives the idea to a Peak of Inflated Expectations. At this point, disillusion sets in. As I noted in the Guardian in 2005, “The press, having overhyped it, knocks it for being overhyped, and it descends into the Trough of Disillusionment.” Successful innovations pass through the trough and start to climb the Slope of Enlightenment before reaching the Plateau of Productivity.

In the 2013 hype cycle, Technology Triggers include SmartDust, brain-computer interfaces, and quantum computing, all of which Gartner reckons are 10 years or more from the plateau. It reckons autonomous vehicles and biochips are 5-10 years away.

Gartner’s Hype Cycle for Emerging Technologies, 2013 ($1,995) “includes a video in which Ms Fenn provides more details”. Fenn and LeHong are also hosting two free webinars at 3pm and 6pm (UK time) on August 21, registration required.

Jack Schofield spent the 1970s editing photography magazines before becoming editor of an early UK computer magazine, Practical Computing. In 1983, he started writing a weekly computer column for the Guardian, and joined the staff to launch the newspaper’s weekly computer supplement in 1985. This section launched the Guardian’s first website and, in 2001, its first real blog. When the printed section was dropped after 25 years and a couple of reincarnations, he felt it was time for a change.

Since 1973, when Jack Wennberg published his first paper describing geographic variations in health care, researchers have argued about both the magnitude and the causes of variation. The argument gained greater policy relevance as U.S. health care spending reached 18 percent of GDP and as evidence mounted, largely from researchers at Dartmouth, that higher spending regions were failing to achieve better outcomes. The possibility of substantial savings not only helped to motivate reform but also raised the stakes in what had been largely an academic argument. Some began to raise questions about the Dartmouth research.

Today, the prestigious Institute of Medicine released a committee report, led by Harvard’s Professor Joseph Newhouse and Provost Alan Garber, that weighs in on these issues.

The report, called for by the Affordable Care Act and entitled “Variation in Health Care Spending: Target Decision Making, Not Geography,” deserves a careful read. The committee of 19 distinguished academics and policy experts spent several years documenting the causes and consequences of regional variations and developing solid policy recommendations on what to do about them. (Disclosure: We helped write a background study for the committee).

But for those trying to make health care better and more affordable, whether in Washington or in communities around the country, there are a few areas where the headlines are likely to gloss over important details in the report.

And we believe that the Committee risks throwing out the baby with the bathwater by appearing, through its choice of title, to turn its back on regional initiatives to improve both health and health care.

What the committee found

The report confirmed three core findings of Dartmouth’s research.

First, geographic variations in spending are substantial, pervasive and persistent over time — the variations are not just random noise. Second, adjusting for individuals’ age, sex, income, race, and health status attenuates these variations, but plenty of variation remains. Third, there is little or no correlation between spending and health care quality. The report also effectively identifies the puzzling empirical patterns that don’t fit conveniently into the Dartmouth framework, such as the lack of association between spending in the commercial insurance and Medicare populations.
The committee also confirmed earlier work by Harvard investigators showing that, for the commercially insured population, variations in the prices paid by private health plans explain most of the variations in private insurance spending. The committee deserves considerable credit for deepening our understanding of this irrational world of pricing commercial health care services. Yet as the report finds, even in the commercially insured population, there are substantial differences in utilization rates across regions. We would therefore argue that for commercial populations both price and utilization deserve attention, especially because in many regions, avoidable utilization may be easier to address than price.

It is Medicare spending growth, however, that represents arguably the greatest risk to the financial health of the U.S. Treasury, and in Medicare, variations are almost entirely the consequence of utilization of services, not prices. The report finds that the single largest component of the variation in Medicare spending across regions that remains after risk and price adjustment is due to post-acute care (including skilled nursing facility services, home health care, hospice, inpatient rehabilitation and long term acute care). These services have also been a major source of growth.

But this focus on post-acute rather than acute hospital and physician services misses the key point that dysfunctional regional health systems are characterized both by hospitals providing fragmented and expensive care and by a large and thriving post-acute care sector ready and eager to absorb the discharged patients. For example, Joan Teno and colleagues at Brown University have established the strong association of inpatient treatments with no medical benefit, such as feeding tubes for people with advanced dementia, with high rates of regional resource use.

Which brings us to…

The IOM committee’s policy recommendations: Where they hit the mark …

The committee makes five policy recommendations — and we agree with all of them. First, they call for making more and better data available, on both Medicare and commercial populations. Second, they recommend that CMS continue to test new payment models that encourage clinical and financial integration. Third, they call for timely and iterative evaluation of current and new payment reforms so that improvements can be made to the models. Fourth, they call on Congress to grant CMS the flexibility to accelerate the transition to value-based payment models as successful approaches emerge.

The fifth recommendation focuses on whether Congress should adopt a geographically based payment adjustment. When the committee was first mandated by Congress in the midst of health care reform in 2010, congressional members from regions with lower costs espoused a “Value Index” in which Medicare would reward low-spending regions with higher reimbursements, at the expense of high-spending regions. The committee concluded that payment mechanisms should not be tied to region, but instead targeted to individual providers, rightly criticizing the Value Index approach as not providing institutions and systems with the right incentives to reduce costs and improve quality.

… and where they fall short: Geography does matter

We believe, however, that the committee, by subtitling the report “Target Decision Making, Not Geography,” will confuse the media and casual readers (for example, those who don’t make it to page 3-3 in the full report) by appearing to cast doubt on the promise of geographic and regional efforts to improve the quality and efficiency of U.S. health care.

As the late Nobel Prize-winning economist Elinor Ostrom emphasized, successful management of complex social problems can best be achieved through sustained collaboration among diverse stakeholders, often across traditional political boundaries. She demonstrated that cooperative agreements are often the most effective approach to solving the kinds of problems we face in health care. Among these are the natural instincts of physicians and hospitals within local health care systems to protect their financial health by expanding capacity and defending market share, whether by opening new cardiac centers when the one at the nearby hospital is perfectly adequate, or by buying proton accelerators that will be used to treat conditions where they offer no demonstrated benefit.

The rationale for a geographic focus on health care reform is strong: the factors that determine population health are largely local, rooted in the environmental, social, economic, and behavioral determinants of health. Many of the factors that influence health care quality and costs are also local, including local supply, pricing behavior, and the relative emphasis of providers on profit. For example, in the widely cited New Yorker article by Atul Gawande, Medicare utilization in McAllen was found to be nearly twice as high as that in another Texas border town, El Paso, despite the existence of multiple hospitals in both McAllen and El Paso, nearly identical Medicare prices, and common Texas malpractice laws.

Many regional multi-stakeholder initiatives have been established. Although most began with a focus on quality, many are beginning to act more broadly to both improve health and lower costs. Three examples include Pueblo, Colorado (Regional Triple Aim); Akron, Ohio (Accountable Care Community); and the Atlanta Regional Collaborative for Health Improvement (focused on driving provider transitions to global payment, capturing savings, and reinvesting in strategic population health initiatives).

While the IOM Committee is exactly right to call for improved financial incentives for health care providers, we should also remember that both health and health care are local. Geography matters.

Elliott Fisher, MD, MPH, and Jonathan Skinner, PhD, are professors at Dartmouth’s Geisel School of Medicine and The Dartmouth Institute for Health Policy and Clinical Practice. Fisher is a principal investigator, and Skinner a senior scholar, of The Dartmouth Atlas Project.

As we grapple with provider shortages, the surge in chronic illness and the quality-to-price-ratio (QPR, as they say in the wine business) challenge in US healthcare delivery, it’s hard to imagine a future that does not include some sort of guideline- or algorithm-driven care. As providers take on more financial risk, one common strategy involves team-based care, and the attendant increase in decision-making and care delivery by non-physician clinicians. If the je ne sais quoi feature of a quintessentially great doctor is clinical judgment and instinct, one of the challenges of this transition to team-based care is how to harness that trait and use it efficiently.

Care decisions that are unassailable at a population level (e.g., women should have regular, routine Pap smears; smoking is bad for your health) or are algorithmic in nature (e.g., titration of treatment for uncomplicated hypertension, or therapy for mild to moderate teenage acne) can all be effectively reduced to guidelines. This, in turn, allows a physician to delegate certain therapeutic decisions to non-physician providers while maintaining a high degree of care quality. It is also thought that this type of uniformity of care delivery will improve the QPR too, by decreasing variability.

How do we come up with guidelines? Typically they are based on large-scale, randomized, controlled clinical studies. As is nicely articulated in a recent JAMA opinion piece by Drs. Jeffrey Goldberg and Alfred Buxton (JAMA, June 26, 2013—Vol 309, No. 24, pg 2559), guidelines are formulated based on the inclusion criteria for these trials. This process gives us comfort that guidelines are based on rigorous science — and that is a good thing. The challenge arises when we realize that individuals do not reflect populations exactly. Clinical research is much more complex than wet lab work because people are complex and indeed unique. Every clinician has had the experience of prescribing a therapy to a patient who fit guideline criteria exactly and having the opposite outcome of what the guideline predicts.

Goldberg and Buxton point out the collision of this guideline-based care delivery model with the burgeoning area of personalized medicine. I was immediately drawn to their definition of personalized medicine: “The tailoring of medical treatment to the individual characteristics of each patient. It does not literally mean the creation of drugs or medical devices that are unique to a patient, but rather the ability to classify individuals into subpopulations that differ in their susceptibility to a particular disease or their response to a specific treatment.” I always felt like there was too much emphasis on the genetic components of personalized medicine.

Our vision at the Center for Connected Health (which is backed up by our experience to date) is that we will get far richer and more complex data from multiple phenotypic inputs, such as physiologic monitoring data and mood- and motivation-related data, than is represented by genomic data. The genome is an incredibly important anchor for devising a personalized medicine profile, but the profile will change over an individual’s lifetime according to these phenotypic inputs.

We’ve done some preliminary work on this and found that we can indeed map individuals’ phenotypic data over time as they go through an intervention designed, for example, to improve activity level. During a six-month period of tracking activity and motivation, we have seen dynamic changes in these two variables. Think about it over a lifetime.

The collision with guidelines is multifactorial. We are all individuals, and none of us is completely representative of the composite patient defined by the inclusion criteria for the clinical trial that led to the guideline. Thus, some of us are bound to be poor candidates for the prescribed intervention (I hate to mention it, but we’ve all seen examples of Uncle Harry, who smoked two packs per day, lived into his 90s and died of causes unrelated to smoking). If that weren’t enough, there is the fact that we change over time, and though we might fit a guideline today, we may not in a year.

Really, when you think about it, ‘clinical judgment and instinct’ is the 20th century (and earlier) embodiment of personalized medicine. Those of us who are clinicians can all point to experiences where we’ve said, “I can’t tell you why, but I really think we should do it this way” (this way being contrary to conventional wisdom) and it has generated a positive outcome. Of course we also have experiences where the outcome is not good or where we make mistakes that could have been prevented by adherence to guidelines.

How to make sense of this complex and contradictory situation? Here’s my take:

Personalized medicine, however you define it, is still in the very early stages. We have decades to go, probably on both the genetic and phenotypic fronts, before we can comfortably replace guidelines.

We should welcome the sharing of decision-making across the care team and maximize the use of non-physician clinicians. Guidelines give us the state-of-the-art way to do this.

The best form of personalized medicine today is still clinician instinct and judgment. This does not mean deferring all clinical decisions to the most senior or most highly trained person on the team. The care delivery culture can be modified to maximize appropriate personalization of care while adhering appropriately to guidelines. This requires an open culture where inquiry is encouraged. Each care team member must be comfortable with what he or she doesn’t know, with spotting exceptions to norms and engaging other team members in a learning dialogue around these exceptions.

This should enable guidelines to be appropriately applied while surfacing exceptions for discussion. In the meantime, we and others will be working as fast as we can to create the framework for personalized medicine from both the genetic and phenotypic perspective.

In recent weeks the world’s leading medical journals have published articles about the overtreatment of mild hypertension, the risks of breast cancer overdiagnosis, and the lack of effectiveness and potential harms of general health checks.

As the studies of dangerous excess mount, so too does the effort to raise awareness about the problem. JAMA Internal Medicine now has a regular “Less is more” feature, the BMJ has just launched its “Too much medicine” campaign, and professional societies in the US are running the “Choosing wisely” initiative, highlighting overused tests and treatments.

In the field of mental health few could have missed the global fight over the DSM-5 and vociferous claims it will further fuel the medicalisation of normal life.

There’s little doubt that the market-based system in the US is the epicentre of excess — where health care now comprises almost one-fifth of the entire economy — but the problem affects many nations.

With breast cancer for example, estimates based on incidence studies suggest one-third of invasive cancers diagnosed by screening mammography in NSW may be overdiagnosed — in other words, the cancer would not have gone on to harm the woman.

The probable causes of overdiagnosis and overtreatment are complex — technological change, commercial gain, professional imperialism, fears of litigation, perverse incentives and our deep cultural faith in early detection. But despite the complexity and enormity of the challenge, it’s surely time to try to work out how we can wind back the harms of too much medicine.

A group of Australian researchers are a key driving force behind the first international scientific conference on overdiagnosis to be held in the US this September. The Dartmouth Institute for Health Policy and Clinical Practice is a logical host for the Preventing Overdiagnosis conference, with its proud history of medical scepticism and impeccable credentials on the dangers of too much medicine.

Resulting from a small meeting on Queensland’s Gold Coast last year, the conference is being run in partnership with the BMJ and one of the world’s most influential consumer organisations, Consumer Reports. It will feature 90 scientific presentations on the problem and its solutions, and keynote speakers include Dr Virginia Moyer, the chair of the US Preventive Services Task Force, Dr Allen Frances, chair of the DSM-IV task force, and Dr Barry Kramer, a senior director at the National Cancer Institute, which has made overdiagnosis one of its research priorities.

Along with the research and the conferences, the time is ripe for a lot more discussion about what can be done in the clinic and the classroom, how we can communicate the counterintuitive message that less is sometimes more, and how we can develop and evaluate effective policy responses.

The aim, after all, is not just more meetings and peer-reviewed papers, but fewer healthy infants labelled unnecessarily with gastro-oesophageal reflux disease, less distress overdiagnosed as mental illness, and fewer of our elders assailed by out-of-control polypharmacy. The less we waste on unnecessary care, the more resources there are for those in genuine need.

Along with innovations in genetics and information technology, one of the exciting areas in medicine in the 21st century will be how to wind back unnecessary excess — safely and fairly.

Ray Moynihan is a senior research fellow and PhD student at Bond University, and co-organiser of the Preventing Overdiagnosis conference being held at Dartmouth, US, 10–12 September 2013. www.preventingoverdiagnosis.net

How the healthcare industry’s scare tactics have screwed up our economy — and our future http://bit.ly/18TFCaf

There are multiple lines of evidence that doing more things to patients doesn’t always result in better health. I summarize a few examples here.

Dartmouth Studies

Researchers at Dartmouth examined the relationship between medical resources used and the resulting health outcomes in people nearing the end of their lives in two California regions, Los Angeles and Sacramento.

In Los Angeles, the patients used 61% more hospital beds, 128% more intensive care unit (ICU) beds, and 89% more physician labor in the management of chronically ill patients during the last two years of life compared to Sacramento. In spite of this intense use of medical resources, the quality of care for patients with heart attacks, heart failure, and pneumonia was worse in Los Angeles. Patients did not enjoy this aggressive care either. Patients rated 57% of Los Angeles hospitals as below average compared to 13% of Sacramento hospitals.

What are the cost implications of the overly aggressive care in Los Angeles? If the Los Angeles hospitals had functioned at the same level as the Sacramento hospitals over the five years of the study measuring these differences, the savings to the Medicare system would have been approximately $1.7 billion.

Brain Aneurysms

Researchers studied immediate family members of patients who had symptomatic brain aneurysms. The researchers wanted to know if finding and surgically fixing aneurysms in the healthy family members who had no aneurysm symptoms would prevent strokes and deaths. The result was that many of these healthy family members were injured by the surgery itself, and the researchers concluded that these harms were not justified by the few lives saved.

The Medical Outcomes Studies

In the late 1980s and early 1990s a series of studies called the Medical Outcomes Studies were completed. Their purpose was to measure differences in medical resources used and health outcomes in patients with common conditions who saw different kinds of doctors. The researchers wanted to know if specialist (“ologist”) care led to better health compared to primary care, and how the doctors differed in practice styles. They studied patients with high blood pressure and diabetes.

For high blood pressure, patients of cardiologists had more office visits, more prescriptions, and more lab tests per physician visit, and were more likely to be hospitalized. There was no difference among the three physician types in average blood pressure, complications, or physical function.

For diabetes, patients of endocrinologists had higher hospitalization rates, more office visits, more prescription drugs, and more lab tests per physician visit than patients of family physicians. There was no difference among the three physician types in average blood sugar levels, physical functioning, or almost all diabetic complications.

Summary

These are just a few examples of how more aggressive medical care doesn’t always result in better health. All of the GIMeC members typically support the notion that more is better. Overcoming this aggression bias will be one of our big challenges in reforming our healthcare system.

SMOKERS will be asked to quit before undergoing surgery and be referred for help while on waiting lists under new medical guidelines.

A strengthened smoking policy from the Australian and New Zealand College of Anaesthetists will require all elective surgery patients to be asked if they smoke, and for tobacco users to be given referrals to help them quit before their operations.

The policy will not give practitioners the power to delay or cancel surgery. But ANZCA president Dr Lindy Roberts said the guidelines would offer smokers the best chance to avoid life-threatening complications by providing them with support.

The hope is to convince and help smokers to quit four to six weeks before surgery, while they are already on the waiting list, which can greatly cut the risks of serious complications during recovery.

“Smokers are at greater risk of complications such as pneumonia, heart attacks and wound infections,” Dr Roberts said.

“When you are coming into hospital for something like an operation, it does provide you with an opportunity to think about your health more generally, and the benefits of giving up smoking for your health are in the longer term as well as relating to surgery and anaesthesia.

“It may be that when presented with the risks for a certain procedure that the surgery is delayed to allow somebody to improve their health prior to the surgery.

“From time to time a decision may be made between the anaesthetist, the surgeon and the patient to delay the surgery if there is something that can be improved to make them fitter for surgery.”

The move follows the success of a Frankston Hospital program in which all smokers entering the surgery waiting list were sent a quit pack – prompting 13 per cent to act and contact Quitline.

Australian Medical Association Victorian president Dr Stephen Parnis said the college’s quit-smoking stance was a positive move, balancing the need to advise patients without discriminating.

“This is not about banning people, this is about giving them the best chance to benefit,” Dr Parnis said. “When you weigh into account the procedure they need and their health, if there is a benefit to delaying the procedure then we would do that.”

For years I’ve been a fan of the idea of flow, and have felt that the concepts apply very specifically to success in innovation. If you aren’t familiar with flow, the concept arises when individuals are engaged in experiences where they are highly skilled and highly challenged. You may think of this when people refer to themselves as being “in the zone” – so highly engaged and so proficient that they deliver exceptionally high quality work almost effortlessly.

While the concept of being “in the zone” has been recognized for years, psychologist and researcher Mihaly Csikszentmihalyi defined these concepts in his book Flow: The Psychology of Optimal Experience. The book is a bit dense, and like many books that seek to reduce cognitive concepts to everyday practice, can be a bit of a stretch, but the key points that Csikszentmihalyi makes are important. Flow happens when people are engaged in work or leisure activities where their experience levels and engagement levels are high. Perhaps the best way to illustrate this is with the chart he includes in his book:

Flow is achieved when the challenge matches the skill. If the skills are too high, boredom ensues. If the challenge is higher than the skill, anxiety and frustration set in.

Flow and its relation to innovation

In his book Csikszentmihalyi talks about work and the concept of flow. He notes that many people experience flow more consistently in their work than they do in their leisure time, probably because people become bored with leisure – their skills are higher than their engagement or challenges. Innovation, I think, is often quite the reverse.

For 30 years businesses have focused on driving inefficiency and variability out of the organization. There have been successive waves of management theory, including the quality movement, business process re-engineering, right sizing, outsourcing and so forth. Our skills are exceptionally high when it comes to efficiency, and exceptionally low when it comes to innovation. Likewise, we’ve become so accustomed to efficiency, and we understand it so well (and are compensated so well for it), that our engagement with innovation is low, regardless of what we say about innovation. I wrote about this in Relentless Innovation – focusing on the tyranny of business as usual.

This means that many organizations start off very low on the skills/challenges axis, and then management places undue pressure on the teams to do innovation quickly and successfully, without providing more skills or knowledge. Following the chart, this places the team very quickly in a position of high anxiety – they aren’t as engaged as they should be, they are unprepared for the challenge and, most importantly, they lack the skills. Innovation seems risky, difficult and dangerous, and teams can’t achieve consistent success, let alone innovation “flow”.

Achieving Innovation Flow

How then does a firm or a team achieve innovation “flow”? What does it take for a team or an organization to create the conditions for innovation to “flow”, where innovators are always in the “zone”?

Clearly two factors are at play. First, team selection and engagement. Finding the right people, those who are open to change and uncertainty, and placing them in a position to do more innovation is paramount. Don’t choose the available people, or the “best” people, but people whose perspectives and temperament make them the right people for innovation, who are interested and easily engaged in challenging innovation activities. There are plenty of assessment tools to find the right people, including the Innovator’s DNA, the Foursight Model and the Kirton Adaption-Innovation Inventory.

Second, skill development. For this team of innovators, build their skills and competencies so that they are ever increasing their knowledge of innovation tools and methods. Engage them frequently so their training is activated in actual projects, and perform after action reviews to learn what went right and what should be changed in the next innovation activity. Teams don’t learn in a “once and done” model – they need to repeat their successes and learn from their mistakes. Unfortunately there is no agreed body of knowledge that spans all of innovation. You should invest in innovation training, but be careful of “certified” programs that are popping up everywhere. Look for innovation training offered by experienced trainers who also provide innovation consulting services – you need real world examples, not academic perspectives about innovation tools, methods and their applicability and success rates.

Why is innovation so difficult?

The reason innovation seems so difficult to many organizations is that it is virtually impossible for anyone or any team to get anywhere near the “flow”, to get into the innovation “zone”. Innovation teams are placed under inordinate pressure to deliver valuable results with little time and no training, often with poor direction and no tools or methodologies. Instead of defining skills and finding the right people, we corral the available people and kick off projects with little forethought or definition. Then executives wonder why innovation seems so difficult or returns results that seem so incremental.

Find ways to get your teams and your organization into the innovation zone. Use the concept of Flow to improve WHO you select, HOW you direct them and WHERE and WHEN you offer training and skill development. Then you’ll find it much easier for your teams to achieve innovation flow.

(Reuters Health) – Close to one-quarter of colonoscopies performed on older adults in the U.S. may be uncalled for based on screening guidelines, a new study from Texas suggests.

Researchers found rates of inappropriate testing varied widely by doctor. Some did more than 40 percent of their colonoscopies on patients who were likely too old to benefit or who’d had a recent negative screening test and weren’t due for another.

Guidelines from the U.S. Preventive Services Task Force, a government-backed panel, recommend screening for colon cancer – every 10 years, if it’s done with colonoscopy – between ages 50 and 75.

After that point, “It involves an unnecessary risk with no added benefit for these older patients,” said Kristin Sheffield, the new study’s lead author from the University of Texas Medical Branch in Galveston.

Those risks include bowel perforation, bleeding and incontinence, as well as the chance of having a false positive test and receiving unnecessary treatment.

Even for screening tests that are universally recommended for middle-aged adults, the balance of benefits and risks eventually points away from screening as people age. Any cancers that are caught might never have shown up during a patient’s lifetime if the person is too old or the cancer too slow-growing.

But because there has been so much effort to educate the public about reasons to get screened, the potential harms are often overlooked – and the idea of stopping screening isn’t regularly discussed, researchers said.

Sheffield and her colleagues looked at Medicare claims data for all of Texas and found just over 23 percent of colonoscopies performed on people age 70 and older were possibly inappropriate.

For patients age 76 to 85, as many as 39 percent of the tests were uncalled for, the researchers wrote Monday in JAMA Internal Medicine. The rest were likely done for diagnostic purposes.

A MORAL OBLIGATION?

Another study published in the same journal supports the idea that many Americans are so focused on the possible benefits of screening that they don’t realize harms are involved as well.

Dr. Alexia Torke from the Indiana University School of Medicine in Indianapolis and her colleagues surveyed 33 adults between ages 63 and 91 and found many saw screening as a moral obligation.

Few of the older adults had discussed the possibility of stopping routine screening, such as for breast cancer, with their doctor, and some told the researchers they would distrust or question a doctor who recommended they stop.

“There’s very limited data for any cancer test that it leads to any benefit for older adults,” said Dr. Mara Schonberg, from Beth Israel Deaconess Medical Center and Harvard Medical School in Boston.

“You want to be doing this thinking it’s going to be helping you live longer,” she told Reuters Health – especially because the chance of suffering side effects from screening or treatment may be higher among older people.

Schonberg, who wrote a commentary on Torke’s study, said time spent unnecessarily screening older adults may take away from conversations that could actually benefit their health – such as about exercise and eating better.

“There’s really a strongly held belief that you need to get screened, that it’s irresponsible if you don’t,” said Dr. Steven Woloshin, who has studied attitudes toward screening at the Geisel School of Medicine at Dartmouth in Hanover, New Hampshire.

“There have been all these messages for years about the importance of screening that people have been inundated with, and I think it’s really hard to change the message now, even though it’s become clear that screening is a double-edged sword,” Woloshin, who wasn’t involved in the new research, told Reuters Health.

The researchers agreed screening should be an individual decision as people get older, but that everyone should fully understand what they stand to gain – if anything – and what they could lose by getting screened.

For colon cancer in particular, Sheffield recommended elderly people who really want to be screened go with a less-invasive method than colonoscopy, such as fecal occult blood testing.

OVERUSING ANESTHESIA?

In another analysis of Medicare beneficiaries undergoing colonoscopy, researchers led by Dr. Gregory Cooper from Case Western Reserve University in Cleveland found that the proportion of procedures using anesthesia – most likely propofol – increased from less than nine percent in 2000 to 35 percent in 2009.

The cost of a procedure using anesthesia is about 20 percent higher than one without it, the researchers noted.

Patients in their study suffered a complication – including perforation or breathing problems – during one in 455 procedures using anesthesia, compared to one in 625 without anesthesia. The researchers said so-called deep sedation may impair patients’ airway reflexes and blunt their ability to respond to procedure-related pain.

During the year after an influential U.S. task force advised providers to stop routine screening colonoscopies in seniors over age 75, because the risks of harm outweigh the benefits, as many as 30% of such “potentially or probably inappropriate” procedures were still being performed, with huge variation in practice patterns across the nation, especially in Texas.

“We found that a large proportion of colonoscopies that are performed in these older patients were potentially inappropriate based on age-based screening guidelines,” says Kristin Sheffield, PhD, assistant professor of surgery at the University of Texas Medical Branch at Galveston, lead researcher of the study.

For patients between 70 and 74, “procedures were repeated too soon after a negative exam,” increasing the odds of avoidable harm, such as “perforations, major bleeding, diverticulitis, severe abdominal pain or cardiovascular events,” she says. The guidance from the U.S. Preventive Services Task Force, released in 2008, also set a 10-year interval for routine colonoscopies for people between ages 70 and 75 unless the patient develops certain symptoms.

The task force’s prior guidance issued in 2002 had no age limit recommendation, Sheffield says.

“For some physicians, more than 30% of the colonoscopies they performed were potentially inappropriate according to these screening guidelines,” she says. “So this variation suggests that there are some providers who are overusing colonoscopy for screening purposes in older adults.”

Her report, published in this week’s JAMA Internal Medicine, looked at Medicare data from the Dartmouth Atlas between October 1, 2008 and September 30, 2009, to see hospital referral region patterns of variation across the nation as a whole. For the state of Texas, Sheffield used claims data from smaller hospital service areas, so she could see the practices of individual physicians who performed colonoscopies.

She discovered that Medicare beneficiaries were much less likely to have a “potentially or probably inappropriate” colonoscopy if they lived in a non-metropolitan or rural area. Practitioners who were more likely to perform potentially or probably inappropriate colonoscopies were more likely to have graduated from medical school before 1990 rather than after, and to perform a higher volume of the procedures on Medicare beneficiaries each year.

The data was de-identified, so as not to reveal the practice pattern of an individual physician by name.

“Our purpose was not to point fingers at individual physicians or specialties. We just wanted to examine patterns in potentially inappropriate colonoscopy, because patterns can illustrate issues in everyday practice. It can help illuminate the range of practice in terms of inappropriate colonoscopies.”

Sheffield says that it may be that colonoscopists were simply slow to adopt the recommendations in certain parts of the country. In a subset of cases, she acknowledges, there may have been legitimate reasons why a physician recommended the procedure for a patient, and perhaps failed to code it properly for the claims database.

“For example, in adults between the ages of 76 to 85, there are some considerations that would support the use of screening colonoscopy – for example, when a patient has a higher risk of developing an adenoma. But in general, screening guidelines indicate that should be the exception, rather than the rule.”

But if that explained most cases, there wouldn’t be such a huge variation. For example, in the wedge of west Texas that includes El Paso, the percentage of colonoscopies that were potentially inappropriate was between 13.3% and 18.79%. But in large areas including Austin, Corpus Christi, San Antonio, Houston, and Waco, the percentages ranged between 23.3% and 34.9%.

Nationally, areas with higher rates of potentially inappropriate colonoscopies – between 25.27% and 30.51% – included eastern Washington state, Idaho, eastern Nevada, Minnesota, parts of North and South Dakota, all of New England, Arkansas and large portions of North Carolina and Tennessee.

Low-utilization areas – with rates between 19.45% and 22.64% – included New Mexico and north Texas, central and northern inland areas of California, and all parts of Florida except Pensacola and areas of South Florida.

The issue included a related article and related commentary.

In the related article, Alexia M. Torke, MD, and colleagues of the Indiana University Center for Aging Research interviewed several dozen patients about their reasons for screening. They found that these patients considered screening at their age to be an automatic part of healthcare, and “a moral obligation.”

For example, one told investigators that discontinuation of routine colonoscopy screening, at age 84, “would be the same as me taking my life. And that’s a sin.”

Discontinuing screening, by contrast, would be a much more difficult and significant decision for them to make.

And they were skeptical of recommendations against screening, saying such advice would threaten their trust in their doctors and make them suspicious that a recommendation to stop screening was made only to save money.

“Public health education and physician endorsements (of cancer screening) may have created a high degree of ‘momentum’ for continuing screening, even in situations in which the benefits may no longer outweigh the risks or burdens,” the researchers wrote.

In an invited commentary, Mara Schonberg, MD, MPH, of Harvard Medical School and Beth Israel Deaconess Medical Center in Boston, noted that as much as colonoscopies are celebrated as a preventive therapy, they also cause harm.

“Harms of cancer screening are immediate and include pain and anxiety related to the screening test, complications…(e.g., bowel perforation from colonoscopy) or additional tests after a false-positive result, and overdiagnosis (finding tumors that would never cause symptoms in an older adult’s lifetime). Overdiagnosis is particularly concerning because some older adults experience significant complications from cancer treatment.”

She blames “unbalanced public health messages” for contributing to “perceptions that cancer screening should be continued indefinitely.” She also points to the physician’s recommendation as a strong driver of whether a senior citizen undergoes screening.

Cheryl Clark is senior quality editor and California correspondent for HealthLeaders Media. She is a member of the Association of Health Care Journalists.