Last week, Tropical Storm Isaac started tracking toward the Gulf of Mexico. As usual, the prediction models offered varying forecasts. Nonetheless, by this weekend a consensus emerged that the tempestuous weather system would, most likely, affect the City of New Orleans.

National Hurricane Center image

The Mayor, Mitch Landrieu, didn’t panic. I watched him on TV on Sunday evening in an interview with CNN’s Wolf Blitzer and Erin Burnett. Isaac wasn’t a hurricane yet, although forecasters were by then predicting a Category 1 or 2 storm. He didn’t order an evacuation. Rather, he emphasized the unpredictable nature of storms. There’d be business as usual the next day, Monday morning, August 27. Mind the weather reports, and do what you need to do, he suggested to the citizens. He did mention there’d be buses for people who registered.

“Don’t worry,” was the gist of his message to the citizens of New Orleans. The levees should hold. He exuded confidence. Too much, perhaps.

Some people are drawn to leaders – or doctors – who blow off signs of a serious problem. “It’s nothing,” they might say to a woman who fell after skiing and hit her head, or to a man with a history of lymphoma who develops swollen glands and fever. It’s trendy, now, and sensible, to be cost-conscious in medical care. This is a terrific approach except when it misses a treatable and life-threatening condition or one that’s much less expensive to fix earlier than later.

“Every storm is different,” meteorologist Chad Myers informs us.

Like tumors. Sometimes you see one that should have a favorable course, like a node-negative, estrogen-receptor-positive breast tumor in a 65-year-old woman, but it spreads to the woman’s bones within a year. Or a lymphoma in a 40-year-old man that looks to be aggressive under the light microscope but regresses before the patient has gone for a third opinion. But these are both exceptions. Cancer can be hard to predict; each case is a little different. Still, there are patterns and trends, and insights learned from experience with similar cases and common ways of spreading. Sometimes it’s hard to know when to treat aggressively. Other times, the pathology is clear. Sometimes you’re wrong. Sometimes you’re lucky….

In New Orleans, the Mayor’s inclination was to let nature take its course. He’s confident in the new levees, tested now by Isaac’s slow pace and prolonged rains. I do hope they hold.

2. The researchers use such complex mathematical arguments to prove a point that Einstein would certainly, 100%, without a doubt, take issue with their model and proof.

3. “Overdiagnosis” is not defined in any clinical sense (such as the finding of a tumor in a woman that’s benign and doesn’t need treatment). Here, from the paper’s abstract:

The percentage of overdiagnosis was calculated by accounting for the expected decrease in incidence following cessation of screening after age 69 years (approach 1) and by comparing incidence in the current screening group with incidence among women 2 and 5 years older in the historical screening groups, accounting for average lead time (approach 2).

No joke: this is how “overdiagnosis” – the primary outcome of the study – is explained. After reading the paper in its entirety three times, I cannot find any better definition of overdiagnosis within the full text. Based on these manipulations, the researchers “find” an estimated rate of overdiagnosis attributable to mammography of 18-25% by one method (approach 1) or 15-20% by the other (approach 2).
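To make the arithmetic behind an “excess-incidence” estimate concrete, here is a minimal sketch. The incidence numbers and the function name are invented for illustration; this is the general form of such calculations, not the study’s actual model:

```python
# Illustrative excess-incidence estimate of "overdiagnosis" --
# the incidence figures below are hypothetical, NOT the study's data.

def overdiagnosis_pct(observed_incidence: float, expected_incidence: float) -> float:
    """Percent of observed (screening-era) cancer diagnoses in excess of
    what would be expected in the absence of screening."""
    excess = observed_incidence - expected_incidence
    return 100 * excess / observed_incidence

# e.g., 250 observed vs. 200 expected cases per 100,000 women-years
print(round(overdiagnosis_pct(250, 200), 1))  # 20.0
```

Note that nothing in this arithmetic involves looking at any individual woman’s tumor; the “overdiagnosed” cases exist only as a difference between two incidence curves, which is exactly my complaint about the paper.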

4. The study includes a significant cohort of women between the ages of 70 and 79. Indolent tumors are more common in older women, who are also more likely to die of other causes by virtue of their age. The analysis does not include women younger than 50 in its constructs.

5. My biggest concern is how this paper was broadcast – which, for starters, was excessive.

Bloomberg News takes away this simple message in a headline: “Breast Cancer Screening May Overdiagnose by Up to 25%.” Or, from the Boston Globe’s Daily Dose, “Mammograms may overdiagnose up to 1 in 4 breast cancers, Harvard study finds.” (Did they all get the same memo?)

The Washington Post’s Checkup offers some details: “Through complicated calculations, the researchers determined that between 15 percent and 25 percent of those diagnoses fell into the category of overdiagnosis — the detection of tumors that would have done no harm had they gone undetected.” But then the Post blows it with this commentary, a few paragraphs down:

The problem is that nobody yet knows how to predict which cancers can be left untreated and which will prove fatal if untreated. So for now the only viable approach is to regard all breast cancers as potentially fatal and treat them with surgery, radiation, chemotherapy or a combination of approaches, none of them pleasant options…

This is simply not true. Any pathologist or oncologist or breast cancer surgeon worth his or her education could tell you that not all breast cancers are the same. There’s a spectrum of disease. Some cases warrant more treatment than others, and some merit distinct forms of treatment, like Herceptin, or estrogen modulators, surgery alone…Very few forms of invasive breast cancer warrant no treatment unless the patient is so old that she is likely to die first of another condition, or the patient prefers to die of the disease. When and if they do arise, slow-growing subtypes should be evident to any well-trained, modern pathologist.

“Mammograms Spot Cancers That May Not Be Dangerous,” said WebMD, yesterday. This is feel-good news, and largely wishful.

A dangerous message, IMO.

—

Addendum, 4/15/12: The abstract of the Annals paper includes a definition of “overdiagnosis” that is absent in the body of the report: “…defined as the percentage of cases of cancer that would not have become clinically apparent in a woman’s lifetime without screening…” I acknowledge this is helpful, in understanding the study’s purpose. But this explanation does not clarify the study’s findings, which are abstract. The paper does not count or otherwise directly measure any clinical cases in which women’s tumors either didn’t grow or waned. It’s just a calculation. – ES

Cyberchondria is an unfounded health concern that develops upon searching the Internet for information about symptoms or a disease. A cyberchondriac is someone who surfs the Web about a medical problem and worries about it unduly.

Through Wikipedia, I located what might be the first reference to cyberchondria in a medical journal: a 2003 article in the Journal of Neurology, Neurosurgery, and Psychiatry. A section on the new diagnosis starts like this: “Although not yet in the Oxford English Dictionary, the word ‘cyberchondria’ has been coined to describe the excessive use of internet health sites to fuel health anxiety.” That academic report links back to a 2001 story in the Independent, “Are you a Cyberchondriac?”

Interesting that the term – coined in a newspaper story and evaluated largely by IT experts – has entered the medical lexicon. I wonder how the American Psychiatric Association will handle cyberchondria in the upcoming DSM-5.

From Ventana Medical: the HER2 and Chromosome 17 probes are detected using two-color chromogenic in situ hybridization (ISH)

Inform Dual ISH works like this: technicians, typically working under the supervision of a pathologist, expose a tiny bit of a breast biopsy specimen, fixed on a microscope slide, to probes for Her2. This gene, normally found on human chromosome 17, is amplified in some breast cancer cells. The assay exploits an enzyme, linked to the genetic probe, which creates a color (in this case, red) upon exposure to a chemical. The system allows a pathologist, using a microscope, to “see” and measure the gene’s presence on chromosomes in cells of an ordinary biopsy sample.

What’s interesting about this in situ hybridization (ISH) kit is that it doesn’t require a fluorescent microscope for imaging. The Ventana probe generates a simpler, ordinary color signal that can be detected by a light microscope. Most commercial assays for Her2 use a method called immunohistochemistry (IHC); that technique relies on antibodies that bind Her2, a cell surface receptor that’s implicated in cancer cell signaling and growth. This and other ISH assays measure genes directly on the chromosomes; by contrast, IHC usually tests for protein.
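In practice, dual ISH results are typically reported as a ratio of HER2 signals to chromosome 17 (CEP17) signals counted across tumor-cell nuclei, with a ratio of 2.0 or higher conventionally read as amplified. Here is a minimal sketch of that scoring arithmetic; the signal counts and helper names are invented for illustration, and the 2.0 cutoff follows the ASCO/CAP convention rather than anything specific to Ventana’s kit:

```python
# Hypothetical sketch of dual-ISH scoring: HER2 and chromosome-17 (CEP17)
# signals are counted in a set of tumor-cell nuclei, and their ratio
# determines amplification status. Counts here are invented.

def her2_ratio(her2_signals: list[int], chr17_signals: list[int]) -> float:
    """Ratio of total HER2 signals to total CEP17 signals."""
    return sum(her2_signals) / sum(chr17_signals)

def amplified(ratio: float, cutoff: float = 2.0) -> bool:
    """Conventional ASCO/CAP-style call: amplified if ratio >= cutoff."""
    return ratio >= cutoff

# Signals counted in 5 hypothetical nuclei:
her2 = [8, 10, 7, 9, 11]   # HER2 signals per nucleus
cep17 = [2, 2, 3, 2, 2]    # chromosome 17 signals per nucleus
ratio = her2_ratio(her2, cep17)
print(round(ratio, 2), amplified(ratio))  # 4.09 True
```

The point of normalizing to chromosome 17 is to distinguish true gene amplification from a cell that simply carries extra copies of the whole chromosome.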

Her2 is the molecular target for Herceptin, and there’s been considerable discussion about how and where it might be accurately assessed. So the “readout” from this diagnostic test might inform a woman and her doctor in deciding whether or not she should receive treatment with Herceptin.

How pathologists evaluate breast biopsy specimens matters a lot, especially when you’re on the receiving end of a diagnosis and you’re choosing among treatments. In 2007, ASCO and the College of American Pathologists published guidelines on Her2 testing in the Journal of Clinical Oncology. These groups recently updated the recommendations. How this new assay will be received by these societies, I’m not sure.

A key question, in this author’s mind, is where the Her2 measurements take place, and whether women should rely on local labs’ assays – by whatever method – to determine the Her2-ness of their breast tumors.

One of the goals of this blog is to introduce readers to some of the language of medicine. As much as jargon is sometimes unnecessary, sometimes the specificity and detail in medical terms aids precision.

So what is a cluster of differentiation, or CD?

In medical practice, the two-letter acronym specifies a molecule, or antigen, usually on a cell’s surface. In 1982, an international group of immunologists got together for the First International Workshop on Human Leukocyte Differentiation Antigens. The initial focus was on leukocyte (white blood cell) molecules. The goal was to agree on definitions of receptors and other complex proteins to which monoclonal antibodies bind, so that scientists could communicate more effectively.

A few examples of CDs about which you might be curious:

CD1 – the first-named CD; this complex glycoprotein is expressed in immature T cells, some B cells and other, specialized immune cells in the skin; there are several variants (CD1a, -b, -c…) encoded by genes on human chromosome 1.

CD4 – a molecule on a mature “helper” T cell surface; T lymphocytes with CD4 diminish in people with untreated HIV disease.

CD20 – a molecule at the surface of B lymphocytes that binds Rituxan, an antibody used to treat some forms of lymphoma, leukemia and immune disorders.

In this schematic, an antibody recognizes a specific molecule, or cluster of differentiation, at a cell surface.

The CDs were named (i.e. numbered) not necessarily by the order of discovery, but by the order of their being deemed bona fide CDs by HLDA Workshop participants. There’s a pretty good, albeit technical, definition in FEBS Letters, from 2009:

Cluster of differentiation (CD) antigens are defined when a surface molecule found on some members of a standard panel of human cells reacts with at least one novel antibody, and there is good accompanying molecular data.

Perhaps the best way to think about CDs is that they’re unique structures, usually at a cell’s surface, to which specific antibodies bind. By knowing the CDs, and by examining which antibodies bind to cells in a patient’s tumor specimen, pathologists can distinguish among cancer types. Another use is in the clinic: when oncologists give an antibody like Campath, which binds CD52, the response might depend on whether the malignant cells bear the CD target.
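The logic pathologists apply can be sketched in a few lines. The CD-to-lineage assignments below are real (CD20 marks B cells, CD3 marks T cells), but the lookup tables and the simple decision rule are invented for illustration; actual immunophenotyping panels weigh many markers at once:

```python
# Hypothetical sketch of panel-based lineage assignment: which antibodies
# bound the tumor cells tells you which CDs the cells bear, and some CDs
# are lineage markers. The decision rule here is deliberately simplified.

ANTIBODY_TARGETS = {"anti-CD20": "CD20", "anti-CD3": "CD3", "anti-CD52": "CD52"}

LINEAGE_MARKERS = {"CD20": "B cell", "CD3": "T cell"}

def infer_lineage(positive_antibodies: list[str]) -> str:
    """Given antibodies that stained the specimen, return a lineage guess."""
    for antibody in positive_antibodies:
        cd = ANTIBODY_TARGETS.get(antibody)
        if cd in LINEAGE_MARKERS:
            return LINEAGE_MARKERS[cd]
    return "undetermined"

print(infer_lineage(["anti-CD20", "anti-CD52"]))  # B cell
```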

Still, I haven’t come across an official (such as NIH), open-source and complete database for all the CDs. Most can be found at the Human Cell Differentiation Molecules website, and additional information can be gleaned through PubMed using the MeSH browser or a straight literature search.

Wikipedia is disappointing on this topic; the list thins out as the CD numbers go higher, and the external references are few. To my astonishment, I found a related page on Facebook. Neither makes the grade.

Where should patients get information about these kinds of things? Or doctors, for that matter?

A surprise lesson arrived in my snail mailbox today: the April 28 issue of NEJM includes a fascinating research paper on a probable cause of leprosy in the southern U.S. New, detailed genetic studies show that armadillos, long-known to harbor the disease, carry the same strain as occurs in some patients; they’re a likely culprit in some cases.

For those who didn’t go to med school: Leprosy is a chronic, infectious disease caused by Mycobacterium leprae. In my second year we were told to refer to the illness as Hansen’s disease. We learned that some people are more susceptible to it than others, possibly due to inherited immunological differences, a point that is reiterated in the current article.

The World Health Organization reports there are under 250,000 cases worldwide every year. Here in the U.S., Hansen’s disease is quite rare, with about 150 new cases reported annually according to the study authors. The condition wasn’t evident in the Americas before Columbus’ travels, but by the mid-18th Century it was affecting some settlers near New Orleans. Today, most cases in the U.S. arise in travelers and others who’ve lived or worked abroad in regions where leprosy is endemic. About a third crop up in people who’ve never left the country, and these cases tend to cluster in the southeastern U.S.

Leprosy tends to affect the skin, and what the NEJM investigators first did was examine skin biopsy specimens from patients who live in the U.S. and hadn’t traveled. It’s been known for decades that armadillos can carry these bacteria, and so the researchers took specimens from wild armadillos in five southern states, and analyzed the M. leprae bacterial genomes. They matched. Then they looked at more patients’ samples, and also analyzed M. leprae sequences from patients in other parts of the world.

The conclusion is that wild armadillos and some leprosy patients in the southern U.S. are infected with an identical strain of the bacteria that causes leprosy. From this information, the authors infer that armadillos are a reservoir for this stigmatizing germ, and that they may be the source of some patients’ infections.

Only once did I see a patient with Hansen’s disease, at the Bellevue dermatology clinic, when I was a fourth-year student. She was an elderly woman from China. Her face, which I can picture now, had classic leonine features. The resident caring for her, a young doctor planning to become a dermatologist, prescribed antibiotics.

The author has been concerned for a while that she might be addicted to blogging. Symptoms include wanting to post instead of working on a book proposal and other, likely more important projects. She was thinking of crowd-sourcing how best to describe this disposition, but it turns out the Internet already provides a diagnostic term:

The Times ran an intriguing experiment on its Well blog yesterday: a medical problem-solving contest. The challenge, based on the story of a real girl who lives near Philadelphia, drew 1379 posted comments and closed this morning with publication of the answer.

Dr. Lisa Sanders, who moderated the piece, says today that the first submitted correct response came from a California physician; the second came from a Minnesota woman who is not a physician. Evidently she recognized the condition’s manifestations from her experience working with people who have it.

The public contest – and even the concept of using the word “contest” – to solve a real person’s medical condition interests me a lot. This kind of puzzle is, as far as I know, unprecedented outside the somewhat removed domains of doctors’ journals, on-line platforms intended for physicians, medical school problem-based learning cases, clinical pathological conferences (CPCs) and fictional TV shows.

In this example, the patient’s diagnosis was known, and treatment successfully implemented, before publication. Surely the Times legal team carefully reviewed those scanned commercial lab reports with the patient’s name and address wiped out, and likely they got the OK from the patient and her family to run the story as they did. Still, there were sufficient details included that she’s likely identifiable to some people in her community.

The case is instructive at many levels: It’s not just about the girl and her symptoms and her disease, and how doctors think, but about how the population of New York Times readers approached it over the course of 24 hours. A question an editor, if happy with the “results” – i.e. the on-line turnout (clicks, emails, tweets…) and lack of flak – might ask is what sort of case to use next week or next month, and how perhaps to improve on the presentation.

The question I ask as a physician is this: why don’t we have this sort of crowd-sourcing for tough, unsolved medical cases? Privacy is an obvious concern, as is, perhaps, physicians’ fear of missing something or being wrong. Also, if a diagnosis isn’t already determined, the responsible doctor might end up (and likely would end up) ordering more tests and, perhaps, harming the patient by chasing zebras and heeding some well-intentioned but absurd or simply wrong suggestions from a diverse collection of worldwide readers. So there would be a problem of “too many cooks,” among other issues.

On the other hand, a single physician dealing with a challenging case would have, potentially, access to the expertise of millions of people, perhaps a few who have genuine insight and have seen a rare situation before. Doctors needn’t think in silos.

There’s a new study out on mammography with important implications for breast cancer screening. The main result is that when radiologists review more mammograms per year, the rate of false positives declines.

The stated purpose of the research,* published in the journal Radiology, was to see how radiologists’ interpretive volume – essentially the number of mammograms read per year – affects their performance in breast cancer screening. The investigators collected data from six registries participating in the NCI’s Breast Cancer Surveillance Consortium, involving 120 radiologists who interpreted 783,965 screening mammograms from 2002 to 2006. So it was a big study, at least in terms of the number of images and outcomes assessed.

First – and before reaching any conclusions – the variance among seasoned radiologists’ everyday experience reading mammograms is striking. From the paper:

…We studied 120 radiologists with a median age of 54 years (range, 37–74 years); most worked full time (75%), had 20 or more years of experience (53%), and had no fellowship training in breast imaging (92%). Time spent in breast imaging varied, with 26% of radiologists working less than 20% and 33% working 80%–100% of their time in breast imaging. Most (61%) interpreted 1000–2999 mammograms annually, with 9% interpreting 5000 or more mammograms.

So they’re looking at a diverse bunch of radiologists reading mammograms, as young as 37 and as old as 74, most with no extra training in the subspecialty. The fraction of work effort spent on breast imaging – presumably mammography, sonograms and MRIs – varied widely: a quarter of the group (26%) spent less than a fifth of their time on it, while a third (33%) spent almost all of their time on breast imaging studies.

This means that radiologists who review more mammograms are better at reading them correctly. The main difference is that they are less likely to call a false positive. Their work is otherwise comparable, mainly in terms of cancers identified.**

This matters because the costs of false positives – emotional (which I have argued shouldn’t matter so much), physical (surgery, complications of surgery, scars) and financial (the costs of biopsies and surgery) – are said to be the main problem with breast cancer screening by mammography. If we can reduce the false-positive rate, BC screening becomes more efficient and safer.
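A back-of-envelope calculation shows how even a modest drop in the false-positive rate scales across a screened population. The rates below are invented round numbers for illustration, not figures from the paper:

```python
# Illustrative only: how a lower false-positive rate translates into
# fewer women without cancer being recalled for workup. The 8% and 10%
# rates are hypothetical, not taken from the study.

def expected_false_positives(n_screened: int, fp_rate: float) -> int:
    """Expected number of women without cancer recalled for further workup."""
    return round(n_screened * fp_rate)

n = 100_000
recalls_high_volume = expected_false_positives(n, 0.08)  # high-volume readers
recalls_low_volume = expected_false_positives(n, 0.10)   # low-volume readers

# 2,000 fewer unnecessary recalls per 100,000 screens:
print(recalls_low_volume - recalls_high_volume)  # 2000
```

Each avoided recall is a spared biopsy or repeat imaging study, which is why a seemingly small per-reader difference adds up at the population level.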

Time provides the only major press coverage I found on this study, and suggests the findings may be counter-intuitive. I guess the notion is that radiologists might tire of reading so many films, or that a higher volume of work is inherently detrimental.

But I wasn’t at all surprised, nor do I find the results counter-intuitive: the more time a medical specialist spends doing the same sort of work – say examining blood cells under the microscope, as I used to do, routinely – the more likely that doctor will know the difference between a benign variant and a likely sign of malignancy.

Finally, the authors point to the potential problem of inaccessibility of specialized radiologists – an argument against greater volume requirements, in terms of the number of mammograms a radiologist needs to read per year to be deemed qualified by the FDA under the MQSA. The point is that in some rural areas, women wouldn’t have access to mammography if there were more stringency on radiologists’ volume. But I don’t see this accessibility problem as a valid issue. If the images were all digital, the doctor’s location shouldn’t matter at all.

**I recommend a read of the full paper and in particular the discussion section, if you can access it through a library or elsewhere. It’s fairly long, and includes some nuanced findings I could not fully cover here.

This is an unusual entry into a discussion on the limits of patient empowerment.

In late December the Times ran a story, beginning on its front page, about a portrait in the Metropolitan Museum of Art by Diego Velázquez, the 17th Century Spanish painter. The news was that the tall representation of the teenage Prince Philip IV would be back on display in the European paintings galleries after a 16-month cleaning, restoration and re-evaluation of the work. And, in case you weren’t up on your art history news – the painting really is a Velázquez.

label (ikonic's Flickr)

I learned this morning that the museum received the painting in 1913. It was a gift of Benjamin Altman (that would be B. Altman, as in the department store of my childhood…). The 7-foot portrait was considered a true masterpiece for hundreds of years, its authenticity supported by a receipt signed by Velázquez and dated Dec. 4, 1624. According to the Times now, in 1973 experts at the museum formally revised their opinion of the painting; they down-rated it, saying it’s a product of Velázquez’s studio, rather than of the artist himself.

Velazquez' Portrait of Philip IV, at the Metropolitan Museum

Evidently Michael Gallagher, the chief paintings conservator at the Met, recently became concerned about the painting’s “workshop” label based on his experience upon cleaning another, later Velázquez portrait at the Frick. “Its true condition was obfuscated by the decades of varnish and the liberal repainting,” he said of the Met portrait. According to the Times, Philip’s left eye was missing, possibly from flaking or vandalism. Ultimately, x-ray analyses and careful examination of the cleaned portrait convinced Gallagher and his colleagues of the portrait’s legitimacy.

I was in the neighborhood, so I thought I’d check out the work for myself, in light of this new information. I spent a while staring at it, studying the prince’s hand and other features about which I’d recently updated my knowledge. Still, I realized, there was no way in the world I could tell, on my own and even if my life depended on it, if it were a Velázquez, or not a Velázquez.

Sometimes you have to rely on experts. I don’t have a Ph.D. in art history. Or anything approaching sufficient knowledge of Velázquez and his workshop, Prince Philip IV of Spain, x-ray analyses of oil paintings, varnish and resins, 17th Century receipts and signatures, or similar “cases” – like the related portrait that turns out to be in the Prado, and other works by the same painter – to know the difference.

That’s the thing – in medicine, if you have an unusual health condition, like a rare form of T cell lymphoma or an obscure infection, you may find that you depend on a doctor’s expertise. Recommending the right treatment (which might be no treatment) requires knowing and understanding the correct diagnosis. Figuring out what’s the correct diagnosis requires a lot of knowledge, and experience.

detail of hand, in Velazquez' painting

As for patient empowerment, I think what patients with rare or puzzling conditions can do is make sure they’re comfortable with their physicians – that their doctors know about what they’re treating and will admit when they’re unsure of a diagnosis or need more expert, specialist advice. The burden, then, is on doctors to admit what they don’t know, which in the end requires that they be well-educated, able to discern unusual cases and outliers, and willing to take the time to notice – and not dismiss – details in their patients’ stories that warrant further examination and thought.