The American Cancer Society (ACS), an advocacy organization that has fairly recently (and very positively) taken a more evidence-based approach to cancer screening, has revised its mammography recommendations. While it still recommends more mammograms than the U.S. Preventive Services Task Force (which doesn’t recommend starting until age 50, and then screening only every other year), the ACS has raised the recommended starting age from 40 to 45, and now advises switching from annual to biennial screening at age 55.
This prompted the usual outrage from the usual quarters, making all the usual uninformed arguments. I followed these with a mixture of remorse, amusement, infuriation and boredom, the boredom born of the fact that this very “debate” has been going on in largely the same way for most of my professional life. Yes, there has been some new evidence within the last 20 to 30 years, but most of it suggests that mammography is less effective, not more, than we used to think. Hence the revised ACS guidelines.
But when an op-ed appeared in the New York Times, written by three doctors making ill-informed arguments, I had to speak up. It’s embarrassing when physicians don’t seem to understand what constitutes meaningful evidence. There are many points in that op-ed I take issue with, but I’m focusing on one idea here: the oft-stated, yet incorrect, view that clinical experience and expertise are necessary in order to evaluate the efficacy and effectiveness of screening tests.
The authors of this op-ed, who identify themselves as “two breast radiologists and one breast surgeon,” state:
We think it’s noteworthy that while there were medical specialists involved in an advisory group, the panel actually charged with developing the new guidelines did not include a single surgeon, radiologist or medical oncologist who specializes in the care and treatment of breast cancer. Not one.
At first blush, this sounds reasonable: if you’re trying to determine the value of breast cancer screening, shouldn’t you ask people who have the most experience screening for, and treating, breast cancer? Well, no, you shouldn’t, and here’s why: screening is undertaken at a population level, and its value can be assessed and understood only in the context of the entire population. Individual patients’ anecdotes aren’t informative; worse, they tend to be misleading.
Radiologists see people who come for mammograms, not those who don’t; surgeons and oncologists see people with positive mammograms, not the rest. Thus, their experience (while certainly critically important with regard to reading mammograms or treating cancer) provides no useful information about whether screening itself is valuable, neutral or harmful. Only appropriately collected and analyzed data can tell you about that.
There is a pernicious aspect to this “expertise fallacy”: patient-level experience doesn’t merely fail to provide useful information for assessing screening; it tends to provide actively misleading information. Among the many reasons for this:
- SELECTION BIAS: People who get screened are different from those who don’t. Individuals who come in for screening tests tend, on average, to be wealthier, better educated and more concerned about their health than those who don’t get screened. These features tend to lead to better health outcomes in those patients, whether they get screened or not. But there’s also a bias in the opposite direction: some people get screened precisely because they have a higher-than-average risk of the condition being screened for, which would tend to lead to worse health outcomes. (The first sketch after this list shows how the healthy-screenee effect alone can make a useless test look lifesaving.)
- LEAD-TIME BIAS: Let’s say there’s someone out there with undiagnosed breast cancer who is destined to die from it in 2020. If we don’t screen her, let’s say she develops a large lump, or signs of illness, and gets her cancer diagnosed in 2018; she therefore dies two years after her diagnosis. Imagine instead that we screen her in 2016, but there’s no effective treatment available: she will still die in 2020, now four years after her diagnosis. She’s living twice as long after diagnosis, but is that of any real value to her? (The second sketch after this list runs this same arithmetic across many simulated patients.)
- LENGTH-TIME BIAS: Some cancers grow more quickly than others. These aggressive cancers are more likely to kill you; they are also less likely to be identified by screening tests than slower-growing tumors, since they spend less time in a subclinical (and hence screen-detectable) state. Given that, tumors identified by screening will generally have a better prognosis than those that show up because of a lump or symptoms, even if finding them early changes nothing. (The third sketch after this list simulates exactly this.)
- AVAILABILITY BIAS: Since human beings are not computers, we are more likely to remember and take note of dramatic or meaningful events than of more mundane ones. For a doctor, nothing is more dramatic than a potentially avoidable death. A surgeon who sees a patient presenting with advanced breast cancer is likely to see her as a case that “could have been saved” had she been screened, and to take that as an argument for the efficacy of screening, even though it isn’t one. Almost as memorable as an avoidable death is a life saved; when a screened patient thanks her surgeon for saving her life, it makes a powerful impression on the physician, even though it says nothing about the value of screening. These availability-driven perceptions are made even more problematic by the…
- POST HOC ERGO PROPTER HOC FALLACY: A woman gets screened for breast cancer, and five years later she’s alive. Another woman goes unscreened, presents with advanced breast cancer, and dies. It’s tempting to attribute each outcome to the screening, or the lack of it, but the attribution doesn’t follow: the first woman might have done just as well without screening, and the second might have died even with it.
- CONFLICT OF INTEREST: If you spend your time doing mammograms, you have a strong vested interest in believing that they are of value. Even if we ignore the impact of financial incentives, there is an easily understandable tendency to care about and defend that which you do every day.
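To make the first of these concrete, here is a toy Monte Carlo sketch in Python. Every number in it (the screening rates, the mortality rates) is invented purely for illustration, and screening is given no effect whatsoever; the screened group still ends up with lower mortality, simply because healthier people are more likely to show up for screening:

```python
import random

random.seed(0)

deaths = {True: 0, False: 0}   # keyed by whether the person was screened
counts = {True: 0, False: 0}

for _ in range(100_000):
    # "Advantaged" is a crude proxy for wealth, education and
    # health-consciousness (an invented 50/50 split).
    advantaged = random.random() < 0.5
    # Advantaged people are more likely to come in for screening...
    screened = random.random() < (0.7 if advantaged else 0.3)
    # ...and have lower baseline mortality, screened or not.
    died = random.random() < (0.02 if advantaged else 0.05)
    counts[screened] += 1
    deaths[screened] += died

print(f"mortality among screened:   {deaths[True] / counts[True]:.2%}")
print(f"mortality among unscreened: {deaths[False] / counts[False]:.2%}")
# Roughly 2.9% vs 4.1% with these numbers, even though screening
# does literally nothing in this model: pure selection bias.
```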
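The lead-time example above reduces to simple arithmetic, and the same kind of sketch (again, all numbers invented) shows it at scale: every simulated patient dies at exactly age 70, yet advancing the diagnosis by two years doubles measured “survival after diagnosis”:

```python
import random

random.seed(1)

# In this model every patient dies of her cancer at age 70, screened
# or not: there is no effective treatment, so screening changes nothing.
AGE_AT_DEATH = 70.0
LEAD_TIME = 2.0  # years by which screening advances the diagnosis

survival_clinical = []
survival_screened = []

for _ in range(10_000):
    # Age at which symptoms would prompt a clinical diagnosis (~67.5-68.5).
    age_clinical_dx = 67.5 + random.random()
    survival_clinical.append(AGE_AT_DEATH - age_clinical_dx)
    survival_screened.append(AGE_AT_DEATH - (age_clinical_dx - LEAD_TIME))

mean = lambda xs: sum(xs) / len(xs)
print(f"mean survival after clinical diagnosis: {mean(survival_clinical):.1f} years")
print(f"mean survival after screen diagnosis:   {mean(survival_screened):.1f} years")
# Survival-from-diagnosis doubles from ~2 to ~4 years, yet every
# patient dies at exactly age 70: the gain is pure lead time.
```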
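Finally, length-time bias: if aggressive tumors spend less time in the screen-detectable (“sojourn”) window, screen-detected cancers will look markedly less lethal than symptom-detected ones even when the mode of detection changes no one’s outcome. The sojourn times and fatality rates below are, once more, invented for illustration:

```python
import random

random.seed(2)

# Aggressive tumors are deadlier AND spend less time in the
# screen-detectable ("sojourn") window than indolent ones.
screen_detected = []
symptom_detected = []

for _ in range(100_000):
    aggressive = random.random() < 0.5
    sojourn_years = 0.5 if aggressive else 4.0  # screen-detectable window
    p_death = 0.6 if aggressive else 0.1        # prognosis, unaffected by detection
    # Chance of being caught by screening scales with the sojourn time.
    caught_by_screen = random.random() < min(sojourn_years / 5.0, 1.0)
    died = random.random() < p_death
    (screen_detected if caught_by_screen else symptom_detected).append(died)

mean = lambda xs: sum(xs) / len(xs)
print(f"case fatality, screen-detected:  {mean(screen_detected):.1%}")
print(f"case fatality, symptom-detected: {mean(symptom_detected):.1%}")
# Screen-detected cancers look far less lethal (~16% vs ~51% here)
# even though detection mode changed no one's outcome: screening
# preferentially catches the slow-growing tumors.
```

None of these toy models says anything about whether mammography actually works; they only show how easily patient-level experience can manufacture the appearance that it does.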
So when it comes to screening recommendations, must we ask the doctors who do the tests or treat the patients to give us guidance? I’d say no. After all, we seem to understand that if we have questions about how good the new iPhone is, we should probably find an independent review rather than asking the leadership of Apple what it thinks. Why is it so different in medicine?
Hi Dr. Marantz,
Sure, radiologists earn their income from imaging, and tend to see benefit in what they do. But it’s also true that many (not all) physicians with public health degrees don’t sufficiently acknowledge the progress in specialty areas of medicine, such as radiology and pathology, that has already and genuinely improved the quality of life and survival of people with breast cancer (and other malignancies).