
Just How Important Are Facts?

It has been suggested that one of the major take-home messages from the bizarre events of 2016 is that we are now living in a “post-truth” world. Indeed, that very phrase was dubbed the “word of the year” by Oxford Dictionaries, which defines it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

We are also seeing a growing level of mistrust of science among Americans. From the climate-change deniers to the anti-vaxxers, we seem to be increasingly comfortable with the idea that we can choose to believe whatever we want, along with the related notion that “experts” are less reliable than the general public.

As a scientist, I am troubled by the notion that emotion or personal belief would trump (forgive the pun) objective reality. But also as a scientist, I understand that facts themselves aren’t sufficient to create a model of reality—and as a former philosophy student, I choose the phrase “model of reality” because, while properly interpreted and analyzed facts are necessary in the quest for meaningful knowledge, ultimate “truth” is unknowable.

From this perspective, therefore, “facts” are necessary but not sufficient for understanding the universe. Not only must we be able to distinguish purported “facts” from fiction, including distinguishing truths of logic from facts derived empirically (the former are immutable; the latter may change with new information); we must also critically and carefully interpret the available facts within a conceptual or theoretical framework. If this notion makes you uncomfortable, there’s only one thing I can say: tough noogies. If you want to live in a world of revealed truth, go enter a monastery. If you want to live in the real world, then learn to live with it.

I have the great honor of teaching future physicians and future researchers, and I try to help my learners address the challenge of making valid inferences from the facts they know or the data they’re interpreting. Unfortunately, from grade school through grad school, our educational system remains mired in an old model that continues to prize “fact retention” over analytic ability—a model that is increasingly irrelevant because, with information ever at our fingertips (right there on our smartphones), there is less need than ever before to retain facts, and conversely a greater need than ever to analyze them critically to draw reasonable conclusions. This has created the perfect situation for a post-truth world: unfettered access to (real or fake) information, and limited ability to make reasoned judgments about or with this information.

So why do educators persist in valuing fact memorization and regurgitation over reasoned judgment? The sad answer is that it’s a lot easier to measure what people know than how they think, and we need metrics to adjudicate educational achievement. Testing factual knowledge requires only some agreed-upon facts and some multiple-choice questions; measuring the true goal of education, the achievement of competency, is much more elusive. And since testing drives learning (as in the aphorism “People don’t respect what you expect, they respect what you inspect”), this approach pushes learners to overvalue factual knowledge at the expense of analytic ability. Whether we look at the overall failure of No Child Left Behind in primary education or the stranglehold the U.S. Medical Licensing Examination (USMLE) has on medical education, we seem to exist in two parallel universes simultaneously: one that is “post-truth,” and one that values nothing other than the (generally agreed-upon catechism we accept as) facts, which we erroneously label “truth.”

But even as we bow before the gods of facts, we also understand the importance of the idea captured in one of my favorite quotes attributed to Albert Einstein (a quote that is, alas, not his—“fact-checking” can sometimes be painful!): “Not everything that can be counted counts, and not everything that counts can be counted.” We recognize the importance of “competency-based education,” but downplay the challenges of assessing competency achievement—thus perpetuating the convenient assessment of factual knowledge. It is in this context that my colleagues and I recently published our strong objection to a new “certification” examination for clinical researchers: a (you guessed it) multiple-choice test of factual information being promulgated by the National Board of Medical Examiners, the same organization that brings us (and resists changes to) the USMLE.

We lay out our arguments against this exam in this editorial. I am especially troubled because, as someone who has been educating clinical researchers for many years, I actually know how to measure their competency: most importantly, by evaluating the quality and impact of their research, and also (indirectly) by examining their professional track records (through such metrics as publications, grants and promotions). While one can argue that these metrics are imperfect – and some potentially more enlightened (although harder to measure) scales have been proposed – they are certainly better, and more aligned with competency assessment, than a multiple-choice test could ever be.

Many of us will choose to fight the notion that we are entering a “post-truth” era. To fight that good fight, we must acknowledge that “truth” is an elusive concept requiring the analysis and application of factual information, and advocate for educational (and testing) approaches that will yield the educated citizenry that Thomas Jefferson recognized as a cornerstone of democracy.
