(Warning parents, this post contains Santa spoilers!)
As a student and researcher of evolution during my undergraduate thesis and now climate change in my dissertation, I am no stranger to debates over the proper use of science in policy-making as well as over the validity of science itself. But recently, as I was reflecting on my experience with the challenges of evidence-based policy-making, I realized that to get to the start we have to journey back to 1998, the height of my suspicions about Santa Claus.
No way did Santa fit that new bicycle down the chimney. And elves don’t make American Girl Dolls. But I couldn’t just say I didn’t believe anymore. I had to know. And to be certain, I had to do some serious research. This was before high-speed internet; it took an hour to log on to our dial-up system, and should someone call during my Google search, all would be lost. Plus, let’s be real. My mom wasn’t going to let seven-year-old Michelle surf the web without supervision.
So instead, I undertook some top-secret archival work in my mother’s basement office, and as is often not the case with archival research, it didn’t take long before I found what I was looking for. To this day, my mother meticulously prints, marks, and files receipts for everything she purchases. On November 29, 1998, Karen Sullivan spent $115.00 on “Kit, the American Girl Doll, with book and accessories.”
Truth hurts. And always one for drama, I ran upstairs to the dining room and tearfully screamed, “All adults are liars!!!!!!!” at my mother right in the middle of her lunch-date with friends.
A crusader for honesty, I shared my discovery with my friends on Monday at school. My mother paid for this dearly when my friends’ parents called to complain that I had unapologetically RUINED their daughters’ childhoods. Though most of my friends appreciated me (even if begrudgingly) for disseminating my research, one friend stopped me in my tracks.
“Maybe Santa just gave your mom a receipt in case you didn’t like the doll. That’s smart,” she said.
The evidence was…inconclusive?! How could she disagree with my findings? How could she criticize my policy plan to stop the generational cycle of deceit?
Early Lessons from “Santa Policy”
Now, maybe you’re thinking I just needed more information to convince her. I could have set up a video camera to capture her parents setting up her toys. (But maybe Santa wasn’t feeling well, so he shipped the gifts to her parents?) Or what if I could have flown over the North Pole to show her it’s just a barren ice cap? (They probably live underground?) If she wanted to believe in Santa Claus, she was probably going to find a way to believe.
At the time, I was shocked. Fast forward about two decades, *almost* three degrees, a few eye-opening mentors, and some real-world practice with this evidence-based policy stuff, and I can now say I encounter this all the time in my research on climate change politics and policy. Researchers often have noble intentions when sharing their work with decision makers, but that sharing doesn’t always translate into the policy actions they propose. Indeed, the very fact that some scientists propose specific policy actions creates uncertainty around their science for those who disagree with their proposals. And decision makers may use evidence to justify their policy actions, only to find that critique from colleagues grows louder.
Disagreements over issues like climate change are often argued in terms of lacking or contested knowledge, as well as conflicting notions of risk. But increasing the quality and amount of evidence doesn’t seem to dissolve dissension as much as you might expect. Why is that? What are the limits of scientific knowledge for addressing today’s pressing policy issues?
The Excess of Objectivity
First, particularly with climate change, there are myriad perspectives from which you could conduct your research, owing to various and overlapping natural and human causes of climate change and an even wider array of potential environmental, economic, and socio-cultural impacts. Each perspective comes with its own body of knowledge, values, and action items, which may contradict those of another. And in the vast space of climate change research, those holding different views are sure to find some academic in some university who holds a hypothesis or theory that fits their perspective.1
My co-blogger Nich has a helpful analogy for this: We have a dozen cupcakes, all of different flavors and decorations, each fitting one of twelve people’s preferences. Each person can choose the cupcake that best suits them based on taste and appearance. Even outside of that dozen, you’re bound to find a cupcake that suits you, considering the vast number of bakeries, recipes, and ingredients. Now, substitute “values/aims” for preferences of flavor or appearance, and “evidence/science” for cupcakes. Because of the various perspectives that characterize the extensive amounts of climate change science and evidence out there, you can find contradicting facts to support contradicting value- or aim-based positions on climate change, and a whole host of other issues.
Scientific Uncertainty, Caught in the Middle
Scientific uncertainty often lies at the center of debates over climate policy. One side will prescribe a policy based on a scientific claim, while dissenters will invoke scientific uncertainty to rally against action.
Some scientists argue that the public and their elected officials simply don’t understand uncertainty. But I have to say, I can hardly blame my grandmother2 for misunderstanding “scientific uncertainty.” First, it has different meanings in different fields, largely owing to the mathematical differences among studying electrons, atoms, cells, human bodies, and human societies. Second, uncertainty is an abused concept in debates over climate change policy (among other controversial science policy arenas, e.g., GMOs, vaccines, etc.).
Case in point: earlier this year, New York Times columnist Bret Stephens used his first column to challenge climate change scientists and activists, noting that there are many unknowns and uncertainties when it comes to climate change, enough that proposed ‘abrupt and expensive changes in public policy’ should be delayed and conversation (i.e., debate) should continue. Former Times columnist and blogger Andrew Revkin swiftly replied to the column, in which he was frequently quoted. In his reply, he argues that the basics are clear: climate change is happening. What remains unclear are the scope and scale of impacts, including answers to questions that extend far beyond the bounds of climate science: ‘how dangerous?’ and ‘what do we do?’ But, Revkin argues, such uncertainty is still actionable knowledge.
Uncertainty has been caught in the middle of this debate, with one side declaring that it is reason to forestall action and the other countering that it’s the reason to act urgently. Yet, Revkin and Stephens would likely both agree that climate change is a (mostly) political problem involving really difficult values questions that are consistently couched in terms of (un)certainty by advocates and opponents of action. With such debate, centered on competing interpretations of, misunderstanding of, or misuse of scientific uncertainty, it’s fair to see why my grandmother is sometimes skeptical of the facts.
Some people cannot accept evidence for climate change because it is inconsistent with their social-cultural identity. To explain this phenomenon, Yale Law professor Dan Kahan suggests that there may be two ways people use reason: (1) to know what is known (e.g., the latest climate science), and (2) to be who we are. Sometimes who we are doesn’t align with what is known. Understandably, most people choose to protect their social-cultural identities; it’s what they have to live with every day. Put another way, whether or not someone “believes” in the evidence for climate change may be less an expression of what they know and more an expression of who they are. Climate change is wrapped up in a host of cultural and socio-economic problems, so it isn’t surprising that many individuals and institutions find evidence for climate change to be “uncomfortable knowledge.”
For example, my conservative, Republican uncle, who works in steel, refuses to accept that climate change is real. But given that proposals to address climate change threaten his work and his ideology, it makes sense that he would have a hard time accepting evidence for climate change. And it’s worth noting that he approves of my climate change work with the National Park Service. Perhaps this is because national parks are ideologically neutral (their 75% ‘favorable’ approval rating is second only to the US Postal Service among federal agencies3) compared to debates over energy, infrastructure, and lifestyle.
The Role of Science
So if evidence is so contested in political negotiations, what’s the use?
Well, one idea, in the words of Philip Handler, President of the National Academy of Sciences from 1969 to 1981, is that “The estimation of risk is a scientific question… The acceptability of a given level of risk, however, is a political question, to be determined in the political arena.”4 In other words, the role of science is to understand how different policy choices can lead to different outcomes (or, in Handler’s example, different levels of risk). The role of politics, then, is to choose which outcomes (levels of risk) and thus which policy choices are acceptable. But Handler’s point is not entirely sound, because even our tools of research and estimation can be politically subjective: the way scientists and policy analysts pose research questions can bias research programs toward certain conclusions and policy suggestions. In a recent National Affairs article, conservative pundit Oren Cass argues that this is one of the ways in which he believes evidence-based policy falls short. He uses the example of policy analyses around health care access to make his point:5
“The debate over how best to ensure that low-income Americans have access to health care in the most cost-effective way possible is one of the most controversial and complex policy quandaries in our politics. Yet the researchers providing the evidence on which to base policy were investigating whether the value of Medicaid is larger than zero…Proponents of Medicaid expansion understandably delighted in this framing, which established a bar of “not worthless” for the program.”
Cass argues that the research and results are biased because the experiment was designed without regard for alternative ways to spend Medicaid money, or, some might say, with a liberal mindset. Cass then contends that the governing philosophy should come before the research design: “…assessment should begin from a philosophical inquiry into the proper role of the state and its relationship to the development of healthy families and communities….”
Such an inquiry could lead to different measurements, different experimental designs, and the use of different research tools. If this is true, our ‘objective research’ can be politically biased from the outset because the questions we choose to ask, the frames we ask them in, and the tools and experiments we use to answer them can all be ideologically influenced. Cass even suggests that we should abandon the premise that policy-related science is objective: “…let’s couch that science in its political perspective upfront.”
It’s important to note that transparent alignment of a research program with a political perspective doesn’t mean the research is “false” or “wrong.” But it could limit contributions to bipartisan policy-making. In the case of the Medicaid research bearing the brunt of Cass’ criticism, the utility of the results was constrained because the research design neglected a host of other ways in which we might improve access to health care for low income Americans.
Before you lose your mind down a postmodern wormhole wondering about the (non)existence of “truth” or “objectivity,” let’s get back to what’s important here: Santa isn’t real. My friend could spend her whole life believing, but that doesn’t change reality. But of course, telling her this didn’t change her mind at the time.6 In retrospect (and this is what’s really important here), I was learning an important science policy lesson at the ripe old age of seven: Two people can look at the same facts and reach two different, even opposite, conclusions, and not because the facts aren’t true, but because the world and its problems are complicated and our ability to “know” is limited.
Awareness of this “Santa Policy” lesson, and all of the others above, is necessary when creating, acquiring, using, and sharing information. Plus, it invites us to question what we know and why we know it. After all, blind faith in the value of evidence isn’t scientific.
I’m personally still muddling through. First, how do I know when to stop questioning (i.e., how do I avoid that postmodern wormhole)? Perhaps it has to do with improving the transparency of the perspectives that contribute to research. I personally believe climate change is real because smart people who work on climate change and who demonstrate understanding of both sides of the political argument (Stephens and Revkin, for example) agree climate change is real, even as they disagree about what to do about it. But that confidence in expertise is just confidence, founded or unfounded, in certain people’s opinions, which can seem an insufficient justification for policy action.
My own research examines the role of science in decision making for the National Park Service, often concerning climate change. And through that work, I’ve started to understand why more or better information rarely solves disagreements over climate change. But how can such disagreements be solved? And how can we effectively use evidence to inform policy? I’m learning every day. Stay tuned for another post, another time.
P.S. My Santa Policy has evolved, and I promise not to break the news to your small children. Also worth noting that my sister’s policy was to pretend she still believed because then you guarantee a consistent gift-flow…So of course there is more than one policy to craft based on the evidence!
1ASU science policy professor and practitioner Dan Sarewitz calls this the “excess of objectivity.” He claims that it’s not for a lack of knowledge that we can’t all agree; rather, it’s the excess of knowledge.
2My grandmother is my litmus strip for thoughts from the average American. I love you, Grandma!
4Quoted in Risk and Culture, Douglas and Wildavsky 1982, 65
5And then there are other places where I disagree with his analysis, but that is outside the scope of this post.
6And similarly, recent work suggests that constantly barraging climate deniers with the “97.1% consensus” is a failing strategy.