Evidence-Based Policy-Maker since 1998

From Santa to Climate Change

(Warning, parents: this post contains Santa spoilers!)

As a student and researcher of evolution during my undergraduate thesis, and now of climate change in my dissertation, I am no stranger to debates over the proper use of science in policy-making, or over the validity of science itself. But recently, as I was reflecting on my experience with the challenges of evidence-based policy-making, I realized that to find the start of the story we have to journey back to 1998, the height of my suspicions about Santa Claus.

No way did Santa fit that new bicycle down the chimney. And elves don’t make American Girl Dolls. But I couldn’t just say I didn’t believe anymore. I had to know. And to be certain, I had to do some serious research. This was before high-speed internet; it took an hour to log on to our dial-up system, and should someone call during my Google search, all would be lost. Plus, let’s be real. My mom wasn’t going to let seven-year-old Michelle surf the web without supervision.

So instead, I undertook some top-secret archival work in my mother’s basement office, and as is often not the case with archival research, it didn’t take long before I found what I was looking for. To this day, my mother meticulously prints, marks, and files receipts for everything she purchases. On November 29, 1998, Karen Sullivan spent $115.00 on “Kit, the American Girl Doll, with book and accessories.”

Truth hurts. And always one for drama, I ran upstairs to the dining room and tearfully screamed, “All adults are liars!!!!!!!” at my mother right in the middle of her lunch-date with friends.

A crusader for honesty, I shared my discovery with my friends on Monday at school. My mother paid for this dearly when my friends’ parents called to complain that I had unapologetically RUINED their daughters’ childhoods. Though most of my friends appreciated me (even if begrudgingly) for disseminating my research, one friend stopped me in my tracks.

“Maybe Santa just gave your mom a receipt in case you didn’t like the doll. That’s smart,” she said.

The evidence was…inconclusive?! How could she disagree with my findings? How could she criticize my policy plan to stop the generational cycle of deceit?

Early Lessons from “Santa Policy”

Now, maybe you’re thinking I just needed more information to convince her. I could have set up a video camera to capture her parents setting up her toys. (But maybe Santa wasn’t feeling well, so he shipped the gifts to her parents?) Or what if I could have flown over the North Pole to show her it’s just a barren ice cap? (They probably live underground?) If she wanted to believe in Santa Claus, she was probably going to find a way to believe.

At the time, I was shocked. Fast forward about two decades, *almost* three degrees, a few eye-opening mentors, and some real-world practice with this evidence-based policy stuff, and I can now say I encounter this all the time in my research on climate change politics and policy. Researchers often have noble intentions when sharing their work with decision makers, but sharing doesn’t always translate into the policy actions they propose. In fact, when scientists propose specific policy actions, they can create uncertainty around their science for those who disagree with their proposals. And decision makers may use evidence to justify their policy actions, only to find that critique from colleagues grows louder.

Disagreements over issues like climate change are often argued in terms of lacking or contested knowledge, as well as conflicting notions of risk. But increasing the quality and amount of evidence doesn’t seem to dissolve dissension as much as you might expect. Why is that? What are the limits of scientific knowledge for addressing today’s pressing policy issues?

The Excess of Objectivity

First, particularly with climate change, there are myriad perspectives from which you could conduct your research, owing to the various and overlapping natural and human causes of climate change and an even wider array of potential environmental, economic, and socio-cultural impacts. Each perspective comes with its own body of knowledge, values, and action items, which may contradict those of another. And in the vast space of climate change research, those holding different views are sure to find some academic in some university who holds a hypothesis or theory that fits their perspective.1

My co-blogger Nich has a helpful analogy for this: We have a dozen cupcakes, all of different flavors and decorations, each fitting one of twelve people’s preferences. Each person can choose the cupcake that best suits them based on taste and appearance. Even outside of that dozen, you’re bound to find a cupcake that suits you, considering the vast number of bakeries, recipes, and ingredients. Now, swap preferences of flavor or appearance with “values/aims” and cupcakes with “evidence/science.” Because of the various perspectives that characterize the extensive amount of climate change science and evidence out there, you can find contradicting facts to support contradicting value- or aim-based positions on climate change, and a whole host of other issues.

Scientific Uncertainty, Caught in the Middle

Scientific uncertainty often lies at the center of debates over climate policy. One side will prescribe a policy based on a scientific claim, while dissenters will invoke scientific uncertainty to rally against action.

Some scientists argue that the public and their elected officials simply don’t understand uncertainty. But I have to say, I can hardly blame my grandmother2 for misunderstanding “scientific uncertainty.” First, it has different meanings in different fields, largely owing to the mathematical differences among studying electrons, atoms, cells, human bodies, and human societies. Second, uncertainty is an abused concept in debates over climate change policy (among other controversial science policy arenas, e.g., GMOs and vaccines).

Case in point: earlier this year, New York Times columnist Bret Stephens used his first column to challenge climate change scientists and activists, arguing that there are enough unknowns and uncertainties about climate change that proposed ‘abrupt and expensive changes in public policy’ should be delayed and that conversation (i.e., debate) should continue. Andrew Revkin, a former Times columnist and blogger who was quoted often in the piece, swiftly replied to the column. In his reply, he argues that the basics are clear: climate change is happening. What remains unclear are the scope and scale of impacts, including answers to questions that extend far beyond the bounds of climate science, namely ‘how dangerous?’ and ‘what do we do?’ But, Revkin argues, such uncertain knowledge is still actionable knowledge.

Uncertainty has been caught in the middle of this debate, with one side declaring it a reason to forestall action and the other countering that it’s the reason to act urgently. Yet Revkin and Stephens would likely both agree that climate change is a (mostly) political problem involving really difficult values questions that are consistently couched in terms of (un)certainty by advocates and opponents of action. With debate centered on competing interpretations, misunderstandings, or misuses of scientific uncertainty, it’s easy to see why my grandmother is sometimes skeptical of the facts.

Uncomfortable Knowledge

Some people cannot accept evidence for climate change because it is inconsistent with their social-cultural identity. To explain this phenomenon, Yale Law professor Dan Kahan suggests that there may be two ways people use reason: (1) to know what is known (e.g., the latest climate science), and (2) to be who we are. Sometimes who we are doesn’t align with what is known. Understandably, most people choose to protect their social-cultural identities; it’s what they have to live with every day. Put another way, whether or not someone “believes” the evidence for climate change may be less an expression of what they know and more an expression of who they are. Climate change is wrapped up in a host of cultural and socio-economic problems, so it isn’t surprising that many individuals and institutions find evidence for climate change to be “uncomfortable knowledge.”

For example, my conservative, Republican uncle, who works in steel, refuses to accept that climate change is real. But given that proposals to address climate change threaten his work and his ideology, it makes sense that he would have a hard time accepting evidence for it. And it’s worth noting that he approves of my climate change work with the National Park Service. Perhaps this is because national parks are ideologically neutral (their 75% ‘favorable’ approval rating is second only to the US Postal Service among federal agencies3) compared to debates over energy, infrastructure, and lifestyle.

The Role of Science

So if evidence is so contested in political negotiations, what’s the use?

Well, one idea, in the words of Philip Handler, president of the National Academy of Sciences from 1969 to 1981, is that “The estimation of risk is a scientific question… The acceptability of a given level of risk, however, is a political question, to be determined in the political arena.”4 In other words, the role of science is to understand how different policy choices can lead to different outcomes (or, in Handler’s example, different levels of risk). The role of politics, then, is to choose which outcomes (levels of risk), and thus which policy choices, are acceptable. But Handler’s distinction is not entirely sound, because even our tools of research and estimation can be politically subjective: the way scientists and policy analysts pose research questions can bias research programs toward certain conclusions and policy suggestions. In a recent National Affairs article, conservative pundit Oren Cass argues that this is one of the ways in which evidence-based policy falls short. He uses the example of policy analyses around health care access to make his point:5

“The debate over how best to ensure that low-income Americans have access to health care in the most cost-effective way possible is one of the most controversial and complex policy quandaries in our politics. Yet the researchers providing the evidence on which to base policy were investigating whether the value of Medicaid is larger than zero…Proponents of Medicaid expansion understandably delighted in this framing, which established a bar of “not worthless” for the program.”

Cass argues that the research and results are biased because the experiment was designed without regard for alternative ways to spend Medicaid money, or, some might say, with a liberal mindset. Cass then contends that governing philosophy should come before research design: “…assessment should begin from a philosophical inquiry into the proper role of the state and its relationship to the development of healthy families and communities….”

Such an inquiry could lead to different measurements, different experimental designs, and the use of different research tools. If this is true, our ‘objective research’ can be politically biased from the outset because the questions we choose to ask, the frames we ask them in, and the tools and experiments we use to answer them can all be ideologically influenced. Cass even suggests that we should abandon the premise that policy-related science is objective: “…let’s couch that science in its political perspective upfront.”

It’s important to note that transparent alignment of a research program with a political perspective doesn’t mean the research is “false” or “wrong.” But it could limit its contributions to bipartisan policy-making. In the case of the Medicaid research bearing the brunt of Cass’ criticism, the utility of the results was constrained because the research design neglected a host of other ways in which we might improve access to health care for low-income Americans.

The Upshot

Before you lose your mind down a postmodern wormhole wondering about the (non)existence of “truth” or “objectivity,” let’s get back to what’s important here: Santa isn’t real. My friend could spend her whole life believing, but that doesn’t change reality. But of course, telling her this didn’t change her mind at the time.6 In retrospect (and this is what’s really important here), I was learning an important science policy lesson at the ripe old age of seven: two people can look at the same facts and reach two different, even opposite, conclusions, not because the facts aren’t true, but because the world and its problems are complicated and our ability to “know” is limited.

Awareness of this “Santa Policy” lesson, and all of the others above, is necessary when creating, acquiring, using, and sharing information. Plus, it invites us to question what we know and why we know it. After all, blind faith in the value of evidence isn’t scientific.

I’m personally still muddling through. First, how do I know when to stop questioning (i.e., how do I avoid that postmodern wormhole)? Perhaps it has to do with improving the transparency of the perspectives that contribute to research. I personally believe climate change is real because smart people who work on climate change, and who demonstrate understanding of both sides of the political argument (Stephens and Revkin, for example), agree that it is real even while disagreeing about what to do about it. But that confidence in expertise is still just confidence, founded or unfounded, in certain people’s opinions, which can seem an insufficient justification for policy action.

My own research examines the role of science in decision making for the National Park Service, often concerning climate change. And through that work, I’ve started to understand why more or better information rarely solves disagreements over climate change. But how can such disagreements be solved? And how can we effectively use evidence to inform policy? I’m learning everyday. Stay tuned for another post, another time.

-Michelle

P.S. My Santa Policy has evolved, and I promise not to break the news to your small children. Also worth noting that my sister’s policy was to pretend she still believed, because then you guarantee a consistent gift-flow… So of course there is more than one policy to craft from the same evidence!

Footnotes:

1ASU science policy professor and practitioner, Dan Sarewitz, calls this the “excess of objectivity.” He claims that it’s not for a lack of knowledge that we can’t all agree; rather it’s the excess of knowledge.

2My grandmother is my litmus strip for thoughts from the average American. I love you, Grandma!

3The latest data on this is from 2015, but the parks are only increasing in popularity year-to-year so I think it’s safe to assume this number is probably steady.

4Quoted in Risk and Culture, Douglas and Wildavsky 1982, 65

5And then there are other places where I disagree with his analysis, but that is outside the scope of this post.

6And similarly, recent work suggests that constantly barraging climate deniers with the “97.1% consensus” is a failing strategy.

A Tired Critique of a Tired Pro-Science Op-Ed

Reflecting on the Use of Science as a Political Tool

Although it may feel we are always on the science-policy news beat, fieldwork, summer jobs, new puppies, and Game of Thrones watch parties sometimes put us on a news delay. Lucky for us, we have friends (and guest bloggers) like Christian Ross who keep us on our toes. Last week, Christian brought an editorial to our attention; in the latest issue of Science, Delaware’s junior Senator, Chris Coons, sounds off on why “scientists can’t be silent.” Christian asked us, “does this strike you as just political posturing?,” and questioned some of Coons’ claims about the public benefits of the historic and current public support of science in the United States.

The rapid-fire emails that ensued led us to a reflection on science’s increasing use as a political tool. If our critique sounds hackneyed, that’s because Coons’ piece is just another standard “pro-science” op-ed.1 Maybe we are cynical, but Senator Coons’ vague declaration of the importance of science for decision making makes him another politician playing the “science card” for political points. And the card game Coons and other “pro-science” politicians play increasingly draws a line between the “right” and “wrong” sides of science on issues like climate change, vaccines, and GMOs. Dangerously, these sides map onto political agendas, with Democrats like Coons on the side of science and Republicans painted as the opposition.

We all agreed the piece tasted strongly of political posturing. Coons includes plenty of Democratic talking points, from immigration to climate change, along with a sincere call for action. And he indulges in some shameless self-congratulation for co-founding the Senate Chemistry Caucus, a bipartisan effort to “promote the use of sound science in policy-making.”2

We also questioned the misleading lack of context for Coons’ claims.3 For example, he laments the 17% cuts to research funding in Trump’s Skinny Budget but ignores the rest of the budget proposal and the questions that accompany it: What else is cut? Is scientific progress really being “threatened”? And as Nich notes in a previous blog post: “Too many responses to Trump’s budget blueprint and its impacts on federally funded science rely on dubious connections between research and public value to justify funding.” To us, Coons’ outcry fails to address the real problems facing our country’s relationship with science.

Coons’ call for scientists to more widely publicize their work and reach out to elected officials is a noble one, but it lacks nuance and detail. We know it’s imperative that scientists share and publicize their work, but it’s important to consider how. Are they openly advocating for a particular policy or agenda item? Are they presenting a set of policy options based on evidence? Are they setting their recommendations in the context of political realities? (For more on this, see the work of Roger Pielke, Jr.) Recent work on the failure of scientific consensus messaging in climate change policy points to the importance of these questions. Further, how will Coons’ recommendations for scientists help him accomplish his vague goal of elevating the role of science and fighting ‘anti-science’ sentiments?

Finally, Coons fails to note that at the root of debates surrounding climate change, GMOs, and vaccines are questions fundamental to political ideology: How much control should the government have over our personal choices (lifestyle, nutrition, health)? How much government regulation should businesses tolerate? Science has become implicated as the justification for different answers to those questions. It’s concerning to us that being ‘pro-science’ is becoming synonymous with being a Democrat. In op-eds like Coons’, science represents a political talking point to garner votes and exchange barbs with opponents instead of an effective tool for evidence-based governance. In the process, ‘Science’ borrows at high interest against its future status and trust in the public sphere.

-Michelle, Nich, and Christian

1You can find very similar messaging in rhetoric surrounding the March for Science. Also, see Coons’ speech to the AAAS from earlier this year.

2We will give Coons credit for “his” effort being “bipartisan,” but he’s still posturing. The caucus actually started in the House, sponsored by the largest scientific society in the world, The American Chemical Society, to spotlight the role of the chemistry enterprise in the U.S. economy. The caucus is not the subject of our critique, but rather Coons’ seeming touting of it for political gain; our (admittedly short) web search did not uncover any other senators putting out press releases on the matter.

3Though we acknowledge you don’t typically have the luxury of a generous word count when writing editorials.

Science Policy Immersion

Learning to swim in the deep end of science policy

We harp on the importance of scientists engaging with policy and politics, but it’s admittedly easy to do so from behind a laptop screen. Talking to policy makers and learning about the politics and mechanics of science policy requires time and emotional and intellectual bandwidth. Pressures to publish, submit grant applications, and meet myriad other professional responsibilities quickly drown out researchers’ capacity to engage with the public and policy makers, despite the importance of such efforts.

Fortunately, smart and forward-looking people created a program to immerse scientists and researchers in science policy amidst the chaotic backdrop of Washington, DC. And we’ve both participated in this ASU-run program, called Science Outside the Lab (SOtL)1. Michelle attended in 2016, Nich earlier this summer. The program centers around conversations with people who work in science policy including federal-agency staff, former congressional staffers, and representatives of scientific societies. With each, participants discuss a variety of topics such as how scientific models factor into decision-making, the intricacies of the science budget, and the politics of congressional science committees. Rather than sharing our reflections on the program, we asked a few of our fellow participants to reflect on their time in DC and to share their insights. The big takeaways? There’s no single ‘science policy’, networking facilitates policy making and careers, and DC has some good restaurants.

Ryan Edwards (RE), is a PhD Candidate from Princeton University in civil and environmental engineering. He researches the potential environmental impacts from hydraulic fracturing in shale gas formations and has served as a policy consultant for the National Governors Association.

Ana Lopez (AL) is a Masters of Science and Technology Policy student at ASU, focusing on water and the water-energy nexus. She has interned at the Kyl Center for Water Policy and is doing her applied project on the Navajo Generating Station and its closure.

Walter Johnson (WJ) received his BS in Chemistry and currently studies the regulation of health biotechnologies in the Masters of Science and Technology Policy program at Arizona State University.

Wale Odukomaiya (WO) is a Research Associate at the U.S. Department of Energy’s Oak Ridge National Laboratory and a PhD candidate in Georgia Tech’s Woodruff School of Mechanical Engineering. His research interests include the development, integration, business/economics of sustainable energy systems and technologies, energy and climate change policy, and global energy infrastructure development.

1) Why did you decide to participate in SOtL?

RE: I have always been interested in the intersection between science, politics and policy-making and am considering a career in this area, so I saw SOtL as an awesome opportunity to learn more.

AL: The SOtL program is a part of the Masters of Science and Technology Policy program, so by enrolling in the program I made the decision to participate in SOtL. I thought the D.C. experience would be a valuable addition to my education and could help inform my career aspirations.

WJ: Though the DC experience fulfilled a degree requirement for me, SOtL was a natural complement to my science policy work and education. The US federal government is a huge complex system made of rapidly moving components, which can be learned in the classroom but takes on a much different meaning when experiencing it in person. The better appreciation of how DC functions and the contacts I made there certainly augmented my policy education.

WO: I heard about the SOtL program through a colleague in an interdisciplinary science and policy fellowship program that I am a part of at Georgia Tech. I had developed an interest in gaining breadth in my academic program to complement my very technical mechanical engineering curriculum. This is what led me to pursue the fellowship program in the first place, and subsequently the SOtL program. My interest in SOtL was heightened when I learned that it was run through an Arizona State University consortium, because I had known ASU to be quite disruptive in terms of creating innovative, out-of-the-box multidisciplinary academic programs.  For me, the biggest barrier to participating in the program was the financial cost. Luckily for me, my fellowship program had some funds earmarked for just this sort of interdisciplinary experience, so that took care of that, and I was on my way to D.C.!

2) The program organizers referred to the course as a ‘science policy disorientation.’ Were any of your ideas of science policy upended during your time in DC?

RE: The biggest idea that was upended for me was that science and scientists can direct science to its greatest benefit for society – the idea of “the free play of intellects working on subjects driven by their curiosity”. Dan Sarewitz gave us a great talk that dispelled the myth that curiosity-driven science leads to the best outcomes, and showed how important strong government direction (particularly from the U.S. Department of Defense) was in many of the great scientific and technological breakthroughs from World War II onward.

AL: I don’t think that any of my ideas of science policy were upended so to speak during SOtL, but I would say my preconceived notions of science policy became better informed. I gained a better idea of how the mechanisms of science policy work, and how public servants at various points in their careers make their contribution to the science policy realm.

WJ: My conceptualization of federal science policy was certainly challenged, and the sheer number of people that we heard from was sufficiently disorienting. The mechanics of the federal budget in general and appropriations to science related agencies is horridly complex. I had been told this in advance, but hearing it explained in greater detail was equal parts fascinating and headache-inducing.

WO: Yes, definitely. For me, I had this preconceived notion that science policy was a thing, a formal, well-defined, living, breathing institutional machine that literally forced the intersection of science and policy for the betterment of society. Through SOtL, I learned that this was not the case at all. In actuality, science policy could mean any one of several different things, and the definition continues to expand. Rather, science policy could be better described as a community of scientists, or people with scientific backgrounds who are currently working in a policy or social sector space. SOtL did a wonderful job exposing us to several individuals with science backgrounds currently working in Washington D.C. in some policy capacity. For example, we spoke to one person who had a PhD in a hard science and in a past job directed a think-tank housed at a university, but was currently working in the Congressional Budget Office. Another person we interacted with quite a bit throughout the program, who also had a PhD in a hard science, was working as a science communication coach, helping to equip scientists and engineers with tools to be able to better communicate with politicians and public servants.

3) What surprised you about the way science policy is made or the way science is used by the federal government? What was as expected?

RE: I guess I would say that I came into the program already being pretty cynical, because nothing surprised me much. But a couple things were really reinforced: firstly, that the government is vast and diverse and that there is no single “science policy”. Decisions are made by different branches of government and a number of different agencies. Secondly, that science is just one factor that plays into policy-making and it never will, or should, be the only consideration in government policy-making.

AL: At this point in my education and career, I didn’t think I would be surprised by the amount of bureaucracy in policy-making. Prior to SOtL, I thought that I had a firm grasp on how bureaucracy affects policy in general, but I was still surprised by how much hierarchy and political maneuvering grease the wheels of policy creation. One thing that was as expected was the fact that many of the public sector professionals we spoke with dedicate their professional lives to the work they do, and often work long hours. If you want to work in science policy, you are going to have to put the hours in to get to a position where you are integral to the policy-making process.

WJ: The extent of the role of Congressional committee staffers was a bit shocking. I had heard anecdotes about the importance of staffers, but hearing from former staffers and those who interacted with staffers had an impact. The communication occurring between staffers and stakeholders seems to capture a lot of nuance which could easily be missed in the way science policy legislation evolves.

WO: In line with my previous comment, what surprised me the most was the non-existence of a formal structure or process of incorporating science or evidence-based decision making in general into policy-making or policy discussions. Especially in areas like research funding or climate change policy where you would think input from scientists would be vital. The way we lobby for this is by encouraging more people with STEM backgrounds to participate in some way in policy.

4) What advice might you give to someone who wants to get involved in science policy discussions in DC?

RE: Do the SOtL program! There are also some similar programs run by specific science organisations that may be of interest depending on your discipline (for example, the American Meteorological Society Summer Policy Colloquium). I have personally found it helpful to get involved with the public policy school at my university (even though I am an engineer) and to reach out to alumni who have gone into the policy world.

AL: My best advice for someone who wants to get involved in science policy discussions would be to get involved in a program like SOtL, and to get to know people who are a part of the process. Gaining an insider perspective and making connections is really valuable within science policy and can give someone a clearer understanding of how the process actually works. Beyond that, read as much as you can about current issues in science policy that interest you.

WJ: Connections make a big difference in any field, but DC thrives on personal relationships and connections. If you have contacts in DC or contacts who can introduce you to people in DC, take advantage of it.

WO: Take the time to find out for yourself what exactly science policy means to you. A great way of doing this is through a program like SOtL where you have the opportunity to speak with numerous people who are currently contributing to science policy in some capacity. Also, if you are a student, find ways to form connections with departments on campus that are more traditionally considered to align with policy (i.e. political science, public policy, law departments, etc…). This could be in the form of taking an actual course, or attending monthly seminars, or even trying to collaborate on a multi-disciplinary research paper of some sort.

5) Any thoughts on what this experience might mean for your career? Have your career goals shifted because of something you learned or encountered during the program?

RE: I was already intending to pursue a career in policy and the SOtL experience helped further solidify that goal for me. It was really helpful to learn about specific opportunities in D.C. and to meet people involved in science policy in different organizations (both governmental and non-governmental) and at different points in their careers. Knowing more about the wide array of opportunities has certainly helped me see the science policy career pathway as an exciting one.

AL: The experience definitely gave me a taste of D.C. life and a better idea of what my career and life would look like if I were to move out there. I’ve broadened my scope of potential long-term career goals because of the program and have a better understanding of what jobs are out there and what different professions in various agencies do on a day-to-day basis. I’m still figuring out what the SOtL experience will mean for me, but the program definitely broadened my horizons.

WJ: Meeting staff from GAO and CRS had a big impact. Both were agencies I was aware of, but hearing those individuals describe their jobs and day-to-day experiences has definitely sparked my interest. Against the cacophony of executive agencies, Congressional agencies like GAO often seem to be forgotten, but they play substantial roles in supporting government operations. And they have internships.

WO: My general approach to “career development” has been to gain as much exposure and as many varied experiences as possible in order to be well-rounded. I will soon have a PhD in mechanical engineering, which will almost by default make me an expert in a very technical area. I can differentiate myself from other technical experts by gaining as much breadth as possible in my education and experiences. I would say, if anything, that SOtL broadened my horizons because I previously thought that science policy meant one specific thing, when it is actually quite broad. Something else that was interesting to learn was that, as a person with an advanced degree in a STEM area looking to work in policy, there is often the opportunity to create your own job, or to tailor the issues you work on to your interests. This is something that is quite valued among academics.

6) Last but not least: What was your favorite DC restaurant?

RE: Good Stuff Eatery! It’s a burger and shake place near the Capitol – a perfect destination after a walk around Capitol Hill.

AL: A place called Lincoln D.C. The food and atmosphere were great!

WJ: Otello’s was a fun Italian place; our group celebrated a birthday there and had a great time.

WO: DC has a lot of great restaurants! Definitely enjoyed Founding Farmers, and Toro Toro for happy hour.

Footnotes

  1. Check out this paper written by program organizers for a more academic perspective on SOtL.

Basic Research Fails to Deliver to Society, and Applied Research Does Too!

Guest blogger, Christian Ross, argues that the basic versus applied research argument leads us astray in thinking about biomedicine and biotechnology funding and outcomes.

Our guest blogger, Christian Ross, is a PhD student in the School of Life Sciences at Arizona State University and a National Science Foundation Graduate Research Fellow. His research is focused on the intersections of science, society, and science policy, particularly surrounding emerging biotechnologies.

There is a recurring problem that I see in ongoing discussions about public funding of scientific research. And, perhaps surprisingly, this problem is not specific to the current US administration1, but rather seems to be as endemic in scientific communities as in political ones. Often, science policy is reduced to a simple question: do we need more basic research or more applied research? I think that this distinction between basic and applied research is unnecessary and unhelpful, particularly for biotechnology and biomedicine. Insistence on the basic-applied distinction as the question in science policy obscures growing evidence that publicly funded science of both kinds is failing to deliver on its promise to the public: life-improving technologies and products.

Thus far, the Trump administration has not made many friends within scientific communities. Threatened and realized budget cuts to research funding across disciplines and what is often called “anti-science” rhetoric have led to many impassioned responses from scientists and science advocates proselytizing about the importance and necessity of public funding of scientific research, especially basic research. But what counts as basic research, and more importantly how useful it actually is, nearly always goes unexamined.

Back to Basics: The Linear Model

The prevalence of the basic and applied research distinction in funding conversations in the US has strong roots in the work of Vannevar Bush and his report to President Harry Truman on the role of science in post-WWII America, Science: The Endless Frontier. Basic research, as opposed to applied research, focuses on pursuing fundamental understanding of natural phenomena without direct consideration of the potential applications of that knowledge to create novel technologies. Applied research, by contrast, harnesses the scientific knowledge generated by basic research to create technological solutions to societal problems in engineering, medicine, and other applied fields.

While the distinction between basic and applied research by no means began with Bush and Science: The Endless Frontier, Bush’s work did serve to legitimize the basic-applied distinction as a foundational aspect of post-war US science policy. Bush argued that national military and commercial interests depended on good science. And good science was basic science, unhindered by the constraints of applications. Moreover, basic research was the “pacemaker for technological progress” and invariably led to applied technologies and societal benefits.

Bush’s work also solidified an implicit social contract between science and society. In return for public funding, science would provide technological solutions and products to improve society through advances in healthcare, sanitation, travel, communication, national defense, or economic outcomes. Since then, federal funding for science has been characterized as supporting either basic or applied research with consistent pressure from scientists to keep basic research as the essential fuel for scientific and societal advancement.

Deceptively Linear Beginnings

Bush’s “linear model” for scientific development has already been widely and frequently critiqued (including at times by this blog) for oversimplifying the technical and political complexities of scientific research and development. Those are fair criticisms, and I am not intending to pile on much more here. Actually, I hope to defend the linear model a bit (or at least those that have bought into its ideology) as an understandable misstep. Though, I only want to do so to show that even if a linear model approach to science was once helpful or fitting2, it is not so for our present moment nor going forward, especially not in the fields of biotechnology and biomedicine.

That said, it is understandable how the linear model gained credibility in the US, particularly in biotechnology and biomedicine. Leading up to and during WWII, US applied research focused heavily on developing basic physics research (especially atomic physics) to support the war effort. After the war, there was a surplus of basic research in virology, cell biology, and genetics that had not yet been applied to developing technologies and products. This swell of untapped basic research directly led to the development of many new biotechnologies and drugs with the advent of recombinant DNA technology in the early 1970s3. Recombinant DNA techniques enabled scientists to modify bacterial genomes to produce complex biochemical compounds for use in new, synthetic pharmaceuticals, like synthetic insulin to treat type 1 diabetes, erythropoietin (EPO), which treats chronic kidney disease, and tissue plasminogen activator (tPA), which breaks down blood clots to treat strokes. Much as the linear model describes, basic research acted as a precursor to biotechnological and biomedical applications.

The development of these new drugs through bioengineering was an unqualified success and provided justification for continued use of a linear model approach to science funding.

Biotech Paradise Lost?

However, we do not see similarly large leaps forward in biomedicine and biotechnology today. To be sure, we regularly develop new therapies, drugs, and techniques, but not at rates (or profit margins) comparable to the 1970s and 1980s4. And we certainly are not making progress on the high-priority problems like cancer in the same way as early biomedical and biotechnology research did with renal disease and diabetes.

According to linear model thinking, a slowdown of technological development is the result of a lack of basic scientific research. But over the past fifteen years, federal support of basic research has been at least double what it was from the 1970s through the mid-1980s. Evidently, increased funding for basic research has not translated into increased biomedical and biotechnological development. Even if we grant that the linear model at one time adequately described the relationship between federal funding of science and basic and applied research5, it is clearly not doing so now.

So, what changed? Why does the linear model no longer seem to describe the developments of biotechnological and biomedical innovation? Put simply, the conditions that enabled the scientific and industrial boom in biomedicine and biotechnology in the 1970s and 1980s differ from today's in three major ways.

First, there was an unusual backlog of untapped basic life science research available for application by industry. During WWII and the early years of the Cold War, basic research in virology, cell biology, and genetics seldom translated into applied research. During that time, the biological sciences did not receive nearly the same attention as other fields, like physics, and much of their basic research remained unconnected to tangible applications. However, once recombinant DNA techniques were developed and their potential became apparent, applied research in biotechnology and biomedicine grew dramatically. Researchers took advantage of the surplus of basic biological research to jumpstart the applied research of the biotechnology and biomedicine industries in the US. New basic research became incorporated into the applied research and products pipeline as quickly as it could be published.

Second, new biotechnology and biomedicine industries and markets emerged. Before the 1960s, the life sciences and industry were more separated from each other. There was some overlap in medicine, but nothing of the magnitude that came with the developments in the 70s and 80s. For the first time, biotechnology and biomedicine startup companies emerged that proved highly profitable in a newly created market. Today, academic and corporate researchers readily recognize biotechnology and biomedical research as sources of technical problem solving and profit. Corporations are already established and dominate what once was a more open, competitive marketplace. There simply are not the same commercial opportunities for startups to be innovative in the lab, insulated from the pressures of industry, as there used to be.

Third, the problems that could be solved with biotechnology and biomedicine in the 1970s and 1980s were relatively simple and straightforward. The advances in the technical capabilities of biotechnology and biomedicine made accessible an entirely new class of biological problems that previously had been beyond the scope of science. In this newfound class of biological problems there were problems ranging from relatively easy to devilishly difficult. Understandably, researchers at new startup companies first picked the low-hanging fruit6.

Now, the biotechnological and biomedical problems that remain are technically and socially more complex than those of the mid-twentieth century. The development of recombinant drugs was certainly challenging, but today's problems are, by comparison, much more difficult. And because the problems are harder, research ventures to develop commercially viable products are more costly and more risky for both academic and corporate researchers.

Further, we are more aware that we work within larger, more complex systems with greater ranges and degrees of uncertainty. Science at every stage is hard, and it makes sense to solve the easiest problems first. But that means that once the relatively easy problems have been solved, only harder, often wicked problems remain.

Now What Do We Do?

So, if the state of biotechnological and biomedical research is totally different than it was in the 1970s and 1980s, what does that mean for current funding of research initiatives? Well, it certainly does not mean that we should stop supporting basic research. Even if federal funding of basic research is not the sole source of scientific progress like Bush and the linear model suggest, it is still essential to the generation of the scientific and technical knowledge on which advancement and innovation are based. The integration of biotechnology and biomedicine in industry has increasingly tied basic research to technological development based on the profitability of the resulting applications and products. So, perhaps reconceptualizing the value of basic research based on its contributions to societal outputs rather than its funding inputs will prove a more useful framing for understanding the roots of scientific progress.

At the same time, it also does not mean that we should simply prioritize applied research over basic research. Although some critics of the linear model suggest that applied research is the remedy to the shortcomings of pure basic research, there is little compelling evidence that applied research is more effective at furthering scientific progress than basic research. One study published in Science by Danielle Li, Pierre Azoulay, and Bhaven N. Sampat in April of this year examined nearly 30 years of biomedical research funded by the US National Institutes of Health (NIH), the largest non-defense research funder in the US7,8. The study by Li and her collaborators found that both basic and applied research displayed similar rates of citation in biotechnology and drug patent applications. In other words, neither basic nor applied research appears to be better suited to actually producing new products and solutions. It stands to reason, then, that prioritizing applied research over basic research would not create meaningful differences in the kinds of technological outcomes generated.

So, what should we do then? Fundamentally, we need to reevaluate how we think about basic and applied research in biotechnological and biomedical research and development. The article on biomedical research from Science also found that of all basic and applied NIH-funded research, only 10% is directly cited by product patents. Even including research indirectly cited in patents (meaning patents citing research that cites NIH-funded research), that number only grows to 30%. Put another way, 70% of NIH-funded research was not related to new patents, even at two degrees of separation. But if the implicit social contract between science and society is that, in exchange for public financial support, science will produce technologies and products that improve society, then both basic and applied biomedical research seem to be coming up short on their end of the deal. Even if this is simply the rate of return on investment in research (less than a third of it generates new biomedical technologies or drugs), at the least it is not commensurate with the justifications given for furthering public funding of biomedical research.
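To make that arithmetic concrete, here is a toy sketch of the citation shares reported in the Science study. The counts are hypothetical round numbers assuming a portfolio of 100 NIH-funded projects, chosen only to match the percentages quoted above; they are not the study's actual data.

```python
# Toy illustration of the patent-citation shares discussed above
# (hypothetical round numbers, not data from Li, Azoulay, & Sampat).
total_projects = 100     # imagine a portfolio of 100 NIH-funded projects
directly_cited = 10      # ~10% are cited directly by a product patent
indirectly_cited = 20    # ~20% more are cited only at one remove,
                         # bringing the cumulative share to ~30%

linked = directly_cited + indirectly_cited
unlinked = total_projects - linked

print(f"Linked to patents within two degrees: {linked}%")   # 30%
print(f"Not linked to any patent: {unlinked}%")             # 70%
```

The point of the sketch is simply that the 70% figure is the complement of the cumulative (direct plus indirect) citation share, not an independently measured quantity.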

Rethinking How We Evaluate Research

The problem of under-delivering science is frequently framed in terms of the role and merits of basic versus applied research. Yet the research suggests that the problem spans basic and applied research equally. This is not an issue of which research type is better at producing technological solutions and scientific progress, but indicative of a broader problem (at least for NIH-funded projects) with our approach to and expectations of scientific funding: the distinction between basic and applied research for funding allocation.

When it comes to the technological outputs of science, a meaningful distinction between basic and applied research does not exist. We have seen that basic and applied research are indistinguishable in how well scientific knowledge transfers into technological solutions and products. Emphasizing one over the other, or trying to determine which should receive more research funding, is the wrong kind of question to be asking. The question we should be asking is: which kinds of research lead to the societal benefits and outcomes we want?

Funding for science should be based on the problem-solving effectiveness of research and its potential usefulness in society. Whether the desired outcomes of research are patents, publications, commercially viable technologies, new companies, development of human capital, economic stimulus, or fueling knowledge-generation enterprises, funding decisions should rest on the extent to which research contributes to societal goals, not on whether it is basic or applied. We need to prioritize research that lives up to the social contract of public support for technological benefits and stop rewarding research that fails to deliver.

-Christian Ross

Footnotes

  1. Though I do have a litany of concerns about the Trump administration’s approach to science policy, that is not my purpose here.
  2. It really is not, but that is not the main point of the argument here.
  3. Historian of science Nicolas Rasmussen has written extensively and excellently about the development of the biotechnology industry in the US in his book Gene Jockeys: Life Science and the Rise of Biotech Enterprise (2014).
  4. One prominent counterexample is the development of CRISPR-Cas9 as a genome editing technique which has taken the field of molecular biology by storm and stands to have similar impact as many early biotechnologies. I do not want to ignore or downplay the enormous impact of this biotechnology on science and industry (my own dissertation research is focused on it!). But the development of such a potent, accessible, and widely applicable technology for biomedical and biotechnological research is the exception, not the rule, over the past several decades.
  5. Again, it did not.
  6. The fascinating story of the motivations and social context that surrounded the development of many of these drugs is outlined in Rasmussen’s book Gene Jockeys.
  7. Li, D., Azoulay, P., & Sampat, B. N. (2017). “The applied value of public investments in biomedical research.” Science 356: 78–81. DOI: 10.1126
  8. While patent citations are certainly not the only measure of the productivity of scientific research and may not capture all cases, they do provide a quantifiable estimate of the impact of research and an easily compared standard of measure for evaluating a wide variety of disciplines and projects.

Public Forums Help Us Explore What We Care About

To care about democracy is to care about conversations on shared and differing values. A recent public forum shows why.

Michelle and I are big supporters of democratic innovations in science and technology governance. But as academics, we can get a little caught up in the nuances of participatory events. A recent public forum at the Museum of Science, Boston reminded me of the power of conversation.

The forum, funded by the National Oceanic and Atmospheric Administration and held on June 11th, brought together roughly 60 members of the public and a handful of facilitators and staff from the Museum to discuss the challenges of sea level rise and extreme precipitation in the Boston area. I observed the forum, occasionally helping facilitators with technical issues and keeping conversations on track. (Full disclosure: I’m part of the planning team for this forum and others.) A nice summary of the forum is available here, and I posted photos and updates to my Twitter feed. Rather than rehash the importance of public input in science-related decision-making or how the forum could be improved, I thought I’d reflect a bit on the conversations I heard. To care about democracy is to care about conversations on shared and differing values. This forum provides a nice example of why.

The conversations at each table impressed me. Participants, who broadly represented the Boston area’s demographics, took the topic seriously even as they talked about fictionalized communities with names like Rivertown. The Museum of Science worked hard to recruit participants who otherwise might not have the opportunity to contribute to these policy discussions. While some environmental groups were represented, they made up only a small portion of the audience. Participants drew on their own experiences with flooding, their own assessments of current development in flood-prone areas, and their knowledge of what strategies have worked elsewhere. They argued their points using data available to them (each table had a computer that visualized the potential impacts of policy choices) and in terms of who might be affected and how.

Most encouraging to me, some participants recognized that their differences in opinion were not based on who had found the ‘right’ answer for dealing with sea level rise or extreme precipitation. Rather, they viewed their different preferences as related to what they valued. Some valued saving as much land as possible from the impacts of sea level rise. Others were concerned about key infrastructure like power plants. Yet others focused on impacts to those who lived on or made a living on the coast. Participants acknowledged other perspectives and ideas, saying things like, “I see where you’re coming from,” but still laid out what was important to them.

At a time in which every major public decision boils down to esoteric assessments of impacts far beyond the ability of most of us to comprehend, I found the tone of the forum’s conversations refreshing. While data was available to participants, the conversation still centered on something we can all relate to: what we care about.

The forum was the first of eight to take place across the country. Stay tuned over the next few months for updates and check out ASU’s Consortium for Science, Policy & Outcomes (CSPO) and the Expert and Citizen Assessment of Science and Technology (ECAST) network for more forum news. For another cool case study, check out the results of the public forum on NASA’s Asteroid Initiative.

– Nich Weller

Worried about science funding? Brush up on policy and politics.

Scientists are often wary of engaging with the messy business of politics for a variety of reasons. But their understanding of and participation in the policy-making—and hence political—process is necessary if they wish to provide their important perspectives on some of our most critical and contentious issues.

This past March, a group of scientists from three continents published “Empirically derived guidance for social scientists to influence environmental policy” in the journal PLoS One. The article combined 348 years of cumulative experience to guide social scientists on how to influence science policy. They placed at the top of their list “acquire policy acumen.” We thought this an important task for any scientist, including ourselves, so we put together a policy primer. (And this allows us to return to a previous footnote regarding the myth of capital-P policy). We’ll walk through the difference between policy and politics and a few real and imagined examples of both as they relate to science policy.

Policy vs. Politics

As noted by many scholars and practitioners (e.g., John W. Kingdon, Roger A. Pielke, Jr.), there is a distinction between politics and policy. A policy is a decision; policies define a path forward or a course of action for a government or other institution. In the United States government, policies are often crafted and edited by groups of politicians. Therefore, policy is often the result of politics, the collective process of negotiation and compromise among politicians, lobbyists, agencies, and citizens.

To illustrate these definitions, let’s walk through a (relevant!) example: the policy and politics of the science budget.

Science Budget “Policy”

You may have seen this pie-chart from the American Association for the Advancement of Science (AAAS), or something similar to it, illustrating annual federal research and development (R&D) funding across an alphabet soup of different federal agencies (NIH, NSF, NASA, etc.). The “policy” here seems obvious: “The Science Budget.” Well, surprise! There is no such policy. (We should note that AAAS certainly understands that their pie-chart is a construct, as evidenced by a very informative “Budget Process 101” webpage; but this visualization is still an often-misunderstood illustration of science funding.)

Such visualizations of the science budget oversimplify a policy-making process that is actually quite complex. They imply that the science budget is decided as a whole, when really it’s decided in many pieces by myriad players. In fact, the number of players and the number and nature of the complex interactions among them could be the subject of an entire thesis project; we’ll only scratch the surface.

The first, obvious key players are the President and his administration. Presidentially appointed executives, directors, and secretaries go back and forth with the President over the best funding outcome for their respective agencies. (Typically, everyone wants a funding increase over the previous fiscal year.) Some of these agencies do not spend any money at all on science or R&D; for others, research may comprise a large portion of their budget. But it’s important to keep in mind that R&D is a small fraction of the overall federal budget—3.5 percent in 2016. The Office of Management and Budget is the ultimate broker of these executive branch negotiations; they know the budgetary constraints of federal revenues and outlays, and impose them.

The other set of obvious players is the Congress, particularly the elected representatives who sit on Appropriations Committees in the House or Senate. Although the President submits his budget to Congress, it is ultimately up to these multiple committees and their members to decide where the funds will flow. More often than not, they have a different idea of how money should be disbursed (even among themselves; for example, see comments on President Trump’s so-called “skinny budget” from House Speaker Paul Ryan and Senate Majority Leader Mitch McConnell). In addition, there are countless lobbyists, advocates, and institutions that weigh in, using whatever leverage they have, to seek advantages for their constituents.

So rather than “The Science Budget,” the policy in the case of science funding consists of an appropriations bill that is underwritten by scores of negotiations, proposals, requests, and other policies (such as executive or secretarial orders) from all these different sources.

Science Budget “Politics”

A mess of players have skin in the game. So of course there are science budget politics. Perhaps most visibly, the executive branch will fight with Congress over appropriations choices that represent policy priorities. (The American Institute of Physics releases tables and charts illustrating this phenomenon. See an example for the U.S. Geological Survey here.) But it goes far deeper. For example, outside of the expected discourse between the White House and the executive agencies, the agencies have liaisons to Congress: they know who really holds the purse strings. Lobbyists and other advocates also hover about, challenging agencies, representatives, Congressional staffers, anyone with ears, to prioritize their pet issues. For instance, disease advocates, healthcare providers, and biomedical researchers are champions of National Institutes of Health funding, and enjoy strong public support that gives them bargaining power.

Within each of these entities there is a lot of politicking. On Capitol Hill, for example, there are negotiations between the two houses of Congress, among the members of each chamber, among the members of each relevant committee, and even among the members of committees that oversee the non-budgetary aspects of executive agencies.

No Capital-P Policy

The major takeaway from our example of federal funding for science is that there is no one policy or politics that defines the federal science budget. Laws are layered, decision makers are ideologically and culturally diverse, and our government is designed to resist any single orthodoxy. No single group or idea continuously prevails, leading to diversity and competition for resources across time, even in science-budget politics. Organizations like AAAS create pie charts showing a single budget representing a single policy, but these are more descriptive-analytical constructs than reflections of policy decisions.

Likewise, many writers, bloggers, and scientists have argued that the Trump administration is waging a war on science by defunding important science agencies and institutions. This implies that there is a single political conflict between science and Trump that plays out in the budget. There’s not. As we showed above, the process of funding science across the federal government involves thousands of moving pieces and lots of negotiation. Our insistence that there’s no single capital-P science-budget policy might come across as overly semantic. But this point is critical to 1) shaping future science-budget policy, and 2) understanding that politics shouldn’t be considered a bad word in science. Addressing science-budget policy as capital-P policy masks the processes that make it, and thus misdirects conversations about, and efforts to change, science-budget policy. Representing science-budget policy, or any form of science policy, as multiple and diverse, on the other hand, better represents the political process and allows targeted conversations and efforts to succeed.

What if There Was a Capital-P Policy for Science?

Let’s take another example, but this time a hypothetical one: what if there were one budget for federally funded science, passed in a single bill? Let’s call it the Science Budget Bill of 2017. The debates and political theatre around issues like climate change, or ideological objections to some social sciences, would no longer be isolated to individual committees on the topic or to related bills and agencies. Those partisan issues would take center stage in debates on the Science Budget Bill of 2017, holding up action on the vast majority of federally funded research that does have bipartisan support.

There’s certainly precedent for this type of bill. The Farm Bill agglomerates agricultural policies, crop insurance and other safety nets for farmers, and nutrition programs like food stamps. In one sense, lumping liberal-supported food stamps with conservative-supported industry initiatives is smart politics: considering them together could force each side to make concessions and pass the bill. But it also opens up the possibility that one side holds up passage on ideological grounds regarding one portion of the bill, preventing action on any of the numerous programs in the larger bill. The latest iteration of the Farm Bill, passed in 2014, faced this problem due to pressure from conservative groups. Congress passed the legislation two years after the prior farm bill had lapsed. Our hypothetical Science Budget Bill of 2017 could face similar conflict and delay.

Beyond the political logistics of getting an omnibus science bill passed, a single policy and budget for science would disconnect science from the various public goods it is funded to support. Accountability and responsiveness to public concerns would be lost, because all federally funded science would be funded simply because it is science in the science bill, not because of its (potential) links to public goods. A recently passed bill intended to modernize and promote technology transfer in NOAA’s forecasting endeavors, for example, would be reframed as a few budget lines for forecasting science, a small part of a massive general science bill. But this might undermine the public good this funding is intended to promote: protecting lives and property from weather-related harm via better forecasting.

The Necessity of Politics

In our hypothetical scenario, then, politics poses a huge risk to all federally funded science by hamstringing the process by which it is funded and disconnecting science from public goods. In reality, politics are critical to federally funded science, hence the group of scientists we cited above suggests scientists brush up on policy!

There are at least three important reasons for scientists and other citizens to engage in the messy, complex political machinations behind any science-related policy. First, politics are necessary just to reach decisions about what science to fund. There are seemingly unlimited types of research we could fund, but the United States has a limited budget, and the government must make decisions about what it should fund. Second, politics in the context of a fragmented science policymaking landscape prevent any one person, group, or idea from dominating the process outright. This is the wisdom behind the separation of powers and federalism stemming back to the country’s founding. Third, political debate around science ensures that questions of public costs and benefits can influence decision making. Do the costs of certain scientific or technological discoveries outweigh potential benefits? This is a question of values and politics, not methods and theory.

We publicly fund science in the United States for more than the awe of discovery. Politicians, agencies, and scientists justify science in terms of the public benefits connected to it. And if we are talking about public money spent for public benefits with potential public costs, we are talking about democratic politics. Scientists and those who care about publicly funded science definitely need to “acquire policy acumen,” in the words of the august group cited above, if they wish to influence the science budget and science policy more broadly.

Trump’s budget and science: Time for a rationale refresh?

Science advocates fall back on a tired argument for the importance of federal investment in research.

Trump’s budget proposal received a lot of criticism from science advocates1. Rather than discuss the proposal, I’d like to comment on reactions to the proposal. And yes, I know it’s a bit late, but grad school gets in the way of the 24-hour news cycle2. First, let’s set aside that the president’s budget proposal is remarkable in its cuts to non-defense discretionary programs, that Trump’s military proposals might be strategically questionable, or that NSF is ominously (luckily?) absent from the proposal3. Let’s also get some disclaimers out of the way just to make clear that Michelle and I do, in fact, think Trump’s proposal is less than desirable:

  • A lot of what we think are valuable programs could be lost, some of which actually align with Trump’s ‘America First,’ jobs, and national security stances. The Advanced Research Projects Agency-Energy (ARPA-E), for example, relies on a proven structure from the Department of Defense to achieve both strategic goals and potential commercial spillover. Other examples are the coastal programs run by the National Oceanic and Atmospheric Administration (NOAA), which provide valuable information to the fishing and shellfishing industries and bolster knowledge of and preparedness for coastal hazards.  
  • If enacted, Trump’s cuts could be a big ‘branding’ loss, which is surprising given that he is all about branding. The U.S. is known for its scientific prowess, attracting envy (and lots of talent) from around the world. Perception matters, but only if we care about our perceived standing and what that affords us, something Trump understands well4.
  • Finally, huge cuts to science funding could mean the U.S. loses its influence over science and technology related decision-making beyond our borders. I’d rather have developments like human modification and AI happening in a context where democratic accountability for such research is at least a possibility (looking at you, China and Silicon Valley).

Okay, I’ve put forth our credentials as card-carrying science advocates; now I can be a little critical, with the intent of exploring how we might further the conversation about science funding given Trump’s proposal. But first, the criticism: Too many responses to Trump’s budget blueprint and its impacts on federally funded science rely on dubious connections between research and public value to justify funding.

A common refrain in op-eds and reactions to Trump’s blueprint is:

“Funding cuts to science mean not funding progress/innovation/the economy. Lives will be lost, cures left undiscovered, and ‘the next big discovery/breakthrough/thing’ will never arise (or at least won’t be American) because private entities won’t fund the research necessary to reach these outcomes”5 (see here and here, for example).

In other words, if we invest in science, we invest in knowledge that leads to technologies, cures for disease, and subsequent social goals like better health, economic growth, and the like. This rationale has guided U.S. science policy since Vannevar Bush wrote his landmark “Science: The Endless Frontier,” establishing modern U.S. science policy in 1945. A key part of this rationale is that the details are fuzzy and unpredictable: Water the seeds of research but don’t worry about which seeds sprout, because the process is unpredictable. I point out the centrality of unpredictability to this rationale not because it is reflected in the op-eds and reactions to Trump’s proposal, but because many agencies that fund research are structured with this rationale in mind. The National Science Foundation and the National Institutes of Health, for example, rely on a science-centric model of funding in which proposals are evaluated by expert review focused mostly on scientific grounds.

From a historical perspective, this rationale is problematic: It assumes that we cannot pursue certain outcomes without the basic science in hand, despite the histories of many important breakthroughs that have proved otherwise (e.g., Bell Labs’ pursuit of miniaturized electronics giving rise to the discovery and science of semiconductor properties and the transistor).

From a public accountability perspective, this rationale justifies disparity between predicted or promised public goods and actual outcomes of research: One can argue that we should invest in research given the chance that important applications may result. It’s okay that we cannot guarantee outcomes of public value. And a research program does not have to appear to be useful (or be guided toward application) in order to have potentially tremendous societal benefits.  

But mounting evidence is eroding the sanctity of a hands-off approach to science policy based on this rationale. Large portions of published work in psychology and biomedicine are not replicable, and thus of dubious use for achieving the public goods underwriting funding decisions. Years of investment by the National Institutes of Health have had minimal (if any) impact on our country’s health. Why, then, do science advocates fall back on this rationale in the face of (potential) funding cuts?

Why wouldn’t they? The rationale has ‘worked’ politically since WWII, fueling explosive growth of federal research funding throughout the ’50s and ’60s. More recently this argument continued to win support for federal research: From 1996 to 2003, non-defense research funding increased by about $20 billion, or about 48%. As the old adage goes, “If it ain’t broke, don’t fix it.” But Trump’s budget blueprint is evidence that this is a failing political argument, at least for his supporters6. And if you agree with our premise, the rationale is conceptually broken as well.

So what to do? I’ll focus on two intertwined challenges: (1) dealing with looming potential cuts (Trump’s proposal means cuts of some sort might be inevitable), and (2) drawing up a new rationale for federal support of science and research. To start with, it behooves scientists and science advocates to reflect on the role of science in society, including the fact that federal investment in science is public money and is thus tied to some expectation of public benefit, from both conservative and liberal perspectives. Scientists should remain receptive to the restructuring of research agencies given budget cuts. Budget cuts suck, sure, but they are also opportunities to reframe outcomes and change research structures accordingly. I’m not implying that the consequences of cuts can be negated via organizational changes that promote efficiency, etc. Rather, I’m suggesting that substantial cuts provide an opportunity to address the very shortcoming discussed above: a disconnect between predicted or promised public goods and actual outcomes of investments in research. Nor am I calling for purely ‘applied’ research: there are fundamental scientific questions that might be standing in the way of achieving some public good.

A good start might be looking to federally funded research programs that do well at linking research to public goods, or at least to specific outcomes. The Department of Defense’s Defense Advanced Research Projects Agency (DARPA), for example, has long connected research to specific defense-related outcomes. Indeed, ARPA-E, the Department of Energy’s Advanced Research Projects Agency, was based on DARPA’s successful institutional model. While DARPA-like arrangements are not necessarily appropriate for questions of basic science, they complement basic science programs by helping steer research toward specific outcomes or strategic needs. Critically, DARPA and ARPA-E actively cut off funding for projects or project areas that become entrenched and show little progress toward outcomes, avoiding the disparities between research and public goods present in other federal research programs.

Other federal scientific programs well-connected to public goods and outcomes include hurricane tracking and warning at NOAA, NOAA’s Office of Water Prediction, and the US Geological Survey’s (USGS) work on natural hazards. Just this week, Congress passed the Weather Research and Forecasting Innovation Act of 2017, which awaits President Trump’s signature. While the administrative structure of this Act will be determined by NOAA, the act provides a potential case study in mission-oriented science that 1) supports basic research in atmospheric science, 2) supports social science related to forecasting, and 3) calls for technology transfer efforts within and outside of the federal government. Tellingly, the act received bipartisan support in the House and Senate.

Are there problems with these models? Sure. For starters, we might question how to prioritize outcomes for such agencies. The aforementioned act’s focus on “weather data, modeling, computing, forecasting, and warnings for the protection of life and property and for the enhancement of the national economy”7 is a politically ‘easy’ priority. On the other hand, ARPA-E’s focus on energy as it relates to the environment and an entrenched fossil fuel industry is more contentious, which might explain why it’s on the Trump administration’s chopping block. But these models nonetheless provide proven alternatives to current structures at other federal research agencies. Tied to restructuring is a need to update rationales for federal support of research to reflect the success or failure of different institutional models.

I’m sure there are some holes in my argument. For example, one might question whether conservatives or Trump would accept any rationale or structure for some types of research. But perceived unwillingness is a poor reason to ignore existing shortcomings of federally funded research programs. And ignoring shortcomings only risks exacerbating a partisan fight over science that could be disastrous for science and democracy. At the very least, the fact that Trump proposed substantial cuts to science should lead us to reevaluate the relation between federally funded science and public goods and outcomes.

-Nich Weller

Footnotes

  1. If you’re like me and don’t actually read federal documents cited in popular media, you should take a look at the budget proposal. If for no other reason, it’s called “A Budget Blueprint to Make America Great Again.”
  2. On some days it’s the other way around…
  3. Given conservative critiques of federal overreach, the Trump administration is likely happy that their proposal is remarkable in this way.
  4. The U.S. is also known for its economic and military prowess, something Trump is keen to stress. His campaign slogan and posturing about the perceived weakness of the U.S. in military and economic affairs are very much about addressing ‘branding.’
  5. Of course this is a gross generalization but the basic structure undergirds many op-eds regarding Trump’s proposal.
  6. Importantly, it’s not failing because the administration is waging a ‘war on science,’ but because federally funded science is linked to administrative overreach that conservatives have been fighting for decades (Miller, 2017. Why it’s not a war on science. Issues in Science and Technology, Spring 2017).
  7. Weather Research and Forecasting Innovation Act of 2017, Title 1, Section 101. Emphasis added.