Basic Research Fails to Deliver to Society, and Applied Research Does Too!

Guest blogger, Christian Ross, argues that the basic versus applied research argument leads us astray in thinking about biomedicine and biotechnology funding and outcomes.

Our guest blogger, Christian Ross, is a PhD student in the School of Life Sciences at Arizona State University and a National Science Foundation Graduate Research Fellow. His research is focused on the intersections of science, society, and science policy, particularly surrounding emerging biotechnologies.

There is a recurring problem that I see in ongoing discussions about public funding of scientific research. And, perhaps surprisingly, this problem is not specific to the current US administration1, but rather seems to be endemic in scientific communities as much as in political ones. Often, science policy is reduced to a simple question: do we need more basic research or more applied research? I think that this distinction between basic and applied research is unnecessary and unhelpful, particularly for biotechnology and biomedicine. Treating the basic-applied distinction as the question in science policy obscures growing evidence that publicly funded science of both kinds is failing to deliver on its promise to provide the public with life-improving technologies and products.

Thus far, the Trump administration has not made many friends within scientific communities. Threatened and realized budget cuts to research funding across disciplines and what is often called “anti-science” rhetoric have led to many impassioned responses from scientists and science advocates proselytizing about the importance and necessity of public funding of scientific research, especially basic research. But what counts as basic research, and more importantly how useful it actually is, nearly always goes unexamined.

Back to Basics: The Linear Model

The prevalence of the basic and applied research distinction in funding conversations in the US has strong roots in the work of Vannevar Bush and his report to President Harry Truman on the role of science in post-WWII America, Science: The Endless Frontier. Basic research, as opposed to applied research, pursues fundamental understanding of natural phenomena without direct consideration of how that knowledge might be applied to create novel technologies. Applied research, by contrast, harnesses the scientific knowledge generated by basic research to create technological solutions to societal problems in engineering, medicine, and other applied fields.

While the distinction between basic and applied research by no means began with Bush and Science: The Endless Frontier, Bush’s work did serve to legitimize the basic-applied distinction as a foundational aspect of post-war US science policy. Bush argued that national military and commercial interests depended on good science. And good science was basic science, unhindered by the constraints of applications. Moreover, basic research was the “pacemaker for technological progress” and invariably led to applied technologies and societal benefits.

Bush’s work also solidified an implicit social contract between science and society. In return for public funding, science would provide technological solutions and products to improve society through advances in healthcare, sanitation, travel, communication, national defense, or economic outcomes. Since then, federal funding for science has been characterized as supporting either basic or applied research with consistent pressure from scientists to keep basic research as the essential fuel for scientific and societal advancement.

Deceptively Linear Beginnings

Bush’s “linear model” for scientific development has already been widely and frequently critiqued (including at times by this blog) for oversimplifying the technical and political complexities of scientific research and development. Those are fair criticisms, and I am not intending to pile on much more here. Actually, I hope to defend the linear model a bit (or at least those who have bought into its ideology) as an understandable misstep. Though I want to do so only to show that even if a linear model approach to science was once helpful or fitting2, it is not so for our present moment, nor going forward, especially not in the fields of biotechnology and biomedicine.

That said, it is understandable how the linear model gained credibility in the US, particularly in biotechnology and biomedicine. Leading up to and during WWII, US applied research focused heavily on developing basic physics research (especially atomic physics) to support the war effort. After the war, there was a surplus of basic research in virology, cell biology, and genetics that had not yet been applied to developing technologies and products. With the advent of recombinant DNA technology in the early 1970s3, this swell of untapped basic research directly fueled the development of many new biotechnologies and drugs. Recombinant DNA techniques enabled scientists to modify bacterial genomes to produce complex biochemical compounds for use in new, synthetic pharmaceuticals, like synthetic insulin to treat type 1 diabetes, erythropoietin (EPO), which treats the anemia caused by chronic kidney disease, and tissue plasminogen activator (tPA), which breaks down blood clots to treat strokes. Much like the linear model describes, basic research acted as a precursor to biotechnological and biomedical applications.

The development of these new drugs through bioengineering was an unqualified success and provided justification for continued use of a linear model approach to science funding.

Biotech Paradise Lost?

However, we do not see similarly large leaps forward in biomedicine and biotechnology today. To be sure, we regularly develop new therapies, drugs, and techniques, but not at rates (or profit margins) comparable to the 1970s and 1980s4. And we certainly are not making progress on the high-priority problems like cancer in the same way as early biomedical and biotechnology research did with renal disease and diabetes.

According to linear model thinking, a slowdown of technological development is the result of a lack of basic scientific research. But over the past fifteen years federal support of basic research has been at least double what it was in the 1970s through the mid-1980s. Evidently, increased funding for basic research has not translated into increased biomedical and biotechnological development. Even if we grant that the linear model at one time adequately described the relationship between federal funding of science and basic and applied research5, it is clearly not doing so now.

So, what changed? Why does the linear model no longer seem to describe the developments of biotechnological and biomedical innovation? Put simply, the reasons and context that enabled scientific and industrial boom in biomedicine and biotechnology in the 1970s and 1980s are different now in three major ways.

First, there was an unusual backlog of untapped basic life science research available for application by industry. During WWII and the early years of the Cold War, basic research in virology, cell biology, and genetics seldom translated into applied research. During that time, the biological sciences did not receive nearly the same attention as other fields, like physics, and much of their basic research remained unconnected to tangible applications. However, once recombinant DNA techniques were developed and their potential became apparent, applied research in biotechnology and biomedicine grew dramatically. Researchers took advantage of this surplus of basic biological research to jumpstart the applied research of the biotechnology and biomedicine industries in the US. New basic research became incorporated into the applied research and products pipeline as quickly as it could be published.

Second, new biotechnology and biomedicine industries and markets emerged. Before the 1960s, the life sciences and industry were more separated from each other. There was some overlap in medicine, but nothing of the magnitude that came with the developments in the 70s and 80s. For the first time, biotechnology and biomedicine startup companies emerged that proved highly profitable in a newly created market. Today, academic and corporate researchers readily recognize biotechnology and biomedical research as sources of technical problem solving and profit. Established corporations now dominate what was once a more open, competitive marketplace. There simply are not the same commercial opportunities for startups, nor the same freedom to innovate in the lab insulated from the pressures of industry, as there used to be.

Third, the problems that could be solved with biotechnology and biomedicine in the 1970s and 1980s were relatively simple and straightforward. The advances in the technical capabilities of biotechnology and biomedicine made accessible an entirely new class of biological problems that had previously been beyond the scope of science. This newfound class contained problems ranging from the relatively easy to the devilishly difficult. Understandably, researchers at new startup companies first picked the low-hanging fruit6.

Now, the biotechnological and biomedical problems that remain are technically and socially far more complex than those of the mid-twentieth century. The development of recombinant drugs was certainly challenging, but today’s problems are much more difficult by comparison. And because the problems are harder, research ventures to develop commercially viable products are more costly and riskier for both academic and corporate researchers.

Further, we are more aware that we work within larger, more complex systems with greater ranges and degrees of uncertainty. Science at every stage is hard, and it makes sense to solve the easiest problems first. But that means that once the relatively easy problems have been solved, only harder, often wicked problems remain.

Now What Do We Do?

So, if the state of biotechnological and biomedical research is totally different than it was in the 1970s and 1980s, what does that mean for current funding of research initiatives? Well, it certainly does not mean that we should stop supporting basic research. Even if federal funding of basic research is not the sole source of scientific progress, as Bush and the linear model suggest, it is still essential to generating the scientific and technical knowledge that advancement and innovation are built upon. The integration of biotechnology and biomedicine in industry has increasingly tied basic research to technological development based on the profitability of the resulting applications and products. So, perhaps reconceptualizing the value of basic research based on its contributions to societal outputs rather than its funding inputs will prove a more useful framing for understanding the roots of scientific progress.

At the same time, it also does not mean that we should simply prioritize applied research over basic research. Although some critics of the linear model suggest that applied research is the remedy to the shortcomings of pure basic research, there is little compelling evidence indicating that applied research is more effective at furthering scientific progress than basic research. One study published in Science by Danielle Li, Pierre Azoulay, and Bhaven N. Sampat in April of this year examined nearly 30 years of biomedical research funded by the US National Institutes of Health (NIH), the largest non-defense research funder in the US7,8. The study by Li and her collaborators found that basic and applied research displayed similar rates of citation in biotechnology and drug patent applications. In other words, neither basic nor applied research appears to be better suited to actually producing new products and solutions. It stands to reason, then, that prioritizing applied research over basic research would not create meaningful differences in the kinds of technological outcomes generated.

So, what should we do then? Fundamentally, we need to reevaluate how we think about basic and applied research in biotechnological and biomedical research and development. The same Science article also found that, of all basic and applied NIH-funded research, only 10% is directly cited by product patents. Even including research indirectly cited in patents (meaning patents citing research that itself cites NIH-funded research), that number only grows to 30%. Put another way, 70% of NIH-funded research was not related to new patents, even at two degrees of separation. But if the implicit social contract between science and society is that, in exchange for public financial support, science will produce technologies and products that improve society, then both basic and applied biomedical research seem to be coming up short on their end of the deal. Even if this is just the rate of return on investment in research—less than a third of it generates new biomedical technologies or drugs—at the least, that is not commensurate with the justifications given for furthering public funding of biomedical research.
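For readers who want to see what the “degrees of separation” bookkeeping looks like in practice, here is a minimal sketch in Python. Everything in it is hypothetical: the publication and patent identifiers are invented, and the numbers are contrived to reproduce the 10%/30%/70% split described above, whereas Li and her collaborators worked from actual linked NIH grant, publication, and patent records.

```python
# Hypothetical illustration of direct vs. indirect patent citations to
# publicly funded research. All identifiers and data here are invented;
# the real analysis (Li et al. 2017) used linked NIH grant, publication,
# and patent records.

# Ten hypothetical NIH-funded publications
nih_pubs = {f"P{i}" for i in range(1, 11)}

# Publications cited by each (hypothetical) patent
patent_cites = {
    "patent_X": {"P1"},   # cites an NIH-funded paper directly
    "patent_Y": {"Q1"},   # cites a non-NIH paper...
}

# Publications cited by each (hypothetical) paper
pub_cites = {
    "Q1": {"P2", "P3"},   # ...which itself builds on NIH-funded work
}

# Directly cited: an NIH-funded publication appears in a patent's reference list
direct = {p for refs in patent_cites.values() for p in refs} & nih_pubs

# Indirectly cited: a patent cites a paper that in turn cites an NIH-funded publication
indirect = {
    p
    for refs in patent_cites.values()
    for cited_paper in refs
    for p in pub_cites.get(cited_paper, set())
} & nih_pubs

linked = direct | indirect
print(f"directly cited by patents:       {len(direct) / len(nih_pubs):.0%}")      # 10%
print(f"linked within two citation hops: {len(linked) / len(nih_pubs):.0%}")      # 30%
print(f"not linked to any patent:        {1 - len(linked) / len(nih_pubs):.0%}")  # 70%
```

The only point of the sketch is the counting rule: a funded publication counts as “linked” if a patent cites it directly or cites something that cites it.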

Rethinking How We Evaluate Research

The problem of under-delivering science is frequently framed in terms of the role and merits of basic versus applied research. Yet the research suggests that the problem cuts across basic and applied research equally. This is not an issue of which research type is better at producing technological solutions and scientific progress; it points to a broader problem (at least for NIH-funded projects) with our approach to, and expectations of, scientific funding: the reliance on the distinction between basic and applied research to allocate it.

When it comes to the technological outputs of science, a meaningful distinction between basic and applied research does not exist. We have seen that basic and applied research are indistinguishable when it comes to the transfer of scientific knowledge into technological solutions and products. Emphasizing one over the other, or trying to determine which should receive more research funding, is the wrong kind of question to be asking. The question we should be asking is: which kinds of research lead to the societal benefits and outcomes we want?

Funding for science should be based on the problem-solving effectiveness of research and its potential usefulness in society. Whether the desired outcomes of research are patents, publications, commercially viable technologies, new companies, development of human capital, economic stimulus, or fueling knowledge-generation enterprises, funding decisions should rest on the extent to which research contributes to societal goals, not on whether it is basic or applied. We need to prioritize research that lives up to the social contract of public support for technological benefits and stop rewarding research that fails to deliver.

-Christian Ross

Footnotes

  1. Though I do have a litany of concerns about the Trump administration’s approach to science policy, that is not my purpose here.
  2. It really is not, but that is not the main point of the argument here.
  3. Historian of science, Nicolas Rasmussen has written extensively and excellently about the development of the biotechnology industry in the US in his book Gene Jockeys: Life Science and the Rise of Biotech Enterprise (2014).
  4. One prominent counterexample is the development of CRISPR-Cas9 as a genome editing technique which has taken the field of molecular biology by storm and stands to have similar impact as many early biotechnologies. I do not want to ignore or downplay the enormous impact of this biotechnology on science and industry (my own dissertation research is focused on it!). But the development of such a potent, accessible, and widely applicable technology for biomedical and biotechnological research is the exception, not the rule, over the past several decades.
  5. Again, it did not.
  6. The fascinating story of the motivations and social context that surrounded the development of many of these drugs is outlined in Rasmussen’s book Gene Jockeys.
  7. Li, D., Azoulay, P., & Sampat, B. N. (2017) “The applied value of public investments in biomedical research.” Science 356: 78–81. DOI: 10.1126/science.aal0010
  8. While patent citations are certainly not the only measure of the productivity of scientific research and may not capture all cases, they do provide a quantifiable estimate of the impact of research and an easily compared standard of measure for evaluating a wide variety of disciplines and projects.

Public Forums Help Us Explore What We Care About

To care about democracy is to care about conversations on shared and differing values. A recent public forum shows why.

Michelle and I are big supporters of democratic innovations in science and technology governance. But as academics, we can get a little caught up in the nuances of participatory events. A recent public forum at the Museum of Science, Boston reminded me of the power of conversation.

The forum, funded by the National Oceanic and Atmospheric Administration and held on June 11th, brought ~60 members of the public and a handful of facilitators and staff from the Museum together to discuss the challenges of sea level rise and extreme precipitation in the Boston area. I observed the forum, occasionally helping facilitators with technical issues and keeping conversations on track. (Full disclosure, I’m part of the planning team for this forum and others). A nice summary of the forum is available here and I posted photos and updates to my twitter. Rather than rehash the importance of public input in science-related decision-making or how the forum could be improved, I thought I’d reflect a bit on the conversations I heard. To care about democracy is to care about conversations on shared and differing values. This forum provides a nice example of why.

The conversations at each table impressed me. Participants, who broadly represented the Boston area’s demographics, took the topic seriously even as they talked about fictionalized communities with names like Rivertown. The Museum of Science worked hard to recruit participants who otherwise might not have the opportunity to contribute to these policy discussions. While some environmental groups were represented, they made up only a small portion of the audience. Participants drew on their own experiences with flooding, their own assessments of current development in flood-prone areas, and their knowledge of what strategies have worked elsewhere. They argued their points using data available to them (each table had a computer that visualized the potential impacts of policy choices) and in terms of who might be affected and how. Most encouraging to me, some participants recognized that their differences in opinion were not based on who had found the ‘right’ answer for dealing with sea level rise or extreme precipitation. Rather, they viewed their different preferences as related to what they valued. Some valued saving as much land as possible from the impacts of sea level rise. Others were concerned about key infrastructure like power plants. Yet others focused on impacts to those who lived on or made a living on the coast. Participants acknowledged other perspectives and ideas, saying things like, “I see where you’re coming from,” but still laid out what was important to them.

At a time in which every major public decision boils down to esoteric assessments of impacts far beyond the ability of most of us to comprehend, I found the tone of the forum’s conversations refreshing. While data was available to participants, the conversation still centered on something we can all relate to: what we care about.

The forum was the first of eight to take place across the country. Stay tuned over the next few months for updates and check out ASU’s Consortium for Science, Policy & Outcomes (CSPO) and the Expert and Citizen Assessment of Science and Technology (ECAST) network for more forum news. For another cool case study, check out the results of the public forum on NASA’s Asteroid Initiative.

– Nich Weller

Worried about science funding? Brush up on policy and politics.

Scientists are often wary of engaging with the messy business of politics for a variety of reasons. But their understanding of and participation in the policy-making—and hence political—process is necessary if they wish to provide their important perspectives on some of our most critical and contentious issues.

This past March, a group of scientists from three continents published “Empirically derived guidance for social scientists to influence environmental policy” in the journal PLoS One. Its authors combined 348 years of cumulative experience to guide social scientists on how to influence science policy. They placed at the top of their list “acquire policy acumen.” We thought this an important task for any scientist, including ourselves, so we put together a policy primer. (And this allows us to return to a previous footnote regarding the myth of capital-P policy). We’ll walk through the difference between policy and politics and a few real and imagined examples of both as they relate to science policy.

Policy vs. Politics

As noted by many scholars and practitioners (e.g., John W. Kingdon, Roger A. Pielke, Jr.), there is a distinction between politics and policy. A policy is a decision; policies define a path forward or a course of action for a government or other institution. In the United States government, policies are often crafted and edited by groups of politicians. Therefore, policy is often the result of politics, the collective process of negotiation and compromise among politicians, lobbyists, agencies, and citizens.

To illustrate these definitions, let’s walk through a (relevant!) example: the policy and politics of the science budget.

Science Budget “Policy”

You may have seen this pie-chart from the American Association for the Advancement of Science (AAAS), or something similar to it, illustrating annual federal research and development (R&D) funding across an alphabet soup of different federal agencies (NIH, NSF, NASA, etc.). The “policy” here seems obvious: “The Science Budget.” Well, surprise! There is no such policy. (We should note that AAAS certainly understands that their pie-chart is a construct, as evidenced by a very informative “Budget Process 101” webpage; but this visualization of the science budget is still widely misread as a depiction of a single, unified funding policy.)

Such visualizations of the science budget oversimplify a policy-making process that is actually quite complex. They imply that the science budget is decided as a whole, when really it’s decided in many pieces by myriad players. In fact, the number of players and the number and nature of the complex interactions among them could be the subject of an entire thesis project; we’ll only scratch the surface.

The first, obvious key players are the President and his administration. Presidentially appointed executives, directors, and secretaries go back and forth with the President over the best funding outcome for their respective agencies. (Typically, everyone wants a funding increase over the previous fiscal year.) Some of these agencies do not spend any money at all on science or R&D; for others, research may comprise a large portion of their budget. But it’s important to keep in mind that R&D is a small fraction of the overall federal budget—3.5 percent in 2016. The Office of Management and Budget is the ultimate broker of these executive branch negotiations; they know the budgetary constraints of federal revenues and outlays, and impose them.

The other set of obvious players is the Congress, particularly the elected representatives who sit on Appropriations Committees in the House or Senate. Although the President submits his budget to Congress, it is ultimately up to these multiple committees and their members to decide where the funds will flow. More often than not, they have a different idea of how money should be disbursed (even among themselves; for example, see comments on President Trump’s so-called “skinny budget” from House Speaker Paul Ryan and Senate Majority Leader Mitch McConnell). In addition, there are countless lobbyists, advocates, and institutions that weigh in, using whatever leverage they have, to seek advantages for their constituents.

So rather than “The Science Budget,” the policy in the case of science funding consists of an appropriations bill that is underwritten by scores of negotiations, proposals, requests, and other policies (such as executive or secretarial orders) from all these different sources.

Science Budget “Politics”

A mess of players have skin in the game. So of course there are science budget politics. Perhaps most visibly, the executive branch will fight with Congress over appropriations choices that represent policy priorities. (The American Institute of Physics releases tables and charts illustrating this phenomenon. See an example for the U.S. Geological Survey here.) But it goes far deeper. For example, outside of the expected discourse between the White House and the executive agencies, the agencies have liaisons to Congress: they know who really holds the purse strings. Lobbyists and other advocates also hover about, challenging agencies, representatives, Congressional staffers, anyone with ears, to prioritize their pet issues. For instance, disease advocates, healthcare providers, and biomedical researchers are champions of National Institutes of Health funding, and enjoy strong public support that gives them bargaining power.

Within each of these entities there is a lot of politicking. On Capitol Hill, for example, there are negotiations between the two houses of Congress, among the members of each chamber, among the members of each relevant committee, and even among the members of committees that oversee the non-budgetary aspects of executive agencies.

No Capital-P Policy

The major takeaway from our example of federal funding for science is that there is no one policy or politics that defines the federal science budget. Laws are layered, decision makers are ideologically and culturally diverse, and our government is designed as a heterodoxy. No single group or idea continuously prevails, leading to diversity and competition for resources across time, even in science-budget politics. Organizations like AAAS create pie charts and such showing a single budget representing a single policy, but these are more descriptive-analytical constructs than reflections of policy decisions.

Likewise, many writers, bloggers, and scientists have argued that the Trump administration is waging a war on science by defunding important science agencies and institutions. This implies that there is a single political conflict between science and Trump that plays out in the budget. There’s not. As we showed above, the process of funding science across the federal government involves thousands of moving pieces and lots of negotiation. Our insistence that there’s no single capital-P science-budget policy might come across as overly semantic. But this point is critical to 1) shaping future science-budget policy, and 2) understanding that politics shouldn’t be considered a bad word in science. Addressing science-budget policy as capital-P policy masks the processes that make it and thus misdirects conversations about and efforts to change science-budget policy. Representing science-budget policy, or any form of science policy, as multiple and diverse, on the other hand, better represents the political process and allows targeted conversations and efforts to be successful.

What if There Was a Capital-P Policy for Science?

Let’s take another example, but this time a hypothetical one: what if there was one budget for federally-funded science that was passed in a single bill? Let’s call it the Science Budget Bill of 2017. The debates and political theatre around issues like climate change or ideological objections to some social sciences would no longer be isolated to individual committees on the topic or related bills and agencies. Those partisan issues would take center stage in debates on the Science Budget Bill of 2017, holding up action on the vast majority of federally funded research that does have bipartisan support.

There’s certainly precedent for this type of bill. The Farm Bill agglomerates agricultural policies, crop insurance and other safety nets for farmers, and nutrition programs like food stamps. In one sense, lumping liberal-supported food stamps with conservative-supported industry initiatives is smart politics: considering them together could force each side to make concessions and pass the bill. But it also opens up the possibility that one side holds up passage on ideological grounds regarding one portion of the bill, preventing action on any of the numerous programs in the larger bill. The latest iteration of the Farm Bill, passed in 2014, faced this problem due to pressure from conservative groups. Congress passed the legislation two years after the prior farm bill had lapsed. Our hypothetical Science Budget Bill of 2017 could face similar conflict and delay.

Beyond the political logistics of getting an omnibus science bill passed, a single policy and budget for science would disconnect science from the various public goods it is funded to support. Accountability and responsiveness to public concerns would be lost because all federally funded science would be funded simply because it is science and it is in the science bill, not because of its (potential) links to public goods. A recently passed bill intended to modernize and promote technology transfer in NOAA’s forecasting endeavors, for example, would be reframed as a few budget lines for forecasting science, a small part of a massive general science bill. But this might undermine the public good this funding is intended to promote: protecting lives and property from weather-related harm via better forecasting.

The Necessity of Politics

In our hypothetical scenario, then, politics poses a huge risk to all federally funded science by hamstringing the process by which it is funded and disconnecting science from public goods. In reality, politics are critical to federally funded science, hence the group of scientists we cited above suggests scientists brush up on policy!

There are at least three important reasons for scientists and other citizens to engage in the messy, complex political machinations behind any science-related policy. First, politics are necessary just to reach decisions about what science to fund. There are seemingly unlimited types of research we could fund, but the United States has a limited budget and the government must make decisions about what it should fund. Second, politics in the context of a fragmented science policymaking landscape prevents any one person, group, or idea from overtly dominating the process. This is the wisdom behind the separation of powers and federalism dating back to the country’s founding. Third, political debate around science ensures that questions of public costs and benefits can influence decision making. Do the costs of certain scientific or technological discoveries outweigh potential benefits? This is a question of values and politics, not methods and theory.

We publicly fund science in the United States for more than the awe of discovery. Politicians, agencies, and scientists justify science in terms of the public benefits connected to it. And if we are talking about public money spent for public benefits with potential public costs, we are talking about democratic politics. Scientists and those who care about publicly funded science definitely need to “acquire policy acumen,” in the words of the august group cited above, if they wish to influence the science budget and science policy more broadly.

Trump’s budget and science: Time for a rationale refresh?

Science advocates fall back on a tired argument for the importance of federal investment in research.

Trump’s budget proposal received a lot of criticism from science advocates1. Rather than discuss the proposal, I’d like to comment on reactions to the proposal. And yes, I know it’s a bit late but grad school gets in the way of the 24-hour news cycle2. First, let’s set aside that the president’s budget proposal is remarkable in its cuts to non-defense discretionary programs, that Trump’s military proposals might be strategically questionable, or that NSF is ominously (luckily?) absent from the proposal3. Let’s also get some disclaimers out of the way just to make clear that Michelle and I do, in fact, think Trump’s proposal is less than desirable:

  • A lot of what we think are valuable programs could be lost, some of which actually align with Trump’s ‘America First,’ jobs, and national security stances. The Advanced Research Projects Agency-Energy (ARPA-E), for example, relies on a proven structure from the Department of Defense to achieve both strategic goals and potential commercial spillover. Other examples are the coastal programs run by the National Oceanic and Atmospheric Administration (NOAA), which provide valuable information to the fishing and shellfishing industries and bolster knowledge of and preparedness for coastal hazards.
  • If enacted, Trump’s cuts could be a big ‘branding’ loss, which is surprising given that he is all about branding. The U.S. is known for its scientific prowess, attracting envy (and lots of talent) from around the world. Perception matters, but only if we care about our perceived standing and what that affords us, something Trump understands well4.
  • Finally, huge cuts to science funding could mean the U.S. loses its influence over science and technology related decision-making beyond our borders. I’d rather have developments like human modification and AI happening in a context where democratic accountability for such research is at least a possibility (looking at you, China and Silicon Valley).

Okay, I’ve put forth our credentials as card-carrying science advocates; now I can be a little critical, with the intent of exploring how we might further the conversation about science funding given Trump’s proposal. But first, the criticism: Too many responses to Trump’s budget blueprint and its impacts on federally funded science rely on dubious connections between research and public value to justify funding.

A common refrain in op-eds and reactions to Trump’s blueprint is:

“Funding cuts to science mean not funding progress/innovation/the economy. Lives will be lost, cures left undiscovered, and ‘the next big discovery/breakthrough/thing’ will never arise (or at least won’t be American) because private entities won’t fund the research necessary to reach these outcomes.”5 (see here and here, for example).

In other words, if we invest in science, we invest in knowledge that leads to technologies, cures for disease, and subsequent social goals like better health, economic growth, and the like. This rationale has guided U.S. science policy since Vannevar Bush wrote his landmark “Science: The Endless Frontier,” establishing modern U.S. science policy in 1945. A key part of this rationale is that the details are fuzzy and unpredictable: Water the seeds of research but don’t worry about which seeds sprout because the process is unpredictable. I point out the centrality of unpredictability to this rationale not because it is reflected in the op-eds and reactions to Trump’s proposal, but because many agencies that fund research are structured with this rationale in mind. The National Science Foundation and the National Institutes of Health, for example, rely on a science-centric model of funding in which proposals are evaluated through expert review focused mostly on scientific grounds.

From a historical perspective, this rationale is problematic: It assumes that we cannot pursue certain outcomes without the basic science in hand, despite the histories of many important breakthroughs that have proved otherwise (e.g., Bell Labs’ pursuit of miniaturized electronics giving rise to the discovery and science of semiconductor properties and the transistor).

From a public accountability perspective, this rationale justifies disparity between predicted or promised public goods and actual outcomes of research: One can argue that we should invest in research given the chance that important applications may result. It’s okay that we cannot guarantee outcomes of public value. And a research program does not have to appear to be useful (or be guided toward application) in order to have potentially tremendous societal benefits.  

But mounting evidence is eroding the sanctity of a hands-off approach to science policy based on this rationale. Large portions of published work in psychology and biomedicine are not replicable, and thus of dubious use for achieving the public goods underwriting funding decisions. Years of investment by the National Institutes of Health have had minimal (if any) impact on our country’s health. Why, then, do science advocates fall back on this rationale in the face of (potential) funding cuts?

Why wouldn’t they? The rationale has ‘worked’ politically since WWII, fueling explosive growth of federal research funding throughout the 1950s and 1960s. More recently this argument continued to win support for federal research: From 1996 to 2003, non-defense research funding increased by about $20 billion, or about 48%. As the old adage goes, “If it ain’t broke don’t fix it.” But Trump’s budget blueprint is evidence that this is a failing political argument, at least for his supporters6. And if you agree with our premise, the rationale is conceptually broken as well.

So what to do? I’ll focus on two intertwined challenges: (1) that of dealing with looming potential cuts (Trump’s proposal means cuts of some sort might be inevitable), and (2) that of drawing up a new rationale for federal support of science and research. To start with, it behooves scientists and science advocates to reflect on the role of science in society, including the fact that federal investment in science is public money and is thus tied to some expectation of public benefit, from both conservative and liberal perspectives. Scientists should remain receptive to the restructuring of research agencies given budget cuts. Budget cuts suck, sure, but they are also opportunities to reframe outcomes and change research structures accordingly. I’m not implying that the consequences of cuts can be negated via organizational changes that promote efficiency, etc. Rather, I’m implying that substantial cuts provide an opportunity to address the very shortcoming discussed above: a disconnect between predicted or promised public goods and actual outcomes of investments in research. I’m also not calling for purely ‘applied’ research: there are fundamental scientific questions that might be standing in the way of achieving some public good.

A good start might be looking to federally funded research programs that perform well at linking research to public goods, or at least to specific outcomes. The Department of Defense’s Defense Advanced Research Projects Agency (DARPA), for example, has long connected research to specific defense-related outcomes. Indeed, ARPA-E, the Department of Energy’s Advanced Research Projects Agency, was based on DARPA’s successful institutional model. While DARPA-like arrangements are not necessarily appropriate for questions of basic science, they provide a complementary arrangement to basic science programs that helps steer research towards specific outcomes or strategic needs. Critically, DARPA and ARPA-E actively avoid continuing to fund projects or project areas that become entrenched and make little progress towards outcomes, thus avoiding the disparities between research and public goods present in other federal research programs.

Other federal scientific programs well connected to public goods and outcomes include hurricane tracking and warning at NOAA, NOAA’s Office of Water Prediction, and the US Geological Survey’s (USGS) work on natural hazards. Just this week, Congress passed the Weather Research and Forecasting Innovation Act of 2017, which awaits President Trump’s signature. While the administrative structure of the act will be determined by NOAA, it provides a potential case study in mission-oriented science that 1) supports basic research in atmospheric science, 2) supports social science related to forecasting, and 3) calls for technology transfer efforts within and outside of the federal government. Tellingly, the act received bipartisan support in the House and Senate.

Are there problems with these models? Sure. For starters, we might question how to prioritize outcomes for such agencies. The aforementioned act’s focus on “weather data, modeling, computing, forecasting, and warnings for the protection of life and property and for the enhancement of the national economy”7 is a politically ‘easy’ priority. On the other hand, ARPA-E’s focus on energy as it relates to the environment and an entrenched fossil fuel industry is more contentious, which might explain why it’s on the Trump administration’s chopping block. But these models nonetheless provide proven alternatives to current structures at other federal research agencies. Tied to restructuring is a need to update rationales for federal support of research to reflect the success or failure of different institutional models.

I’m sure there are some holes in my argument. For example, one might question whether conservatives or Trump will accept any rationale or structure for some types of research. But perceived unwillingness is a poor reason to ignore existing shortcomings of federally funded research programs. And ignoring shortcomings only risks exacerbating a partisan fight over science that could be disastrous for science and democracy. At the very least, the fact that Trump proposed substantial cuts to science should lead us to reevaluate the relation between federally-funded science and public goods and outcomes.

-Nich Weller

Footnotes

  1. If you’re like me and don’t actually read federal documents cited in popular media, you should take a look at the budget proposal. If for no other reason, it’s called “A Budget Blueprint to Make America Great Again.”
  2. On some days it’s the other way around…
  3. Given conservative critiques of federal overreach, the Trump administration is likely happy that their proposal is remarkable in this way.
  4. The U.S. is also known for its economic and military prowess, something Trump is keen to stress. His campaign slogan and posturing about the perceived weakness of the U.S. in military and economic affairs are very much about addressing ‘branding.’
  5. Of course this is a gross generalization but the basic structure undergirds many op-eds regarding Trump’s proposal.
  6. Importantly, it’s not failing because the administration is waging a ‘war on science,’ but because federally funded science is linked to administrative overreach that conservatives have been fighting for decades (Miller, 2017. Why it’s not a war on science. Issues in Science and Technology, Spring 2017).
  7. Weather Research and Forecasting Innovation Act of 2017, Title 1, Section 101. Emphasis added.

Science as proxy politics?

In a recent article in Issues in Science and Technology, Dan Hicks, an AAAS Science Policy Fellow, argues that scientific controversies are becoming proxies for deeper political disputes and explains why this switch is problematic. It’s worth the read and relates to a few points Michelle and I made in our debut post on the March for Science.

Hicks criticizes a popular refrain on the disagreement between scientists and the public regarding issues such as genetically modified organisms (GMOs), stating that,

“…the simple explanation that the public must be ignorant of scientific facts—what STS researchers call the ‘deficit model’—misses the ways in which members of the public offer deep, substantive criticisms of those ‘facts.’”

He breaks down political, epistemological, and philosophical disputes and shows their relation to what appear at first glance to be technical conflicts about GMOs, climate change, and vaccines. And while we might quibble about the specifics of his arguments*, he brings a useful perspective during a time when the ‘deficit model’ is becoming ever more popular as a justification for political action in defense of, and inspired by, capital-S Science.

Most notably, Hicks documents the multivalent perspectives scientists and the public have regarding contentious issues like GMOs. Not only do scientists and the public disagree but scientists also disagree amongst themselves, deploying various scientific facts and underlying theories and assumptions to support their positions. Hicks calls for a reframing of the issue around what’s at stake rather than the facts or who’s right. Doing so puts us “in a better position to design policy compromises that address the concerns of both sides.” This is a critical point: there’s little room for compromise when conflict is centered on matters of truth, namely because notions of truth are deeply held. Just think about it: Would you bargain over truth? Shifting the locus of conflict to what’s at stake creates a situation where bargaining is at least a possibility. 

— Nich Weller

Footnotes:

*His assertion that the connection between fossil fuel interests and climate skepticism is unacknowledged in conversations about climate science seems suspect, for example.

I’m Running for Office!

(Okay, not really. But here’s why it’s a useful notion and maybe even a good idea.)

In our last post, Nich and I told you why we are not marching for science. So if we aren’t marching, how do we plan on voicing our concerns over, for example, the release of the Presidential budget, which proposes funding cuts for many agencies that sponsor research, such as the National Institutes of Health, the Environmental Protection Agency, and the Department of Energy?*

Well, STEM friends, have you ever considered running for office? It’s a proposition my family members mock me with anytime I get heated about politics. But beyond student government, I’ve never seriously considered myself to be politician material, perhaps because of potent social norms about what it takes (money and a law degree). Also, abandoning my research agenda for a few years in office would halt my research productivity (publications and grants) and thus jeopardize long-term success in my field. Fellow scientists, you might relate.

And yet there’s been a recent upsurge in scientists running or planning to run for office, and they’re catching national attention. This is in part due to the efforts of 314 Action, a nonprofit founded in January 2017 to ‘champion’ candidates for elected office who have a STEM background. They also boast a broader campaign to improve STEM education and promote evidence-based policy. (Oh, and yes, they are named for pi, because it’s science and “it’s everywhere.”) Since January, 314 Action has had over 3000 scientists and STEM professionals sign up for campaign support at all scales of government, from the local school board to the U.S. Congress (with sights set on the 2018 election cycle).

Shaughnessy Naughton, a chemist and former Democratic congressional candidate, is the founding board president. She struggled during her 2014 congressional run, discovering that donors and political support were limited for a non-traditional, outsider candidate. Hoping to help others avoid the same challenges, Naughton believes 314 Action could level the playing field for scientists and STEM professionals running for office by providing assistance and training in campaign organization, fundraising, political platforms, and communications.

But why should we care whether scientists can successfully run for office? Besides the fact that it’s their right to, here’s what Naughton herself has to say in an interview with the Washington Post:

“There’s nothing in our Constitution that says we can only be governed by attorneys…Especially now, we need people with scientific backgrounds that are used to looking at the facts and forming an opinion based on the facts.”

Naughton believes that electing more scientists will lead to more evidence-based policy (Red flag!). She argues that the attorneys have done enough damage, with their refusal to acknowledge and act on climate change issues, for example. With the help of 314 Action, it’s time for scientific experts to take the policy-making wheel; enough with the ineffective backseat-driving.

In the 2018 election cycle, the 314 Action team will target the House Committee on Science, Space, and Technology, hoping to replace three climate-change-denying Republican incumbents, including Committee Chair Rep. Lamar Smith (R-TX). 314 Action will only support Democratic candidates for office (Red flag #2!). Using climate change denial as a litmus test for anti-science sentiment, Naughton argues that Democrats are more likely to address scientific issues like climate change, while Republicans tend to be climate-change deniers.

Did you catch the red flags?

First, contrary to Naughton’s claim, having more scientists in office will not inevitably lead to more evidence-based policy. Naughton and the 314 Action group assume that Republican Congress members don’t understand science, and/or that they don’t respect the facts. But members of Congress have plenty of scientific information at their fingertips (for example, reports by the National Academies) with the skilled staff to help them wield it. There is also myriad non-staff support on the Hill, such as AAAS Fellows, think tanks, and the Union of Concerned Scientists. And sure, a politician with a STEM background might rely a little less on these resources to translate new research findings,** but whether you understand science personally or with some help, it is still not the only knowledge that guides policy making, nor should it be. Suffice it to say a lot of ink has been spilled over what can go wrong when governments ignore societal norms, constituent experiences, and cultural contexts and customs in favor of governing under strictly scientific principles.

Okay, moving on, red flag #2… 314 Action’s partisanship is problematic because it reinforces the view that science is what Arizona State University Professor Dan Sarewitz calls “Democratic politics by another name.” And by only supporting Democrats for office, 314 Action implies that only Democrats are trustworthy arbiters of science for policy, and risks alienating Republican leaders and constituents from science. In fact, there are many scientific research programs and agencies (e.g., NASA, the NIH, the Cancer Moonshot) that enjoy Republican and bipartisan support in Congress. Alienating Republicans could undermine such support and delegitimize scientific knowledge at the decision-making table as science becomes a political instrument of Democrats as opposed to one of many bipartisan policy-making tools.

And yet, I do imagine a scenario in which electing scientists into office is a good idea, particularly when the experience is viewed less as a science lesson for politicians, and more as a civics lesson for scientists.

Enter Kate Knuth, a conservation biologist and former member of the Minnesota House of Representatives (2006-2012).*** It’s worth quoting her here (from The Atlantic):

“I never felt like I knew more about how people were thinking about the problems in their community, what they wanted from government, and their hopes and dreams for the future. Is that scientific information? No. Is it vetted through peer review? No. But it was invaluable. Scientists need to learn and appreciate the value of other ways of knowing about how the world works.”

Kate discovered that science is not the only type of knowledge that guides policy-making. Further, through the grueling legwork of her campaign, Knuth realized that, “In politics, people first want to know that you care about them and their problems…” No less important than awareness of the evidence is awareness of experiences. Who are your constituents? What do they need? What are their desires? This is in stark contrast to the scientist who considers their curiosity the master guide to selecting research problems.

Granted that they do the necessary groundwork to become candidates, scientist-politicians will find themselves confronting many Americans who feel left out of the promise of government investments in science and technology, or who frankly may prioritize other concerns over science (like a coal miner, who may know about climate change but also feels the more urgent need to put food on the table or to align his or her beliefs with community contexts). When you engage with constituents (assuming they are not all ‘academic elites’ and Silicon Valley millionaires), you might discover that scientists are not defining research problems that matter for people’s day-to-day lives, resulting in tax-funded research with little direct relevance to the general public. And you’ll find that you need more than evidence to make good policy; you need to rely on and understand constituent values, preferences, and concerns, all imperfectly collected through voting, town halls, and public surveys.

Scientists running for and elected to office could share these lessons about the nuances of policy-making that are too often unknown to or ignored by the scientific community. Forget about p-values; what do your constituents need and desire?

-Michelle Sullivan

Footnotes:

*I acknowledge that this is a budget proposal and that Congress ultimately controls the purse strings, and yet this document does indicate policy and administrative priorities, and for that reason, it should not be ignored. Also, a footnote to this footnote: stay tuned for a related, upcoming blog post!

**Though I doubt it, given the sheer volume and high specificity of disciplinary scientific studies on top of the busy schedule of a politician.

***It might be interesting to perform a study of scientists who have become politicians. Where do they come from? What have they learned? Have their habits, perspectives, and intellectual lenses changed? Perhaps it could be a before, during, and after snapshot. I will consider this for my next extracurricular project titled, “Just another thing to distract me from my dissertation.”

Why We aren’t Marching for Science

To be completely honest, we do feel guilty for not marching–we would just feel guiltier if we did.

Since we’ve already called out the March for Science, and some of the commentary surrounding it, we figured this would be a good inaugural topic. There’s no shortage of support, criticism, or journalistic assessment for the movement. Advocates raise numerous motivations for participating including standing up to misinformation, advocating for funding, and “reinstating evidence-based decision making” in government. Critics fear the politicization of science, and regret the possibility that participating scientists could appear to be self-interested political actors.

Setting aside the conflation of the March for Science with other political objectives, the March for Science debate swirls around whether it’s a good thing for capital-‘S’ Science and whether scientists should participate. But there are many sub-currents at play, including definitions of science, the reality (or not) of ‘evidence-based policy,’ and different conceptualizations of the role of science in policy. A healthy reflection on these sub-currents might inform judgments of the overarching debates; in particular, should we participate?

First, we argue that march organizers and would-be participants should clearly define the ‘Science’ they are marching for. The scientific method comes to mind as a unifying feature of a capital-‘S’ Science, but it is often more a normative goal than a reflection of science in practice. Karl Popper’s premise for a unifying theory of science was falsifiability: science cannot prove a hypothesis to be true; it can only demonstrate it to be false. Yet Popper’s ideal falls short in practice. We tend not to throw out our best theories in the face of empirical evidence showing them wrong; we wait until we have a better theory. Thomas Kuhn highlighted this phenomenon in his groundbreaking “The Structure of Scientific Revolutions.” Perhaps we don’t have ‘one’ Science but instead have many sciences, all updating theories not as they are proven wrong but as better theories and ideas develop.

Further, there are myriad scientific disciplines differentiated by objects of study, scales of investigation, and supporting institutions. For example, electrons, the study objects of many chemists and physicists, are a great deal more mathematically predictable than human cells, the objects of some biologists and medical researchers. And studying human cells is quite different from investigating entire cultural and social human-systems, the research subjects of anthropologists. Layered on top of the multiplicity of science is the diversity of scientists themselves who, despite being rather politically homogenous, still have unique political, social, and scientific ideals.

Okay, so you may be wondering, what does the non-monolithic nature of Science have to do with the decision to march? Well actually, it matters quite a bit if we’re marching under the guise of “Science for Policy!” These differences have implications for how we (decision-makers included) interpret research outcomes. For example, perpetual “physics envy” has many social scientists fighting to prove the statistical significance of their research outcomes when, of course, people are not electrons. In this case, metrics and statistics cloud rather than clarify, particularly when they neglect qualitative explorations of the phenomena and of assumptions inherent to statistical analyses. These differences also hinder our ability to measure scientific excellence, which has implications for “Policy for Science!” as well. If our metrics are biased toward quantitative sciences, which potentially important qualitative research programs are going without support? And which research questions are going unanswered, or worse, being unrealistically explored in the realm of mathematics, where qualitative, descriptive explanations of phenomena are more suitable, and often more illuminating?*

Which leads us to “evidence-based policies in the public interest,” an ideal of march advocates emphasized in the March for Science mission statement. Let’s set aside that defining the public interest is no small feat and break down what ‘evidence-based policy’ is and its role in current policy-making. Most people would agree that evidence-based policy is valuable and desirable. Most can agree that we should govern ourselves in some empirical reality. But a focus on science as the sole arbiter of evidence can be counterproductive, particularly when dominant governance structures for science-related issues (e.g., environmental regulation, health) emphasize a ‘knowledge-first’ approach. A knowledge-first approach emphasizes understanding a problem before proposing solutions. This is problematic for two reasons: First, it demands absolute knowledge of a problem, making dispute over evidence the site of political negotiations, in turn leading to lengthy and expensive litigation, little improvement in the situation, and corrosion of the very notion of objective evidence. Second, we as humans tend not to stop doing something until we have something that does the job better (even with scientific theories!). Sarewitz and colleagues lay out why a knowledge-first approach was (and continues to be) detrimental in environmental policy. They also detail ways we might restructure policy to focus on solutions, including an innovative program in Massachusetts that emphasizes finding substitutes for commercially important but harmful chemicals, rather than an outright ban of them. Recognizing 1) that science can tell us about the world but not what we ought to do, 2) that a knowledge-first approach to policy is misguided, and 3) that the role of science in policy can be more than simply establishing understanding of a problem should lead us to think critically about calls for evidence-based policies or reframe them in terms of a search for better alternatives.

This all sounds very academic at this point and perhaps, like academics often do, we’ve lost sight of the decision tied to our analysis: should we participate in the March for Science? What is the potential for positive or negative impacts of the march? This is not a simple choice, and we have no prescription for action. We just ask that one reflects on the nuances; can you justify your decision in light of constructive criticism?

After muddling through, we are set on the political long game, namely making science a contributor to policy, and not an alternative venue for politics (if you accept our rationale). Scientists voicing their values are critical to the institutions of science and our democracy, but we ought to reflect on where such advocacy happens and on what scales. We will not participate though we share concern about the Trump administration’s approach to science policy. To be completely honest, we do feel guilty for not marching–we would just feel guiltier if we did.

 

*As a corollary, what about capital-‘P,’ Policy? Also fantasy. Laws are layered, decision-makers are ideologically and culturally diverse, and our government is designed as a heterodoxy. No single idea or ideology can continuously prevail, leading to diversity and competition for resources across time, even in science policy.  

 

Some extra resources:

http://theconversation.com/to-tackle-the-post-truth-world-science-must-reform-itself-70455

http://www.latimes.com/science/sciencenow/la-sci-scientists-politics-trump-20170209-story.html

http://www.nature.com/news/researchers-should-reach-beyond-the-science-bubble-1.21514