More than meets the eye? The value of Rand Paul’s BASIC Research Act

Rand Paul deserves credit for seeking to integrate more perspectives into the research funding process, but his vaguely defined taxpayer advocate would fail to better connect research to desired societal outcomes.


The newsbeat is, well, beating Michelle and me thanks to proposal defenses, holidays, and our own involvement in politics around the latest tax bill. But we’d like to turn back the news cycle to October and look at a proposal from Senator Rand Paul related to research funding decisions at federal agencies. The BASIC Research Act would require a “taxpayer advocate” on research proposal review panels, the groups of scientists who decide which proposals to fund. Senator Paul deserves credit for seeking to integrate more perspectives into the research funding process, but his vaguely defined taxpayer advocate would fail to better connect research to desired societal outcomes.

Congress has a history of skewering federal agencies and scientists for wasting taxpayer money on research that puts shrimp on tiny treadmills, asks questions of single people on speed dates, studies snail sex1, or examines farm-based tourism in rural China2. Senator Paul regularly complains about NSF, often through poorly written Waste Reports that attract the ire of science advocates. His bill invited similar reactions. For example, one reporter accused Senator Paul of politicizing or misunderstanding science because the bill proposes interventions in the peer review process that federal agencies, such as the National Science Foundation (NSF), use to dole out research money. At the hearing where the bill was introduced, Senator Paul justified the bill as a mechanism to weed out “bad science,” to create a more transparent funding system, and to fight the perceived cronyism of scientists approving funding for the work of friends and colleagues. The bill covers a lot of ground, including requirements to make grant applications to federal research agencies public. The most interesting pieces of the bill, however, pertain to proposal review panels. If passed, the bill would forbid researchers from suggesting potential reviewers for their applications and would require agencies to add two people to their review panels: 1) an expert from outside the discipline and outside academia; and 2) someone to serve as a taxpayer advocate.

Senator Paul’s bill centers on long-standing debates about which projects should receive taxpayer funding and who should make those funding decisions. Vannevar Bush, the lauded architect of postwar American science policy, and Senator Harley Kilgore of West Virginia engaged these questions 70 years ago at the inception of the National Science Foundation3. Senator Kilgore argued for increased federal investment in social sciences, more equity in the geography of federal science investment4, and a shift away from military research. Bush parlayed his success managing the wartime U.S. Office of Scientific Research and Development, which led the Manhattan Project, into a different vision for federal investment in research, one that placed scientific excellence as judged by (mostly physical) scientists at the forefront of funding decisions. Today’s NSF largely resembles Bush’s ideal: Scientists make decisions about which proposals merit funding within individual disciplines based on the scientific merit of those proposals (as opposed to other criteria, like regional economic development or potential technological applications). While Senator Paul’s bill is no present-day incarnation of Kilgore’s science policy, it nonetheless attempts to bring non-scientists into the fold of science governance with the aim of improving science5; Bush might be appalled.

That bit of historical context in place, let’s dig through Senator Paul’s proposal for a taxpayer advocate on grant review panels. Frankly, I agree with the spirit of this proposal, though not with Senator Paul’s word choice or his analysis of the topic. Precedent for public involvement in federal decision-making exists in many agencies, often taking the shape of required public comment periods6. The myth that only scientists can govern science, a myth Kilgore challenged, leaves no place for such external input in science funding decisions. But bringing non-scientists to the table could, in theory, better connect federal investment in research to the ‘goods’ scientists and science advocates parade about to justify that investment, such as economic development, new products, new medicines, better health, happier people, and so on. Non-scientists provide broader perspectives, ideas, and critiques of research decisions, forging research robust to conceptual and practical criticism. For example, non-scientists might question troubled research mechanisms, like studies of brain health that rely on animal models shown to be disconnected from, even irrelevant to, processes in the human brain. And by nature of being external to ingrained research pathways, non-scientists could offer constructive insight and critique where institutional inertia has taken hold7.

That said, Senator Paul’s proposal contains many loose ends that cloud assessment of the bill: Who should taxpayer advocates be, and how do agencies select them? Are taxpayer advocates on the federal payroll, or is this a service expected of people external to the federal government? Are they paid at all (reviewers for NSF, for example, are generally not paid)? What, exactly, does it mean to advocate on behalf of the taxpayer? Is the taxpayer advocate just Rand Paul (we kid…)? Different answers to these questions could lead to very different outcomes. A member of the public could face a steep learning curve to understand different research fields. Yet having someone internal to federal agencies might fail to bring new perspectives to the table or could amount to a rubber stamp on projects8. Further, structural issues with Senator Paul’s proposal pose more questions. Is one person sufficient for representing taxpayer interests on a review panel? Is input external to the scientific community most useful at the individual grant level? The bill’s language fails to clarify any of these questions, meaning we might see little change in the role of public values in research funding decisions.

Let’s assume that the taxpayer advocate is someone disconnected from the scientific enterprise: they’re a member of broader society, which ostensibly sounds like a win for both the taxpayer and for research (at least in our view). But the involvement of non-experts brings up more issues.

Non-expert panelists could be too intimidated by their lack of topical expertise to question the rest of the panel9. Perhaps the logistics of getting public members to such review panels would mean the ‘usual suspects’ show up: those already highly engaged in debates around science, like people from science advocacy organizations. Or perhaps other forms of political power would lead to competition for seats on panels, turning them into politically contested arenas; imagine a lobbying firm or an interest group pushing hard to get ‘friendly’ public members on the panels through one mechanism or another. Further, selecting members of the public to sit in as reviewers creates layer upon layer of difficult decisions about recruitment and preparation. Simply put, involving the public in this way is a big challenge, particularly for agencies without experience in this type of engagement10.

Given these questions, Senator Paul’s proposal for a taxpayer advocate would do little to clear the air on research funding decisions. Whatever his intentions, Senator Paul’s bill avoids the important, broader question of how to better connect investments in research to public goods and outcomes used to justify research, despite nods to this ideal in his hearing. Perhaps he and other legislators have avoided (and continue to steer clear of) the question because it’s a hard nut to crack and would require substantial institutional changes at federal agencies.

But what can we take away from Senator Paul’s proposal that might be effective? And how could we build on and improve it?

Having diverse input as to which research programs merit federal investment is a laudable goal. Rather than involve the public on individual proposal review panels, non-scientists from the public could be brought into decision-making about federal research earlier in the funding process to set larger goals for a particular research program. At this stage, non-experts can interface with scientists and determine priorities for research investments without digging into the nitty-gritty of a particular method or theory for every proposal considered. Further, public involvement at this level allows for conversations about the ethical and political ramifications of research. Research on public participation in technical decision-making shows that non-expert people from all walks of life can thoughtfully and constructively contribute to technical decisions when given the opportunity. While large-scale involvement might be unduly expensive for any given research program, smaller panels (5-10 people) could spend a few days working with program managers to identify potential issues and societal outcomes and help set research priorities accordingly.

Senator Paul should look elsewhere in the federal government for inspiration; at DARPA, for example, research funding decisions revolve around specific problems and empowered program managers, not peer-review panels (or perhaps he should wait for Michelle’s dissertation on the National Park Service to be published). Alternatively, Congress could ask federal agencies to do a bit of experimentation. Invite non-scientists to serve on a few review panels, even if only in an advisory role. Create panels for various research initiatives and ask non-scientists, or better yet diverse members of the public, to help set research agendas and write calls for proposals. In short, play around with different mechanisms appropriate to the agency. Several agencies, including USGS, EPA, NSF, and NASA, support external research, each with distinct institutional shapes and sizes. A little experimentation could go a long way toward creating new ways for non-scientists to contribute their expertise and values to funding decisions.

Would taxpayer advocates on review panels better connect research to the outcomes we use to justify funding research? Without more detail, we’re stuck with an ambiguous “maybe”. Perhaps Michelle and I should apply for NSF funding to test citizen review panels for NSF research programs (no really, that would be super cool to study). Or maybe that proposal would end up in Senator Paul’s next Waste Report.

– Nich Weller


  1. Michelle was part of this research team as an undergraduate.
  2. The National Science Foundation is used to congressional criticism and does its best to encourage researchers to communicate public values. When I received a small NSF grant for the project examining Chinese farm-based tourism linked above (a relic of a past dissertation avenue), NSF program officers encouraged me to clearly articulate the public value of my proposal in the summary for NSF’s website, ostensibly to keep congressional threats at bay. In our opinion, this does little to promote public value in research but that’s a topic for another day.
  3. See Gregg Pascal Zachary’s Endless Frontier: Vannevar Bush, Engineer of the American Century for more on Kilgore and Bush’s shared ideas and disputes.
  4. Notably, West Virginia lacked, and still lacks, major sites of federal science investment like national labs.
  5. Senator Paul sees external input, which Bush would have detested as interference in scientific excellence, as a path to ‘good’ science, or at least a path away from ‘bad’ science (in the words of the Senator’s testimony).
  6. Federal public comment periods, however, have glaring problems as 1) existing interest groups dominate the process and 2) they require substantial time, expertise, and resources to participate in.
  7. Are broader perspectives a panacea for what ails science? Of course not. Believing so is just as detrimental as believing that non-scientists and the public have nothing relevant or wise to say about research.
  8. An assessment of NSF’s policy requiring ‘broader impact statements’ on proposals might help answer these questions.
  9. This is a very real phenomenon that I must account for in my research.
  10. Agencies could try to do this independently, without a legislative mandate, and “see what works,” which might also give them political clout if Congress passes legislation on the topic.

Revisiting the Value of Public Forums for Building Consensus and Changing Perspectives

Over the summer, Nich wrote about the ‘power of conversation,’ reflecting on a public forum in Boston that brought together public participants with facilitators and staff from the Museum of Science to discuss the challenges of sea level rise and extreme precipitation in the Boston area. The Boston forum is part of a national program funded by the National Oceanic and Atmospheric Administration (NOAA). This past September, Nich invited me to facilitate at the second forum, hosted by the Arizona Science Center with about 70 members of the public from the Phoenix metro area. Working from educational and discussion-prompting materials developed by Nich and his colleagues, participants shared their values, concerns, and questions, as well as proposed resilience strategies, all related to drought and heat. Each group was provided with real data and visuals to understand the impacts of their choices and to evaluate, and reevaluate, their resilience strategies.

In some ways, my experience reflected Nich’s experience in Boston. For instance, I similarly felt uplifted by the tone of discussion, or as one forum organizer put it, the fact that participants could “disagree without being disagreeable.” I frequently heard phrases such as, “I could see why you’d think that,” and questions like, “can you please explain why you feel that way?”

The participants at my table came from various jobs, age groups, and locations across the Valley, and likewise, they came with a range of motives. The most universal motive revolved around ownership of place, Phoenix: “Because this is my hometown.” This strong connection to and knowledge of place was evident in the afternoon exercise on resilience to extreme heat in Heattown.1 Participants moved quickly through sorting hypothetical stakeholder perspectives without much discord, though not for lack of care. I heard multiple iterations of, “We live in Phoenix so we know heat and how we feel about it. We get the ‘heat jargon’.”

Although participants shared a connection to Heattown (in their minds, Phoenix) and asked about and acknowledged the values and concerns of others with respect and genuine interest, there were still contested choices and contradicting perspectives on the best path forward for resilience. I noticed that part of the reason we were able to reach ‘consensus’ was because participants were told that they didn’t have to agree with the majority at their table; their values and choices would be reflected in their personal worksheets if not on the consensus game-board. This led me to question the value of the forum for consensus building; I witnessed thoughtful consideration of shared perspectives, but I didn’t notice much in the way of changing minds.

First, I wondered, how is the consensus building format of the forum any better than voting or majority rules? According to the U.S. Office of Personnel Management, consensus means “finding a proposal acceptable enough that all team members can support it, with no member opposing it.” But that doesn’t mean a unanimous vote or majority rule, because voting doesn’t require opposing parties to listen effectively to each other as they discuss their differences.

This is why voting often proceeds more quickly than consensus building (e.g., a show of hands vs. an eight-hour forum). For instance, at another table, forum participants resorted to voting whenever they felt their discussion was running behind the other tables or had stagnated. But despite its efficiency, voting has its downsides. Primarily, it creates losers alongside the majority’s winners, and if the losing minority feels misunderstood and unheard, distrust and disgust with the process can fester. On the other hand, consensus building takes time because participants have to know and trust each other, and in the case of the forum, learn together, to work constructively and listen effectively. The participants at my table openly commented on the arduous process of reaching consensus and believed that it demonstrated the robustness of their consensus. They felt that although some parties disagreed with the consensus, all parties were heard. But they couldn’t quite articulate why that made the consensus acceptable.

There were still majority and minority views, and there were certainly perspectives unaccounted for in the final resilience plans (even after a round of revision). Nich actually recorded data on this phenomenon that illustrates what I witnessed; we can celebrate that people were able to disagree without being disagreeable, but at my table they didn’t often change their minds about which resilience plan they thought best. One in five changed their mind for drought, and three in five chose the final plan that matched the table’s consensus. For heat, no one changed their mind, and again, three in five agreed with the table consensus. But across all tables, the data captured some compromise (with the caveat that we had a lot of missing data points): for drought, 43% of people changed their minds and 20% did not (37% were unknown: missing sheets, participants who didn’t ‘follow the rules,’ etc.). And for heat, 45% changed their minds, while 23% did not (32% unknown).
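For the curious, the bookkeeping behind percentages like these is simple. Here is a minimal sketch of how worksheet outcomes can be tallied into shares; the counts below are invented for illustration and are not the forum’s actual data:

```python
# Minimal sketch of tallying worksheet outcomes into percentages.
# The worksheet counts below are invented; they are not the forum data.
from collections import Counter

def summarize(records):
    """Return each outcome's share of all records, as a rounded percentage."""
    counts = Counter(records)
    total = len(records)
    return {outcome: round(100 * n / total) for outcome, n in counts.items()}

# Invented example: 20 worksheets from one exercise
drought_sheets = ["changed"] * 9 + ["unchanged"] * 5 + ["unknown"] * 6
print(summarize(drought_sheets))  # {'changed': 45, 'unchanged': 25, 'unknown': 30}
```

Note that the “unknown” sheets stay in the denominator, which is why the reported percentages for changed and unchanged minds don’t sum to 100.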

Where we did see changed minds, were participants only willing to compromise because the stakes were relatively low? That is perhaps something we cannot answer unless we ran the forum under different conditions (real-life scenarios, no personal worksheets to turn in, etc.). Most of all, however, I wondered about the times when we did not see changed minds: what’s the value of a mind unchanged but understood? To get at that I had to let go of the numbers and return to what I saw.

First, under the conditions of the forum, there was a noticeable change in the tone and language of discussion. When I encounter people discussing climate resilience out in the world, it rarely involves respectful discourse and active listening. At the forum, forced and facilitated exposure to divergent perspectives popped participants’ perspective bubbles, so to speak. This was evident in the noticeable effort by participants to frame their opinions in terms of the moral values of participants who disagreed with them, or at the very least to notice that differences of opinion were, beyond being “right” or “wrong,” rooted in different values. This is a remarkable and important accomplishment. In a joint study from the University of Toronto and Stanford, researchers found that people often struggle with this feat in political discourse; it’s difficult to set aside personal reasoning and step into the perspectives and values of opponents. “Moral reframing,” as they call it, is a true exercise in perspective taking, and participants had their fair share of practice with it throughout the forum.

Further, because moral reframing helps you understand your opposition in terms of what they value, it also alerts you to what they are giving up when they compromise. And if you understand the stakes of their concession to build consensus, then perhaps in the future you might be willing to concede something in return (i.e., an exchange of political capital). In future forums it would be interesting to have note takers observe the language and patterns of consensus building. Did compromises on drought policy earn a participant leverage when negotiating heat resilience?

So maybe this post just ends up with me realizing that, to borrow from Churchill, public forums are the worst form of consensus building, except for all the others.2 But that doesn’t diminish the value of critical reflection, and for that I leave you with a quote from Sheila Jasanoff:

“Science and democracy at their best are modest enterprises because both are mistrustful of their own authority. Each gains by making its doubts explicit. This does not mean the search for closure in either science or politics must be dismissed as unattainable. It does mean that we must ask and insist on good answers to questions about the procedures and practices that undergird both kinds of authority claims.”

We spill a lot of ink on this blog encouraging healthy (and scientific!) questioning of the ability of Science alone to address issues in the public sphere like climate resilience. We should be willing to explore the limits of our democratic solutions as well.


1Heattown was based on real, anonymized data from Louisville, KY.
2And Nich pointed out that we can also question whether seeking consensus is the best role for forums. Perhaps consensus is only instrumental: the real goal is to get people to talk and listen thoughtfully to other perspectives.

Image: Participants, planners, facilitators, and Arizona Science Center staff enjoying some coffee-break sunshine during the eight-hour forum.

Evidence-Based Policy-Maker since 1998

From Santa to Climate Change

(Warning, parents: this post contains Santa spoilers!)

As a student and researcher of evolution during my undergraduate thesis, and now of climate change in my dissertation, I am no stranger to debates over the proper use of science in policy-making, or over the validity of science itself. But recently, as I was reflecting on my experience with the challenges of evidence-based policy-making, I realized that to get to the start we have to journey back to 1998, the height of my suspicions about Santa Claus.

No way did Santa fit that new bicycle down the chimney. And elves don’t make American Girl Dolls. But I couldn’t just say I didn’t believe anymore. I had to know. And to be certain, I had to do some serious research. This was before high-speed internet; it took an hour to log on to our dial-up system, and should someone call during my Google search, all would be lost. Plus, let’s be real. My mom wasn’t going to let seven-year-old Michelle surf the web without supervision.

So instead, I undertook some top-secret archival work in my mother’s basement office, and as is often not the case with archival research, it didn’t take long before I found what I was looking for. To this day, my mother meticulously prints, marks, and files receipts for everything she purchases. On November 29, 1998, Karen Sullivan spent $115.00 on “Kit, the American Girl Doll, with book and accessories.”

Truth hurts. And always one for drama, I ran upstairs to the dining room and tearfully screamed, “All adults are liars!!!!!!!” at my mother right in the middle of her lunch-date with friends.

A crusader for honesty, I shared my discovery with my friends on Monday at school. My mother paid for this dearly when my friends’ parents called to complain that I had unapologetically RUINED their daughters’ childhoods. Though most of my friends appreciated me (even if begrudgingly) for disseminating my research, one friend stopped me in my tracks.

“Maybe Santa just gave your mom a receipt in case you didn’t like the doll. That’s smart,” she said.

The evidence was…inconclusive?! How could she disagree with my findings? How could she criticize my policy plan to stop the generational cycle of deceit?

Early Lessons from “Santa Policy”

Now, maybe you’re thinking I just needed more information to convince her. I could have set up a video camera to capture her parents setting up her toys. (But maybe Santa wasn’t feeling well, so he shipped the gifts to her parents?) Or what if I could have flown over the North Pole to show her it’s just a barren ice cap? (They probably live underground?) If she wanted to believe in Santa Claus, she was probably going to find a way to believe.

At the time, I was shocked. Fast forward about two decades, *almost* three degrees, a few eye-opening mentors, and some real-world practice with this evidence-based policy stuff, and I can now say I encounter this all the time in my research on climate change politics and policy. Researchers often have noble intentions for sharing their work with decision makers, but that sharing doesn’t always translate into the policy actions they propose. That some scientists even propose specific policy actions creates uncertainty around their science for those who disagree with their proposals. And decision makers may use evidence to justify their policy actions, only to find that critique from colleagues grows louder.

Disagreements over issues like climate change are often argued in terms of lacking or contested knowledge, as well as conflicting notions of risk. But increasing the quality and amount of evidence doesn’t seem to dissolve dissension as much as you might expect. Why is that? What are the limits of scientific knowledge for addressing today’s pressing policy issues?

The Excess of Objectivity

First, particularly with climate change, there are myriad perspectives from which you could conduct your research, owing to the various and overlapping natural and human causes of climate change and an even wider array of potential environmental, economic, and socio-cultural impacts. Each perspective comes with its own body of knowledge, values, and action items, which may contradict those of another. And in the vast space of climate change research, those holding different views are sure to find some academic in some university who holds a hypothesis or theory that fits their perspective.1

My co-blogger Nich has a helpful analogy for this: We have a dozen cupcakes, each with a different flavor and decoration fitting one of twelve people’s preferences. Each person can choose the cupcake that best suits them based on taste and appearance. Even outside of that dozen, you’re bound to find a cupcake that suits you, considering the vast number of bakeries, recipes, and ingredients. Now, substitute “values/aims” for preferences of flavor or appearance, and “evidence/science” for cupcakes. Because of the various perspectives that characterize the extensive amount of climate change science and evidence out there, you can find contradicting facts to support contradicting value- or aim-based positions on climate change, and on a whole host of other issues.

Scientific Uncertainty, Caught in the Middle

Scientific uncertainty often lies at the center of debates over climate policy. One side will prescribe a policy based on a scientific claim, while dissenters will invoke scientific uncertainty to rally against action.

Some scientists argue that the public and their elected officials simply don’t understand uncertainty. But I have to say, I can hardly blame my grandmother2 for misunderstanding “scientific uncertainty.” First, it has different meanings in different fields, largely owing to the mathematical differences among studying electrons, atoms, cells, human bodies, and human societies. Second, uncertainty is an abused concept in debates over climate change policy (among other controversial science policy arenas, e.g., GMOs, vaccines, etc.).

Case in point: earlier this year, New York Times columnist Bret Stephens used his first column to challenge climate change scientists and activists, noting that there are many unknowns and uncertainties when it comes to climate change, enough that proposed ‘abrupt and expensive changes in public policy’ should be delayed and conversation (i.e., debate) should continue. Andrew Revkin, a former Times columnist and blogger quoted often in that column, swiftly replied. He argues that the basics are clear; climate change is happening. Unclear are the scope and scale of impacts, including answers to questions that extend far beyond the bounds of climate science: ‘how dangerous?’ and ‘what do we do?’ But, Revkin argues, such uncertainty is still actionable knowledge.

Uncertainty has been caught in the middle of this debate, with one side declaring that it is reason to forestall action and the other countering that it’s the reason to act urgently. Yet, Revkin and Stephens would likely both agree that climate change is a (mostly) political problem involving really difficult values questions that are consistently couched in terms of (un)certainty by advocates and opponents of action. With such debate, centered on competing interpretations of, misunderstanding of, or misuse of scientific uncertainty, it’s fair to see why my grandmother is sometimes skeptical of the facts.

Uncomfortable Knowledge

Some people cannot accept evidence for climate change because it is inconsistent with their social-cultural identity. To explain this phenomenon, Yale Law professor Dan Kahan suggests that there may be two ways people use reason: (1) to know what is known (e.g., the latest climate science), and (2) to be who we are. Sometimes who we are doesn’t align with what is known. Understandably, most people choose to protect their social-cultural identities; it’s what they have to live with every day. Put another way, whether or not someone “believes” in the evidence for climate change may be less an expression of what they know and more an expression of who they are. Climate change is wrapped up in a host of cultural and socio-economic problems, so it isn’t surprising that many individuals and institutions find evidence for climate change to be “uncomfortable knowledge.”

For example, my conservative, Republican uncle, who works in steel, refuses to accept that climate change is real. But given that proposals to address climate change threaten his work and his ideology, it makes sense that he would have a hard time accepting evidence for climate change. And it’s worth noting that he approves of my climate change work with the National Park Service. Perhaps this is because national parks are ideologically neutral (their 75% ‘favorable’ approval rating is second only to the US Postal Service among federal agencies3) compared to debates over energy, infrastructure, and lifestyle.

The Role of Science

So if evidence is so contested in political negotiations, what’s the use?

Well, one idea, in the words of Philip Handler, president of the National Academy of Sciences from 1969 to 1981, is that “The estimation of risk is a scientific question… The acceptability of a given level of risk, however, is a political question, to be determined in the political arena.”4 In other words, the role of science is to understand how different policy choices can lead to different outcomes (or, in Handler’s example, different levels of risk). The role of politics, then, is to choose which outcomes (levels of risk), and thus which policy choices, are acceptable. But Handler’s point is not entirely sound, because even our tools of research and estimation can be politically subjective: the way scientists and policy analysts pose research questions can bias research programs toward certain conclusions and policy suggestions. In a recent National Affairs article, conservative pundit Oren Cass argues that this is one of the ways in which evidence-based policy falls short. He uses the example of policy analyses around health care access to make his point:5

“The debate over how best to ensure that low-income Americans have access to health care in the most cost-effective way possible is one of the most controversial and complex policy quandaries in our politics. Yet the researchers providing the evidence on which to base policy were investigating whether the value of Medicaid is larger than zero…Proponents of Medicaid expansion understandably delighted in this framing, which established a bar of “not worthless” for the program.”

Cass argues that the research and results are biased because the experiment was designed without regard for alternative ways to spend Medicaid money, or, some might say, with a liberal mindset. Cass then contends that a governing philosophy should come before the research design: “…assessment should begin from a philosophical inquiry into the proper role of the state and its relationship to the development of healthy families and communities….”

Such an inquiry could lead to different measurements, different experimental designs, and the use of different research tools. If this is true, our ‘objective research’ can be politically biased from the outset because the questions we choose to ask, the frames we ask them in, and the tools and experiments we use to answer them can all be ideologically influenced. Cass even suggests that we should abandon the premise that policy-related science is objective: “…let’s couch that science in its political perspective upfront.”

It’s important to note that transparent alignment of a research program with a political perspective doesn’t mean the research is “false” or “wrong.” But it could limit contributions to bipartisan policy-making. In the case of the Medicaid research bearing the brunt of Cass’ criticism, the utility of the results was constrained because the research design neglected a host of other ways in which we might improve access to health care for low income Americans.

The Upshot

Before you lose your mind down a postmodern wormhole wondering about the (non)existence of “truth” or “objectivity,” let’s get back to what’s important here: Santa isn’t real. My friend could spend her whole life believing, but that doesn’t change reality. But of course, telling her this didn’t change her mind at the time.6 In retrospect (and this is what’s really important here), I was learning an important science policy lesson at the ripe old age of seven: two people can look at the same facts and reach two different, even opposite, conclusions, and not because the facts aren’t true, but because the world and its problems are complicated and our ability to “know” is limited.

Awareness of this “Santa Policy” lesson, and all of the others above, is necessary when creating, acquiring, using, and sharing information. Plus, it invites us to question what we know and why we know it. After all, blind faith in the value of evidence isn’t scientific.

I’m personally still muddling through. First, how do I know when to stop questioning (i.e., how do I avoid that postmodern wormhole)? Perhaps it has to do with improving the transparency of the perspectives that contribute to research. I personally believe climate change is real because smart people who work on climate change and who demonstrate understanding of both sides of the political argument (Stephens and Revkin, for example) agree climate change is real, but still disagree about what to do about it. But that confidence in expertise is still just confidence, founded or unfounded, in certain people’s opinions, which can seem an insufficient justification for policy action.

My own research examines the role of science in decision making for the National Park Service, often concerning climate change. And through that work, I’ve started to understand why more or better information rarely solves disagreements over climate change. But how can such disagreements be solved? And how can we effectively use evidence to inform policy? I’m learning every day. Stay tuned for another post, another time.


P.S. My Santa Policy has evolved, and I promise not to break the news to your small children. Also worth noting that my sister’s policy was to pretend she still believed because then you guarantee a consistent gift-flow…So of course there is more than one policy to craft based on the evidence!


1ASU science policy professor and practitioner, Dan Sarewitz, calls this the “excess of objectivity.” He claims that it’s not for a lack of knowledge that we can’t all agree; rather it’s the excess of knowledge.

2My grandmother is my litmus strip for thoughts from the average American. I love you, Grandma!

3The latest data on this is from 2015, but the parks are only increasing in popularity year-to-year so I think it’s safe to assume this number is probably steady.

4Quoted in Risk and Culture, Douglas and Wildavsky 1982, 65

5And then there are other places where I disagree with his analysis, but that is outside the scope of this post.

6And similarly, recent work suggests that constantly barraging climate deniers with the “97.1% consensus” is a failing strategy.

A Tired Critique of a Tired Pro-Science Op-Ed

Reflecting on the Use of Science as a Political Tool

Although it may feel we are always on the science-policy news beat, fieldwork, summer jobs, new puppies, and Game of Thrones watch parties sometimes put us on a news delay. Lucky for us, we have friends (and guest bloggers) like Christian Ross who keep us on our toes. Last week, Christian brought an editorial to our attention; in the latest issue of Science, Delaware’s junior Senator, Chris Coons, sounds off on why “scientists can’t be silent.” Christian asked us, “Does this strike you as just political posturing?” and questioned some of Coons’ claims about the public benefits of the historic and current public support of science in the United States.

The rapid-fire emails that ensued led us to a reflection on science’s increasing use as a political tool. If our critique sounds hackneyed, that’s because Coons’ piece is just another standard “pro-science” op-ed.1 Maybe we are cynical, but Senator Coons’ vague declaration of the importance of science for decision making makes him another politician playing the “science card” for political points. And the card game Coons and other “pro-science” politicians play increasingly draws a line between the “right” and “wrong” sides of science on issues like climate change, vaccines, and GMOs. Dangerously, these sides map onto political agendas, with Democrats like Coons on the side of science and Republicans painted as the opposition.

We all agreed the piece tasted strongly of political posturing. Coons includes lots of Democratic talking points from immigration to climate change – and a sincere call for action. And he shamelessly congratulates himself for co-founding the Senate Chemistry Caucus, a bipartisan effort to “promote the use of sound science in policy-making.”2

We also questioned the misleading lack of context for Coons’ claims.3 For example, he laments the 17% cuts to research funding in Trump’s Skinny Budget but ignores the rest of the budget proposal and accompanying questions: What else is cut? Is scientific progress really being “threatened”? And as Nich notes in a previous blog post: “Too many responses to Trump’s budget blueprint and its impacts on federally funded science rely on dubious connections between research and public value to justify funding.” To us, Coons’ outcry fails to address the real problems facing our country’s relationship with science.

Coons’ call for scientists to more widely publicize their work and reach out to elected officials is a noble one, but it lacks nuance and detail. We know it’s imperative that scientists share and publicize their work, but it’s important to consider how. Are they openly advocating for a particular policy or agenda item? Are they presenting a set of policy options based on evidence? Are scientists setting their recommendations in the context of political realities? (For more on this, see the work of Roger Pielke, Jr.). Recent work on the failure of scientific consensus messaging in climate change policy points to the importance of these questions. Further, how will Coons’ recommendations for scientists help him accomplish his vague goal to elevate the role of science and fight ‘anti-science’ sentiments?

Finally, Coons fails to note that at the root of debates surrounding climate change, GMOs, and vaccines are questions fundamental to political ideology: How much control should the government have over our personal choices (lifestyle, nutrition, health)? How much government regulation should businesses tolerate? Science has become implicated as the justification for different answers to those questions. It’s concerning to us that being ‘pro-science’ is becoming synonymous with being a Democrat. In op-eds like Coons’, science represents a political talking point to garner votes and exchange barbs with opponents instead of an effective tool for evidence-based governance. In the process, ‘Science’ borrows at high interest against its future status and trust in the public sphere.

-Michelle, Nich, and Christian

1You can find very similar messaging in rhetoric surrounding the March for Science. Also, see Coons’ speech to the AAAS from earlier this year.

2We will give Coons credit for “his” effort being “bipartisan,” but he’s still posturing. The caucus actually started in the House, sponsored by the largest scientific society in the world, the American Chemical Society, to spotlight the role of the chemistry enterprise in the U.S. economy. The caucus is not the subject of our critique, but rather Coons’ apparent touting of it for political gain; our (admittedly short) web search did not uncover any other senators putting out press releases on the matter.

3Though we acknowledge you don’t typically have the luxury of a generous word count when writing editorials.

Science Policy Immersion

Learning to swim in the deep end of science policy

We harp on the importance of scientists engaging with policy and politics, but it’s admittedly easy to do so from behind a laptop screen. Talking to policy makers and learning about the politics and mechanics of science policy require time and emotional and intellectual bandwidth. Pressures to publish, submit grant applications, and myriad other professional responsibilities quickly drown out researchers’ ability to engage with the public and policy makers, despite the importance of such efforts.

Fortunately, smart and forward-looking people created a program to immerse scientists and researchers in science policy amidst the chaotic backdrop of Washington, DC. And we’ve both participated in this ASU-run program, called Science Outside the Lab (SOtL)1. Michelle attended in 2016, Nich earlier this summer. The program centers on conversations with people who work in science policy, including federal-agency staff, former congressional staffers, and representatives of scientific societies. With each, participants discuss a variety of topics such as how scientific models factor into decision-making, the intricacies of the science budget, and the politics of congressional science committees. Rather than sharing our reflections on the program, we asked a few of our fellow participants to reflect on their time in DC and to share their insights. The big takeaways? There’s no single ‘science policy’, networking facilitates policy making and careers, and DC has some good restaurants.

Ryan Edwards (RE), is a PhD Candidate from Princeton University in civil and environmental engineering. He researches the potential environmental impacts from hydraulic fracturing in shale gas formations and has served as a policy consultant for the National Governors Association.

Ana Lopez (AL) is a Master of Science and Technology Policy student at ASU, focusing on water and the water-energy nexus. She has interned at the Kyl Center for Water Policy and is doing her applied project on the Navajo Generating Station and its closure.

Walter Johnson (WJ) received his BS in Chemistry and currently studies the regulation of health biotechnologies as a student in Arizona State University’s Master of Science and Technology Policy program.

Wale Odukomaiya (WO) is a Research Associate at the U.S. Department of Energy’s Oak Ridge National Laboratory and a PhD candidate in Georgia Tech’s Woodruff School of Mechanical Engineering. His research interests include the development, integration, business/economics of sustainable energy systems and technologies, energy and climate change policy, and global energy infrastructure development.

1) Why did you decide to participate in SOtL?

RE: I have always been interested in the intersection between science, politics and policy-making and am considering a career in this area, so I saw SOtL as an awesome opportunity to learn more.

AL: The SOtL program is a part of the Master of Science and Technology Policy program, so by enrolling in the program I made the decision to participate in SOtL. I thought the D.C. experience would be a valuable addition to my education and could help inform my career aspirations.

WJ: Though the DC experience fulfilled a degree requirement for me, SOtL was a natural complement to my science policy work and education. The US federal government is a huge complex system made of rapidly moving components, which can be learned in the classroom but takes on a much different meaning when experiencing it in person. The better appreciation of how DC functions and the contacts I made there certainly augmented my policy education.

WO: I heard about the SOtL program through a colleague in an interdisciplinary science and policy fellowship program that I am a part of at Georgia Tech. I had developed an interest in gaining breadth in my academic program to complement my very technical mechanical engineering curriculum. This is what led me to pursue the fellowship program in the first place, and subsequently the SOtL program. My interest in SOtL was heightened when I learned that it was run through an Arizona State University consortium, because I had known ASU to be quite disruptive in terms of creating innovative, out-of-the-box multidisciplinary academic programs.  For me, the biggest barrier to participating in the program was the financial cost. Luckily for me, my fellowship program had some funds earmarked for just this sort of interdisciplinary experience, so that took care of that, and I was on my way to D.C.!

2) The program organizers referred to the course as a ‘science policy disorientation.’ Were any of your ideas of science policy upended during your time in DC?

RE: The biggest idea that was upended for me was that science and scientists can direct science to its greatest benefit for society – the idea of “the free play of intellects working on subjects driven by their curiosity”. Dan Sarewitz gave us a great talk that dispelled the myth that curiosity-driven science leads to the best outcomes, and showed how important strong government direction (particularly from the U.S. Department of Defense) was in many of the great scientific and technological breakthroughs from World War II onward.

AL: I don’t think that any of my ideas of science policy were upended so to speak during SOtL, but I would say my preconceived notions of science policy became better informed. I gained a better idea of how the mechanisms of science policy work, and how public servants at various points in their careers make their contribution to the science policy realm.

WJ: My conceptualization of federal science policy was certainly challenged, and the sheer number of people that we heard from was sufficiently disorienting. The mechanics of the federal budget in general and appropriations to science related agencies is horridly complex. I had been told this in advance, but hearing it explained in greater detail was equal parts fascinating and headache-inducing.

WO: Yes, definitely. For me, I had this preconceived notion that science policy was a thing, a formal, well-defined, living, breathing institutional machine that literally forced the intersection of science and policy for the betterment of society. Through SOtL, I learned that this was not the case at all. In actuality, science policy could mean any one of several different things, and the definition continues to expand. Rather, science policy could be better described as a community of scientists, or people with scientific backgrounds who are currently working in a policy or social sector space. SOtL did a wonderful job exposing us to several individuals with science backgrounds currently working in Washington D.C. in some policy capacity. For example, we spoke to one person who had a PhD in a hard science and in a past job directed a think-tank housed at a university, but was currently working in the Congressional Budget Office. Another person we interacted with quite a bit throughout the program, who also had a PhD in a hard science, was working as a science communication coach, helping to equip scientists and engineers with tools to be able to better communicate with politicians and public servants.

3) What surprised you about the way science policy is made or the way science is used by the federal government? What was as expected?

RE: I guess I would say that I came into the program already being pretty cynical, because nothing surprised me much. But a couple things were really reinforced: firstly, that the government is vast and diverse and that there is no single “science policy”. Decisions are made by different branches of government and a number of different agencies. Secondly, that science is just one factor that plays into policy-making and it never will, or should, be the only consideration in government policy-making.

AL: At this point in my education and career, I didn’t think I would be surprised by the amount of bureaucracy in policy-making. Prior to SOtL, I thought that I had a firm grasp on understanding how bureaucracy affects policy in general, but I was still surprised about how hierarchy and political maneuvering grease the wheels of policy creation. One thing that was as expected was the fact that many of the public sector professionals we spoke with dedicate their professional lives to the work they do, and often work long hours. If you want to work in science policy, you are going to have to put the hours in to get to a position where you are integral to the policy-making process.

WJ: The extent of the role of Congressional committee staffers was a bit shocking. I had heard anecdotes about the importance of staffers, but hearing from former staffers and those who interacted with staffers had an impact. The communication occurring between staffers and stakeholders seems to capture a lot of nuance which could easily be missed in the way science policy legislation evolves.

WO: In line with my previous comment, what surprised me the most was the non-existence of a formal structure or process of incorporating science or evidence-based decision making in general into policy-making or policy discussions. Especially in areas like research funding or climate change policy where you would think input from scientists would be vital. The way we lobby for this is by encouraging more people with STEM backgrounds to participate in some way in policy.

4) What advice might you give to someone who wants to get involved in science policy discussions in DC?

RE: Do the SOtL program! There are also some similar programs run by specific science organisations that may be of interest depending on your discipline (for example, the American Meteorological Society Summer Policy Colloquium). I have personally found it helpful to get involved with the public policy school at my university (even though I am an engineer) and to reach out to alumni who have gone into the policy world.

AL: My best advice for someone who wants to get involved in science policy discussions would be to get involved in a program like SOtL, and to get to know people who are a part of the process. Gaining an insider perspective and making connections is really valuable within science policy and can give someone a clearer understanding of how the process actually works. Beyond that, read as much as you can about current issues in science policy that interest you.

WJ: Connections make a big difference in any field, but DC thrives on personal relationships and connections. If you have contacts in DC or contacts who can introduce you to people in DC, take advantage of it.

WO: Take the time to find out for yourself what exactly science policy means to you. A great way of doing this is through a program like SOtL, where you have the opportunity to speak with numerous people who are currently contributing to science policy in some capacity. Also, if you are a student, find ways to form connections with departments on campus that are more traditionally considered to align with policy (e.g., political science, public policy, or law departments). This could be in the form of taking an actual course, or attending monthly seminars, or even trying to collaborate on a multi-disciplinary research paper of some sort.

5) Any thoughts on what this experience might mean for your career? Have your career goals shifted because of something you learned or encountered during the program?

RE: I was already intending to pursue a career in policy and the SOtL experience helped further solidify that goal for me. It was really helpful to learn about specific opportunities in D.C. and to meet people involved in science policy in different organizations (both governmental and non-governmental) and at different points in their careers. Knowing more about the wide array of opportunities has certainly helped me see the science policy career pathway as an exciting one.

AL: The experience definitely gave me a taste of D.C. life and a better idea of what my career and life would look like if I were to move out there. I’ve broadened my scope of potential long-term career goals because of the program and have a better understanding of what jobs are out there and what different professions in various agencies do on a day-to-day basis. I’m still figuring out what the SOtL experience will mean for me, but the program definitely broadened my horizons.

WJ: Meeting staff from GAO and CRS had a big impact. Both were agencies I was aware of, but hearing those individuals describe their jobs and day-to-day experiences has definitely sparked my interest. Against the cacophony of executive agencies, the Congressional agencies like GAO often seem to be forgotten, but play substantial roles in supporting government operations. And they have internships.

WO: My general approach to “career development” has been to gain as much exposure as possible and gain varied experiences to be as well-rounded as possible. I will soon have a PhD in mechanical engineering, which will almost by default make me an expert in a very technical area. I can differentiate myself from other technical experts by gaining as much breadth as possible in my education and experiences. I would say, if anything, that SOtL broadened my horizons because I previously thought that science policy meant one specific thing, when it actually is quite broad. Something else that was quite interesting to learn was that as a person with an advanced degree in a STEM area looking to work in policy, there are actually many opportunities to sort of create your own job, or to tailor the issues that you work on to your interests. This is something that is quite valued among academics.

6) Last but not least: What was your favorite DC restaurant?

RE: Good Stuff Eatery! It’s a burger and shake place near the Capitol – a perfect destination after a walk around Capitol Hill.

AL: A place called Lincoln D.C. The food and atmosphere were great!

WJ: Otello’s was a fun Italian place; our group celebrated a birthday there and had a great time.

WO: DC has a lot of great restaurants! Definitely enjoyed Founding Farmers, and Toro Toro for happy hour.


  1. Check out this paper written by program organizers for a more academic perspective on SOtL.

Basic Research Fails to Deliver to Society, and Applied Research Does Too!

Guest blogger, Christian Ross, argues that the basic versus applied research argument leads us astray in thinking about biomedicine and biotechnology funding and outcomes.

Our guest blogger, Christian Ross, is a PhD student in the School of Life Sciences at Arizona State University and a National Science Foundation Graduate Research Fellow. His research is focused on the intersections of science, society, and science policy, particularly surrounding emerging biotechnologies.

There is a recurring problem that I see in ongoing discussions about public funding of scientific research. And, perhaps surprisingly, this problem is not specific to the current US administration1, but rather seems to be endemic in scientific communities as much as political ones. Often, science policy is reduced to a simple question: do we need more basic research or more applied research? I think that this distinction between basic and applied research is unnecessary and unhelpful, particularly for biotechnology and biomedicine. Insistence on the basic-applied distinction as the central question of science policy obscures growing evidence that publicly funded science, basic and applied alike, is failing to deliver on its promise to provide the public with life-improving technologies and products.

Thus far, the Trump administration has not made many friends within scientific communities. Threatened and realized budget cuts to research funding across disciplines and what is often called “anti-science” rhetoric have led to many impassioned responses from scientists and science advocates proselytizing about the importance and necessity of public funding of scientific research, especially basic research. But what counts as basic research, and more importantly how useful it actually is, nearly always goes unexamined.

Back to Basics: The Linear Model

The prevalence of the basic and applied research distinction in funding conversations in the US has strong roots in the work of Vannevar Bush and his report to President Harry Truman on the role of science in post-WWII America, Science: The Endless Frontier. Basic research, as opposed to applied research, focuses on pursuing fundamental understanding of natural phenomena without direct consideration about the potential applications of that knowledge to create novel technologies. Applied research, by contrast, harnesses the scientific knowledge generated by basic research to create technological solutions to societal problems in engineering, medicine, and other applied fields.

While the distinction between basic and applied research by no means began with Bush and Science: The Endless Frontier, Bush’s work did serve to legitimize the basic-applied distinction as a foundational aspect of post-war US science policy. Bush argued that national military and commercial interests depended on good science. And good science was basic science, unhindered by the constraints of applications. Moreover, basic research was the “pacemaker of technological progress” and invariably led to applied technologies and societal benefits.

Bush’s work also solidified an implicit social contract between science and society. In return for public funding, science would provide technological solutions and products to improve society through advances in healthcare, sanitation, travel, communication, national defense, or economic outcomes. Since then, federal funding for science has been characterized as supporting either basic or applied research with consistent pressure from scientists to keep basic research as the essential fuel for scientific and societal advancement.

Deceptively Linear Beginnings

Bush’s “linear model” for scientific development has already been widely and frequently critiqued (including at times by this blog) for oversimplifying the technical and political complexities of scientific research and development. Those are fair criticisms, and I am not intending to pile on much more here. Actually, I hope to defend the linear model a bit (or at least those who have bought into its ideology) as an understandable misstep. Though I only want to do so to show that even if a linear model approach to science was once helpful or fitting2, it is not so for our present moment or going forward, especially not in the fields of biotechnology and biomedicine.

That said, it is understandable how the linear model gained credibility in the US, particularly in biotechnology and biomedicine. Leading up to and during WWII, US applied research focused heavily on developing basic physics research (especially atomic physics) to support the war effort. After the war, there was a surplus of basic research in virology, cell biology, and genetics that had not yet been applied to developing technologies and products. This swell of untapped basic research directly led to the development of many new biotechnologies and drugs with the advent of recombinant DNA technology in the early 1970s3. Recombinant DNA techniques enabled scientists to modify bacterial genomes to produce complex biochemical compounds for use in new, synthetic pharmaceuticals, like synthetic insulin to treat type 1 diabetes, erythropoietin (EPO), which treats chronic kidney disease, and tissue plasminogen activator (tPA), which breaks down blood clots to treat strokes. Much as the linear model describes, basic research acted as a precursor to biotechnological and biomedical applications.

The development of these new drugs through bioengineering was an unqualified success and provided justification for continued use of a linear model approach to science funding.

Biotech Paradise Lost?

However, we do not see similarly large leaps forward in biomedicine and biotechnology today. To be sure, we regularly develop new therapies, drugs, and techniques, but not at rates (or profit margins) comparable to the 1970s and 1980s4. And we certainly are not making progress on the high-priority problems like cancer in the same way as early biomedical and biotechnology research did with renal disease and diabetes.

According to linear model thinking, a slowdown of technological development is the result of a lack of basic scientific research. But over the past fifteen years, federal support of basic research has been at least double what it was from the 1970s through the mid-1980s. Evidently, increased funding for basic research has not translated into increased biomedical and biotechnological development. Even if we grant that the linear model at one time adequately described the relationship between federal funding of science and basic and applied research5, it is clearly not doing so now.

So, what changed? Why does the linear model no longer seem to describe the developments of biotechnological and biomedical innovation? Put simply, the reasons and context that enabled scientific and industrial boom in biomedicine and biotechnology in the 1970s and 1980s are different now in three major ways.

First, there was an unusual backlog of untapped basic life science research available for application by industry. During WWII and the early years of the Cold War, basic research in virology, cell biology, and genetics seldom translated into applied research. During that time, the biological sciences did not receive nearly the same attention as other fields, like physics, leaving much of their basic research unconnected to tangible applications. Once recombinant DNA techniques were developed and their potential became apparent, however, applied research in biotechnology and biomedicine grew dramatically. Researchers took advantage of the surplus of basic biological research to jumpstart the applied research of the biotechnology and biomedicine industries in the US. New basic research became incorporated into the applied research and product pipeline as quickly as it could be published.

Second, new biotechnology and biomedicine industries and markets emerged. Before the 1960s, the life sciences and industry were more separated from each other. There was some overlap in medicine, but nothing of the magnitude that came with the developments of the 70s and 80s. For the first time, biotechnology and biomedicine startup companies emerged that proved highly profitable in a newly created market. Today, academic and corporate researchers readily recognize biotechnology and biomedical research as sources of technical problem solving and profit. Corporations are already established and dominate what once was a more open, competitive marketplace. Startups simply do not have the same commercial opportunities to innovate in the lab, insulated from the pressures of industry, as they once did.

Third, the problems that could be solved with biotechnology and biomedicine in the 1970s and 1980s were relatively simple and straightforward. Advances in the technical capabilities of biotechnology and biomedicine made accessible an entirely new class of biological problems that had previously been beyond the scope of science. These problems ranged from the relatively easy to the devilishly difficult. Understandably, researchers at new startup companies picked the low-hanging fruit first6.

Now, the biotechnological and biomedical problems that remain are technically and socially more complex than those of the mid-twentieth century. Although the development of recombinant drugs was certainly challenging, today's biotechnological and biomedical problems are much more difficult by comparison. And because the problems are harder, research ventures to develop commercially viable products are more costly and more risky for both academic and corporate researchers.

Further, we are more aware that we work within larger, more complex systems with greater ranges and degrees of uncertainty. Science at every stage is hard, and it makes sense to solve the easiest problems first. But that means that once the relatively easy problems have been solved, only harder, often wicked problems remain.

Now What Do We Do?

So, if the state of biotechnological and biomedical research is totally different than it was in the 1970s and 1980s, what does that mean for current funding of research initiatives? Well, it certainly does not mean that we should stop supporting basic research. Even if federal funding of basic research is not the sole source of scientific progress, as Bush and the linear model suggest, it is still essential to generating the scientific and technical knowledge on which advancement and innovation are based. The integration of biotechnology and biomedicine into industry has increasingly tied basic research to technological development through the profitability of the resulting applications and products. So perhaps reconceptualizing the value of basic research in terms of its contributions to societal outputs, rather than its funding inputs, will prove a more useful framing for understanding the roots of scientific progress.

At the same time, it also does not mean that we should simply prioritize applied research over basic research. Although some critics of the linear model suggest that applied research is the remedy to the shortcomings of pure basic research, there is little compelling evidence indicating that applied research is more effective at furthering scientific progress than basic research. One study published in Science by Danielle Li, Pierre Azoulay, and Bhaven N. Sampat in April of this year examined nearly 30 years of biomedical research funded by the US National Institutes of Health (NIH), the largest non-defense research funder in the US7,8. Li and her collaborators found that basic and applied research displayed similar rates of citation in biotechnology and drug patent applications. In other words, neither basic nor applied research appears to be better suited to actually producing new products and solutions. It stands to reason, then, that prioritizing applied research over basic research would not create meaningful differences in the kinds of technological outcomes generated.

So, what should we do then? Fundamentally, we need to reevaluate how we think about basic and applied research in biotechnological and biomedical research and development. The same Science article found that, of all basic and applied NIH-funded research, only 10% is directly cited by product patents. Even including research indirectly cited in patents (that is, patents citing research that itself cites NIH-funded research), that number only grows to 30%. Put another way, 70% of NIH-funded research was not related to new patents, even at two degrees of separation. But if the implicit social contract between science and society is that, in exchange for public financial support, science will produce technologies and products that improve society, then both basic and applied biomedical research seem to be coming up short on their end of the deal. Even if this is simply the rate of return on investment in research (less than a third of it generates new biomedical technologies or drugs), that return is, at the least, not commensurate with the justifications given for continued public funding of biomedical research.

Rethinking How We Evaluate Research

The problem of under-delivering science is frequently framed in terms of the roles and merits of basic versus applied research. Yet the research suggests that the problem cuts across basic and applied research equally. This is not an issue of which research type is better at producing technological solutions and scientific progress, but indicative of a broader problem (at least for NIH-funded projects) with our approach to and expectations of scientific funding: the distinction between basic and applied research as the basis for allocating funds.

When it comes to the technological outputs of science, a meaningful distinction between basic and applied research does not exist. We have seen that basic and applied research are indistinguishable when it comes to the transfer of scientific knowledge into technological solutions and products. Emphasizing one over the other, or trying to determine which should receive more research funding, is the wrong question to ask. The question we should be asking is: which kinds of research lead to the societal benefits and outcomes we want?

Funding for science should be based on the problem-solving effectiveness of research and its potential usefulness in society. Whether the desired outcomes of research are patents, publications, commercially viable technologies, new companies, development of human capital, economic stimulus, or fueling knowledge-generation enterprises, research should receive funding based on the extent to which it contributes to societal goals, not on whether it is basic or applied. We need to prioritize research that lives up to the social contract of public support for technological benefits and stop rewarding research that fails to deliver.

-Christian Ross


  1. Though I do have a litany of concerns about the Trump administration’s approach to science policy, that is not my purpose here.
  2. It really is not, but that is not the main point of the argument here.
  3. Historian of science Nicolas Rasmussen has written extensively and excellently about the development of the biotechnology industry in the US in his book Gene Jockeys: Life Science and the Rise of Biotech Enterprise (2014).
  4. One prominent counterexample is the development of CRISPR-Cas9 as a genome editing technique, which has taken the field of molecular biology by storm and stands to have an impact similar to that of many early biotechnologies. I do not want to ignore or downplay the enormous impact of this biotechnology on science and industry (my own dissertation research is focused on it!). But the development of such a potent, accessible, and widely applicable technology for biomedical and biotechnological research is the exception, not the rule, over the past several decades.
  5. Again, it did not.
  6. The fascinating story of the motivations and social context that surrounded the development of many of these drugs is outlined in Rasmussen’s book Gene Jockeys.
  7. Li, D., Azoulay, P., & Sampat, B. N. (2017) “The applied value of public investments in biomedical research.” Science 356: 78–81. DOI: 10.1126
  8. While patent citations are certainly not the only measure of the productivity of scientific research and may not capture all cases, they do provide a quantifiable estimate of the impact of research and an easily compared standard for evaluating a wide variety of disciplines and projects.

Public Forums Help Us Explore What We Care About

To care about democracy is to care about conversations on shared and differing values. A recent public forum shows why.

Michelle and I are big supporters of democratic innovations in science and technology governance. But as academics, we can get a little caught up in the nuances of participatory events. A recent public forum at the Museum of Science, Boston reminded me of the power of conversation.

The forum, funded by the National Oceanic and Atmospheric Administration and held on June 11th, brought together roughly 60 members of the public and a handful of facilitators and staff from the Museum to discuss the challenges of sea level rise and extreme precipitation in the Boston area. I observed the forum, occasionally helping facilitators with technical issues and keeping conversations on track. (Full disclosure: I'm part of the planning team for this forum and others.) A nice summary of the forum is available here, and I posted photos and updates to my Twitter. Rather than rehash the importance of public input in science-related decision-making or how the forum could be improved, I thought I'd reflect a bit on the conversations I heard. To care about democracy is to care about conversations on shared and differing values. This forum provides a nice example of why.

The conversations at each table impressed me. Participants, who broadly represented the Boston area’s demographics, took the topic seriously even as they talked about fictionalized communities with names like Rivertown. The Museum of Science worked hard to recruit participants who otherwise might not have the opportunity to contribute to these policy discussions. While some environmental groups were represented, they made up only a small portion of the audience. Participants drew on their own experiences with flooding, their own assessments of current development in flood-prone areas, and their knowledge of what strategies have worked elsewhere. They argued their points using data available to them (each table had a computer that visualized the potential impacts of policy choices) and in terms of who might be affected and how. Most encouraging to me, some participants recognized that their differences in opinion were not based on who had found the ‘right’ answer for dealing with sea level rise or extreme precipitation. Rather, they viewed their different preferences as related to what they valued. Some valued saving as much land as possible from the impacts of sea level rise. Others were concerned about key infrastructure like power plants. Yet others focused on impacts to those who lived on or made a living on the coast. Participants acknowledged other perspectives and ideas, saying things like, “I see where you’re coming from,” but still laid out what was important to them.

At a time in which every major public decision boils down to esoteric assessments of impacts far beyond the ability of most of us to comprehend, I found the tone of the forum’s conversations refreshing. While data was available to participants, the conversation still centered on something we can all relate to: what we care about.

The forum was the first of eight to take place across the country. Stay tuned over the next few months for updates and check out ASU’s Consortium for Science, Policy & Outcomes (CSPO) and the Expert and Citizen Assessment of Science and Technology (ECAST) network for more forum news. For another cool case study, check out the results of the public forum on NASA’s Asteroid Initiative.

– Nich Weller