
A roundtable discussion with Dr. Ellen Carlin; Stefan Böschen, senior research scientist at the Karlsruhe Institute of Technology in Germany; and Kevin C. Elliott, associate professor at Michigan State University


The purpose of science is to extract knowledge about the world. In its purest form, the scientific process allows researchers to reach this knowledge in an objective way. But, of course, scientists are only human, and so are the individuals who would leverage their findings. This means that biases and misinterpretations can sometimes consciously or unconsciously enter the equation.

Science can, in particular, be misrepresented or overstated in science-driven debates. On complex, controversial, and political issues, many advocate for change on the basis of “science” that, on closer inspection, the published literature does not actually support. In some cases, the literature does not specifically refute the claims; it simply lacks the studies that would back them. Especially where consensus is less than settled, misleading rhetoric can be commonplace.

The risks of misusing science are manifold. When scientific myths are perpetuated, misguided laws get passed, problems go unaddressed, and sometimes the situation worsens. Repetition becomes both the cause of the problem and the problem itself, as the various players feed off one another’s public statements without verifying their validity (creating a culture of “reinforced ignorance”). Science as a discipline is demeaned, and if the hunches on which policy makers are operating turn out to be wrong, the credibility of science as an arbiter of good policy erodes.

Policy makers are left to handle difficult questions in this context: What does the science actually say about hot-button issues like antibiotic use in livestock, the safety of potentially toxic chemicals, or geo-engineering? When should government step in to mitigate health and safety risks—and how? Do we ever know anything for sure? And is it OK to act on suspicion alone?

I recently discussed these issues with Stefan Böschen, senior research scientist at the Karlsruhe Institute of Technology in Germany, and Kevin C. Elliott, an associate professor at Michigan State University who studies the philosophy of science and practical ethics. The following is an edited version of our Q&A, conducted by email earlier this year.

Ellen Carlin: Policy makers frequently butt up against the limits of science when trying to make decisions to reduce health or other risks. Can you give me some examples of this?

Dr. Kevin Elliott

Kevin C. Elliott: The example that comes immediately to my mind is the regulation of industrial chemicals. There are thousands of chemicals on the market, but we have very little information about their health effects, and we are especially ignorant of the effects that we might experience as a result of chronic, low-level exposure to mixtures of these chemicals. Moreover, we are finding that it is even more difficult to study these issues than we previously thought, partly because it appears that chemical exposures to parents can sometimes even affect their children and grandchildren, and partly because we are finding that chemicals can exert a variety of subtle effects in addition to cancer. Given the lack of data and the difficulty of investigating these questions, scientists and policy makers are often forced to depend heavily on value judgments as they decide what conclusions they can reasonably draw.

Dr. Stefan Böschen

Stefan Böschen: The case of genetically modified organisms (GMOs) offers important insights. Here we see the problem of inconclusive knowledge informing decision-making, but, more importantly, conflicts about values become apparent. This is underscored by the fact that the question “What constitutes damage?” seems to be unanswerable. For proponents of GMO use, only clear toxicological events (such as death or increased mortality) count as damage, whereas for opponents the spread of the transformed gene in the environment is itself damage (in the sense of negatively affecting the structure and integrity of ecosystems). Such differences cannot be bridged by facts. Quite the reverse: they call for political decision-making.

How can decision-makers better deal with the inherent uncertainty with which some of these decisions are fraught?

SB: Institutionalized frameworks that evaluate risks and set restrictions are important when addressing uncertainty or areas we refer to as “non-knowledge.” The European Union established the REACH legislation (Registration, Evaluation, Authorization, and Restriction of Chemicals) in 2006 to help assess and manage real or potential health risks posed by chemicals. The now-established processes of collecting data on confirmed and potential risks in the EU set a positive example of how to better deal with uncertainty.

This is also true in fields like medicine and public health. As costs explode, the price and risks of diagnostics and therapy need to be weighed carefully. In a clinical setting, evidence-based medicine seems to be a valuable strategy, one that offers a method for reducing uncertainty. But it does run the risk of privileging selected scientific methodologies over others, meaning that it can reject new but not yet fully explored methods or perpetuate untruths about established ones.

You mentioned the term “non-knowledge.” Can you explain what this is?

SB: Non-knowledge is that which we do not know at the moment, do not acknowledge that we do not know, or which is unknowable. Non-knowledge is not simply the other side of knowledge or the lack of knowledge; it has to be seen in its multifaceted forms and functions. My colleagues and I have tried to show that the debate about non-knowledge is not an abstract methodological discussion but a very concrete one, not only with regard to everyday work in scientific disciplines but also with regard to societal and political problem-solving processes.

Dr. Elliott, what examples can you provide of the real-world impacts of non-knowledge, perhaps in an area you’ve studied closely, like environmental science?

KE: Several significant examples of non-knowledge come to mind, and I think it is valuable to think about the different reasons for the lack of knowledge in each of them. As Dr. Böschen intimated in his discussion of the European REACH legislation, we face a serious lack of knowledge about the human and environmental effects of the roughly 80,000 industrial chemicals in production. In this case, our non-knowledge exists partly because our regulations, especially in the United States, have not demanded the collection of more information, but also because it is just incredibly difficult to collect detailed information about the range of toxic effects of so many different chemicals.

A second, and somewhat related, example is that in a variety of cases, manufacturers have actually had information about the hazards associated with their products but took steps to keep the public from knowing about them. Examples of this phenomenon in the environmental health context include lead, asbestos, tobacco, vinyl chloride, benzene, chromium, and dioxin, and there are numerous examples in the biomedical world as well. A third, and somewhat different, example of non-knowledge is that some diseases receive very little research funding compared to others, perhaps because those who suffer from them have limited financial resources or political power.

You have both mentioned areas of non-knowledge in which some argue that, in the absence of information, a precautionary approach to mitigating risk is prudent. The “precautionary principle,” as it’s called, places the burden of proof on those who believe that no harm will follow from a given action. Are there some types of situations in which this approach is more appropriate than others? Do you think that policy makers invoke this principle too much or not enough?

SB: The precautionary principle comes into play in situations in which people can imagine, but not prove, a hazard. Any precautionary procedure reaches conclusions based on somewhat fragmented pieces of evidence. Therefore, the limits of science become a problem for political decision-making and its institutionalized procedures. A case in point is the EU legislation on GMOs (Directive 2001/18/EC, on deliberate release). This was an institutional acknowledgement of non-knowledge; it allowed for research into possible negative outcomes within a time frame of 10 years. After that period, the law requires firms producing GMO foods and products to reapply to the European Food Safety Authority, which then decides anew on the basis of the evidence gathered during monitoring.

The case of chemicals differs from that of GMOs, as there was enough data about hazards collected over time before a precautionary approach was implemented in 2006. To put it in slightly exaggerated terms, sometimes the principle is invoked because of historically experienced non-knowledge (as in the case of chemicals). Alternatively, the principle may be invoked in an attempt to actively explore non-knowledge, as with the regulation of GMOs. In the latter case, the position of the decision-makers is stronger.

Dr. Elliott, you’ve looked at the precautionary principle in various contexts, such as the potential risks of nanotechnology. Do you think that it is reasonable to impose the precautionary principle when action appears to be necessary but a scientific basis to support that action is lacking?

KE: One of the difficult aspects of talking about the precautionary principle is that it means so many different things to so many different people. Some people formulate it in a way that makes it seem like an obviously good idea. For example, they say that if there’s not much to lose by avoiding a potentially risky activity, and if the activity could plausibly generate disastrous consequences, then one ought to avoid it or at least demand good evidence that it won’t really cause harm. Critics formulate it in a way that makes it seem pretty obviously unreasonable. For example, if there’s any chance that an activity could cause harm, then one should avoid it unless there’s decisive evidence that it won’t really be a problem.

Despite this potential for confusion, I think that there are some helpful ways of formulating the precautionary principle so that it can provide guidance for responding to scientific ignorance. Some of the approaches I find valuable call for us to look for alternatives to potentially risky activities, to carefully monitor risky activities to see if they do in fact begin to cause problems, and to choose policy options that will have tolerable consequences under many different circumstances. It can also be helpful to shift the burden of proof toward those who claim that no harm will come from a risky activity. But I think that the appropriate burdens of proof and standards of proof often need to be set on a case-by-case basis, depending on the social costs of taking action or not.

When a scientist, politician, journalist, or lay writer cites a peer-reviewed study, the viewpoint of that study becomes legitimized. The more this happens, the more ingrained that claim becomes in “common knowledge.” Larry Reynolds, who published a fascinating account of this phenomenon in 1966, demonstrated that the claim that women can make finer color determinations than men was scientifically unsubstantiated, yet rampant in the scientific literature.

What allows the perpetuation of scientific errors to happen: a faulty peer review process, insufficiently skeptical scientists, or something else?

SB: In newspapers and other forums, pieces of evidence shown in scientific studies are used as supporting elements in specific stories or narratives. The form of the narrative dictates the use of evidence. And the form of the story depends on the political background from which the story is told. A story is a call for considering and/or doing things in a specific way. These stories are inevitably molded by the basic normative and political assumptions the writer is following.

Against this background, the perpetuation of scientific errors should be seen as quite normal. Communication, whether scientific or public, is in itself a risky process, and faults can emerge at any stage, starting with the creation of the plot and the debate about the different stories told about an issue. Therefore, science has implemented a broad set of procedures, like the use of established methods for empirical investigation and the peer-review process, to test knowledge claims. Nevertheless, in cases where science is used as expertise, specific narratives are developed and errors are more likely to be perpetuated.

Dr. Elliott, as Imme Petersen and coauthors note in their piece on mass-mediated expertise in Western societies, science is still considered among the most credible of institutions. As a result, it is often leaned upon as having the answers we seek. Despite its flaws, what are the reasons that people cite science to support their claims?

KE: I think that the most basic reason that science is so widely cited is that it has proven in the past to be a highly reliable source of information about the world. However, part of the reason that the study of non-knowledge has become so interesting is that it reflects our growing realization that many of the major policy issues we now face are so complex that science cannot provide the sorts of decisive and reliable answers that we have become accustomed to in other contexts. Thus, science sometimes now plays a more central role in decision-making than it should. For example, a number of scholars have pointed out that highly politicized topics like genetically modified (GM) foods tend to be framed primarily as scientific issues about the safety of GM crops, whereas they actually involve a complicated mixture of different issues. In the GM case, many people’s concerns arise partly from worries about the power of large agribusiness companies, the potential to lose agricultural diversity, and the nature of global patent policies, but the debates are often treated as if they are solely scientific ones.

Science-policy expert Roger Pielke argues that an “iron triangle” composed of three different groups tends to push us toward framing these complex issues as narrowly scientific ones. Politicians like to focus on science because it enables them to evade difficult value-laden decisions and hand them over to someone else. Scientists find it appealing when issues are framed as scientific because it provides them with funding and prestige. Finally, special-interest groups like to focus on science because they can benefit by associating themselves with the traditional prestige of scientific institutions.

Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Science, Technology & Human Values, Vol. 35, No. 6 (November 2010), pp. 783-811. Sage Publications, Inc.
Environmental Health Perspectives, Vol. 119, No. 6 (June 2011), p. A240. The National Institute of Environmental Health Sciences (NIEHS)
Sociometry, Vol. 29, No. 1 (March 1966), pp. 85-88. American Sociological Association
Science, Technology & Human Values, Vol. 35, No. 6 (November 2010), pp. 865-887. Sage Publications, Inc.