The science loop: How cognitive biases contribute to the intellectual entrenchment at the root of junk science

By Joanna Szurmak

In an article in the May/June 2019 issue of The Skeptical Inquirer, an author dissects a Flat Earther school board presentation, seemingly surprised that it was “intelligently designed,” as if the key issue with junk science such as the flat earth model were the low intelligence, or poor communication skills, of its adherents. What lies at the root of junk science, however, is neither stupidity nor lack of sophistication but intellectual entrenchment driven by a set of cognitive biases shared by scientists and laypersons alike.

John P.A. Ioannidis, a Stanford physician-scientist who has been studying scientific scholarship for years, defined bias as “the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced.” In the production and dissemination of scientific knowledge, the two most problematic sources of distortion are confirmation bias and motivated reasoning.

Confirmation bias occurs when one pays attention to information that supports prior beliefs while rejecting anything that may challenge or oppose them. Motivated reasoning is a strategy to deal with challenging or opposing data by fitting them in so that they support — or at least do not contradict — the existing worldview.

There is nothing simplistic or unintelligent about the process of shoring up an entrenched belief through the use of cognitive biases. In fact, Stony Brook political scientists Charles Taber and Milton Lodge noted that in all academic disciplines “(r)esearch findings confirming a hypothesis are accepted more or less at face value, but when confronted with contrary evidence, we become ‘motivated skeptics’ (…), and only when all the counter arguing fails do we rethink our beliefs.”

University of Virginia psychologist Brian Nosek, co-founder of the Center for Open Science and first author of the Transparency and Openness Promotion (TOP) Guidelines — intended to curb motivated reasoning through best practices for research design and dissemination — explained the process to journalist Philip Ball. Whether one is a scientist or not, “most of our reasoning is in fact rationalization”: fitting what one thinks one observes to what one wants to see. Whereas confirmation bias is a filtering mechanism, motivated reasoning both filters and distorts the influx of new information.

Taber and Lodge extended their observations of biased thinking to non-scientists as well, finding that, in fact, “those who feel the strongest about the issue and are the most sophisticated — strengthen their attitudes in ways not warranted by the evidence.” Thus, our Skeptical Inquirer journalist should not have been surprised by the apparent cleverness of the flat earth presentation. It probably took a fair amount of commitment to the cause, and flat-out creativity, to contrive convincing arguments.

How can scientific practice and science dissemination be made more independent of cognitive biases or social and intellectual entrenchment? One tentative answer is that sticking with best practices will allow science to self-correct through ongoing refutation and discussion. The scientific method, at least as framed by the philosopher Karl Popper, appears to limit bias by specifying that scientists first seek to falsify or disprove their hypotheses, and only then attempt to find support for — but never “prove” — them through different sets of experiments.

Scientists often develop hypotheses inductively, reasoning from examples to generalizations based on observations. Induction, however, cannot confirm the absolute truth of a statement; it may therefore be used not only to identify new ideas but also to lend a false sense of “proof” to shaky theories. Popper’s famous illustration of this problem is the black swan.

If all observations of swans are of white ones, the inductively derived hypothesis about swan pigmentation will be: “All swans are white.” But this is just a hypothesis: a best guess based on preliminary data. Until one observes a black swan, one cannot falsify — or refute — it by providing evidence to the contrary.
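Popper’s asymmetry is easy to see in code: no number of confirming observations can prove a universal claim, yet a single counterexample refutes it. A minimal Python sketch, with invented swan “observations” purely for illustration:

```python
def falsified(hypothesis, observations):
    """Return True if any observation contradicts the hypothesis."""
    return any(not hypothesis(obs) for obs in observations)

def all_swans_white(swan):
    return swan == "white"

# A thousand confirming observations cannot prove the universal claim...
print(falsified(all_swans_white, ["white"] * 1000))              # False: not yet refuted
# ...but a single black swan refutes it outright.
print(falsified(all_swans_white, ["white"] * 1000 + ["black"]))  # True: falsified
```

Note the asymmetry: the first call returning `False` means only “not refuted so far,” never “proven.”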

It is essential that when one sets out to test the “all swans are white” hypothesis, one does so with data not used to formulate it. This gives the hypothesis a fair chance of being refuted. Even then, science never “proves” anything; it merely provides evidence-based theories that, through deductively formed and repeatedly tested hypotheses, become more sophisticated and better at predicting the real world.

This inductive-deductive loop (see graphic) is the basis for much work in the sciences and social sciences: One observes and inductively develops insights from preliminary data. These insights give rise, deductively, to hypotheses that are subjected to attempted falsification using entirely new data. If the hypotheses cannot be falsified, there is a chance one has a valuable insight, though one that still needs a whole lot more testing.

So far, so good. Nosek, however, pointed out a common violation of this falsification process: “One basic fact that is always getting forgotten is that you can’t generate hypotheses and test them with the same data. (…) At present we mix up exploratory and confirmatory research.” When scientists do that, motivated reasoning can take over experimental design or data analysis. In such cases, according to Nosek, “we have already made the decision about what to do or to think, and our ‘explanation’ of our reasoning is really a justification for doing what we wanted to do — or to believe — anyway.”
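Nosek’s rule, never testing a hypothesis with the data that generated it, is the same logic behind the hold-out splits routinely used in data analysis. A minimal Python sketch, with invented “observations” and an arbitrary tolerance standing in for a real statistical test:

```python
import random

# Invented "field observations" for illustration only.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]

# Exploratory phase: form a hypothesis from one half of the data.
exploratory, confirmatory = data[:100], data[100:]
hypothesized_mean = sum(exploratory) / len(exploratory)

# Confirmatory phase: test the hypothesis ONLY on data that played
# no role in generating it.
observed_mean = sum(confirmatory) / len(confirmatory)
survives = abs(observed_mean - hypothesized_mean) < 0.3  # crude tolerance
print("hypothesis survives confirmatory test:", survives)
```

Reusing `exploratory` in the second step would let motivated reasoning masquerade as confirmation: the hypothesis would be “tested” against the very noise that suggested it.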

Nosek and others have studied cognitive biases that influence the design of experiments and the interpretation of data. Publication and funding biases researched by Ioannidis distort the availability and reporting of results, adding social and political dimensions to the distortions of science. The practice of science is, indeed, socially determined.

Thomas Kuhn described this social culture of scientific research as a framework of “normal” science punctuated by paradigm shifts, when new ways of thinking first disrupt, then displace, the orthodoxies. The microbiologist Ludwik Fleck had made similar observations about “closed systems of opinion” decades before Kuhn, noting that these systems were self-reinforcing socio-cognitive constructs that resisted outlier ideas with “tenacity.” Thus, the question of bias in science may not be a problem with the inductive-deductive method so much as with the human tendency to act … human.

The self-corrective mechanisms of science suggest that, despite individual cognitive biases and socially induced systemic distortions, researchers following the scientific method remain our best bet for a sustained methodology of discovery. Remember the heated debate over the cause of ulcers? It was a big deal in the mid-1980s, at least for Robin Warren and Barry J. Marshall, two Australian researchers who, after refuting the consensus hypothesis that stomach acid caused ulcers, were awarded the 2005 Nobel Prize for their work showing that the bacterium Helicobacter pylori was responsible for the condition.

The ulcer case is by no means an anomaly. Scientists are reacting to biased work all the time. In March 2019 ecologist Atte Komonen and his colleagues corrected the “strongly popularized unsubstantiated claims” about the imminent extinction of insects that, nonetheless, made waves earlier in 2019 in a peer-reviewed journal and in the media. The methodological flaws of the original study included a biased literature search: “By including the word (declin*), there is a bias towards literature that reports declines (…) If you search for declines, you find declines.”

Even Ioannidis’ research has drawn constructive criticism: Steven Goodman and Sander Greenland wrote, echoing Popper, that instead of proving that “most published claims are untrue,” Ioannidis showed “that no study or combination of studies can ever provide convincing evidence.”

Enforced consensus and unmitigated social, professional, or financial pressures are bound to distort the processes of questioning and falsification. They are, therefore, likely to entrench motivated reasoning and confirmation bias. And biased science is the seed of junk science, which germinates under conditions of intellectual or political polarization.

Joanna Szurmak is a Research Services Librarian at the University of Toronto Mississauga and a PhD candidate in Science & Technology Studies at York University.

sites.utm.utoronto.ca/szurmak/

Published at Thu, 20 Jun 2019 11:27:45 +0000