FROM VACCINATIONS TO CLIMATE CHANGE, we make decisions every day that involve us in scientific claims. Are genetically modified crops safe to eat? Do childhood vaccinations cause autism? Is climate change an emergency?
In recent years, many of these issues have become politically polarized, with people rejecting scientific evidence that conflicts with their political preferences. When Greta Thunberg, the young climate activist, testified in Congress recently, submitting as her testimony the IPCC 1.5° report [1], she was asked by one member why we should trust the science. She replied, incredulously, “because it’s science!” [2]
For several decades, there has been an extensive and organized campaign intended to generate distrust in science, funded by regulated industries and libertarian think tanks whose interests and ideologies are threatened by the findings of modern science. In response, scientists have tended to stress the success of science. After all, scientists have been right about many things, from the structure of the Universe (Earth revolves around the Sun, not vice versa) to the relativity of time and space (relativistic corrections are needed to make global positioning systems work).
Thunberg’s answer isn’t wrong, but for many people it’s not persuasive. After all, just because scientists more than 400 years ago were right about the structure of the solar system doesn’t prove that a different group of scientists is correct about a different issue today.
AN ALTERNATIVE ANSWER TO THE QUESTION “Why trust science?” is that scientists use “the scientific method.” If you’ve got a high-school science textbook lying around, you’ll probably find that answer in it. But this answer is wrong. What is typically asserted to be the scientific method – develop a hypothesis, then design an experiment to test it – is not what scientists actually do. Historians of science have shown that scientists use many different methods, and these methods have changed with time. Science is dynamic. New methods get invented, old ones are abandoned, and at any particular juncture scientists can be found doing many different things.
That’s a good thing, because the so-called scientific method doesn’t always work. False theories can yield true results, so even if an experiment works, it doesn’t prove that the theory it was designed to test is true. There also might be different theories that could yield that same experimental result. Conversely, if the experiment fails, it doesn’t prove the theory is wrong; it could be that the experiment was badly designed or there was a fault in one of the instruments.
If there is no identifiable scientific method, then what is the warrant for trust in science? How can we justify using scientific knowledge – as Greta Thunberg and many others insist that we must – in making difficult personal and public decisions?
The answer is not the methods by which scientists generate claims but the methods by which those claims are evaluated. The common element in modern science, regardless of the specific field or the particular methods being used, is the critical scrutiny of claims. It’s this process of tough, sustained scrutiny that works to ensure that faulty claims are rejected and that accepted claims are likely to be right.
A scientific claim is never accepted as true until it has gone through a lengthy process of examination by fellow scientists. This process begins informally, as scientists discuss their data and preliminary conclusions with their colleagues, their postdocs and their graduate students. Then the claim is shopped around at specialist conferences and workshops. This may result in the scientist collecting additional data or revising the preliminary interpretation. Sometimes it leads to more radical revision, like redesigning the data-collection program, or scrapping the study altogether if it begins to look like a lost cause. If things are looking solid, the scientist writes up the results. At this stage, there’s often another round of feedback, as the preliminary write-up is sent to colleagues for comment.
Until this point, scientific feedback is typically fairly friendly. But the next step is different: once the paper seems ready, it is submitted to a scientific journal, where things get a whole lot tougher. Editors deliberately send scientific papers to people who are not friends or colleagues of the authors, and the job of the reviewer is to find errors or other inadequacies in the paper. We call this process “peer review” because the reviewers are scientific peers – experts in the same field – but they act in the role of superiors who have both the right and the obligation to find fault. Reviewers can be pretty harsh, so scientists need to be thick-skinned and accept criticism without taking it personally. Editors sometimes weigh in too, and their contributions are often not all that nice, either.
Only after the reviewers and editor are satisfied that recognizable errors and inadequacies have been fixed is the paper accepted for publication and allowed to enter the body of science. Even then, the story is not over, because if serious errors are detected after publication, journals may issue errata, or even retract the paper.
Why do scientists put up with this difficult and sometimes nasty process? Many don’t. A lot of them drop out along the way and move into other professions. But those who persist can see how it improves the quality of their work. Philosopher of science Helen Longino [3] has called this process of critical scrutiny transformative interrogation: interrogation, because it’s tough, and transformative because over time our understanding of the natural world is transformed.
A KEY ASPECT OF SCIENTIFIC JUDGMENT is that it is not done individually; it is done collectively. It’s a cliché that two heads are better than one. In modern science, no claim gets accepted until it has been vetted by dozens, if not hundreds, of heads. In areas that have been contested, like climate science and vaccine safety, it’s thousands. This is why we are generally justified in not worrying too much if a single individual scientist, even a famous one, dissents from the consensus. There are many reasons why an individual might dissent: he might be disappointed that his own theory didn’t work out, bear a personal grudge, or have an ideological ax to grind. She might be stuck on a detail that just doesn’t change the big picture, or enjoy the attention she gets for promoting a contrarian view. Or he might be an industry shill.
The odds that the lone dissenter is right – and everyone else is wrong – are not zero, but so long as there has been adequate opportunity for the full vetting of his and everyone else’s claims, they are probably in most cases close to zero. This is why diversity in science is important. The more people looking at a claim from different angles, the more likely they are to identify errors and blind spots. It’s also why we should have a healthy skepticism towards brand-new claims. It may take years or sometimes decades for this process to unfold.
Excerpted from: Naomi Oreskes, “Science Isn’t Always Perfect – But We Should Still Trust It,” Time (24 October 2019).

For a more detailed, scholarly discussion, see Naomi Oreskes, Why Trust Science? (Princeton University Press, 2019; paperback edition to appear in March 2021).
1. UN Intergovernmental Panel on Climate Change Special Report, “Global Warming of 1.5°C,” October 2018.
2. “Listen to the science, Thunberg tells Congress,” YouTube, 18 September 2019.
3. Helen Longino, Clarence Irving Lewis Professor of Philosophy, Stanford University.
See also Naomi’s TED talk on the subject, “Why We Should Trust Scientists.”
Naomi Oreskes is the Henry Charles Lea Professor of the History of Science and Affiliated Professor of Earth and Planetary Sciences at Harvard University. A Guggenheim Fellow, she is the author or coauthor of seven books, including Merchants of Doubt (2010), Discerning Experts (2019), and Why Trust Science? (2019). Her articles and essays on climate change have appeared in the New York Times, Washington Post, Los Angeles Times, and The Times of London. She was an early Orcas Currents lecturer.
Oreskes doesn’t come to terms with the fact that doubt is the foundation of science, and that science (which, after all, is not mathematics) cannot prove anything; it can “only” hypothesize, experiment, and disprove.
I am much more moved by Jacob Bronowski’s defense of science:
https://www.youtube.com/watch?v=ltjI3BXKBgY
That’s the philosophy of science espoused principally by Karl Popper, to which some (but not all) historians and philosophers of science subscribe. I, for one, think it a valuable but incomplete perspective. For example, I think that during the 1970s, high-energy physicists proved beyond a reasonable doubt that quarks (or something like them) indeed exist inside protons and neutrons. This is not a certainty, but it’s close. See my Orcas Currents article “The Hunting of the Quark”:
https://orcascurrents.com/the-hunting-of-the-quark/
When many steps are diligently taken to prove a point, as Naomi Oreskes argues, we can trust the process of proving that point, but that doesn’t ensure the point itself is the right one for the purpose. No matter how well a point is proved, it may still miss the mark, as it often does.
Examples:
1. Agriculture:
We trust science’s proof that a certain toxin destroys certain pests on plants. But I do not trust science to prove that this is a healthy goal, as the toxin also harms the earth, water, air, and human health.
2. Medicine:
Science can indeed prove with rigor that certain chemicals can destroy cancer cells, alleviate symptoms created by the body, or soothe a skin rash. But if the idea of suppressing symptoms takes us away from understanding the actual cause of a disease and the possibility of fully curing that cause (when not related to injuries and the like), then we are on the wrong path. What if we can repair the organs that allow a cancer to grow?
3. Oreskes trusts that all the scientists involved in all the rigorous steps of discovery can be relied on to be committed only to science. She is ignoring the many bribes, threats, and resignations of scientists who have faced demands by for-profit producers to twist “scientific facts” for profit or to avoid litigation.
4. Her point that the lone scientist is unlikely to be correct flies in the face of history, even though today’s system includes more scientists and more rigorous examination. Between the possibility of being on the wrong path and the profit factor, it is still quite possible that a lone scientist is right and on the right trajectory, free of the profit motive (and lone precisely because no profit means no funding for large-scale research). Many lone scientists through history were later found to have been correct, from Aristarchus of Samos, who proposed a Sun-centered cosmos, to Dr. Ignaz Semmelweis of Vienna General Hospital, who discovered the importance of hand hygiene but lost his job there and was confined to a mental asylum. History is full of scientists and physicians who lost their licenses or were defamed, jailed, or institutionalized when their discoveries didn’t fit the narrative, yet were later found to be correct.
All in all, Oreskes has knowledge of and trust in the system. But she has not convinced me to share that trust, nor do I think the rigor of the system proves that it is generally on the right path. The long process and great number of people involved can be a hindrance as well as an advantage. I do not dismiss the scientific process, and I have high regard for the hard work of scientists, but science is by no means immune to being on the wrong trajectory, or to being financed to produce distorted, for-profit outcomes.
Regarding your third example, Oreskes is the coauthor of a famous book about climate denial, Merchants of Doubt, in which she specifically takes on those scientists working for or at for-profit companies or institutions and publishing or otherwise generating specious information that masquerades as climate science.
And on your fourth point, Naomi, the era of the lone scientist essentially came to an end shortly after World War II, when ever larger teams of scientists became the norm. While there have been a few successful lone scientists since then — for example, my MIT classmate Alan Guth, who in 1980 came up with the theory of inflation to explain the exponential expansion of the early Universe — they are an extreme rarity. Far more normal are the large teams of scientists like those of us who discovered quarks in the 1970s.