The scientific method, and the discoveries in science and technology that have resulted from its use, have revolutionized the world. Why does research seem to be less respected today?
By Mark D. Harris
Research brought the world scientific and technological advances that have changed the lives of men and women forever. During the period characterized by the philosophy of modernism, from roughly 1750 to 1950, conventional wisdom expected that science would solve all the problems of mankind, both material and moral/ethical. Scientific and social research, which would lead to technological supremacy over the physical world and enlightened policies in every society, would usher in a utopia. World War II, the Holocaust, and the atomic bomb shattered these hopes, demonstrating that science and technology, and the research behind them, can destroy as easily as they can save. Though we prate about following “science,” research has lost respect over the past 70 years.[1]
Research comes in two categories: academic and applied. Academic research is generally considered to be research on natural phenomena that may be paradigm shifting and of interest to a vast swathe of humanity. It is typically done in universities, takes years, consumes substantial resources, and is expensive. Watson and Crick’s discovery of the double-helix structure of DNA is a triumph of academic research. Applied research is generally considered to be research on local questions of interest to a small group, such as leaders in a community or business. It is unlikely to produce a paradigm-shifting understanding of the universe, because it does not ask questions whose answers might change our thinking in such seminal areas. It is relatively quick, cheap, and local. A study of the utilization of a community park is an example of applied research.
During the COVID-19 pandemic, public respect for research, especially medical research, seems to have reached a nadir. Popular respect for other research, whether academic or applied, also seems to be in the doldrums. What are the reasons for this decline in respect for research?
The Ubiquity and Democratization of Knowledge
In the past, libraries of dusty tomes carried knowledge available only to those who could read, who were authorized to enter these privileged places, and who had the free time to study. Now, Person A can learn about the 13th king of England or the cell structure of an oak leaf in seconds with an internet search. Smartphones make such knowledge cheap and available anytime and anyplace. Person A, perhaps a high school dropout, can feel that she knows as much about English history as Person B, an English historian, and as much about biology as Person C, a botanist. Perhaps Person A’s hubris followed directly from Person B’s or Person C’s arrogance toward her.
Furthermore, there is so much knowledge that it is difficult, if not impossible, for an individual, or even a group, to know which information matters to them. When faced with thousands of printed and virtual bits of information per day, how do we know what is useful, what is accurate, what is pleasant, and what is dangerous? The flood overwhelms our cognitive faculties, and we find ourselves filling our minds with entertainment, like videos of cats falling into toilets.
Academic Hubris
Professors usually don’t make a lot of money or get a lot of fame, yet they want both as much as anyone else. When research is successful, researchers trumpet their success and raise expectations to unreachable heights. No one likes a showoff, and highly educated people sometimes seem to look down on those with fewer years in school. Since such highly educated people are likely to be the academics doing the research, it is little wonder that others don’t believe them.
The Start-and-Stop Nature of Discovery
Research is difficult, and newer results often seem to contradict those from the recent past. Studies on breast cancer screening are a classic example. In the 1990s, the medical community was convinced that mammograms starting at age 40 were the settled standard of care, until studies in the 2000s suggested that, for many women, screening could reasonably begin at age 50 instead. People were confused and angry. Not knowing whom to believe, they believed no one.
Accuracy
One of the most important considerations in research is not whether it is academic or applied but whether it is accurate. Some studies are performed poorly: researchers fail to use control groups, fail to randomize, fail to blind, measure incorrectly, or commit any of a thousand other sins of omission or commission. Some studies are performed well but analyzed poorly. In the interest of getting published, authors are tempted to take large datasets and repeat their calculations until something reaches the p < 0.05 significance mark, regardless of their original study question.[2] Statistical mistakes are common, such as applying a t-test or linear regression to a categorical rather than a continuous outcome variable. Some studies are fraudulent from their inception, such as the study that purported to link the measles-mumps-rubella (MMR) vaccine with autism. Some major public health threats, like the coronavirus (COVID), demand answers before good studies can be completed, and scientific trash results. Research takes time, and political and media pressure can warp otherwise proficient and moral researchers into political stooges. Academicians who have never known fame, power, or money may be particularly vulnerable to political pressure to make the science say what politicians want it to say.
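Why does repeating calculations until something crosses p < 0.05 produce unreliable findings? A minimal sketch below, assuming a hypothetical sample of 200 subjects and twenty unrelated predictors tested with simple Pearson correlations (the numbers and the test are illustrative choices, not drawn from any particular study), shows that about one in twenty purely random variables will look “significant” by chance alone.

```python
# Sketch: testing many unrelated variables against one outcome until
# something crosses p < 0.05 (the multiple-comparisons problem).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 200      # hypothetical sample size
n_predictors = 20     # hypothetical number of variables tested against one outcome

outcome = rng.normal(size=n_subjects)            # outcome is pure noise
false_positives = 0
for _ in range(n_predictors):
    predictor = rng.normal(size=n_subjects)      # each predictor is also pure noise
    r, p = stats.pearsonr(predictor, outcome)    # separate test for each predictor
    if p < 0.05:                                 # the conventional significance mark
        false_positives += 1

print(f"{false_positives} of {n_predictors} unrelated predictors reached p < 0.05")
# On average about 1 in 20 (5%) will cross the threshold by chance alone,
# so hunting through enough variables nearly guarantees a "significant" result.
```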
Epidemiologist John Ioannidis makes a strong case for doubting much published research.[3] The penalties for poor research are high, both for individuals and for communities. Individuals suffer when they develop complications from treatments prescribed by uninformed doctors reading poor studies. Communities suffer because people stop believing the individuals and institutions who have real expertise. COVID is the most recent example of these pitfalls. It is reasonable to estimate that many people have died from an unwillingness to use personal protective equipment, social distancing, and vaccines. Similarly, a few who had sufficient natural immunity became sick, and some may have died, after getting a vaccine that, for them, was unnecessary.
The Gap between Lab Learning and Life Learning
Science writer Matt Ridley argues, “Most technological breakthroughs come from technologists tinkering, not from researchers chasing hypotheses.”[4] This claim is vastly overstated. I suspect that whoever developed the wheel did so when he (the “technologist”) noticed that logs and other round things rolled downhill while non-round things did not. This primeval technologist then attached something round to a cart in such a way that it could roll. Presto: the wheel, a transformative invention. If the alternative is that a man came out of a cave and pondered the concept of roundness (“researchers chasing hypotheses”), then the technologists seem likely to win this round of the debate. The same could be said for the mastery of fire.
When considered further, Ridley’s statement rests on mushy ground. First, how does he quantify “most technological breakthroughs”? Does he mean that 51% of all breakthroughs come from tinkerers rather than researchers? And how does he define a “technological breakthrough”? Without clear definitions, it is difficult to know what Ridley means.
The second concern is whether the distinction between the technologist and the researcher is valid. The Wright brothers did extensive foundational research on air density and movement. They also “tinkered” enough to build an airplane. Were they technologists or researchers? Louis Pasteur performed groundbreaking research on microorganisms and helped usher in the germ theory of disease. He was anything but a “technologist,” yet he revolutionized the field of infectious disease and indirectly birthed a thousand treatments for infections that had killed billions of people throughout history.
The distinction between academic and applied research is useful, but it can obscure as much as it illuminates. In earlier eras, the practitioner and the researcher were often the same person. The Mayo brothers developed surgical techniques that saved thousands of their patients. Knowledge was “marketed” through word of mouth long before it was marketed in scientific journals or trade publications.[5] Knowledge was also checked and double-checked over the years before it was trumpeted on CNN or even in JAMA. The veracity of the information was demonstrated, or not, in the character and outcomes of the people who discovered and used the new knowledge.
Conclusion
Learning how the universe works, and how people work, will help solve the vexing problems that our world faces today. Those involved in research must address all the issues noted above. Knowledge is not merely knowing facts but knowing interrelations, and interrelations are hard to find in an internet search. Arrogance is never attractive, and expertise in one area should breed not vanity but humility, for the world and all it contains is glorious enough to humble the wisest scholar. Scientific progress is slow and halting, and U-turns are inevitable, so scientists must shape the expectations of the media and the public. Research must be accurate, and the people doing the studies must be of the highest caliber. Finally, linking research to advances that directly impact the lives of people, as medical studies do, is paramount.[6]
[1] Rynes, S. L., Colbert, A. E., & O’Boyle, E. H. (2018). When the “Best Available Evidence” Doesn’t Win: How Doubts About Science and Scientists Threaten the Future of Evidence-Based Management. Journal of Management, 44(8), 2995–3010. https://doi.org/10.1177/0149206318796934
[2] There is a documented bias against publishing studies that do not have positive results, or at least statistically significant ones.
[3] Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
[4] Ridley, M. (2015). The Myth of Basic Science. Wall Street Journal. https://www.wsj.com/articles/the-myth-of-basic-science-1445613954
[5] Hamet, J., & Michel, S. (2018). Rigor, relevance, and the knowledge “market.” European Business Review, 30(2), 183–201. https://doi.org/10.1108/ebr-01-2017-0025
[6] Fraser, K., Deng, X., Bruno, F., & Rashid, T. A. (2018). Should academic research be relevant and useful to practitioners? The contrasting difference between three applied disciplines. Studies in Higher Education, 45(1), 1–16. https://doi.org/10.1080/03075079.2018.1539958