Whilst many people would argue that there are no unsolvable tensions between scholarship and activism, I've sometimes found otherwise. It's something I've been thinking about regarding future career options and directions.
I was also stimulated to revisit this line of thought by a fascinating talk given by the social psychologist Jonathan Haidt (link). Haidt suggests that human foibles and cognitive biases (e.g. due to motivated reasoning, perceptual blindspots induced by tribal moral thinking, etc.) mean that activist scholars often do bad science or poor scholarship. Haidt goes so far as to argue that universities need to choose between incompatible missions: seeking truth (i.e. making understanding the world their core mission) or seeking to change the world.
Regarding my own focus on sustainability-oriented action, it's often claimed that creating change towards sustainable futures is more urgent than it's ever been. Some activists and scientists further claim that the current period is "make or break" for humanity. Such claims can promote activism. But, as a scholar, I sometimes have doubts about the evidential basis of some of these claims. Relatedly, I've seen flawed concerns about risk and environmental hazards emerge and dissipate over the past 15 or so years. This promotes skepticism, which is less conducive to activism.
One of my core foci is how, and to what effect, knowledge claims about the future are produced, assessed and mobilised with respect to sustainability transitions. Such claims are frequently made by activists and advocates, but as a scholar I'm often inclined to view them skeptically. When looking forwards (especially over the medium to long term), precise and reliable knowledge – the traditional goal of science – is typically not available, resulting in the problem of irreducible uncertainty.
Related to this I’ve written critical blogposts about modelling (link), the use of worst case scenarios (link), preoccupations with collapse and crisis (link), trend analysis (link), and futures thinking (link, link).
Additionally, some of my earlier studies were in fields which have interrogated and, to some extent, demystified science and probed related epistemological issues. For example, I understand scientific knowledge to widely have a number of qualities that influence how I interpret scientific evidence. These include that scientific (or scholarly) knowledge is typically:
- Partial and corrigible: many sciences are inherently corrigible (especially the social sciences), meaning that the state of knowledge is partial, flawed and constantly changing – and, crucially, because knowledge remains partial it continues to be revised and corrected over time. Often there are "shades of grey" rather than the clear-cut answers that activists often prefer;
- Limited in its scope: that is, the questions that can be definitively answered by science are limited (for instance, see the discussion of Alvin Weinberg's distinction between science and "trans-science" in Daniel Sarewitz's thought-provoking essay "Saving Science");
- Contested: related to the partial state of knowledge in many fields, important claims and findings are contested. For example, various claims are made about climate sensitivity (i.e. the net climatic effect of a doubling of atmospheric greenhouse gas concentrations), and little further resolution of such questions has been achieved over the past few decades (e.g. see link, link);
- Social: scientific knowledge is often consequentially influenced by social factors and social conditions in ways that scholars often have limited or no awareness of. For example, the history of ecology provides many interesting examples of how scientists’ research agendas and conclusions were biased by ideological commitments and related cultural factors (e.g. the deep commitment that many ecologists once had to the idea/metaphor of the “balance of nature”); and
- Often political: scientific knowledge can often be viewed – in the broad sense – as political, for example with respect to its origins, effects and/or implications. Related to this, scientists often aren't as neutral as they claim and can be motivated by political goals.
Some observers and users of science make large assumptions about the accumulation and progression of scientific knowledge – that is, they assume that earlier sciences may have had these qualities or flaws but that, progressively, we are moving beyond such limitations and should have more confidence in today's scientific knowledge. In some cases this is accurate, but in others these assumptions are shaky or incorrect. Many areas of scientific inquiry are quite novel and, moreover, new problems with scientific knowledge are frequently discovered. Perhaps the best contemporary example is the replication crisis (e.g. in psychological research [link, link]) and related claims that most studies published in peer-reviewed journals are likely to be unreliable or false (link).
Daniel Sarewitz (Professor of Science and Society at Arizona State University) summarises the “ever-expanding litany of dispiriting revelations and reversals” that have occurred over the past decade as follows (apologies for the super-long quote):
“The science world has been buffeted for nearly a decade by growing revelations that major bodies of scientific knowledge, published in peer-reviewed papers, may simply be wrong. Among recent instances: a cancer cell line used as the basis for over a thousand published breast cancer research studies was revealed to be actually a skin cancer cell line; a biotechnology company was able to replicate only six out of fifty-three “landmark” published studies it sought to validate; a test of more than one hundred potential drugs for treating amyotrophic lateral sclerosis in mice was unable to reproduce any of the positive findings that had been reported from previous studies; a compilation of nearly one hundred fifty clinical trials for therapies to block human inflammatory response showed that even though the therapies had supposedly been validated using mouse model experiments, every one of the trials failed in humans; a statistical assessment of the use of functional magnetic resonance imaging (fMRI) to map human brain function indicated that up to 70 percent of the positive findings reported in approximately 40,000 published fMRI studies could be false; and an article assessing the overall quality of basic and preclinical biomedical research estimated that between 75 and 90 percent of all studies are not reproducible. Meanwhile, a painstaking effort to assess the quality of one hundred peer-reviewed psychology experiments was able to replicate only 39 percent of the original papers’ results; annual mammograms, once the frontline of the war on breast cancer, have been shown to confer little benefit for women in their forties; and, of course, we’ve all been relieved to learn after all these years that saturated fat actually isn’t that bad for us. 
The number of retracted scientific publications rose tenfold during the first decade of this century, and although that number still remains in the mere hundreds, the growing number of studies such as those mentioned above suggests that poor quality, unreliable, useless, or invalid science may in fact be the norm in some fields, and the number of scientifically suspect or worthless publications may well be counted in the hundreds of thousands annually. While most of the evidence of poor scientific quality is coming from fields related to health, biomedicine, and psychology, the problems are likely to be as bad or worse in many other research areas. For example, a survey of statistical practices in economics research concluded that “the credibility of the economics literature is likely to be modest or even low”.” (Sarewitz 2016)
Sarewitz also cites the editor-in-chief of The Lancet (an influential medical journal), Richard Horton, who similarly argued that:
“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”
Some of the qualities of scientific knowledge listed above can be troubling for activists and advocates. For example, the corrigibility of much scientific knowledge can problematise claims made by activists and advocates (e.g. that the science is definitively "in" on a particular issue or phenomenon). Such claims can go beyond what the science "says" when making political arguments.
I've often observed unquestioning confidence in scientific studies within environmental movements – in effect, countering claims of doubt with claims of certainty. This can be problematic, as Richard Denniss has argued:
“The strategic error that continues to haunt the environment movement is the decision to counter the sceptics’ message of “doubt” with a message of “certainty”. Such an approach was neither intellectually honest nor politically effective. It ignored the inconvenient truth that science is never “certain” and it placed the onus on the environment movement to have all of the answers, to all of the questions that the climate sceptics could think up. If you have ever seen a scientist try and explain the chronological dispersion of carbon isotopes in a 10-second news grab you will know what I am talking about” (Denniss, 2012).
New theoretical perspectives may add to this general picture and our understanding of related tensions/dilemmas.
For example, a relatively new theoretical perspective on reasoning – the argumentative theory of reasoning (link) – also addresses some of these issues. The theory's proponents present evidence from psychological experiments showing that study participants often defend and advance arguments at the expense of epistemic soundness (i.e. participants frequently seek to defend or justify views rather than impartially assessing all relevant evidence and arriving at better or correct beliefs).
This can be read as bad news for activist-scholars. One interpretation is that features such as motivated reasoning and confirmation bias will serve them well in advancing their arguments (e.g. for advocacy) but are also likely to bias their research. As the authors put it, skilled arguers "are not after the truth but [are] after arguments supporting their views".
Without going into specific examples here, I'll just state that this is something I've observed: scholars who appear more interested in validating their views, and/or convincing others of them, than in exploring multiple valid viewpoints or seeking the truth. I find this worrying.
Where does all this leave us with respect to tensions between scholarship and activism?
One possibility is that the best approach for scholars is to study activism and social movements (and/or advocacy) in order to provide independent critical analysis that can inform future activist and advocacy efforts. Given the potential issues with activism-oriented scholarship touched upon in this blogpost, the dangers of biases and flawed studies may be so great that change-oriented analysis is best left to others. However, many scholars would reject this view.
A further possibility is that we need to get better at demarcating which questions and phenomena science is capable of addressing (where scientific and scholarly research can play important roles) from those that are, at their core, political dilemmas requiring political solutions (where activists and advocates play important roles, but science and scholarship are less decisive and consequently tend to play a more limited role). Related to this, scientists and scholars may be overreaching in ways which generate problems for both science and society. This position has been argued by some Science and Technology Studies (STS) scholars.
A final possibility is that some tensions can be overcome: through particular research practices it may be possible to avoid biases and flaws in how research is designed and how evidence is interpreted. As this post indicates, I have doubts. But it is a possibility worth investigating further, perhaps by studying example activist-scholars who seem to have overcome such tensions.
These are questions, issues and possibilities that I plan to continue to ponder over the coming months. If you have thoughts to share I’d love to hear them.