Mistakes Were Made (But Not by Me) – subtitled Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts – is a social psychology book centred on self-justification and cognitive dissonance. The book also implicitly advances a particular philosophy of science, one grounded in human psychology – and it’s this aspect that caught my attention.
In Mistakes Were Made, Carol Tavris and Elliot Aronson argue that science is “a form of arrogance control”: because of the influence of various psychological mechanisms, we need scientific caution and scientific methods to restrain our worst impulses and tendencies. They support this contention with many thought-provoking case studies centred on self-protective cognitive biases and their effects.
A related core claim is that Leon Festinger’s theory of cognitive dissonance “exploded the self-flattering idea that we humans, being Homo sapiens, process information logically” (p.21). Tavris and Aronson’s presentation of dissonance theory focusses on two things: 1) the “state of tension that occurs whenever a person holds two cognitions (ideas, attitudes, beliefs, opinions) that are psychologically inconsistent” (p.15) and the associated mental discomfort; and 2) the underpinning claim that people strive to “lead lives that are, at least in their own minds, consistent and meaningful” (p.16). This need for consonance is further claimed to shape how people evaluate evidence – especially disconfirming evidence that challenges their beliefs or actions – and to explain phenomena like confirmation bias.
They further suggest that “[i]n a sense, dissonance theory is a theory of blind spots – of how and why people unintentionally blind themselves so that they fail to notice vital events and information that might make them question their behavior or their convictions” (p.54).
The presentation and explanation of such phenomena in Mistakes Were Made is quite interesting because it yields some clear predictions. In particular, their theory – and the cases they present – suggests that we should expect confirmation bias to be strongest in situations where a person’s self-concept is threatened and, consequently, self-protective biases shape how evidence is interpreted. This is a useful step beyond the more sweeping claims about “blind spots” and biases that are often made.
More broadly, they critique the notion of “naive realism”: “the inescapable conviction that we perceive objects and events clearly, ‘as they are’” (p.54).
A negative aspect of Mistakes Were Made is its presentation of science and scientific methodology, which is rather asociological. Science is simplistically presented as the positive ideal contrasting with the cognitive foibles of human beings, and as having a standard method that straightforwardly produces true knowledge: scientific method and scientific reasoning exhibit caution (in contrast to much human thinking), weigh all evidence fairly (rather than selectively attending to what confirms preexisting views), actively consider the possibility that we are mistaken, and so on. There is little discussion of the fact that no method or form of reasoning can guarantee finding truth, nor does the book present cases in which formally trained scientists doing scientific research exhibit the very same flawed thinking and errors documented in its case studies (e.g. see Mercier & Sperber, 2017).
The most important aspect of the book is that it gets you thinking about the psychological factors that can shape the interpretation of evidence (e.g. self-justification). Knowledge practice theories (see earlier post) need to take these factors into account.
I also found their discussion of “entrapment” and escalation effects quite interesting – i.e. processes of “action, justification, further action … that increases our intensity and commitment” (p.45). They suggest that ambivalence about, say, an initial decision often morphs into dogmatic certainty one step at a time, in a largely unconscious fashion, due to mental forces like self-justification and self-concept protection. They present the idea of a “pyramid of choice” to capture this process of “entrapment” (also see this presentation of these ideas), which they argue can even explain self-destructive courses of action undertaken “to protect the wisdom of … initial decisions” (p.276). Their presentation of illustrative cases is both illuminating and slightly terrifying. If you decide to read this book, be aware that it will prompt you to examine your own thinking and behaviour.
This brings us back to their core notion of self-justification. Self-justifying steps, even small ones, can increase the psychological difficulty of changing our minds or admitting mistakes.
The final chapter, “Letting Go and Owning Up”, is both the most practical and, in some respects, the most disappointing. The framing claim is that “an appreciation of how dissonance works [and related self-protective biases] … gives us some ways to override our wiring. And protects us from those who can’t. Or won’t” (p.280). Well, maybe. These are huge claims. The authors seem to recognise this when they emphasise the role of systems, formal procedures, and organisational cultures. But they also assert that “most human beings and institutions are going to do everything in their power to reduce dissonance in ways that are favorable to them, that allow them to maintain business as usual” (pp.286-7) and discuss other impediments. Nonetheless, they provide some initial suggestions for bringing psychological theory into our daily lives, and the book can inform the way we approach situations and other people in everyday life. This makes it both a very interesting and potentially very useful read.
References
Mercier, H. & Sperber, D. (2017), The Enigma of Reason, Harvard University Press, Cambridge, Massachusetts.