Key readings on biases affecting consideration of the future

I’ve started developing a list of key readings that explore how and why engagement with the future can be biased (or problematic in other, related ways). The list is a work in progress and will be added to progressively.

I’d be keen to hear from you if you have suggestions for additional readings exploring the following themes, or on other themes you think are relevant.

Underestimating risk (e.g. overly sanguine projections)

Fligstein, N. et al. (2017), ‘Seeing Like the Fed: Culture, Cognition, and Framing in the Failure to Anticipate the Financial Crisis of 2008’, American Sociological Review, Vol. 82, No. 5, pp. 879-909: This paper develops a set of arguments about why the US Federal Open Market Committee was “so sanguine in its economic projections” before the financial crisis. Centrally, the paper draws on cognitive sociology perspectives and uses frame analysis to explain biases in group deliberation. It is relevant to the consideration of group dynamics and group decision-making processes, as well as disciplinary biases.

Overstated predictions (e.g. exaggerating favourable/expected outcomes)

Skaburskis, A. & Teitz, M.B. (2003). ‘Forecasts and outcomes’, Planning Theory & Practice, 4:4, pp. 429-442: Paper notes that “many forecasts prepared for planning purposes project changes that turn out to have been exaggerations of the changes that actually take place” and explores why. Sections of paper include exploration of: the role of interests; the “seduction of outliers”; risk and uncertainty; incomplete models; ignored cycles and homeostasis; and “lies, deceit and wishful thinking”. Also notes that consultants can also be “rewarded directly and indirectly for making favourable forecasts” and the issue of “who notices predictions of the status quo?” (which can create incentives for producing overstated predictions to attract attention). Notes that “without overstatements in some forecasts, there may be little interest in planning”.

“Cognitivist” challenges (e.g. cognitive biases, judgmental errors)

Tetlock, P.E. (1999), ‘Theory-Driven Reasoning about Plausible Pasts and Probable Futures in World Politics: Are We Prisoners of Our Preconceptions?’, American Journal of Political Science, Vol. 43, No. 2, pp. 335-366: Argues that experts are largely prisoners of their preconceptions. The paper notes that “cognitive theories predict that even experts cope with the complexities and ambiguities of world politics by resorting to theory-driven heuristics”, and argues that the reported studies show “experts neutralize dissonant data and preserve confidence in their prior assessments by resorting to a complex battery of belief-system defenses that, epistemologically defensible or not, make learning from history a slow process and defections from theoretical camps a rarity”.

Chang, W. et al. (2016), ‘Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments’, Judgment and Decision Making, Vol. 11, No. 5, pp. 509-526: This paper notes cognitive sources of forecasting error (e.g. reliance on effort-saving heuristics) alongside other sources of error such as overconfidence. It reviews past research on “debiasing”, presenting four categories of debiasing approaches, and reports the results of one such intervention: probabilistic-reasoning training.

Markman, K. & Tetlock, P.E. (2000), ‘‘I couldn’t have known’: Accountability, foreseeability and the counterfactual denials of responsibility’, British Journal of Social Psychology, Vol. 39 Issue 3, pp. 313-325: Article explores ‘counterfactual excuse-making’, and related denials of responsibility, which have positive psychological benefits of maintaining self-esteem and maintaining a desired self-identity but are also a major barrier to learning and accountability.

Scientists’ biases and the influence of scientific norms/conventions

Brysse, K. et al. (2013), ‘Climate change prediction: Erring on the side of least drama?’, Global Environmental Change, Vol. 23, No. 1, pp. 327-337: The paper argues that climate scientists “are biased not toward alarmism but rather the reverse: toward cautious estimates”, that is, towards “erring on the side of least drama”. The authors attribute this to “adherence to the scientific norms of restraint, objectivity, skepticism, rationality, dispassion, and moderation”.

Hansen, J.E. (2007), ‘Scientific reticence and sea level rise’, Environmental Research Letters, Vol. 2, No. 2: Hansen argues that a force he terms “scientific reticence” is “inhibiting the communication of a threat of a potentially large sea level rise”.

Ball, P. (2007), ‘When it’s right to be reticent’, Nature, 29 March.

Attention given to best-case and worst-case scenarios

Cerulo, K. (2006), Never Saw It Coming: Cultural Challenges to Envisioning the Worst (University of Chicago Press): Cerulo argues that sociocultural forces shape the level of attention given to best-case and worst-case scenarios, and critiques individual-oriented approaches that consider only psychological and emotional factors. The book focuses most strongly on the United States and on the forces that may produce a bias towards focusing mainly on the best-case scenario, which Cerulo terms “best-case vision” (she terms the opposite bias “worst-case vision”).

Sunstein, C. (2007), Worst-Case Scenarios (Harvard University Press): This book considers the question “how do human beings and their governments approach worst-case scenarios?”. Sunstein examines cases where worst-case scenarios are largely neglected or given excessive weight (comparing responses to global warming and terrorism), along with the risks entailed by responses to worst-case scenarios (which, he argues, “can have worst-case scenarios of their own”). The book critically discusses policy principles such as the precautionary principle, typical constructions of which Sunstein argues are “incoherent”.

Social constraints on future-oriented inquiry

van Lente, H. (2012), ‘Navigating foresight in a sea of expectations: lessons from the sociology of expectations’, Technology Analysis & Strategic Management, Vol. 24, No. 8, pp. 769-782: Argues that all “foresight” exercises have major social vulnerabilities, including the embeddedness of participants in prevailing discourses, which can lead such exercises to mainly reproduce ideas about the future that are already circulating in social groups, along with the arguments and images currently seen as credible.

Political bias and political forecasts (e.g. the influence of partisan bias and ideology on forecasts)

Wachs, M. (1990), ‘Ethics and advocacy in forecasting for public policy’, Business and Professional Ethics Journal, Vol. 9, No. 1-2, pp. 141-157: Notes that “[s]ome of the most egregious violations of the public trust to have occurred over the last several decades involve ostensibly objective forecasts which in retrospect can be seen to have been blatant attempts to manipulate public policy in order to promote certain interests at the expense of others”. The author explores ethical and reliability issues, such as cases where “the real political purpose in making the forecast” is to justify a present action “rather than to honestly evaluate its potential social utility”; in such contexts, “the accuracy of the forecast is irrelevant to its political utility”. Wachs also notes that forecasts are technically complex (e.g. built on complex mathematical models) and are typically presented in weighty reports written in technical jargon: “Politicians who commission the studies can rarely themselves understand them, and they tend to quote the results in summary form…”.

Boylan, R.T. (2008), ‘Political distortions in state forecasts’, Public Choice, Issue 3-4, pp. 411-427: An example analysis of the influence of political incentives; in this case, Boylan makes the case that budgets prepared before elections are based on overly optimistic forecasts.

Tetlock, P.E. (2009), ‘Tarot reading on K Street’, The National Interest, No. 103, pp. 57-67: An insightful review of recent political forecasting books (e.g. by George Friedman and Ian Bremmer). Relevant aspects include the critique of the “superpundit” model of forecasting, which is typically grounded in a core ideological conviction, and the discussion of specific political theories and their associated strengths and limitations. Much of Tetlock’s research has revealed large gaps between the confidence of political pundits and their actual abilities (e.g. their forecasting accuracy).

Frendreis, J. & Tatalovich, R. (2000), ‘Accuracy and Bias in Macroeconomic Forecasting by the Administration, the CBO, and the Federal Reserve Board’, Polity, Vol. 32 Number 4: The paper argues there is evidence of “partisan bias at work… indicating that decision-makers internalize their policy fears by exaggerating the macroeconomic problem of concern to their core political constituency” (a kind of political bias). “Republican administrations over-forecast inflation and Democratic administrations over-forecast unemployment”.

General issues in future-oriented inquiry (e.g. “narrative bias”)

Williams, R. (2006), ‘Compressed Foresight and Narrative Bias: Pitfalls in Assessing High Technology Futures’, Science as Culture, Vol. 15, No. 4, pp. 327-348.

Bradfield, R.M. (2008), ‘Cognitive Barriers in the Scenario Development Process’, Advances in Developing Human Resources, Vol. 10, No. 2, pp. 198-215.
