A few weeks ago I published an article at The Conversation, an independent news and analysis website, discussing how forecasts are shaped by the interpretations and biases of whoever makes them. It drew on psychological studies revealing the many cognitive biases that can affect how we explore the future. Last night I expanded this analysis for an upcoming presentation.
I have found that the following six issues need to be grappled with in any futures or foresight exercise – they are key pitfalls that often impair such work. I frequently grapple with them myself, and they come up consistently in books and articles on foresight and futures research.
SIX KEY ISSUES:
1. Denial, and related self-delusions
An important issue is believing what we want to believe, rather than – as far as is possible given inevitable subjectivity – conducting an impartial evaluation of the data. Linked with this, people often resist examining particular scenarios and/or issues.
I was prompted to include this issue after speaking with a colleague about the recent announcements about manufacturing at Ford Australia. Ford has announced that it will close its Australian manufacturing plants in 2016, with the loss of hundreds of jobs. My colleague had discussed the news with senior manufacturing staff at Ford to gauge their reactions. In essence, the conversation went like this: “surely you saw this coming” (with good reason – the announcement was widely expected), to which they replied, “no, not really, we were hoping it wouldn’t happen”. Moreover, they hadn’t given much serious thought to what they would do if this day came. In other words, rather than preparing for a plausible, perhaps probable, future they “shut their eyes” and hoped for the best.
This reminded me of work I did years ago with school teachers on the growing, and possible future, roles of technology in learning and the classroom. Many teachers strongly resisted engaging with these issues and possibilities. The result was a lack of preparedness, and a failure to explore the opportunities such scenarios might offer them, as well as the threats.
It was also one of my motivations for writing an article about “Climate Action Under An Abbott Government”. Many people I speak to about Australian politics and climate policy have been in denial about the likelihood of a change of Federal Government [additional comment on 14/7/13: although time will tell if the “re-birthing” of Kevin Rudd changes this], as well as the consequent possibility of the carbon price and related policies and programs being repealed by a conservative government.
2. Over-extrapolation, and related linear thinking and modelling
Extrapolation is a very basic method of prediction – usually much too basic. One of my favourite examples is the concern of many city planners about the growth of horse manure at the turn of the twentieth century. One prediction made in 1873 stated that “given the constant growth of transport with horse-drawn carriage the territory of England will be covered by 1 metre of manure by 1961”. (See this paper, which includes this quotation in a discussion of the limits of “linear” thought in its analysis of possible mobility futures.) Another prediction, made in 1894, warned that every street in London would be buried under nine feet of horse manure by the 1940s.
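To see why straight-line projection misleads, here is a minimal sketch in Python (the numbers are invented for illustration, not historical data): a linear fit to an early trend produces a confident long-range forecast that the real, non-linear system never delivers.

```python
import numpy as np

# Invented illustrative data: a "manure index" while horse transport is growing
years = np.array([1880, 1885, 1890, 1894])
manure_index = np.array([10, 14, 18, 21])  # roughly linear growth so far

# Fit a straight line to the early trend and extrapolate decades ahead
slope, intercept = np.polyfit(years, manure_index, 1)
print(f"Linear forecast for 1940: {slope * 1940 + intercept:.0f}")
# => roughly triple the 1894 level, projected with false confidence.
# The real system was non-linear: motor cars replaced horses and the
# trend collapsed. No straight line through the past could see that.
```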
Similarly, in The Ingenuity Gap the Canadian scholar Thomas Homer-Dixon points out that in complex systems we cannot count on things developing in tidy, straight lines – yet experts in many fields continue to base their predictions on linear projections of past developments.
He gives the example of ozone depletion and CFCs (chlorofluorocarbons). Scientific models largely assumed a linear relationship between CFC emissions and ozone depletion. Consequently, we reacted too slowly: depletion proceeded far more rapidly than expected, causing a major surprise. The computer models analysing the data had been programmed to assume linear depletion, and discarded anomalous results!
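A minimal sketch of this failure mode (illustrative Python with made-up numbers – not the actual atmospheric models or their data): when a quality-control filter discards anything far from the linearly expected value, a genuinely non-linear collapse never even reaches the analysts.

```python
import numpy as np

# Made-up "ozone" readings: a gentle linear decline at first, then a sharp
# non-linear drop of the kind a linear model does not anticipate
readings = np.array([300, 295, 290, 285, 220, 180, 150])

# Expected values under a linear-depletion assumption (-5 units per step)
expected = 300 - 5 * np.arange(len(readings))

# Quality-control filter: treat anything more than 20 units from the
# linear expectation as an anomalous (bad) measurement and discard it
TOLERANCE = 20
kept = readings[np.abs(readings - expected) <= TOLERANCE]

print(kept)  # [300 295 290 285] -- the collapse has been filtered away
```

The filter enforces the paradigm: only data consistent with the linear assumption survives, so the model can never discover that its assumption is wrong.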
3. Contextual insensitivity and/or naivety
It is essential to pay very close attention to context when evaluating data. This is a point Nate Silver makes repeatedly in his recent book The Signal and the Noise. A key example he gives is the many predictive failures – including the risks hidden inside widely trusted models – that contributed to the global financial crisis.
This issue has also surfaced in my own work. For example, when I was examining the promised green consumption revolution in late 2007 (for a management consultancy), few analysts – if any – were paying attention to the crises then “brewing” in the USA, or to how these might affect the trends they were making confident predictions about. The complexity of the big picture is one challenge here; time lags in data are another. It was later shown that in late 2007 the United States was already in recession.
The “mental models” we develop from past experience are subject to the same problem – technically, the uncertainty that arises when we move “out of sample”. The key assumptions underpinning our mental models may not hold in the future, and need to be surfaced and questioned.
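As a toy illustration of the “out of sample” problem (again invented Python, not an example from Silver’s book): a simple model can fit past experience almost perfectly and still fail badly once conditions move outside the range it was built on.

```python
import numpy as np

# A toy non-linear "world": outcomes accelerate, but suppose we have only
# ever observed the early, gently sloping part of the curve
def world(x):
    return 0.5 * x ** 2

x_seen = np.linspace(0, 5, 20)  # the regime our experience covers
linear_model = np.poly1d(np.polyfit(x_seen, world(x_seen), 1))

# In sample, the linear "mental model" looks excellent...
print(abs(world(4.0) - linear_model(4.0)))    # error well under 1

# ...out of sample, the same assumptions quietly break down
print(abs(world(12.0) - linear_model(12.0)))  # error of roughly 44
```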
4. Cognitive limitations (especially regarding complexity)
A further issue is our limited cognitive capacity for processing complexity. This matters especially for dynamic systems (such as natural and economic systems), whose behaviour is very difficult to forecast.
In his book Reframe, Eric Knight describes a similar issue he terms the “magnifying glass problem”. By this he means a “tendency to zoom in and fix on one corner of the universe [e.g. what’s most dramatic, easiest to see] and miss those elements… lying just outside the lens” (p.7). For example, we may fixate on one way of looking at an issue, or one way of defining a problem, and miss the bigger picture.
I thought of Knight’s analysis when reading Nate Silver’s interview with Paul Ehrlich, author of the controversial 1968 book The Population Bomb. Ehrlich was prompted to write the book after witnessing rapid population growth in developing countries such as India (the dramatic trend, and the most visible one). But Ehrlich notes that he didn’t spot the more subtle cultural shifts that led to much lower fertility rates in developed countries, nor the key scientific advances that were making it possible to produce far larger volumes of food. In the end the book proved to be an alarmist analysis: its direst predictions did not eventuate.
5. Paradigm blindness, often leading to surprise
The term ‘paradigm blindness’ comes from a paper on traps in futures thinking by Dr Mika Mannermaa, and references earlier work on the importance of “paradigm busting” (a paper in the journal Long Range Planning). Mannermaa writes that this “phenomenon can limit our thinking and prevent us from seeing essential factors outside of our paradigm” and is related to “the pressure to think in a similar way with our fellowmen and women” (e.g. in our professional field or our communities).
Similarly, the paper on “paradigm busting” discussed the “difficulty for forecasters to reject orthodoxy”, adding: “At any time, there is orthodoxy, a dominant logic, which controls our perceptions of reality. It is promoted by different forecasters who have studied the same body of information with much the same set of themes and values in mind.” The authors discussed the Limits to Growth study as an important example of this phenomenon, stating that “at that time, ideas of an ever increasing population and the exhaustion of resources were popular to the point of being unchallengeable”.
Another possible example is the wave of ‘peak oil’ fears that intensified over the past decade. Prominent environmentalists such as George Monbiot and Bill McKibben now argue that, over the past few years, the fossil-fuel industry has proved ‘peak oil’ theorists wrong – at least for the time being. Having looked at the issue through a particular paradigm, those theorists were surprised.
Linked with the above aspects, psychologist Philip Tetlock has identified what he calls ideologically-biased reasoning: a “blurry fusion between facts and values”. In his analysis so-called “hedgehogs” often fall into this trap, whereas “foxes” are able to remain more detached and skeptical.
6. Subjectivity and related biases
There are two key biases that I see time and time again in futures thinking:
6a) Zeitgeist bias (a term coined by Adam Gordon in his book Future Savvy): this refers to how forecasts and predictions often say more about the time in which they were created than about the future they purport to describe or explore. Similarly, Harvard’s Daniel Gilbert calls one set of biases “presentism”, because imagined futures often look “so much like the actual present”. His research shows that how we feel now affects how we view the past and imagine the future, often leading to poor forecasts.
I think management practices like scenario planning are an attempt to address such biases.
6b) Narrative bias (a term coined by STS scholar Robin Williams [see this paper]): this refers to the tendency for assessments of potential or expected futures to revolve around an established ‘repertoire’ of stories, which prefigure the analysis. Williams concentrates on the pitfalls of assessing high-technology futures, pointing to the influence of entrenched stories about technology and society. However, I think the argument applies much more widely, linked with paradigm blindness (#5).
QUESTIONS FOR REFLECTION
For each of the above issues we can ask important questions. For example, we can pose critical and reflective questions to ourselves when creating a set of scenarios, or ask related questions when using or interrogating scenarios and forecasts created by others. The table below gives examples.
| Issue | Questions – self and others |
| --- | --- |
| Denial, and related self-delusions | Am I (are they) avoiding a scenario because it is unwelcome rather than implausible? What would we do if it eventuated? |
| Over-extrapolation, and related linear thinking and modelling | Does the forecast simply project past trends in a straight line? What could change the underlying drivers? |
| Contextual insensitivity and/or naivety | Has the data been evaluated in context? Are there time lags, or “out of sample” conditions, that undermine the key assumptions? |
| Cognitive limitations (especially regarding complexity) | Are we fixating on the most dramatic or visible part of the issue and missing elements just outside the lens? |
| Paradigm blindness | What orthodoxy or dominant logic frames this analysis? What would someone outside our field or community see that we cannot? |
| Subjectivity (e.g. zeitgeist bias, and narrative bias) | How much does this forecast reflect the moment in which it was created? Which established stories prefigure the analysis? |
I would welcome feedback on the above analysis. Do these futures thinking issues and reflective questions resonate with you and your futures work? Have I missed any key issues in futures thinking? What strategies do you use to deal with these issues?