Having spent the best part of 15 years in roles where I’ve conducted and taught trend and scenario analysis, and read the work of many others in this area, I’ve concluded that much of this sort of analysis (including much of my own work) is of very little value. This may seem a strange conclusion to reach. This post outlines some of the reasons why, with a focus on how trend and futures analysis is often done, by whom, and with what effects.
First, there is some value which I’d like to acknowledge. For example, these activities encourage greater awareness of change and the collection of related data, which can be useful (e.g. from a strategy or decision-making point of view). To the extent that such an “empirical attitude”, for lack of a better term, is fostered, great. Trend and ‘weak signal’ monitoring services and systems can be useful in this respect, and some academics have offered methods and frameworks for making these more robust (e.g. here). These approaches may, in some cases, have related benefits, such as seeing new opportunities more rapidly than your competitors, or spotting risks earlier so as to enable a proactive response.
In the courses I taught we certainly tried to achieve and to foster all of the above.
Where these practices almost always fail, or at least significantly fall down, is when folk try to move from description to interpretation and then on to anticipation. For example, making “sense” of weak signals is extremely challenging – they can be strategic disinformation as often as useful strategic intelligence. More broadly, interpretation requires an understanding of why events occurred or why a particular trend has emerged. Such understandings can inform forward reasoning about whether the trend can reasonably be expected to continue or accelerate in the future, or whether the various ‘weak signals’ you’ve picked up are significant signals of particular future changes or events or can instead be ignored.
So the first key problem is that of meaningful interpretation. Most work tends to fall into the category of “armchair philosophising” rather than insightful analysis: practitioners and executives try to do the work of social scientists and tend to fail miserably. In some cases, more informal data collection enabled by social networks and observational methods can improve this; most often, though, this aspect is very poor. One solution is to explicitly seek out and apply relevant social scientific theory; however, this is difficult work that folk typically have little or no relevant training for.
More often than not people fall back on basic heuristics such as ‘what goes up must come down’ and ‘what goes around comes around’ (both of which suggest skepticism about the continuation of a trend and anticipate a counter-trend or a return to an earlier state), or the S-curve model of change, or ‘from little things big things grow’ (which takes an optimistic view of emerging novelties and suggests the opposite expectation to heuristics like ‘what goes up must come down’). Given the complexity that typically must be grappled with in applied social analysis, it is not surprising that simplifying heuristics are used.
Related challenges noted by Day and Schoemaker are that such analysis can be “compromised by information overload, organizational filters and cognitive biases”.
The second, closely related key problem is that of anticipation. For example, there is often a wide range of contrary yet legitimate views on the future of a particular issue or trend that the analyst needs to try to make sense of. Irreducible uncertainties also typically thwart such analyses and the search for clear answers or the certainties that decision-makers may be seeking – although most practitioners would argue that their methods and practices are about better coping with uncertainty, not removing it (something I wholeheartedly agree with). Most scenarios that I’ve read are fairly superficial, more or less entertaining “stories” told for a range of reasons (often driven by the storyteller’s own agenda more than by clear causal analysis) that rarely provide new strategic insights.
What to do? What are some ways forward?
The first key point is that these tasks are too often underestimated. People often think that useful, robust scenarios can be developed in a one-day scenario workshop, or that anyone can be a skillful, insightful trend analyst. Neither assumption is true. Consequently, one part of a way forward is to not make these mistakes and to recognise the complexities and difficulties that are inherent to these tasks.
Personally, one task I assigned myself was to become more knowledgeable in applied social research methods and relevant theories (this will be a decades-long personal research project). Trend analysts make complex assessments of the social, psychological and cultural processes and factors related to the phenomena they are studying – often with little or no theoretical basis for their assessments.
An even more fundamental action is to recognise the need to develop a more sophisticated understanding of the nature of both “the social” (i.e., the social worlds that we study and observe, such as organisations, markets, communities and cities) and social scientific explanation. As philosopher of the social sciences Daniel Little has argued, too many areas of social scientific research are “motivated by bad analogies with the natural sciences”, such as when false assumptions are made that “the goal of social science research should be the discovery of generalizations” across types of phenomena (like the physical laws developed by natural scientists), or when “facile assumptions [are made] about ‘social structures’ in analogy with ‘physical structures’” (the quotations are from this very interesting book chapter).
I’ve lost count of how many papers and analyses I’ve read where practitioners uncritically apply social scientific theories (e.g. as part of a scenario study), mistakenly try to identify broad generalisations, or extrapolate these theories beyond the time and place in which they were constructed (technically termed the ‘external validity’ problem). The point is that unless we adequately engage with these issues, we have little hope of deeply understanding the trends and changes we are studying (e.g. socio-cultural trends), meaningfully interpreting them, and anticipating future developments. More often, our “analysis” will be strategic disinformation rather than a good basis for decision-making.