A great discussion has developed at http://futuristpaul.com in response to a recent post by Paul Higgins titled ‘Futurists, What are they good for?’ In it, a number of practitioners, myself included, discuss many issues relevant to my doctoral research project. Here I’ve collected some key comments under different themes, along with some quotes from the original article under discussion.
What does “success” look like in foresight work?
The journalist, Catherine Armitage: “Those who do admit to working with futurists speak of the benefits in vague terms.”
Oliver Freeman: ‘But how to measure success? “We don’t really know, to be honest,” Freeman says. Futurists have yet to find a way to demonstrate a return on investment to their clients. “We know that what we deliver is an experiential learning activity to develop managers,” Freeman says. “We are quite clear that that is often very successful but what we don’t know is the extent to which it builds resilience within the organisation.”’
Maree Conway: “What counts as success is in the eye of the beholder.” And: “Success is in the eyes of the people we work with, and that will mean different things to different people, whether they work in organisations, non-profits, governments.”
Ambiguity and diversity (of language used, goals, etc); importance of context
Paul Higgins: “We are our own worst enemy when it comes to language. I think that the core problem with the message is that there is no consistent approach or process and I do not believe that should change – so it becomes trapped in a conundrum.”
Maree Conway: “In a field that has multiple philosophies and multiple ways of knowing, coming up with a universal set of success measures is unlikely to be achievable. As others have said, context matters (this should be one of the core mantras of our work)”.
Stephen McGrail: “Claiming ‘more effective decision-making’ as success – this seems to be what many practitioners promote – is both vague and poorly differentiated. Oliver Freeman gets closest to stating a specific aim when he talked about ‘the extent to which it builds resilience within the organisation’, but there is little underneath this in terms of indicators, measures, and so on.” And: “It’s no surprise that she [the journalist] looked for prediction examples, as no alternative notion of ‘success’ has been coherently defined or clearly substantiated by the field.”
Brett Martin: “On the measures thing because of the diversity of the way we individually think and how we practice maybe a set of scales or some form of framework is required where we can point to the field and indicate where we are operating from and where in that particular assignment we may be practicing from or what we are applying to the assignment.”
Simon Dehne: “So although the article categorises this profession in terms of thinking about the future, as a fellow cohort student so elegantly described it in a paper he posted on our blackboard blog, for him the question is more about the present.”
Easy measurements may be the wrong measures; measuring the wrong things
Paul Higgins: “Agree with Stephen about measurement but we must be careful about that as well because if success is limited to the easily measurable then we may end up looking under the lamp post because that is where the light is. I wrote a post on that on my Tumblr blog the other day – doctors use BMI as a measurement because it is easy to do but there is lots of evidence that more sophisticated measurement of lifestyle, exercise and diet works much better but because they are hard to do they get pushed to one side.”
Maree Conway: “I used to manage two departments that produced KPIs for strategies. In one case in particular, it was after I arrived at the university and reviewed the KPIs and the strategy to discover that the data being collected was disconnected from the strategy – entirely. So the Council kept ticking these reports off, and no one was the wiser. Seek to measure = compliance response, the easiest option.”
Maree Conway: “If we can adopt broader ‘success measures’ than just those that are data based, we might have a chance of demonstrating that thinking about the future systematically does make a difference to who we are, and how we operate in the world today.”
Difficulty defining and measuring outcomes
Paul Higgins: “As the work we do is primarily focused on assisting others to think differently about the future it is less able to be quantified as compared to a change management process, or a sales training program, or a software delivery project. I often think about a presentation that I saw Edward de Bono do a few years ago where he likened the process of innovation in ideas to telling a good joke. Before you tell a joke the punchline and the thread of the story are not obvious. After the joke has been told they are. Helping people in finding new possibilities is a bit like that.”
Paul Higgins: “We should always seek to measure results but we must also accept that quantitative measurement of some things is impossible.”
Maree Conway: “We have to ask people as individuals how they see their thinking as having changed – no ready made data source that can be turned into a KPI or an ROI measure there. And then there is the link between that changed thinking and impact on an organisation’s performance – again no ready made data set.”
Stephen McGrail: “Looking at the corporate context, there’s little clarity. The European academic who’s published more on this than anyone (to my knowledge), Rene Rohrbeck, recently commented that ‘I have struggled myself to decide which ‘success’ criteria or which value creation one should expect from scenario planning or corporate foresight in general’. Adding: ‘Is it: (1) enhancing organizational practices, such as decision-making, innovation planning, etc.; or (2) the final organizational outcome like performance, survival, superior profit?’.”
Establishing causality (causal attribution challenges)
Paul Higgins: “The problem with the sort of things like ‘building resilience’ is even if you can measure that (which is a problem in itself), what impact did a particular project or consultant have within that change? It is the same problem that not for profit organisations are wrestling with all over the world when faced with questions of impact investing – what was the specific contribution worth if the whole community improves and how do you separate out the contributing factors?”
Approaches in other/peripheral fields
Stephen McGrail: “Related areas like technology assessment (TA) seem to have greater clarity. TA practitioners are clear that prediction is not the aim; instead their aims include reducing the human and social costs of learning how to handle new technologies in society – beyond a trial-and-error approach – which is enabled by anticipating possible impacts of technological choices and feeding this back into decision-making and research and development activities. There is a clear normative agenda, rather than a neutral notion of success/failure.”
There appears to be interest in getting together and discussing this further. I would welcome that, and the above themes – amongst others – would be good topics to discuss. It also resonates with some futures literature I’ve been reading the past 24 hours. In one paper the researchers talked to users of scenario planning (i.e., in firms) who similarly reported a lack of assessment tools and difficulties in measuring outcomes. Some even suggested that it’s impossible to measure the influence of scenario planning on organisational performance! Clearly this is a complex challenge and an area needing more attention in research studies and from practitioners.