The use and, perhaps more frequently, non-use of the outputs generated by prospective knowledge practices is a major issue discussed by many people who conduct these activities. As one person I interviewed this week put it, you don’t want the end result to just be yet another report sitting on the shelf gathering dust! This is a common concern. Put slightly more formally, scenario planning expert Thomas Chermack asserts that “perhaps the most common reason for disappointment in scenario projects is a lack of use” (Chermack 2011, p. 22). Chermack further argues that “how scenarios are used is the crux of scenario planning” (p. 167).
A survey of scenario planning practitioners involved in climate change adaptation here in Victoria, Australia, raised similar issues. The researchers noted that:
Scenario projects were reported to have a weak influence on subsequent adaptation decision making. To the extent that the value of scenario planning is judged by its impact on decision making, this represents a serious shortcoming (Rickards et al. 2014).
I discussed this topic during interviews with researchers at CSIRO. For example, the Chief Economist of the Energy Flagship argued that producing a balanced report that considers all legitimate views (rather than pushing one view) and doesn’t have an “agenda” is central to usage/adoption:
When an industry is facing a strategic issue a lot of reports will be put out, and a lot of views be floated, and the ones that don’t get traction are often the ones that are seen as only representing one view, and not well-based… The extent to which they’ve thought about how others might view what they’re saying ranges quite a bit. So some will be quite extreme and be completely uncensored. Others will make a moderate attempt at censoring themselves to get more traction but ultimately still fall short. A [CSIRO] forum [report] will almost never fall short in this area. It can never not be seen as a whole of industry, reasonable view. It instantly has that status when it’s released. There’s no question.
Interviewer: Why is having that status important?
I think that’s important for adoption and by adoption I mean, this is something that I should seriously refer to, it doesn’t have an agenda, it is a source of objective facts, so it is worth my time to read. It is worth my time and to say ‘OK that’s something that I’ll have to take on board’. With the other stuff I think it will be seen as ‘I don’t need to read that I already know what those people think’, I already know XYZ’s analysis will be ‘cooked up’ just to support their point of view. There’s no point [reading/reviewing it]. It’s like ‘why pick up and read The Australian [newspaper]?’ you know? <Laughs> There are other people who can achieve something close to a [CSIRO] forum report. They’re better at censoring themselves and have a stronger commitment to an evidence-based approach, but it’s rarer, in the minority.
Interviewer: Is part of your working theory also that a more ‘extreme’ viewpoint is a barrier to adoption and influence?
Yeah that’s right. We think we have a better chance of all the views in our report being accepted, much faster, well, certainly by many more people…
Compare the views expressed above with the following advice for how scientists can be more influential in the policy-making process provided by Paul Cairney, professor of politics and public policy at the University of Stirling in Scotland:
Start being manipulative, go beyond the evidence, form relationships with groups, recognise that you need to influence lots of different people at lots of different levels, and [recognise] it’s a long-term full-time process [see video; also see this page on his website].
In a recent opinion piece published in The Guardian, Cairney notes that such approaches “may be difficult to accept (how many scientists would be comfortable making manipulative or emotional appeals to generate attention for their research?) or deliver (who has the time to conduct research and seek meaningful influence?).” My research suggests he’s right about that. But he contends that “only by engaging with the practical and ethical dilemmas that the policy process creates for advocates of evidence, can we produce strategies that are better suited to a complex real world”.
This week I’ve been reflecting on some of the interviews I’ve done with CSIRO scientists and also been thinking about other examples of prospective knowledge practices (i.e. other than the CSIRO cases I’ve been researching), in particular those that have had a large impact and/or those where the outputs have been widely used and widely discussed. Such examples often seem to challenge the working ‘theory’ outlined by the CSIRO researcher quoted above.
To take one prominent example from the late 1960s, The Population Bomb, this book clearly had an agenda. Moreover, the author Paul Ehrlich later admitted that he went beyond what could reasonably be concluded from the available data (i.e. Ehrlich went beyond the evidence, as Cairney recommends). Ehrlich acknowledged that “I expressed more certainty [than was empirically justifiable] because I was trying to bring people to get something done” (see earlier blogpost).
Another example that comes to mind is the Limits to Growth study. The lead researcher Dennis Meadows acknowledged that the research team didn’t question the limits they examined. Instead the aim was to “try to understand the nature of social growth within those limits”.
I am also reminded of more recent examples. Advocates and analysts such as Paul Gilding, Giles Parkinson, and Richard Slaughter, in my view, often take a very different approach to CSIRO-based scientists, one more akin to the approach adopted by Ehrlich and Meadows.
Considering the case of the CSIRO futures forums
I have found some support for the working theory of CSIRO scientists. In some cases the credibility of reports and analysis was increased by including multiple points of view or by being judged to have presented a whole-of-industry view. Linked with this, CSIRO emphasises having a balanced range of interests present at such forums. Some key government informants did question the rigour and credibility of analysis seen to include more ‘extreme’ views that they felt weren’t clearly supported by adequate evidence, and this influenced the extent to which such analysis was used in governmental decision-making processes. However, the use (or ‘adoption’) of the outputs is also related to many, many other factors. I’ll give a few brief examples which provide a high-level sense of this, along with some of the factors I’m further investigating as part of the case study research.
Use of the outputs from the Future Fuels Forum is an interesting case. For example, the Biofuels Association of Australia (BAA) participated in this exercise. They used the outputs during public policy debates as part of their efforts to secure an extension to fuel excise policies (the then new Rudd Government had promised to alter taxation arrangements for biofuels and the BAA opposed this). The outputs from the Future Fuels Forum were selectively used as part of this lobbying as they were seen to strengthen the case for ongoing/additional government support. In the BAA example we can see the importance of context. Additionally, forum participants tended to emphasise scenarios aligned with their point of view and to ignore the other scenarios. For example, peak oil activists emphasised the scenarios that considered a potential near-term peak in oil supply in their advocacy activities and argued that the other scenarios developed in the study weren’t credible. This is not surprising! However, this example helps to make a couple of points: 1) perceived credibility is often also related to how well the analysis fits existing ways of thinking; and 2) the use of the outputs from a prospective exercise is related to the perceived utility of the outputs. In this case, peak oil activists hoped that this scenario analysis would help to legitimise their point of view and, linked with this, they emphasised the fact that CSIRO, the national science agency, led the project. There are many other interesting examples in this case, but I’ll leave the discussion of this case there.
I’m currently researching the latest CSIRO forum, the Future Grid Forum. In this case interesting examples include the electricity network businesses and ClimateWorks Australia.
A forum participant from SA Power Networks noted that a peak body representing the electricity network businesses in Australia – the Energy Networks Association (ENA) – used the outputs to “start a debate amongst network businesses on future strategic direction”. Subsequently, the Energy Flagship division of CSIRO has formed a closer relationship with ENA and formally partnered with them for the Network Transformation Roadmap project, a multi-year process of engagement and research. This could be seen as consistent with Cairney’s recommendation of forming relationships with groups that are involved in policy processes, and therefore as effective from the point of view of increasing impact. On the other hand, I recently interviewed a policymaker who spoke “off-the-record” and viewed this relationship quite critically. He was concerned that some of the analysis being produced reflected the views of ENA. In other words, his judgement or allegation is that scientific rigour and independence have, to some extent, been compromised in this partnership. This may or may not be a fair judgement (I haven’t examined this myself). I mention it here because it highlights the dilemmas that scientists can face, along with the potential reputational risks.
Shifting now to ClimateWorks Australia, their Head of Research, Amandine Denis, stated that their use of the outputs from the Future Grid Forum has been quite limited. The reason provided is quite interesting. Denis stated that their forward-looking analysis focusses on two types of scenario: the business-as-usual (BAU) future and deep decarbonisation futures. In doing so it aims to clarify and communicate the gap between these futures. The Future Grid Forum scenarios were seen as falling in-between these futures, i.e. neither BAU nor deep decarbonisation. The consequence is that the scenarios are seen as less relevant to their work and have reduced perceived utility.
These examples clearly convey that the outcomes of prospective knowledge practices are, in part, driven by the perceived utility of the outputs and the capacity of actors to utilise them effectively for their own purposes. Contextual factors are clearly very important. This research also suggests that greater attention to these factors could lead to less disappointment following projects.
In addition, such examples also suggest that scenarios can be constructed and used for a wide range of purposes. Most of the literature on scenario planning emphasises managing uncertainty and creating more ‘robust’ or ‘resilient’ strategies and decisions. In many of the examples provided in this post, other uses are central. Moreover, what Cairney terms the politics of evidence-based policy-making (see his forthcoming book, which looks worth a read) suggests that dilemmas must be faced in the construction and use of anticipatory knowledge if the aim is to be influential in policy processes. The way Cairney puts this point on his blog is to argue that efforts to increase impact come at the expense of objectivity. Given that the image of (or reputation for) objectivity is often an important resource for scientists, this reinforces some of the dilemmas they face.