4 Comments

  1. Stephen McGrail

    Hi Andy, yes I have seen this model – it’s a good example of implementation evaluation, which is a very common approach when conducting process evaluations.

    Grim’s paper in Journal of Futures Studies is notable (in my view) for explicitly discussing some key challenges facing foresight evaluation efforts, e.g.: key results may lie far in the future (rather than being immediate); there are many uncontrollable relevant variables, which may mean, for example, that a very high-quality process unexpectedly generates poor outcomes in some cases; and there is a need to also consider “non-events”, i.e. whether undesirable outcomes were avoided, which necessitates consideration of counterfactuals.

    Grim’s response to this appears to be to view outcome-oriented evaluation as impossible (or at least impracticable) and to settle for process evaluation.

    Others such as Thomas Chermack (Colorado State University) argue that more focus on evaluating outcomes and measuring the “value-add” is required.

    I’d love to hear about your views and experiences, Andy, if possible. How have you evaluated your work, both in-house (working within organisations) and in your consulting projects?

  2. Great laying out of the options, Stephen. Clearly, we can’t do the kind of positivist social experiments required to ‘prove’ that a particular project had a particular impact. So the question is, what can we do? Do we throw up our hands and just try things at random each time we engage with a wicked problem? Or do we try to learn from each engagement in ways that are imperfect but do seem to make a difference to our practice? I’m firmly in the latter camp, so it’s great to have you set out the options so clearly and succinctly.

    A pragmatic, realist approach to evaluation shows a lot of promise for foresight work, and for other kinds of engagement with wicked problems. The only other thought that occurred to me while reading this was that collaborative approaches to evaluation might be worth highlighting as well. The people involved in a foresight or other engagement process will be able to shed some light on whether the process made a difference to their practice, or whether other things happening at the time were responsible. Obviously, this is not infallible – there are all sorts of reasons why individuals can have an inaccurate view of their own motivations and internal processes. Nevertheless, bringing more perspectives into the process is likely to be a useful thing to do. What do you think?
