The concept of intelligent muddling is proposed in a provocative little book, The Techno-Human Condition, by two American scholars – Braden Allenby and Daniel Sarewitz from Arizona State University. The book is ostensibly a critique of transhumanist movements, but it addresses much broader themes through their notion of the ‘techno-human condition’ and a critical discussion of “the continual emergence of apparently existential challenges at the interface of technology and society (climate change, economic meltdowns, stupid wars, and so on)” (p. 159). They contend that “it is impossible to imagine a plausible scenario in which technological change comes to a halt, save through a (perhaps technologically-induced) catastrophe” (p. 85). However, they also argue and warn that:
“little of what occurs at the frontiers of rapidly evolving technological systems is planned in advance, especially insofar as technological systems are continually co-evolving with one another and with underlying social and cultural patterns” (p. 80).
In addition to technological change, they point to “two [other] essential realities of the human condition: conflict over values and uncertainty about the future” (p. 88). Regarding values they write:
“Intelligent, well-meaning people may – and commonly do – have incommensurable values, preferences and worldviews. No optimization function exists for their diverse beliefs. In the trade-off between justice and mercy, for example, you may prefer more mercy and I may prefer more justice. In the context of terrorism, what is the appropriate trade-off between freedom and security? In the context of reproductive freedom, what is the point at which a developing embryo acquires the rights of a human being? There are no right answers” (p. 88)
Regarding uncertainty about the future they assert that:
“No one knows how to intervene in complex social, human, built and natural systems to reliably yield particular desired results over the medium or the long term. How did all our advanced economic modelling and theoretical capacity help us to avoid the 2008-09 global economic meltdown? In fact, overconfidence in such models and theories helped to create the problem. On a wide range of subjects – ecosystem management, weapons non-proliferation, organizational management, immigration policy, improving the conditions of our inner cities – hundreds of thousands of academic publications have certainly added in some sense to our intelligence, but without adding much to our capacity to act with consistent or increasing effectiveness” (p. 90)
The concept of incremental, but more intelligent, muddling flows from this – that is, from “the limits of the type of intelligibility that can reliably guide action when the future is uncertain and values conflict” (p. 91). Perhaps more intelligent “muddling” is often the best we can do.
Echoing Charles Lindblom’s theorisation, back in 1959, of how public administrators deal with complex problems, Allenby and Sarewitz argue that “complex, value-laden problems … don’t get solved; at best they get managed, and at worst we lurch from crisis to crisis” (p. 93). In ‘The science of “muddling through”’, Lindblom also argued that, in practice, policy is constantly made and re-made. A public administrator “never expects his policy to be a final resolution of a problem”.
In the context of wicked problems and what they term “wicked complexity”, Allenby and Sarewitz argue that muddling through “is not a second-best approach to be dropped when appropriate optimization techniques are developed: it is the best we can do” (p. 110).
As part of their argument they develop a framework of three “levels” of technological function.
Interestingly, despite their arguments about uncertainty and ignorance (more on this below), they discuss scenario methods and outline many interesting scenarios and related thought experiments. Indeed, they advocate both “playing with scenarios” and questioning predictions (pp. 164-165).
And they wisely suggest that we “must be careful not to reify our pet scenarios” (p. 104), raising questions about how modelling results are treated when they are presented as strong claims about the future.
Related to their concept of ‘intelligent muddling’ and anticipatory practices, Allenby and Sarewitz pose provocative questions like “can we say anything at all about what is really likely to happen?” (p. 34). Informed by a range of historical cases they make related assertions such as:
“Technologies often surprise because they introduce into society novel capabilities and functionalities whose uses are constantly being expanded and discovered – capabilities and functionalities that interact with other technologies, and with natural and social phenomena, in ways that cannot be known in advance” (p. 39)
“…with each wave of innovation came disturbing and unpredictable institutional, organizational, economic, cultural, and political changes… Projecting the effects of technology systems before they are adopted is not just hard but, in view of the complexity of the systems, probably impossible” (pp. 79-80)
If that’s right, where does it leave us?
In brief, Allenby and Sarewitz argue that “if you want a new measure of rationality in this world, one that suits the complexity we are creating, you’ll need new concepts, new tools, new arrangements, and perhaps even new gods to replace those old ones like individuality, rationality, predictability, and the like” (p. 91). They make the rather sweeping argument that this will entail questioning “the Enlightenment commitment to rational action by individuals living in a comprehensible world” (p. 92). Yikes…
On a more practical level, they suggest it means: eschewing the quest for lasting, comprehensive solutions; recognising that complex problems will “drag on and on… [and] action is forged from compromise and is rarely more than incremental, and then the whole painful process is repeated when conditions change so much that action cannot be avoided” (p. 169); and focussing on “adaptability in the face of change, not stability in response to problems” (p. 162).
They also call for enabling “anticipatory self-negation” (rather than reactive and corrective responses) as part of technological change processes, proposing two related precepts:
- Intervene early and often: “the best time to start talking about alternative technological pathways and perspectives is when ignorance is great and the horizon is fuzzy”. They further suggest that open-minded discussion about related decisions and policies is easier to achieve before various vested interests get organised and determine what the stakes are.
- Accept and nourish productive conflict: they argue humans are most adaptive and creative in periods of “bounded conflict” (p. 174) – fostering contests of ideas, people, and interest groups.
Linked with this, they call for us to “explore with humility” and not “attack with rigidity” (p. 105). Given that the latter seems to be normal practice in many sustainability debates, this is a timely call. The “philosophic flexibility” they argue is “necessary to respond to complex systems unrolling in unpredictable and uncertain majesty” points, in turn, to the need to develop ways of cultivating such flexibility.