Like many people I’m pondering the intensifying COVID-19 crisis, with daily life here in Australia (I live in Melbourne) beginning to transform as governments implement policies that upend routines and many industries. Initial forecasts suggest we’re likely to see the worst unemployment since the Great Depression. I have no idea what the next 6-12 months have in store, either professionally or personally, as we enter what currently appears likely to be a long-term process of disease management. I suspect this will be the first of many posts addressing COVID-19 and related topics.
Something that popped into my mind today was Fred Guterl’s book The Fate of the Species: Why the Human Race May Cause Its Own Extinction and How We Can Stop It, which was published in 2013. I’ve previously discussed his book on this blog (e.g. link) as part of my exploration of how worst-case scenarios are considered.
One reason that this book came to mind is that, following discussion of various threats (see below) and future scenarios, in the closing chapter Guterl argues that “biology – viruses in particular – poses the most immediate threat” (p.182). The book discusses:
- ‘Superviruses’, including the next major influenza epidemic, the threat of viruses mutating in such a way as to jump from another species to humans, and the threat of new strains (like the 2009 H1N1 virus) which could be more pathogenic;
- Extinction of other species and related potential threats;
- Climate change;
- The potential collapse of ecosystems and related risks (e.g. risks to food production/security);
- Synthetic biology; and
- Threats related to increasing reliance on computer systems, related potential network vulnerabilities and security threats, and emerging forms of ‘artificial intelligence’ (this chapter is simply entitled “Machines”).
In making the argument for asking ‘what if?’ and considering worst-case style scenarios Guterl suggests that the goal isn’t to predict the future but, rather, to try to “avoid a gross failure of imagination” (p.157). So, he wouldn’t claim to have predicted the COVID-19 crisis, but he might criticise the lack of preparedness. Related to this, over the past week the themes of unheeded warnings (e.g. predictions being ignored or dismissed until it’s too late) and poor pandemic preparedness have been prominent (e.g. link, link, link, link).
However, it’s worth noting that Guterl doesn’t attend to which threats and worst-case scenarios receive more or less attention from scientists, politicians and/or society. It’s not a sociology or policy studies book. It reports on and explores a set of threats the author considers important, presents his analysis of these (informed by expert interviews), and offers a little related critical discussion.
Indeed, it might be useful to develop a more sociological analysis and to think about whether and how this could be done. When I look at Guterl’s list of threats, something that occurs to me is that we often seem (at least in Western contexts) to attend most closely to risks related to technology. Issues to do with automation and robotics, along with networking and communications technologies, have been especially high profile of late, particularly as they relate to concerns about technological unemployment and privacy (e.g. personal data). Perceived threats of biotechnology and nanotechnology have also been prominent over the past two decades, as I explored in my Master’s thesis. Attention to environmental changes and threats has clearly been rising over recent decades as well, but outside of crisis situations (like the recent bushfires in Australia) advocates often struggle to increase the level of attention given to such threats. Attention given to health threats like infectious diseases may be similar outside of crisis situations, though places like Singapore, Hong Kong and Taiwan appear to have created a culture that supports greater vigilance (link).
I intend to give this more thought over the coming weeks and months. An initial thought is that more sociological inquiry could draw on my earlier work which examined prospective knowledge practices (PKPs) as social activities (see Chapter 5 of my PhD thesis) and related analysis which further examined PKPs as political practices (Chapter 6). PKPs are central to the exploration and consideration of worst-case scenarios and related ‘what if?’ style thinking, particularly where formal methods like simulation models are used to develop future projections and the results get factored into policy-making. If we were to examine such PKPs as social activities, then we would need to consider the social causes of actor practices and related social patterns, including the social factors and processes which influence what scenarios get produced, what assumptions are made in these modelling exercises, and how much attention particular scenarios receive. Moreover, we would need to attend to how particular PKPs are socially situated as per the social contexts and social lives of the actors undertaking these activities. My doctoral research revealed that the situatedness of PKPs is often highly consequential for the analysis that’s conducted and its effects.
Additionally, earlier work on worst-case scenarios could be drawn upon to develop a broader guiding framework. Cass Sunstein’s work is one example: he has conducted comparative case analyses of societal responses to what he terms low-probability risks of disaster (focussed mainly on the cases of terrorism and climate change), the factors that shape public attitudes and the responses of governments and their officials, and “susceptibility to two opposite problems: excessive overreaction and utter neglect” (p.5). He argues that “both problems affect individuals and governments alike” (p.5) and concludes that “often, public officials have two unfortunate incentives: to give undue attention to worst-case scenarios and to pay no attention at all” (p.277). Sunstein also theorises how worst-case scenarios should be handled, cautioning that societal “responses to worst-case scenarios can be both burdensome and risky – and they can have worst-case scenarios of their own” (p.4). This is something that governments will likely be forced to further contemplate as the wider consequences of their policy decisions begin to emerge.