Yesterday I read an interesting book chapter on the evidence-based practice movement. The author, Dennis M. Gorman from Texas A&M University, argues that it is a major driver of pseudoscience in various public health practice areas (the chapter focuses on drug prevention research). Gorman presents evidence of "questionable data analysis practices to produce positive results" – and related gaming of systems – which he contends have been incentivised by the growing focus on evidence-based practices. In Gorman's view, "drug prevention research has largely evolved into a pseudoscience" and, somewhat surprisingly, he points to the evidence-based practice movement as a major contributing factor. Gorman suggests that these issues are likely to be widespread in applied sciences.
It prompted me to further consider whether the impact agenda in research policy could have similar unintended consequences. This policy agenda requires researchers, universities and research organisations to present evidence that their activities generate "positive results" (however defined), similar to the evidence-based practice movement.
Such questions are somewhat speculative given I'm not aware of any research on this. However, I have noticed that there can be strong incentives against honest appraisals (e.g. of research impact) and barriers to investigating both success and failure.
Previous posts on this blog have considered the positive potential of the increasing focus on impact (e.g. the potential for enabling greater research on knowledge practices, and associated social scientific inquiry into the impact of science on society and what influences the use of scientific knowledge). But realising such potential will require careful scholarship, not the pseudoscientific production of "evidence" of positive results or effects.
So the key questions are: which way will it go in Australia, why, and with what effects?
This post cannot answer these questions nor offer any firm predictions, though there are good reasons to expect that the impact agenda will be a driver of pseudoscience, perhaps in a similar fashion to the evidence-based practice movement.
If this does occur, it could help those research organisations that are best set up to play these games to maintain or increase their research funding, but it may also have negative unintended consequences. It may encourage bad research practices, much like the dodgy analytic and research reporting practices in public health contexts that Gorman highlights. For example, as Gorman notes, "data dredging and selective reporting" practices can result in misleading cherry-picking.
On the other hand, the impact agenda could prompt greater evidence-informed reflection on the effects of research activities (and related innovation activities) and thereby have a range of positive consequences. This was one of the aims of my doctoral research: to help CSIRO Energy staff understand the effects of their research activities, in particular whether their interventions were making a difference and why (e.g. in terms of helping to enable changes in energy supply and use to reduce greenhouse gas emissions). The resulting insights could be used to help CSIRO Energy have greater impact in the future.
Which way do you think the impact agenda will go in Australia and why? What are your thoughts?
I’d like to better understand what research is being done (if any) to examine the development of the “impact agenda” in the Australian research system context, its effects and key issues raised in this post. If you have any suggestions please let me know.
Reference: Gorman, D.M. 2018, 'Evidence-Based Practice as a Driver of Pseudoscience in Prevention Research', in A.B. Kaufman & J.C. Kaufman (eds), Pseudoscience: The Conspiracy Against Science, The MIT Press.