Technology, risk and decision-making: are we asking the right questions?


By Robert Walker

Senior Fellow, ISSP, Retired Senior Executive


Introduction by ISSP Director, Dr. Monica Gattinger

The COVID-19 global health pandemic has underscored that public trust in risk decision-making is crucial. Whether it is trust in the safety of vaccines, in the necessity of lockdown measures, or in the very existence of the pandemic itself, successfully addressing the crisis has hinged on public confidence in government decisions.

The pandemic has also made visible how perceptions of risk can differ among and between experts and the public. Experts did not always agree about the necessity of things like school closures, travel restrictions or vaccine mandates. Public opinions about risk, for their part, could be as varied as the values, ideologies and life experiences of the people holding them, and public perceptions of risk were forever vulnerable to misinformation and disinformation. Accurate and trusted risk communication has been pivotal. So has listening to citizens, communities and stakeholders, who expect their views to be taken into account in government decision-making.

The centrality of public trust to effective risk governance, the fragmentation of perceptions of risk, and growing expectations for public involvement in risk decision-making, all characterize risk governance in the twenty-first century. Risk scholars and practitioners are grappling with how best to govern risk in this context.

Against this backdrop, the ISSP created the project, @Risk: How to Strengthen Risk Governance in Canada. @Risk aimed to advance scholarly and empirical understandings of public participation in risk decision-making, of ways to conceptualize and address differences in public and expert perceptions of risk, and means to foster public trust in risk governance. The project comprised a multidisciplinary research team of more than two dozen scholars and graduate students from eleven Canadian and US universities, along with half a dozen senior practitioners from five partner organizations1. Central to the project were practitioner members of the research team who gave generously of their time, experience and insights throughout the study to ensure the research was grounded in and informed by the ‘real worlds’ of risk governance.

The following blog is authored by Dr. Bob Walker, practitioner member of the @Risk research team – and now Senior Fellow at the ISSP. It is the first in a series of blogs authored by our practitioner team members. Crucially, it is a teaser for the ISSP’s soon-to-be released open access book – Democratizing Risk Governance: Bridging Science, Expertise, Deliberation and Public Values – that profiles the findings of @Risk. The book includes chapters on democratizing risk governance in public health, genomics, energy and COVID-19.

 

1 @Risk was funded by a Partnership Development Grant from the Social Sciences and Humanities Research Council of Canada (co-funded by Genome Canada), along with contributions from project partners Canadian Nuclear Laboratories, the Canadian Nuclear Safety Commission, the Canadian Public Health Association, the Genetic Engineering and Society Center at North Carolina State University, the ISSP and the University of Ottawa’s Faculty of Social Sciences.

---

Blog by ISSP Senior Fellow and member of the @Risk research team, Dr. Bob Walker

There is compelling evidence that decision-making by public authorities at the interface of public policy, science and society is ever more challenging.  Important cases in point are efforts to address risks to the public and the environment arising from technology or humans’ use of technology: vaccine hesitancy, opposition to nuclear power, opioid drug addiction, artificial intelligence regulation (or the lack thereof).  The list is long and growing as the pace of new technologies entering the market and society accelerates and as their negative impacts become evident.

The term risk governance has come into popular use for describing such decision-making.  It involves a complex set of interactions among actors and influencers including politicians, government bureaucracies, regulators, corporate boards of directors, NGOs, scientists, insurers, investors, the media and the public.  Problematic symptoms of stress include declining public trust in governments, regulators and industry; increased activist opposition to legislation, policy and regulatory decisions; and growing politicization and polarization of debates over the way forward.  Social scientists are now proposing that a potential antidote to these stresses may lie in greater societal participation in - a democratization of - risk governance.  But how?

From my perspective - drawn from a career of more than three decades as a government scientist, science advisor and science executive - I pose five questions that may help focus efforts to address these stresses.

  1. Systemic issues facing risk governance need systemic solutions.  Do we recognize and understand these issues?  What are potential solutions, and how can they be implemented?

A potentially important thesis that the social sciences could help unravel is that systemic solutions may lie beyond the reach of those who design or operate risk governance systems.  The negative impact of society’s waning science literacy on risk governance is a case in point, where an element of the systemic solution would require a retooling of education curricula.

  2. Does the mantra of evidence-informed decision-making need to be reframed?

The notion of using evidence to help inform decision-making, whether personal choices taken by individuals or decisions coming out of risk governance systems more generally, would at face value appear to be both rational and desirable.  However, there are indications that the operationalization of the concept is frequently, if not systemically, wanting.  A common refrain of political parties is that their policy platforms are informed by evidence while those of their political opponents are not, resulting in an unproductive politicization of the concept.  When the invoked evidence is scientific, it may not be apparent whether explicit account has been taken of the inherent uncertainties in that evidence.  And while there may be scientific evidence that policy intervention is required - for example, evidence that growing opioid drug addiction is having grave societal consequences - what may be less apparent to the public is the evidence used to justify that the actual policy choice is likely to be implementable and effective.

A goal of public policy is to modify societal and corporate behaviours towards a public-good outcome.  However, people perceive risks and evidence about risk in ways shaped by their values, identities, worldviews, and membership in various social and political groups.  How individuals process evidence through their unique motivated-reasoning lens may be the determining factor in whether or not they will accept and comply with the policy.

  3. How safe is safe (enough)?

Governments have constitutional responsibilities to protect the safety and security of their citizens, with a generally accepted extension of this responsibility to include the protection of the environment.  Enter the role of the regulator – a quasi-judicial body of independent experts, mandated through legislation to regulate the introduction of products and services into societal use.  The broad goal is to achieve or support a public-good outcome, using science to inform such regulatory decisions.  What does the public hear?  “The nation’s public health authority has determined that the new vaccine is effective and safe for public use.”

But implicit in the regulator's decision is a value judgement: there is sufficient scientific evidence to justify a claim (a decision) that the vaccine will be safe (enough).  What generally does not get explicit attention in the communication to the public is the basis of the regulator’s decision.  Stated another way, one could argue that the following language better captures the nuances of such a decision: "The regulator claims that the evidence is sufficient to deem the vaccine to be effective enough and the risk to the public to be acceptable."  Add to this the adversarial nature of regulatory processes, where media reporting is less about the evidence and more about the conflict, and it should come as little surprise to see waning public confidence in regulators and their decisions.

  4. How do risk governance systems address the risk cascade?

The systems on which our society depends are interdependent.  A consequence is that efforts to mitigate risks to the public and/or to the environment in one dimension may in fact increase risks in another.  How do risk governance systems take this risk cascade into account?  The COVID-19 pandemic has made the risk cascade a top-of-mind concern for lawmakers and regulators, for businesses and for society more generally.  From a public health perspective, protection of the public from contracting, spreading and potentially dying from the virus has been the top priority.  However, the legislative and policy interventions to achieve this public good have also had dire consequences for the economy, for students’ education and for society’s mental health, with highly disproportionate impacts on marginalized communities.

  5. How can the science and technology community better support risk governance systems?

The nation’s science and technology community is an important enabler of our social and economic well-being.  No debate.  However, can the community better serve the needs of society and the economy when it comes to the governance of risk?  After all, many risks facing society today have had their origins in technologies emerging from the S&T community.  But it has too often been the case that not until emerging technologies have been deployed at scale in markets and society have we discovered negative consequences for society and/or the environment that then need to be addressed.  What better example than anthropogenic climate change?  And who uncovers this evidence?  Typically, it is again the S&T community, although most often through different disciplines than those engaged originally in the technology development.

Stated another way, we need to do better than a science-then-technology-then-science model for addressing the benefit-risk calculus of emerging and potentially disruptive technologies.  While this risks oversimplifying a complicated machinery, it points to an opportunity to recast the typical sequencing of scientific inquiry.  The goal must be to value and invest sufficiently early in multidisciplinary scientific inquiry that helps foresee potential negative consequences in advance of at-scale technology deployments, and to take these risks into account as technology roll-out proceeds.

Society needs fresh insights into the stresses we are witnessing in risk governance and into pathways that can help reduce these stresses – insights that our S&T community can help provide.  But let’s first ensure that we are asking the right questions.