The AI + Society Initiative announces the 2022 Scotiabank Global AI + Regulation Emerging Scholar Award

The AI + Society Initiative is delighted to announce that Dr. Henry Fraser and José-Miguel Bello y Villarino have won the inaugural Scotiabank Global AI + Regulation Emerging Scholar Award, and that Aileen Nielsen has won the first runner-up award.

Following an international call, the AI + Society Initiative invited emerging scholars in the field of artificial intelligence (AI) and regulation to participate in the Global AI + Regulation Emerging Scholars Workshop and present their draft papers to leading scholars in AI and the law. An international review committee selected eight paper proposals, which received feedback from established scholars and other participants in the field during a two-day virtual workshop hosted by Dr. Florian Martin-Bariteau and Scotiabank Postdoctoral Fellow Dr. Karni Chagal-Feferkorn.

In addition to offering a forum to exchange ideas and receive constructive feedback, the AI + Society Initiative was able to support emerging scholars thanks to the Scotiabank Fund for AI and Society at the University of Ottawa, which funded a Scotiabank Global AI + Regulation Emerging Scholar Award of $1,500 and a first runner-up award of $500 to recognize the best papers presented.

The Scotiabank Global AI + Regulation Emerging Scholar Award for the best paper went to Dr. Henry Fraser of the Queensland University of Technology and José-Miguel Bello y Villarino of the University of Sydney for their paper “Where Residual Risks Reside: Lessons for Europe’s Risk-Based AI Regulation From Other Domains”.

Paper Abstract: This paper explores how to judge the acceptability of “residual risks” under the European Union’s Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (the Proposal). The Proposal is a risk-based regulation that prohibits certain uses of AI and imposes several layers of risk controls on “high-risk” AI systems. Much of the commentary on the Proposal has focused on which AI systems should be prohibited and which should be classified as high-risk. This paper bypasses this threshold question, engaging instead with a key issue of implementation.

The Proposal imposes a wide range of requirements on providers of high-risk AI systems (among others) but acknowledges that certain AI systems would still carry a level of “residual risk” to health, safety, and fundamental rights. Art 9(4) provides that, in order for high-risk systems to be put into use, risk management measures must be such that residual risks are judged “acceptable”. Participants in the AI supply chain need certainty about what degree of care and precaution in AI development, and in risk management specifically, will satisfy the requirements of Art 9(4).  

This paper advocates a cost-benefit approach to Art 9(4). It argues that Art 9(4), read in context, calls for proportionality between the precautions taken against the risks posed by high-risk AI systems and the risks themselves, but leaves those responsible for implementing the provision in the dark about how to achieve such proportionality. The paper identifies potentially applicable mid-level principles both in European law (such as medical devices regulation) and in common law rules on the acceptability of precaution in risky activities (particularly negligence and workplace health and safety). It demonstrates how these principles would apply to different kinds of systems with different risk and benefit profiles, using hypothetical and real-world examples, and it sets out some difficult questions that arise in weighing the costs and benefits of precautions, calling on European policy-makers to give stakeholders more clarity on how to answer those questions.
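One well-known mid-level cost-benefit principle from common law negligence, offered here purely as an illustration and not necessarily the test the paper endorses, is the Learned Hand formula: a precaution is warranted when its burden is less than the expected loss it would avert.

```latex
% Learned Hand formula (illustrative example of a cost-benefit
% principle from negligence law, not a rule drawn from the paper):
% a precaution is warranted when its burden B is less than the
% probability of harm P multiplied by the gravity of the loss L.
B < P \cdot L
```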

The first runner-up award went to Aileen Nielsen from ETH Zurich for her paper “Can an Algorithm be too Accurate?”.

Paper Abstract: Much research on the social and legal concerns raised by the increasing use of algorithms has focused on ways to detect or prevent algorithmic misbehavior or mistakes. However, harms can also result when algorithms perform too well rather than too poorly. This paper makes the case that significant harms can occur because algorithms are too accurate.

This paper proposes a novel conceptual tool and associated regulatory practice for reining in the resulting harms: accuracy bounding. Accuracy bounding would limit the performance of algorithms with respect to their accuracy, providing an intuitive and flexible tool to address concerns arising from undesirably accurate algorithms. Importantly, accuracy bounding could complement many existing and proposed governance and accountability tools for algorithms, such as fairness audits and cyber-security best practices.
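To make the idea concrete, here is a minimal sketch, entirely our illustration rather than a mechanism taken from the paper, of one naive way an accuracy bound could be enforced for a classifier: replace a calibrated fraction of predictions with uniformly random labels so that expected accuracy does not exceed a regulatory cap.

```python
import random

def accuracy_bounded(predict, labels, base_accuracy, max_accuracy):
    """Wrap a classifier so its expected accuracy stays at or below
    max_accuracy. Hypothetical illustration of "accuracy bounding";
    the paper proposes the concept, not this specific mechanism."""
    if base_accuracy <= max_accuracy:
        return predict  # already within the bound; nothing to do

    # With probability p, output a uniformly random label. A random
    # guess is correct with probability 1/len(labels), so expected
    # accuracy becomes (1 - p) * base_accuracy + p / len(labels);
    # solving for p gives the randomization rate that lands on the cap.
    guess = 1 / len(labels)
    p = (base_accuracy - max_accuracy) / (base_accuracy - guess)

    def bounded(x):
        if random.random() < p:
            return random.choice(labels)
        return predict(x)

    return bounded

# Example: a hypothetical 99%-accurate binary model capped at 90%
# accuracy requires randomizing about 18% of its outputs.
model = accuracy_bounded(lambda x: x > 0, [False, True], 0.99, 0.90)
```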

To date, legislators and legal scholars alike have largely ignored the risks that come from overly accurate algorithms. However, there is a rich history in law and regulation, as well as in the scientific and engineering disciplines, of analogous forms of performance bounding. Accuracy bounding thus represents an expansion of existing regulatory and technical techniques, and such techniques offer a path forward to address many as-yet unresolved concerns regarding the rise of the algorithmic society.

Congratulations to the recipients!  

The call for the 2022 edition of the Global AI + Regulation Emerging Scholars Workshop and the Scotiabank Award will be published in spring 2022. The workshop is expected to take place in Europe in fall 2022.