Cynthia Khoo: If an Algorithm Falls on Human Rights but No One Coded It, Does the Legal Harm Really Happen? (Postponed)
March 31, 2020, 1:30 p.m. to 3:00 p.m.
Presentation (in English only)
The Centre de recherche en droit, technologie et société, in collaboration with the SSHRC Partnership "Autonomisation des acteurs judiciaires par la cyberjustice" (AJC), is pleased to present:
Important message | Updated March 27, 2020
We continue to monitor the rapidly evolving global COVID-19 outbreak. At this time, there is no certainty as to when the situation will be resolved and allow us to hold public events again.
We will announce new dates as soon as a decision has been made. You can subscribe to our newsletter to receive updates. If you have any questions or concerns, please do not hesitate to contact us at [email protected].
About the speaker (in English only)
Cynthia Khoo is a technology and human rights lawyer called to the Bar of Ontario and a Research Fellow at the Citizen Lab (Munk School of Global Affairs & Public Policy, University of Toronto). She serves on the Board of Directors of Open Privacy and holds an LL.M. (Concentration in Law and Technology) from the University of Ottawa, where she interned as a research student and junior counsel at the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC). Cynthia is the founder of Tekhnos Law, a sole practice law firm, and has represented clients in interventions before the Canadian Radio-television and Telecommunications Commission (CRTC) and the Supreme Court of Canada. Her work focuses on the impacts of the Internet and technology on marginalized communities, particularly equality and related human rights issues arising from online platforms and algorithmic decision-making. Cynthia has extensive experience working across key digital rights issues such as privacy and data protection, freedom of expression, copyright, online censorship, net neutrality, and intermediary liability. She holds a J.D. from the University of Victoria Faculty of Law.
About the presentation (in English only)
Discourse around online platform-facilitated algorithmic harms is rife with the rhetoric and premise of "unintended consequences", where platform companies may not have necessarily contravened applicable regulations, nor necessarily engaged in unethical conduct, and yet still caused significant harm across multiple spheres of sociopolitical life. For example, platforms including Facebook, Google, Twitter, YouTube, Airbnb, and Uber have fundamentally impacted political issues such as electoral integrity, social welfare issues such as affordable housing and public transit, and equality and human rights issues such as online abuse and harassment, which disproportionately harm historically marginalized groups, including those protected by equality and non-discrimination laws and the Canadian Charter of Rights and Freedoms. This presentation will interrogate and break down the notion that harmful consequences from platform algorithms are necessarily "unintended", "unforeseen", or even "unforeseeable".
The core concept developed and proposed by the presenter is that of "emergent systemic harm", drawing on the notion of emergence from the artificial intelligence and robotics literature: systemic harm that emerges unexpectedly from a complex system whose constituent components, in isolation, do not necessarily raise legal or ethical concerns or individually indicate foreseeable or predictable harm, but which work together in such a way as to produce a harmful result that is more than the sum of the system's parts. Where such emergent systemic harm occurs to a historically marginalized community or its members, there may be grounds for legal action based on a more expansive, intersectional interpretation of the negligence analysis in tort law. This presentation will discuss some of the groundwork underlying a potential framework for liability, including the implications of unsupervised machine learning algorithms, which may lead to harms considered genuinely unforeseeable by those involved and impacted.
This public lecture is presented as part of a workshop of the SSHRC Partnership "Autonomisation des acteurs judiciaires par la cyberjustice" (AJC). The rest of the workshop is by invitation only.
Participants who wish to do so may make an online donation to the Centre.