#WebSci20 – Workshop Explanations for AI: Computable or not? by Robert Thorburn

Posted on behalf of Robert Thorburn

Day two of the 2020 Web Science conference saw a series of workshops covering topics ranging from Cyber Crime to Digital (In)Equality. The fourth of these workshops, chaired by Prof Sophie Stalla-Bourdillon, investigated whether explanations for AI are computable. Focus areas included the participation of AI systems in socially sensitive decision-making and how to approach such systems when they function as black boxes.

The workshop took the form of four papers and associated discussions, with the University of Southampton’s Kieron O’Hara opening proceedings with his paper entitled: “In No Circumstance Can or Should Explanations of AI Outputs in Sensitive Contexts Be Wholly Computable.” O’Hara advances the position that computed accounts of outputs from AI systems cannot, on their own, serve as explanations of decisions made by those systems. This rests in part on an understanding of computation as an act of derivation relating only to elements contained within the system under study. Further complicating the issue are legislative requirements such as those of the GDPR.

Next, Jennifer Cobbe from the University of Cambridge presented her paper entitled: “Reviewable Automated Decision-Making.” Reviewability was presented as a broader and more inclusive alternative to traditional auditability, specifically because it takes in deployment context, engineering decisions, and other factors to form a socio-technical understanding of automated decision-making. Rounding out the paper presentations, Perry Keller and Gerard Canal, both from King’s College London, introduced their papers, respectively titled “Paternalism in the Public Governance of Explainable AI” and “Trust in Human-Machine Partnerships”. Both papers likewise explored the richer but also more challenging understanding of AI decision-making that results when a socio-technical approach is taken.

Directly following the paper presentations, Niko Tsakalakis from the University of Southampton introduced the Provenance-driven & Legally-grounded Explanations for Automated Decisions (PLEAD) project. PLEAD is an interdisciplinary undertaking aimed at explaining “the logic that underlies automated decision-making”. Notably, the project aims to provide a provenance interface.
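To give a flavour of what a provenance-driven record of an automated decision might look like, the sketch below uses the W3C PROV data model via the Python `prov` package. This is purely illustrative: the namespace, record names, and the loan-decision scenario are assumptions for the example, not PLEAD’s actual schema or interface.

```python
# Minimal sketch of a provenance record for one automated decision,
# built with the W3C PROV data model (Python `prov` package).
# Names and namespace are illustrative only.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/loan-decisions/')

# The pieces involved in a single automated decision.
application = doc.entity('ex:application-42')        # input data
decision = doc.entity('ex:decision-42')               # the output
scoring = doc.activity('ex:credit-scoring-run-42')    # the run of the model
model = doc.agent('ex:credit-model-v3')               # the deployed model

# How they relate: the scoring run used the application data,
# produced the decision, and was carried out by the model.
doc.used(scoring, application)
doc.wasGeneratedBy(decision, scoring)
doc.wasAssociatedWith(scoring, model)
doc.wasDerivedFrom(decision, application)

# A human- and machine-readable trace that an explanation service could query.
print(doc.get_provn())
```

Records of this kind, accumulated as data flows through an organisation, are the sort of raw material a provenance interface could draw on when an explanation is requested.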

Finally, the workshop concluded with an open discussion in which the presenters responded both to each other’s points and to questions raised by attendees. A first key point of discussion was the role of data provenance in understanding AI functionality, particularly with regard to data flows across an organisation. It was also noted that this process could be undermined by inconsistencies in data treatment and naming conventions. The discussion of data flows over time then led into a consideration of real-time AI. It was proposed that explaining AI decision-making in real time is not truly feasible at present due to technical limitations. This is, however, mitigated by the fact that such a request is highly unlikely, as explanations are generally requested after the fact.
