Workshop: Socio-Technical AI Systems for Defence, Cybercrime, and Cybersecurity

This interdisciplinary workshop brought together researchers, academics, and practitioners to examine the challenges currently surrounding socio-technical AI systems. Presentations and discussions touched upon key issues including transparency, accountability, trustworthiness, and bias, as well as the need to incorporate human judgement within the ethical frameworks guiding the development of future socio-technical AI.

The first keynote of the day was delivered by Professor Steven Meers from the Defence Science and Technology Laboratory (DSTL), who focused on the emerging technological trends affecting the Ministry of Defence, detailing the opportunities and potential threats they pose. Discussion centred on AI's potential to become a transformative national security technology on a par with nuclear weapons, aircraft, computers, and biotechnology. The presentation concluded with the powerful message that a socio-technical approach will help to address the barriers to operationalising AI within the defence and security environment.

The second keynote was delivered by Professor David Wall from the Centre for Criminal Justice Studies at the University of Leeds, and focused on the development and use of socio-technical AI systems within criminology, particularly in response to cybercrime and cybersecurity issues. Emphasis was placed on the fact that the meanings, logic, and understandings of AI systems differ across disciplines, which can produce significantly different expectations. It was also noted that whilst AI cannot make the hard decisions alone, it can reasonably inform the decision-making of the practitioners, policy makers, and politicians who are mandated to make them. Hence, it is important to match the delivery of scientific claims to consumer expectations in order to maintain public confidence.

Key Takeaways of the Workshop

  • Law enforcement, cybersecurity, and defence applications of AI often involve decision making with significant human impact. Such automated decisions therefore need support from robust tools and intelligence products in which the potential for bias, error, and missing data is made clear, so that the decisions taken are both informed and proportionate.
  • Socio-technical AI systems offer the chance for human-in-the-loop solutions, overcoming some of the problems associated with black-box AI (a minimal sketch follows this list).
  • Security through obscurity never works: AI must be developed in an open and transparent way, or there will be no trust.
  • If AI is not diverse, it is not ethical.
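
As a rough illustration of the human-in-the-loop takeaway above, the Python sketch below shows one common pattern: a model's output is acted on automatically only above a confidence threshold, while everything else is escalated to a human analyst together with the model's suggestion and its score. The names, the threshold value, and the triage logic here are hypothetical illustrations, not anything presented at the workshop.

    from dataclasses import dataclass

    # Hypothetical policy value: below this, a human must decide.
    CONFIDENCE_THRESHOLD = 0.9

    @dataclass
    class Prediction:
        label: str         # e.g. "malicious" / "benign"
        confidence: float  # model's confidence in the label, 0.0-1.0

    def decide(pred: Prediction) -> str:
        """Act automatically only on high-confidence output;
        otherwise route the case to a human analyst."""
        if pred.confidence >= CONFIDENCE_THRESHOLD:
            return f"auto: {pred.label}"
        # Low confidence: escalate, preserving the model's suggestion
        # and score so the analyst's decision is informed by the model
        # but not dictated by it.
        return (f"escalate to analyst (model suggests {pred.label}, "
                f"confidence {pred.confidence:.2f})")

    print(decide(Prediction("malicious", 0.97)))  # auto: malicious
    print(decide(Prediction("malicious", 0.55)))  # escalate to analyst ...

The point of this design is that the system never hides its uncertainty: low-confidence cases carry the model's suggestion and score to the analyst rather than being silently auto-decided, which speaks directly to the transparency and proportionality concerns raised in the takeaways.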

 

Ashton Kingdon
Web Science PhD Researcher
Faculty of Social Sciences
University of Southampton
Doctoral Fellow: Centre for Analysis of the Radical Right
