Making the Web Human-Centric: New Directions in the Web and AI for Countering Violent Extremism – Ashton Kingdon

Thirty years have passed since Tim Berners-Lee invented the World Wide Web. In his 2019 open letter on the future of this technology, he opined that whilst the Web has created some wonderful opportunities, giving marginalised groups a voice and making our daily lives easier, it has also enabled the spread of misinformation, given a voice to those who disseminate hatred, and made many types of crime easier to commit. When I inform people that my PhD is in Web Science, they often look confused; I have frequently been asked, at job interviews and at academic conferences, what this discipline actually is, and why I think interdisciplinary research is so important in developing countermeasures to violent extremism. This article seeks to address these questions by outlining some of the key ethical and societal issues surrounding the use of intelligent technologies – Artificial Intelligence (AI), Augmented Intelligence, and Machine Learning – and their role in online radicalisation. It will also argue that researchers increasingly need to consider the Web as a socio-technical system, one that requires examining the influence both ‘man’ and ‘machine’ have on combating online extremism.

“The goal of the Web is to serve humanity. We build it now so that those who come to it later will be able to create things that we cannot ourselves imagine.” – Tim Berners-Lee

The Web is transforming society; the scale of its impact and the rate of its adoption are unparalleled. We are moving towards a world in which data has a life beyond the individual, where the value and potential of data are ever changing as technological developments bring new possibilities. The ‘immortality’ of data raises new ethical and societal issues that have not yet been fully explored and that, consequently, we are unprepared to mitigate. Many people think of the Web solely as a technology, but, in actuality, it is co-created by society: users place content on the Web and interact with its technologies – the platforms and the social networks. Such widespread participation in the Web’s development makes it essential that we study it as an ecosystem of interconnected elements that change and evolve over time.

In recent years, evolutionary approaches have gathered momentum across academic disciplines seeking to understand the emergent diversity and complexity of technologies and cultural traits. The main premise of such research is that small-scale evolutionary mechanisms operate gradually to produce cumulative changes that are observable on a larger scale. These analogies between biological and technological evolution are being used by social scientists who study digital media and software, as well as by the engineers who build these systems. In a role comparable to that of the genome in living organisms, software encodes and transfers the information that determines how a technology functions and expresses itself. There is also socially generated information, the transmission and accumulation of which can be traced online in ways that surpass offline media; current topics of interest in this area include the spread of misinformation online, cognitive biases, and ideological echo chambers.

The Web as a technology is inherently interdisciplinary and socio-technical in nature, and, in modern society, much of it is powered by AI. The expansion in the use of intelligent systems means that classifications, decisions, and predictions are frequently based on algorithmic models ‘trained’ on large datasets of historical and social trends. All significant technological advances bring with them challenges and opportunities for society, and therefore the need to consider questions of diversity, ethics, and accountability. An unforeseen concern has been the exposure of citizens to disinformation, fake news, and extremist propaganda. This has become a global socio-technical issue, as growing numbers of extremists utilise social media to increase the volume, diversity, and availability of propaganda.

The Battle for Ethical AI

“If a machine is expected to be infallible, it cannot also be intelligent.” – Alan Turing

Artificial Intelligence, particularly in the form of machine learning, has become increasingly influential, leading to rapid social and technological change as it becomes embedded in many aspects of society. However, the progressive transformation of humankind by AI not only confers benefits, but also risks compromising privacy, autonomy, dignity, and human safety. It is unquestionable that the digital world is open to manipulation, but many users of social media do not fully comprehend the effects and impact of such activity, particularly how the use of algorithms to access selected datasets affects the impartiality of the information generated. Consequently, the rising application of algorithms in decision-making has prompted demands for increased algorithmic accountability and the development of responsible and explainable AI. Data is the fuel for machine learning, and, whilst most algorithms are not inherently biased or unethical, the data and training they are fed can make them that way.
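To make that last point concrete, the sketch below is a minimal, purely illustrative example (using scikit-learn on an invented synthetic dataset) of how a skewed labelling process produces a classifier that treats group membership itself as predictive:

```python
# Minimal sketch: a classifier inherits the bias present in its training data.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Feature 0: genuine signal; feature 1: a group attribute (0 or 1).
n = 2000
signal = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Ground truth depends only on the signal ...
y_true = (signal > 0).astype(int)

# ... but the *collected* labels are skewed: group 1 is flagged
# positive far more often than its behaviour warrants.
y_observed = y_true.copy()
y_observed[(group == 1) & (rng.random(n) < 0.3)] = 1

X = np.column_stack([signal, group])
model = LogisticRegression().fit(X, y_observed)

# The model now treats group membership itself as predictive:
print("weight on signal:", model.coef_[0][0])
print("weight on group: ", model.coef_[0][1])  # non-zero = learned bias
```

The algorithm here is entirely standard; nothing about it is ‘biased’ until it is trained on labels that systematically over-flag one group.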

Automated decision-making continues to be used for a variety of purposes within a multiplicity of sectors, and a key goal for researchers is to assess how effectively AI makes decisions. The validity of these choices cannot be gauged without detailed knowledge of the key factors underlying algorithmic decisions, as, without these, it is impossible to know the extent to which social media companies are engaging in practices that could be considered unethical, deceptive, or discriminatory. The technology they currently employ, such as the exploitation of ‘Big Data’ and the use of advanced algorithms, raises substantial concerns regarding the potential role of AI in the dissemination of extremist propaganda, as illustrated by high-profile cases such as the role Facebook and Twitter allegedly played in Russian interference in the 2016 US Presidential election. Increased algorithmic transparency is essential in assessing the technological influence in such cases, although it is often resisted by the social media companies utilising AI, which are reluctant to provide information concerning the propaganda and politically sponsored content disseminated by their users for fear of losing their competitive edge and suffering a loss of custom.

Holding AI systems accountable for the potential dissemination of extremist content rests on the principle of transparency. Thus, as an important safeguard against abuse, companies deploying intelligent systems should be obliged, despite their opposition, to be open about how those systems operate and to be held accountable for any algorithmically driven decision-making. Identifying the source of propaganda, by ensuring its traceability throughout its dissemination, is essential to increasing algorithmic accountability. However, making one part of an AI system visible, such as the algorithm or the underlying data, is not the same as holding the assemblage accountable. In the majority of cases, algorithms are only one part of a broader system, and a holistic view is needed that sees accountability as a socio-technical phenomenon. Those responsible for the design, deployment, and use of socio-technical systems need to understand how and why those systems function, and must be accountable to oversight bodies both for that functioning and for the part they play within it.
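As a hedged illustration of what ‘traceability throughout dissemination’ could mean in practice, the sketch below chains each share of an item to its predecessor with a cryptographic hash, so the original source remains recoverable. The record fields and structure are hypothetical and do not correspond to any real platform’s API:

```python
# Sketch of hash-chained provenance records for a shared item.
# Field names and structure are hypothetical, not any platform's API.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    content_id: str    # identifier of the item being shared
    actor: str         # account performing this share
    parent_hash: str   # hash of the previous record ("" for the origin)

def record_hash(record: ProvenanceRecord) -> str:
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Origin post, then two re-shares, each linked to its predecessor.
origin = ProvenanceRecord("item-42", "source_account", "")
share1 = ProvenanceRecord("item-42", "resharer_a", record_hash(origin))
share2 = ProvenanceRecord("item-42", "resharer_b", record_hash(share1))

# Walking the parent_hash links back from the latest share recovers
# the chain, so the original source of the item stays identifiable.
for rec in (origin, share1, share2):
    print(rec.actor, "->", record_hash(rec)[:12])
```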

The conclusion is clear: platforms need to improve transparency about the origins of their content in order to provide a more objective and accountable information ecosystem. It is neither unreasonable nor unrealistic to expect social media companies, in view of their central role, to act with appropriate responsibility: to ensure a safe virtual environment, to protect users from propaganda, and to offer adequate and unfettered access to contrasting political and ideological views. Regrettably, under the current business models, legal frameworks, and ideological outlooks that prevail in Silicon Valley, such accountability and transparency appear unachievable in the near future, predominantly because they would require drastic and fundamental changes in the sector – from law, to practice, to mindset.

The Benefit of Interdisciplinary Research in Countering Violent Extremism

As a Web Science PhD student, I am undertaking research that is inherently interdisciplinary, combining criminology, history, and computer science to explore the ways in which right-wing extremists utilise technology for recruitment and radicalisation. It has therefore been important that my supervisory team share my vision for the way I wish to study extremism and radicalisation, namely as a phenomenon co-constituted by both individuals and technology. On hearing of this article, team members and other interdisciplinary researchers provided me with the following quotes, encompassing their opinions on the benefits of interdisciplinary research when examining the most effective ways of countering violent extremism.

“Despite the extensive academic interest in the study of violent extremism, such projects are frequently reduced to unidimensional approaches due to established departmental structures, traditional disciplinary divisions, and the restrictive stipulations of funding sources. The task of understanding and countering this dangerous phenomenon is, however, too complex and multi-faceted to be truly understood from any single disciplinary approach. Countering the sort of transnational violent extremism of which we are seeing more and more, in an increasingly interconnected yet divided world, requires the adoption of comprehensive interdisciplinary frameworks that incorporate the many different dimensions constituting extremism, as well as the large number of ways in which the hate radiating from these extreme ideological views is disseminated. Theological, social, political, and economic dimensions need to be considered and combined with a sophisticated understanding of the technology utilised by these groups, as well as the historical narratives that are weaponised to frame extremists’ worldviews and justify their violent behaviour. Different disciplines bring overlapping nuances and approaches to the exploration of violent extremism, be it the online penetration of covert organisations and the related personal security and privacy of researchers, detailed data analysis, the application of theory and methods, or broader contextualisation. All researchers can enormously enhance the findings of traditional siloed methodologies by adopting an interdisciplinary approach to the study of violent extremism.

When it comes to technology, there is a problematic perception among many policy makers, overwhelmed by the scale and complexity of decision-making in the modern state, that networked machines, combined with the collection of ever more data on citizens’ behaviour, can be used to create artificial intelligence smart enough to make better policy decisions than humans. However, running more and more data points through an algorithm may well produce more data-driven decisions, but these are not necessarily the same thing as intelligent decisions. For instance, flaws in the collection of data, and the unseen prejudices held by an algorithm’s programmers, could contribute to a self-perpetuating cycle of machine-learned inequality and injustice. Basing key policies related to law and order, healthcare, educational achievement, the economy, and even life and death on algorithmic decision-making is not an efficiency saving; it is an abrogation of political responsibility and humanity. Relying on code to interpret data limits the scope for human input, and everything that comes with it – flexibility, creativity, morality, vision, hope, forgiveness, and instinct. As new surveillance technologies allow businesses and governments to capture more and more data about daily human behaviour, and seemingly to find patterns in the complexity of human life, the temptation to defer to algorithmic decision-making grows ever greater. But such a course is contrary to democracy and marks a dangerous deferment of our humanity.” Dr Christopher Fuller – Associate Professor in History, University of Southampton, with a particular interest in the role of technology in creating national security and insecurity

“Even if monodisciplinary endeavours maintain their descriptive, explanatory, or heuristic value, multi- and interdisciplinary efforts are increasingly needed to deal with technical challenges, big data availability, and complex social issues, first and foremost in cyberspace. Developing sensitivity towards the importance of interdisciplinarity is also needed to improve the long-established institutional structures based on a rigid division between disciplines. For most of us, epistemological issues are so closely embedded in our research routines that we tend to keep them implicit in our reasoning and in defining research designs. However, in the effort to facilitate and foster effective cross-disciplinary communication and collaboration, it is good to remind ourselves that there are different cultures of debate, and that meaning and knowledge can also be created in ways with which we might be less familiar.” Dr Anita Lavorgna – Associate Professor in Criminology, University of Southampton

“Assemblages of like-minded people exchanging memes, sharing experiences, and expanding networks within digital spaces promote information that reflects and reinforces established beliefs, and enable the propagation of misinformation. This can distort an individual’s viewpoint such that they struggle to comprehend alternative perspectives. The evolution of incel (involuntary celibate) communities into insular hate-fuelled groups has been facilitated by these ‘digital echo chambers’, which amplify and augment their extremist misogynistic ideology. Often, anti-female and anti-feminist rhetoric is presented as harmless or satirical media, and is deliberately targeted towards young men and boys. In order to combat the threat posed by groups such as incels and the ideologies they espouse, interdisciplinary research is necessary to obtain a holistic understanding of the dynamics of these communities.” Dr Lisa Sugiura – Senior Lecturer in Criminology and Cybercrime, Institute of Criminal Justice Studies, University of Portsmouth

“The Web provided new technology for communicating, living our lives, and organising society through online spaces. Web Science is an interdisciplinary effort that brings together researchers who individually specialise in each of those areas to provide insight into newly emerging Web-enabled phenomena.

The Web’s disintermediated social media industry provides huge scale and penetration for far-right disinformation campaigns. Facebook and Twitter are spaces that researchers can examine for evidence of the spread of extremist messages. The challenge of my job as an interdisciplinary researcher is to work collaboratively to create new ethical research methodologies that identify tens of thousands of people sending hundreds of thousands of tweets, combining Political Science theories with Computer Science methods for analysing large-scale text collections and large-scale networks.” Professor Les Carr – Director of Web Science Centre for Doctoral Training and Professor of Web Science, University of Southampton
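As a rough illustration of the combined text-and-network analysis Carr describes, the sketch below pairs a simple term count with a retweet graph. The tweets, handles, and field names are invented, and a real study would operate on collections orders of magnitude larger:

```python
# Minimal sketch of combined text and network analysis on tweets.
# The tweets and handles below are invented and purely illustrative.
from collections import Counter
import networkx as nx

tweets = [
    {"author": "user_a", "retweet_of": "user_b", "text": "replacement rhetoric again"},
    {"author": "user_c", "retweet_of": "user_b", "text": "rhetoric spreading fast"},
    {"author": "user_b", "retweet_of": None,     "text": "original post with rhetoric"},
]

# Text side: frequency of terms across the collection.
term_counts = Counter(word for t in tweets for word in t["text"].split())

# Network side: a directed retweet graph, with edges pointing to the source.
graph = nx.DiGraph()
for t in tweets:
    if t["retweet_of"]:
        graph.add_edge(t["author"], t["retweet_of"])

# Accounts whose content is most amplified (highest in-degree).
amplified = sorted(graph.in_degree, key=lambda x: x[1], reverse=True)
print(term_counts.most_common(3))
print(amplified)
```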

The Need for a Socio-Technical Approach

The main message of this article is the need for further interdisciplinary research focusing on the socio-technical challenges of deploying AI technologies on the Web. Applications of AI for countering extremism often involve decision-making that can have a significant human impact, and for such decisions to be both informed and proportionate, support is needed from robust tools and intelligence products in which possible sources of bias and error, and potentially missing data, are made clear. The challenges of dealing with large volumes of personal data are increasingly apparent in many fields of practice, although they may manifest in different ways. Hence, rather than addressing these problems within the boundaries of our academic silos, a holistic approach is needed that focuses on the commonalities in the issues that arise. Socio-technical AI systems offer the chance for ‘human in the loop’ solutions that overcome some of the problems associated with opaque, black-box AI.
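One hedged sketch of what a ‘human in the loop’ arrangement could look like in content moderation: the model acts automatically only on high-confidence predictions and routes uncertain cases to a human reviewer. The thresholds and the classify() stub below are illustrative assumptions, not any platform’s actual pipeline:

```python
# Sketch of a human-in-the-loop triage step: the model decides only
# when confident; borderline cases are deferred to a human reviewer.
# Threshold values and the classify() stub are illustrative assumptions.

def classify(text: str) -> float:
    """Stand-in for a trained model; returns P(extremist content)."""
    return 0.5  # placeholder score

def triage(text: str, remove_above: float = 0.95, review_above: float = 0.60) -> str:
    score = classify(text)
    if score >= remove_above:
        return "auto-remove"    # high confidence: act automatically
    if score >= review_above:
        return "human review"   # uncertain: defer to a person
    return "no action"

print(triage("example post"))  # -> "no action" with the placeholder model
```

The design choice is that the automated component never makes an irreversible decision in its zone of uncertainty; human judgement remains in the loop precisely where the model is least reliable.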

As AI is developed and deployed globally, it promises to revolutionise almost all aspects of daily life; consequently, social media companies need to employ this technology in a way that is trustworthy, transparent, and responsible. It would be disingenuous to discuss the technological components of AI applications distributing propaganda without considering the impact of their implementation on the ordinary citizen. This article therefore emphasises the need for ‘Augmented Intelligence’ – the combined intelligence of ‘man’ and ‘machine’ – when examining the potential impact that computational advances could have on radicalisation. In the relationship between society and technology, it could be argued that knowledge of the algorithms used by social media platforms should be considered a fundamental human right, as users of these sites should undoubtedly be aware of the data selection processes that could influence their lives so significantly. Without this knowledge, individuals cannot realistically assess the validity and objectivity of the information presented to them.

 
