Tomorrow’s Leviathan – Machine Learning in a Political World
Today’s WSI Distinguished Lecture was delivered by Professor Phil Howard of the Oxford Internet Institute at the University of Oxford. The overarching theme of Prof. Howard’s lecture was the influence – whether positive or negative – that Artificial Intelligence is having on politics, and the wider implications of AI shaping public opinion via social media.
Prof. Howard’s research sets out a framework to anticipate and react to the effects AI will have on society by assessing those effects we are currently aware of, and by developing scenarios and strategies to anticipate how AI will develop and be deployed in future. This is accomplished in part thanks to the diverse, multidisciplinary nature of the Oxford Internet Institute’s research staff, which includes researchers from disciplines such as law, philosophy, health and politics, as well as computer science.
The notion of how political societies are created was discussed by way of political philosopher Thomas Hobbes, who set forth in his 1651 book Leviathan the idea that our individual values shape our government and governance. We give up certain individual rights and freedoms to create and participate in a society – a topical example being privacy. Brief definitions of politics as something which “occurs when one person tries to represent another person’s interests”, and of AI as “collections of algorithms and data that simulate learning, reasoning, and classification”, were given as a basis for the discussion to follow.
Many interesting anecdotes were given about AI’s deployment in social media bots, ranging from the humorous to the sinister: from Tinder bots which would flirt with users before discussing Jeremy Corbyn, to bots which sought to discourage non-white demographics in the USA from voting, citing rhetoric such as “voting is pointless, as no-one represents you”. Amusing or not, all of the examples of bot activity given by Prof. Howard share the same common denominators: the spread of misinformation, the influencing of political views and social disruption are the goals of the bots’ nefarious behaviours. These activities were particularly rife in the lead-up to the most recent presidential election in the United States, with Russian bots causing widespread disruption across social media platforms.
The political arena is only one of the domains in which bots operate. Prof. Howard described the popularity of bot use in the beauty and pharmaceutical industries, the latter being the most popular area for bot use. The bots operate through influence. In the pharmaceutical example, this would involve several thousand fake accounts posting about a medical condition – for example, migraines – in the public domains of social media, and then interacting with several thousand more bot “patients” who have found cures for these ailments. Interactions such as these give social media users the impression that a genuine encounter has occurred between someone suffering from migraines and someone who has had great success with a particular medication. These positive or negative impressions of products from supposedly real users are what companies and organisations use to influence consumer behaviour.
All hope is not lost with regard to nefarious AI activity, as there are ways and means to combat its negative effects online. Suggestions put forward by Prof. Howard include the reporting of the ultimate beneficiaries of our data: who benefits from it, and to what end. Additionally, as the users who generate this data, we should have the agency to grant data access to whomever we choose. This would be of great benefit to scientists and researchers, who often have poor access to the most current data with which to work. More up-to-date datasets, Prof. Howard notes, mean that the information and research generated from the data are not outdated, leading to fewer cases of erroneous or misleading conclusions that jeopardise factual research. Along similar lines, it is suggested that users could donate their data for research purposes.
In our current era, in which “fake news” and misinformation are commonplace, Prof. Howard emphasises the importance of research now more than ever. All research conducted to combat the problems with AI in the present is done in the faith that it will make a difference further down the line, as we may have only a few more years before politically engaged AI is rolled out to the public. It is predicted that in the near future governments will implement their own AI systems for political ends, making decisions on behalf of people and groups – effects which researchers are urgently trying to predict and pre-empt.
Prof. Howard rounded off the lecture by adding that AI will play a significant role in generating the content we see online in the future. As some of us already do with bots today, we will soon all be interacting with AI online without even realising it.