#WebSci20 – Paper Session 7: Bias and fairness by Robert Thorburn

Posted on behalf of Robert Thorburn

Chair: Katharina Kinder-Kurlanda

The seventh session of Web Science 2020, chaired by Katharina Kinder-Kurlanda, presented papers focused on issues of bias and fairness. As with other paper sessions at the conference, the speakers represented universities from across the globe and had widely divergent focus areas within the broader theme. The four papers presented covered peer-to-peer discrimination in the sharing economy, fairness in clustering, bias mitigation in facial imaging, and a systematic media frame analysis.

Although these papers covered a wide range of topics, they had notable areas of overlap, such as the prominence of unintended or unconscious biases and the challenges these pose to academic studies in the field. In peer-to-peer systems these issues are further complicated by interactions being multidirectional, meaning that the impacts of biases flow along multiple vectors. A further notable example of such bias is the use of facial recognition systems to determine gender, where the determination is binary only. Not only does this fix a discriminatory practice in the system’s code base, but there is no clear reason to do so in the first instance: gender data is generally not needed in facial recognition systems, while such systems could in principle be trained to discern a wide range of gender and sexual orientation groups. However, this raises clear ethical concerns around the application of such systems, especially since it is easy to imagine how repressive governments might use facial recognition for targeted abuse.

During the general discussion the presenters distilled this issue further by focusing on fairness through representation. The lack of representation was pinpointed as a central gap, not only in related research but in ML systems in general: without representative training data, such systems cannot produce fair results. Conversely, broader representation can be used to achieve unfair and even inherently unjust outcomes if it is embedded in a system aimed at discriminatory practices.
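To make the representation point concrete, below is a minimal, hypothetical sketch (not taken from any of the presented papers) of how one might audit a labelled dataset and a model's outputs for group representation and for differences in positive-outcome rates. The group labels and data are illustrative assumptions only.

```python
from collections import Counter

def representation(groups):
    """Share of each demographic group in a dataset."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def positive_rate_by_group(groups, predictions):
    """Rate of positive predictions per group (a simple demographic-parity check)."""
    totals, positives = Counter(), Counter()
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(p)
    return {g: positives[g] / totals[g] for g in totals}

# Purely illustrative (made-up) group membership and model decisions:
groups = ["a", "a", "a", "a", "b", "b"]   # group "b" is under-represented
predictions = [1, 1, 1, 0, 0, 0]          # model's positive/negative outcomes

print(representation(groups))                 # group shares, roughly a: 0.67, b: 0.33
print(positive_rate_by_group(groups, predictions))  # positive rates: a: 0.75, b: 0.0
```

A skew in either measure, under-representation in the training data or divergent outcome rates across groups, is exactly the kind of gap the discussion flagged; the same measurements could, of course, also be used to deliberately tune a system toward discriminatory outcomes.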
