Justice and Fairness in Data Use and Machine Learning
The 17th annual Information Ethics Roundtable will explore the relationship between the normative notions of justice and fairness and current practices of data use and machine learning.
Artificial intelligence is now part of our everyday lives. It helps us navigate to places we have never been before while avoiding traffic and road work, communicate with friends in China when we don’t share a common language, and carry out complex but mind-numbing repetitive jobs in factories. But such artificial intelligences can also exhibit what we might call “artificial bias”: that is, machine behavior that, if produced by a person, we would call biased against particular groups, such as racial minorities. Machine learning on large data sets is one means of achieving AI that is particularly vulnerable to producing biased systems, because it relies on data from human behavior that is itself biased. A number of tech companies, such as Google and IBM, along with computer science researchers, are currently seeking ways to correct for such biases and to produce “fair” algorithms. But a number of fundamental questions about bias, fairness, and even justice must still be answered if we are to solve this problem.