17th Annual Information Ethics Roundtable
Justice and Fairness in Data Use and Machine Learning
April 5-7, 2019

Northeastern University
Boston, MA

________________________________
The 17th annual Information Ethics Roundtable will explore the relationship between the normative notions of justice and fairness and current practices of data use and machine learning.
Artificial intelligence is now a part of our everyday lives. It allows us to easily get to a place we have never been before while avoiding traffic and road work, to communicate with a Chinese friend when we don't share a common language, and to carry out complex but mind-numbing repetitive jobs in factories. But such artificial intelligences can also exhibit what we might call "artificial bias"; that is, machine behavior that, if produced by a person, we would say is biased against particular groups, such as racial minorities. Machine learning using large data sets is one means of achieving AI that is particularly vulnerable to producing biased systems, because it uses data from human behavior that is itself biased. A number of tech companies, such as Google and IBM, and computer science researchers are currently seeking ways to correct for such biases and to produce "fair" algorithms. But a number of fundamental questions about bias, fairness, and even justice still need to be answered if we are to solve this problem. (See below for some examples.)
In the 2019 edition of IER, we seek proposals that approach these questions from a variety of disciplinary perspectives through the lens of information ethics.

Registration is free and the conference is open to the public. Thus, we invite you to attend, regardless of whether or not you are formally workshopping or discussing a paper.

If you have questions, please contact the IER organizer, Kay Mathiesen.

Proposals

Suggested Topics:
  *   What concepts of fairness and justice in philosophy and other disciplines are most useful for understanding fairness, equality, and justice in data use and machine learning?
  *   To what extent is it possible to operationalize (or computationalize) different conceptions of fairness and justice within different machine learning techniques?
  *   Should machine learning based decision-making systems be held to a higher or different standard of fairness and justice before being implemented in industry (e.g., lending) or social services (e.g., child protective services), in comparison to currently accepted practices?
  *   What is the role of data scientists and computer programmers in correcting for bias? How can machine learning be used in this role?
  *   Not all biases are problematic; indeed, some are very helpful. What sorts of bias are unjust and why?
  *   What can modern-day programmers of "classifications" learn about avoiding bias from the experience of other disciplines devoted to classification, such as librarianship?
  *   What can normative research in other areas, for example with respect to police profiling or immigration/refugee screening, teach us about when or under what conditions profiling with machine learning is acceptable?
  *   What is the relationship between explainability/interpretability in machine learning decision-making and the just use of machine learning in different contexts?

Proposal Requirements:
We invite three types of proposals:
(1) Papers: Please submit a 500-word abstract of your paper. If accepted, you are expected to submit a detailed outline of your talk to the Roundtable. This will give your commentator a chance to prepare his/her comments in advance.
(2) Panels: Please submit a 1500-word description of your panel. The description should include: i) a description of the topic, ii) biographies of the panel members, and iii) the organization of the panel. It is a requirement that panels focus tightly on a specific emergent topic, technology, phenomenon, policy, or the like, with clear connections between the presentations.
(3) Posters (for undergraduate and graduate students only): Please submit a 500-word abstract of your poster and an outline of the major sections.

Commentators:
We are also interested in receiving expressions of interest to serve as a commenter/discussant for another person's paper. Each author with an accepted proposal will be paired with a commenter who will provide formal feedback and comments during the conference. Expressions of interest should be sent to Katie Molongoski at k.molongoski@northeastern.edu by March 10th.

Submitting Proposals:
Please submit proposals to k.molongoski@northeastern.edu
Please include the subject line "IER Proposal" and indicate whether your submission is a paper abstract, panel, or poster.

Sponsors:
  *   Center for Law, Innovation and Creativity
  *   Northeastern Ethics Institute
  *   Northeastern University College of Social Sciences and Humanities
  *   Northeastern Humanities Center

Deadlines:
  *   Submission of Proposals: February 15, 2019
  *   Notification of Acceptance: March 1, 2019
  *   Presentation Outline Deadline: March 15, 2019
  *   Registration Deadline: March 29, 2019
  *   Conference Dates: April 5-7, 2019