Explainable Artificial Intelligence
Project Leader: Timothy Miller
Staff: Liz Sonenberg, Eduardo Velloso, Frank Vetere, Mor Vered
Student: Prashan Madumal
Collaborators: Piers Howe (Melbourne School of Psychology)
Sponsors: Defence Science and Technology Group, Microsoft Research
Primary Contact: Tim Miller (firstname.lastname@example.org)
Disciplines: Computing and Information Systems
As artificial intelligence (AI) becomes ubiquitous, algorithms will become increasingly responsible for decisions that directly impact individuals and society as a whole. Such decisions will need to be justified due to concerns of ethics and trust. The challenge of Explainable Artificial Intelligence (XAI) is gaining worldwide attention; for example, the first XAI research workshop took place at the recent major international conference, IJCAI 2017.
A key goal of XAI is to explain decisions or beliefs to people. However, much of the current research in XAI is being done in a vacuum, using only the researchers' intuition of what constitutes a good explanation and void of studies involving people. In this project, we aim to take a human-centered approach to XAI, explicitly studying what questions people want explanations to answer, what makes a good explanation to a person, how explanations and causes can be extracted from complex and often opaque decision-making models, and how they can be communicated to people.
Further information: https://people.eng.unimelb.edu.au/tmiller/