PhD project: Explaining AI using human-computer dialogue

Abstract

With the rise of autonomous intelligent technology, the ability to explain why a particular action was proposed or taken is more urgent than ever. This implies that the kind of reasoning that takes place in intelligent systems should be made transparent and explainable, in particular to people who are not AI experts. We propose to achieve this using recent developments in theories of computational argument. The key idea is that the user will be able to engage in a discussion with the intelligent technology in which explanation is provided by exchanging arguments and counterarguments. Our aim is to develop a software implementation that builds upon existing theoretical results, and supplements these where necessary. Using this software, we will assess the extent to which our approach increases user confidence in the decisions of intelligent technology.

Current state of the art and challenges

Formal argumentation theory provides a machine-implementable way of drawing conclusions, even from conflicting information, by constructing arguments for different claims and examining how these arguments interact. An argument, in essence, consists of a collection of reasons, each of which specifies that a particular consequent follows from a particular set of premises. Some of these reasons are strict, in the sense that they do not allow for any exceptions (like instances of traditional logical inference), whereas others are defeasible, in the sense that they do have exceptions and should therefore be treated as rules of thumb. An argument can attack another argument, meaning that the conclusion of the former is incompatible with the validity of the latter. Hence, one can construct a directed graph in which the nodes represent arguments and the arrows represent the attack relation. Given such a graph, the question is which set(s) of arguments can be considered justified. Several criteria for making this determination (called "argumentation semantics") have been proposed in the literature, with special attention to guaranteeing an overall consistent outcome (with respect to the conclusions of the justified arguments).
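As an illustration, the widely used grounded semantics can be computed as the least fixpoint of the framework's characteristic function: start with the unattacked arguments and repeatedly add every argument whose attackers are all defeated by arguments already accepted. The sketch below is illustrative only; the function and variable names are our own, not a standard API.

```python
def grounded_extension(arguments, attacks):
    """Return the grounded extension of an abstract argumentation framework.

    `arguments` is a set of argument names; `attacks` is a set of
    (attacker, target) pairs forming the attack graph.
    """
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, accepted):
        # An argument is defended when each of its attackers is itself
        # attacked by some already-accepted argument.
        return all(
            any((d, b) in attacks for d in accepted)
            for b in attackers_of[candidate]
        )

    accepted = set()
    while True:
        # Iterate the characteristic function until a fixpoint is reached.
        new = {a for a in arguments if defended(a, accepted)}
        if new == accepted:
            return accepted
        accepted = new

# Example: A attacks B, B attacks C.  A is unattacked, so A is justified;
# A defeats B, which reinstates C.
args = {"A", "B", "C"}
atts = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, atts)))  # -> ['A', 'C']
```

Note that B is excluded: its only attacker, A, is justified, so accepting B would make the overall set of conclusions inconsistent.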

Traditionally, argumentation semantics are defined using fixpoint theory. A recent development, however, is to reformulate argumentation semantics as structured discussions: an argument is considered justified when it is possible to win a particular discussion (in which different arguments are exchanged) in defence of the argument in question. The aim of the current project is to transfer the existing theory of argument-based discussion into a form in which it can be used for human-computer interaction.
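To make the discussion-based view concrete, the sketch below plays a simplified two-party game for the grounded semantics: the proponent (PRO) puts forward the argument to be defended, the opponent (OPP) may reply with any attacker, and PRO must counter each such attacker without repeating an argument it has already used. PRO wins when OPP runs out of moves. This is a simplification of the discussion games studied in the literature, written under our own assumptions, not an implementation of any specific published protocol.

```python
def pro_wins(argument, attacks, used=frozenset()):
    """Can PRO defend `argument` against every attacker OPP may advance?

    `attacks` is a set of (attacker, target) pairs; `used` tracks the
    arguments PRO has already put forward (PRO may not repeat itself).
    """
    used = used | {argument}
    attackers = {x for (x, y) in attacks if y == argument}
    # PRO must survive every possible OPP move ...
    return all(
        # ... by finding some fresh counterargument that itself survives.
        any(
            pro_wins(counter, attacks, used)
            for (counter, target) in attacks
            if target == opp_move and counter not in used
        )
        for opp_move in attackers
    )

atts = {("A", "B"), ("B", "C")}
print(pro_wins("C", atts))  # OPP plays B, PRO counters with A: True
print(pro_wins("B", atts))  # OPP plays the unattacked A: False
```

Because PRO may not repeat its own arguments, the game always terminates; an exchange like this, presented one move at a time, is exactly the kind of human-computer dialogue the project intends to build on.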

Required skills and background

Candidates should have skills and experience in formal methods, such as formal logic, non-monotonic reasoning, or answer set programming. Strong software development skills are also necessary, as one of the aims is to develop a software implementation that builds upon the theoretical results.

About the supervisor

Dr. Martin Caminada is one of the leading researchers in the field of computational argument. Topics he has advanced include criteria for determining justified arguments, techniques for guaranteeing the consistency of the resulting conclusions, and discussion-based interpretations of argumentation semantics. His work is highly cited and has appeared in some of the most competitive journals and conferences in the field.

Funding and fees

Please be aware that the PhD position is for self-funded students only and is subject to Cardiff University's tuition fees.