New BEHAVE publication: Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence

The quest to equip AI with moral reasoning is challenging. In this work we operationalize a theoretical metanormative framework for decision-making under moral uncertainty using a latent class choice model. But what does moral uncertainty mean in practical terms? In this article by Andreia Martinho, Maarten Kroesen, and Caspar Chorus, published in Minds and Machines, we provide an empirical illustration of moral uncertainty. Imagine a world where AI is in charge of making transport policy decisions. Would the decisions of an AI equipped with a morally uncertain model differ from those of a system equipped with a morally certain model? The abstract is as follows:

As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and operationalize it through a latent class choice model. The core idea is that moral heterogeneity in society can be codified in terms of a small number of classes with distinct moral preferences, and that this codification can be used to express the moral uncertainty of an AI. Choice analysis allows for the identification of classes and their moral preferences based on observed choice data. Our reformulation of the metanormative framework is theory-rooted and practical in the sense that it avoids runtime issues in real-time applications. To illustrate our approach we conceptualize a society in which AI Systems are in charge of making policy choices. While one of the systems uses a baseline morally certain model, the other uses a morally uncertain model. We highlight cases in which the AI Systems disagree about the policy to be chosen, thus illustrating the need to capture moral uncertainty in AI Systems.
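To make the mechanism concrete, here is a minimal sketch, in Python, of how the two decision rules in the illustration could differ once latent classes have been estimated. Everything in it is a hypothetical illustration rather than the paper's actual implementation: the policy attributes, class parameters, and class shares are made-up numbers, and the aggregation rule shown (maximizing a class-share-weighted expected utility) is one simple way such a morally uncertain model could be operationalized.

```python
import numpy as np

# Hypothetical illustration of the core idea; none of these numbers
# come from the actual study. In the paper, the classes, their moral
# preferences, and their shares are estimated from observed choice
# data via a latent class choice model.

# Attributes of two candidate transport policies, e.g.
# (lives saved, travel-time gain, cost). Illustrative values only.
policies = {
    "policy_A": np.array([3.0, 10.0, -2.0]),
    "policy_B": np.array([5.0, 2.0, -4.0]),
}

# Morally certain baseline: a single set of taste parameters,
# as if society held one homogeneous moral outlook.
beta_certain = np.array([0.5, 0.1, 0.3])

# Morally uncertain model: a few latent classes with distinct moral
# preferences; class shares act as the AI's credence in each outlook.
class_betas = np.array([
    [1.2, 0.05, 0.1],  # class 1: strongly safety-oriented
    [0.2, 0.30, 0.2],  # class 2: efficiency-oriented
    [0.4, 0.10, 0.6],  # class 3: cost-sensitive
])
class_shares = np.array([0.60, 0.25, 0.15])  # sums to 1

def utility(beta: np.ndarray, x: np.ndarray) -> float:
    """Linear-in-parameters utility, as in a standard logit model."""
    return float(beta @ x)

def expected_utility(x: np.ndarray) -> float:
    """Credence-weighted utility across the latent moral classes."""
    return float(class_shares @ (class_betas @ x))

certain_choice = max(policies, key=lambda p: utility(beta_certain, policies[p]))
uncertain_choice = max(policies, key=lambda p: expected_utility(policies[p]))

print(certain_choice)    # policy_A under the morally certain model
print(uncertain_choice)  # policy_B under the morally uncertain model
```

With these illustrative numbers the two rules select different policies, mirroring the kind of disagreement between the morally certain and morally uncertain systems that the abstract describes.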