Opinion article in de Volkskrant on compromises in political negotiations

On May 25, de Volkskrant, a well-read newspaper with national circulation, published an op-ed by Caspar about compromises in political negotiations. Currently, Dutch political parties are engaged in a difficult process of coalition exploration. Whether or not these negotiations will lead to a successful government depends on the ability of political parties to find common ground. The article posits that it is better to search for a compromise on every theme (climate, immigration, etc.) than to opt for a strategy of extremes in which every party gets a big win on some themes (say climate) and suffers big sacrifices on others (say immigration). Why are these compromise outcomes more sustainable than trade-off outcomes? This is related to the well-known compromise effect, which is one of the most robust behavioral economics findings and has been replicated in all sorts of contexts. The source of this effect lies in loss aversion and taboo trade-off aversion: ultimately, in a strategy of extremes, the gains on one theme do not compensate for the losses felt on others, and the trade-off itself feels taboo (see earlier work on this by BEHAVE). Especially in polarized environments and in cases where deep values and moral dilemmas are at stake and decisions are perceived as very difficult, loss aversion and taboo trade-off aversion are deeply felt, and hence compromises carry the day. A nice way to show the relevance of our work to the wider community!

Aemiro presents at ICMC

On 24 May 2021, Aemiro from BEHAVE gave a presentation at the International Choice Modelling Conference (ICMC) online mini-event. The event drew participants from all corners of the world and focused in particular on using choice modelling to understand behaviour in a Covid-19 context. In his presentation, Aemiro discussed work in progress on self-interest motives, positional concerns and distributional considerations in healthcare choices. In particular, he explained a flexible new choice model that captures conventional preferences, positional concerns as well as distributional preferences, discussed the properties and identification issues of the model, and outlined data requirements for empirical application; a rough illustrative sketch of the general idea is given below the conference link.

The conference website is https://cmc.leeds.ac.uk/news/icmc-2021-mini-online-conference/
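As a purely hypothetical illustration of the general idea (the functional form, parameter names and numbers below are my own stand-ins, not Aemiro's model), a utility function for a healthcare allocation could combine self-interest, a positional term and a distributional term:

import numpy as np

def utility(own, others, b_self=1.0, b_pos=0.5, b_dist=-0.8):
    # Utility of receiving 'own' when 'others' receive the listed amounts.
    others = np.asarray(others, dtype=float)
    allocation = np.append(others, own)
    self_interest = own                                             # conventional preference
    positional = np.mean(own > others)                              # share of others one outranks
    inequality = np.mean(np.abs(allocation - allocation.mean()))    # distributional concern
    return b_self * self_interest + b_pos * positional + b_dist * inequality

# Two hypothetical allocations of, say, treatment waiting-time reductions:
print(utility(own=4, others=[1, 1, 1]))   # large personal gain, very unequal
print(utility(own=3, others=[3, 3, 3]))   # smaller gain, equal for everyone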

Presentation at the Netherlands Parliament

Last week, Caspar gave a presentation for a delegation from the Netherlands Parliament (Tweede en Eerste Kamer) to talk about the potential and pitfalls of using artificial intelligence for digital decision support in government. He discussed the issue of moral autonomy of decision makers who interact with AI-based decision support, and also touched upon the moral aspects of the government-citizen relationship and how those are affected by the use of AI-systems. Building on previous BEHAVE-work, e.g. a decision support system for moral decisions in a healthcare context, recommendations were given about how to uphold moral values when applying advanced decision support in morally sensitive contexts.

Andreia’s paper on Artificial Moral Agents published

The paper “Perspectives about artificial moral agents” by Andreia Martinho, Adam Poulsen, Maarten Kroesen & Caspar Chorus has now been published in AI and Ethics.

Artificial Moral Agents (AMAs) have long been featured in science fiction, but recent advances in AI have made a scientific debate about these systems urgent. In this article, ethicists were surveyed on key issues associated with AMAs. In particular, we aim to gain insights into four fundamental questions regarding these systems: (i) Should AMAs be developed? (ii) How to develop AMAs? (iii) Do AMAs have moral agency? and (iv) What will be the moral and societal role of these systems? Five main perspectives were identified.

Perspectives about Artificial Moral Agents

The pursuit of AMAs is complicated. Disputes about the development, design, moral agency, and future projections for these systems have been reported in the literature. This empirical study explores these controversial matters by surveying (AI) Ethics scholars with the aim of establishing a more coherent and informed debate. Using Q-methodology, we show the wide breadth of viewpoints and approaches to artificial morality. Five main perspectives about AMAs emerged from our data and were subsequently interpreted and discussed: (i) Machine Ethics: The Way Forward; (ii) Ethical Verification: Safe and Sufficient; (iii) Morally Uncertain Machines: Human Values to Avoid Moral Dystopia; (iv) Human Exceptionalism: Machines Cannot Moralize; and (v) Machine Objectivism: Machines as Superior Moral Agents. A potential source of these differing perspectives is the failure of Machine Ethics to be widely observed or explored as an applied ethic and more than a futuristic end. Our study helps improve the foundations for an informed debate about AMAs, where contrasting views and agreements are disclosed and appreciated. Such debate is crucial to realize an interdisciplinary approach to artificial morality, which allows us to gain insights into morality while also engaging practitioners.

The paper can be found at:

https://link.springer.com/article/10.1007/s43681-021-00055-2
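For readers unfamiliar with Q-methodology, the sketch below (synthetic data, and without the factor rotation and respondent flagging a full Q study would involve) illustrates its defining step: respondents' sorts of the statements are correlated with one another and factor-analysed, so that each retained factor corresponds to a shared perspective.

import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_statements = 20, 30
# Synthetic Q-sorts: each row is one respondent's ranking of the statements,
# so the resulting "perspectives" here are meaningless noise by construction.
sorts = rng.normal(size=(n_respondents, n_statements))

# Correlate respondents with each other (not statements) - the defining step of Q-methodology.
corr = np.corrcoef(sorts)

# Principal components of the respondent-by-respondent correlation matrix.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:5]] * np.sqrt(eigvals[order[:5]])

# Each respondent is associated with the perspective (factor) they load on most strongly.
perspective = np.abs(loadings).argmax(axis=1)
print(perspective)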

BEHAVE research on social routing covered in De Kampioen

This week’s issue of De Kampioen – the magazine of Netherlands Motorists Association ANWB, with 4.8 million readers – presents the results of a quick survey amongst their membership panel. The topic: would you be willing to occasionally take a social detour to alleviate congestion on busy parts of the network? In a nutshell, outcomes suggest that a majority would be open to following the social routing advice every now and then. The ANWB makes clear that she would support such social routing schemes. This is good news for BEHAVE, which studies the moral determinants of people’s inclination (not) to join such systems. We presented our research plans to the ANWB, which led to the mini-survey in their magazine. Our own results are forthcoming – data collection has just been finished, and first analyses being performed; stay tuned!

BEHAVE leads a special issue on moral choice models

Today, a special issue edited by Caspar in collaboration with Ulf Liebe from Warwick University and Juergen Meyerhoff from TU Berlin was published in the Journal of Choice Modelling. The special issue argues that moral decisions are different from ordinary, ‘consumer’ decisions, and hence that we need to derive new models to adequately represent moral choice behavior. We highlight that fields such as moral psychology and behavioral economics have much to offer to the choice modeling community, which traditionally looks to neoclassical economics for inspiration. Contributions include papers on topics as diverse as: quantum models to reflect changes of (moral) perspective, intra-family altruism, collective funding for healthcare programs, and willingness to pay to enhance labor conditions in the gig economy. A nice collection of papers on a topic that constitutes the core of the BEHAVE-program! The editorial can be found here.

New BEHAVE-publication: Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence

The quest for equipping AI with moral reasoning is quite challenging. In this work we have operationalized a theoretical metanormative framework for decision-making under moral uncertainty using a latent class choice model. But what does moral uncertainty mean in practical terms? In this article by Andreia Martinho, Maarten Kroesen, and Caspar Chorus, published in Minds and Machines, we provide an interesting empirical illustration of moral uncertainty. Imagine a world where AI is in charge of making transport policy decisions. Would the decisions of an AI equipped with a morally uncertain model differ from those of a system equipped with a morally certain model? The abstract is as follows:

As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of classes with distinct moral preferences and that this codification can be used to express moral uncertainty of an AI. Choice analysis allows for the identification of classes and their moral preferences based on observed choice data. Our reformulation of the metanormative framework is theory-rooted and practical in the sense that it avoids runtime issues in real time applications. To illustrate our approach we conceptualize a society in which AI Systems are in charge of making policy choices. While one of the systems uses a baseline morally certain model, the other uses a morally uncertain model. We highlight cases in which the AI Systems disagree about the policy to be chosen, thus illustrating the need to capture moral uncertainty in AI systems.
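For illustration, here is a minimal numerical sketch of the core idea (the attributes, taste parameters and class shares below are entirely made up; this is not the model or data from the paper): each latent class holds its own moral preferences, and the morally uncertain AI aggregates choice probabilities over classes, whereas the morally certain AI commits to a single class.

import numpy as np

# Hypothetical policy alternatives described by two attributes
# (say, benefit delivered and cost incurred) - purely made-up numbers.
policies = np.array([
    [3.0, 2.0],   # policy A
    [1.0, 0.5],   # policy B
])

# Hypothetical class-specific taste parameters and class shares,
# of the kind a latent class choice model would estimate from choice data.
betas = np.array([
    [1.0, -0.2],   # class 1: weighs the benefit attribute heavily
    [0.2, -2.0],   # class 2: weighs the cost attribute heavily
])
class_shares = np.array([0.4, 0.6])

def logit_probs(beta, alternatives):
    # Multinomial logit choice probabilities for one class.
    v = alternatives @ beta
    expv = np.exp(v - v.max())
    return expv / expv.sum()

# A "morally certain" AI commits to a single class (here, class 1).
certain = logit_probs(betas[0], policies)

# A "morally uncertain" AI mixes choice probabilities over classes,
# weighted by the class shares (the aggregation step of the framework).
uncertain = sum(s * logit_probs(b, policies) for s, b in zip(class_shares, betas))

print("certain model favours policy", "AB"[int(certain.argmax())])
print("uncertain model favours policy", "AB"[int(uncertain.argmax())])

With these made-up numbers the two systems favour different policies, which mirrors the kind of disagreement the abstract describes.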

New BEHAVE-publication: Using natural language processing to explore heterogeneity in moral terminology in palliative care consultations

In a joint research effort with Eline van den Broek-Altenburg and her colleagues from the Robert Larner, M.D. College of Medicine at The University of Vermont, BEHAVE-researchers Maarten Kroesen and Caspar studied the use of moral terminology in end-of-life conversations between doctor and patient. With the help of Natural Language Processing tools and the Moral Foundations Dictionary, we were able to identify, using various statistical techniques, how the use of virtue- or vice-related words in different moral dimensions was associated with various personal characteristics. The paper was published today in BMC Palliative Care. The abstract is as follows:

Background
High quality serious illness communication requires good understanding of patients’ values and beliefs for their treatment at end of life. Natural Language Processing (NLP) offers a reliable and scalable method for measuring and analyzing value- and belief-related features of conversations in the natural clinical setting. We use a validated NLP corpus and a series of statistical analyses to capture and explain conversation features that characterize the complex domain of moral values and beliefs. The objective of this study was to examine the frequency, distribution and clustering of morality lexicon expressed by patients during palliative care consultation using the Moral Foundations NLP Dictionary.

Methods
We used text data from 231 audio-recorded and transcribed inpatient PC consultations and data from baseline and follow-up patient questionnaires at two large academic medical centers in the United States. With these data, we identified different moral expressions in patients using text mining techniques. We used latent class analysis to explore if there were qualitatively different underlying patterns in the PC patient population. We used Poisson regressions to analyze if individual patient characteristics, EOL preferences, religion and spiritual beliefs were associated with use of moral terminology.

Results
We found two latent classes: a class in which patients did not use many expressions of morality in their PC consultations and one in which patients did. Age, race (white), education, spiritual needs, and whether a patient was affiliated with Christianity or another religion were all associated with membership of the first class. Gender, financial security and preference for longevity-focused over comfort focused treatment near EOL did not affect class membership.

Conclusions
This study is among the first to use text data from a real-world situation to extract information regarding individual foundations of morality. It is the first to test empirically if individual moral expressions are associated with individual characteristics, attitudes and emotions.
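As a minimal sketch of the dictionary-based part of this approach (with a tiny, made-up word list rather than the validated Moral Foundations Dictionary used in the study), counting moral terms in a transcript comes down to the following:

from collections import Counter
import re

# Toy stand-in for the Moral Foundations Dictionary: hypothetical word lists.
TOY_MFD = {
    "care.virtue":     {"comfort", "protect", "care"},
    "care.vice":       {"suffer", "harm", "hurt"},
    "fairness.virtue": {"fair", "equal", "justice"},
    "fairness.vice":   {"unfair", "cheat"},
}

def moral_term_counts(transcript: str) -> Counter:
    # Count how often words from each moral category occur in a transcript.
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter()
    for token in tokens:
        for category, words in TOY_MFD.items():
            if token in words:
                counts[category] += 1
    return counts

example = ("I just want to protect her and give her comfort; "
           "it would not be fair to let her suffer.")
print(moral_term_counts(example))
# Per-patient counts of this kind can then serve as outcomes in
# latent class analyses and Poisson regressions, as described above.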


BEHAVE-research covered extensively in Portraits of Science

Every year, to celebrate its birthday (its Dies Natalis, in official Latin), TU Delft publishes a series of about ten ‘portraits of science’, in which it showcases noteworthy research and teaching efforts by TU Delft staff in the preceding year. In this year’s portraits, which focused on resilience in the context of Covid, BEHAVE-research was covered extensively in the interview with Caspar. Here, he highlights work done with Tom van den Berg and Maarten Kroesen on the – almost non-existent – empirical relation between people’s endorsement of abstract moral values such as fairness and their concrete compliance with Covid regulations in specific situations. Another topic discussed in the portrait is a recently published paper with Niek Mouter and Erlend Sandorf in which the taboo trade-off between health and the economy is examined. Finally, attention is devoted to spin-off Councyl, which develops expert systems that help ICU staff make difficult choices about which Covid patients (not) to admit to the ICU. Thank you Peter Baeten (interview) and Marcel Krijgsman (pictures) for a nice interview + photoshoot!