BEHAVE research on social routing covered in De Kampioen

This week’s issue of De Kampioen – the magazine of the Netherlands Motorists Association ANWB, with 4.8 million readers – presents the results of a quick survey amongst their membership panel. The topic: would you be willing to occasionally take a social detour to alleviate congestion on busy parts of the network? In a nutshell, the outcomes suggest that a majority would be open to following social routing advice every now and then. The ANWB makes clear that it would support such social routing schemes. This is good news for BEHAVE, which studies the moral determinants of people’s inclination (not) to join such systems. We presented our research plans to the ANWB, which led to the mini-survey in their magazine. Our own results are forthcoming – data collection has just been finished, and the first analyses are being performed; stay tuned!

BEHAVE leads a special issue on moral choice models

Today, a special issue was published in the Journal of Choice Modelling, edited by Caspar in collaboration with Ulf Liebe from Warwick University and Juergen Meyerhoff from TU Berlin. The special issue argues that moral decisions differ from ordinary, ‘consumer’ decisions, and hence that we need new models to adequately represent moral choice behavior. We highlight that fields such as moral psychology and behavioral economics have much to offer to the choice modeling community, which traditionally looks to neoclassical economics for inspiration. Contributions include papers on topics as diverse as quantum models to reflect changes of (moral) perspective, intra-family altruism, collective funding for healthcare programs, and willingness to pay to enhance labor conditions in the gig economy. A nice collection of papers on a topic that constitutes the core of the BEHAVE-program! The editorial can be found here.

New BEHAVE-publication: Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence

The quest to equip AI with moral reasoning is quite challenging. In this work, we operationalized a theoretical metanormative framework for decision-making under moral uncertainty using a latent class choice model. But what does moral uncertainty mean in practical terms? In this article by Andreia Martinho, Maarten Kroesen, and Caspar Chorus, published in Minds and Machines, we provide an interesting empirical illustration of moral uncertainty. Imagine a world where AI is in charge of making transport policy decisions. Would the decisions of an AI equipped with a morally uncertain model differ from those of a system equipped with a morally certain model? The abstract is as follows:

As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea is that moral heterogeneity in society can be codified in terms of a small number of classes with distinct moral preferences, and that this codification can be used to express the moral uncertainty of an AI. Choice analysis allows for the identification of classes and their moral preferences based on observed choice data. Our reformulation of the metanormative framework is theory-rooted and practical in the sense that it avoids runtime issues in real-time applications. To illustrate our approach we conceptualize a society in which AI Systems are in charge of making policy choices. While one of the systems uses a baseline morally certain model, the other uses a morally uncertain model. We highlight cases in which the AI Systems disagree about the policy to be chosen, thus illustrating the need to capture moral uncertainty in AI systems.
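The mixture-over-classes idea in the abstract can be sketched numerically. The following minimal illustration is not the authors’ actual model or data – the class shares, preference weights, and policy attributes are invented for the example – but it shows how a morally certain model, which follows a single class’s preferences, can disagree with a morally uncertain model that mixes choice probabilities over all latent classes:

```python
import numpy as np

def softmax(v):
    """Logit choice probabilities from a vector of utilities."""
    e = np.exp(v - v.max())
    return e / e.sum()

# Hypothetical: 3 latent classes with distinct moral preference weights
# over two policy attributes (e.g., lives saved vs. cost), plus class shares.
class_shares = np.array([0.40, 0.35, 0.25])
class_betas = np.array([[2.0, -0.5],
                        [0.5, -2.0],
                        [1.0, -1.0]])

# Two policy alternatives described by their attribute levels.
alternatives = np.array([[3.0, 4.0],
                         [1.0, 1.0]])

# Morally certain model: adopt only the largest class's preferences.
certain_probs = softmax(alternatives @ class_betas[0])

# Morally uncertain model: mix choice probabilities over all classes.
uncertain_probs = class_shares @ np.array(
    [softmax(alternatives @ b) for b in class_betas])

best_certain = int(np.argmax(certain_probs))
best_uncertain = int(np.argmax(uncertain_probs))
```

With these invented numbers the two models pick different policies, which is exactly the kind of disagreement the abstract highlights.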

New BEHAVE-publication: Using natural language processing to explore heterogeneity in moral terminology in palliative care consultations

In a joint research effort with Eline van den Broek-Altenburg and her colleagues from the Robert Larner, M.D. College of Medicine at The University of Vermont, BEHAVE-researchers Maarten Kroesen and Caspar studied the use of moral terminology in end-of-life conversations between doctors and patients. With the help of Natural Language Processing tools and the Moral Foundations Dictionary, we were able to identify, using various statistical techniques, how the use of virtue- or vice-related words in different moral dimensions was associated with various personal characteristics. The paper was published today in BMC Palliative Care. The abstract is as follows:

High quality serious illness communication requires good understanding of patients’ values and beliefs for their treatment at end of life. Natural Language Processing (NLP) offers a reliable and scalable method for measuring and analyzing value- and belief-related features of conversations in the natural clinical setting. We use a validated NLP corpus and a series of statistical analyses to capture and explain conversation features that characterize the complex domain of moral values and beliefs. The objective of this study was to examine the frequency, distribution and clustering of morality lexicon expressed by patients during palliative care consultation using the Moral Foundations NLP Dictionary.

We used text data from 231 audio-recorded and transcribed inpatient PC consultations and data from baseline and follow-up patient questionnaires at two large academic medical centers in the United States. With these data, we identified different moral expressions in patients using text mining techniques. We used latent class analysis to explore if there were qualitatively different underlying patterns in the PC patient population. We used Poisson regressions to analyze if individual patient characteristics, EOL preferences, religion and spiritual beliefs were associated with use of moral terminology.

We found two latent classes: a class in which patients did not use many expressions of morality in their PC consultations and one in which patients did. Age, race (white), education, spiritual needs, and whether a patient was affiliated with Christianity or another religion were all associated with membership of the first class. Gender, financial security and preference for longevity-focused over comfort-focused treatment near EOL did not affect class membership.

This study is among the first to use text data from a real-world situation to extract information regarding individual foundations of morality. It is the first to test empirically if individual moral expressions are associated with individual characteristics, attitudes and emotions.
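To give a flavour of the dictionary-based approach described above, here is a minimal sketch of counting moral terminology in a patient utterance. The mini-dictionary and example sentence below are invented for illustration; the actual Moral Foundations Dictionary contains hundreds of stem entries across the moral foundations:

```python
import re
from collections import Counter

# Illustrative mini-dictionary in the spirit of the Moral Foundations
# Dictionary: word stems mapped to a foundation/valence label.
mfd = {
    "care": "care.virtue", "harm": "care.vice",
    "fair": "fairness.virtue", "cheat": "fairness.vice",
    "loyal": "loyalty.virtue", "betray": "loyalty.vice",
}

def moral_term_counts(text):
    """Count tokens whose stem matches a dictionary entry."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for tok in tokens:
        for stem, label in mfd.items():
            if tok.startswith(stem):
                counts[label] += 1
    return counts

utterance = ("I want care that is fair to my family; "
             "I worry the treatment could harm me.")
counts = moral_term_counts(utterance)
```

Per-consultation counts like these can then feed into latent class analysis (to cluster patients by usage pattern) and Poisson regressions (to relate counts to patient characteristics), as in the study.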


BEHAVE-research covered extensively in Portraits of Science

Every year, to celebrate its birthday (its Dies Natalis, in the official Latin name), TU Delft publishes a series of about ten ‘portraits of science’, in which it showcases noteworthy research and teaching efforts by TU Delft staff in the preceding year. In this year’s portraits, which focused on resilience in the context of covid, BEHAVE-research was covered extensively in the interview with Caspar. Here, he highlights work done with Tom van den Berg and Maarten Kroesen about the – almost non-existent – empirical relation between people’s endorsement of abstract moral values such as fairness and their concrete compliance with covid-regulations in specific situations. Another topic discussed in the portrait is a recently published paper with Niek Mouter and Erlend Sandorf in which the taboo trade-off between health and the economy is examined. Finally, attention is devoted to spin-off Councyl, which develops expert systems for ICU-staff to help them make difficult choices in terms of which covid-patients (not) to admit to the ICU. Thank you Peter Baeten (interview) and Marcel Krijgsman (pictures) for a nice interview + photoshoot!


New article about the ethics of autonomous vehicles

Much has been written about the ethics of autonomous vehicles in recent years. In particular, the trolley problem thought experiment has been widely debated in the context of autonomous driving. But what do tech companies say about this? In this article published recently in Transport Reviews, Andreia Martinho, Nils Herber, Maarten Kroesen, and Caspar Chorus looked at the industry reports of companies with an autonomous driving testing permit in California to find some answers.

The onset of autonomous driving has provided fertile ground for discussions about ethics in recent years. These discussions are heavily documented in the scientific literature and have mainly revolved around extreme traffic situations depicted as moral dilemmas, i.e. situations in which the autonomous vehicle (AV) is required to make a difficult moral choice. Quite surprisingly, little is known about the ethical issues in focus by the AV industry. General claims have been made about the struggles of companies regarding the ethical issues of AVs but these lack proper substantiation. As private companies are highly influential on the development and acceptance of AV technologies, a meaningful debate about the ethics of AVs should take into account the ethical issues prioritised by industry. In order to assess the awareness and engagement of industry on the ethics of AVs, we inspected the narratives in the official business and technical reports of companies with an AV testing permit in California. 
The findings of our literature and industry review suggest that: (i) given the plethora of ethical issues addressed in the reports, autonomous driving companies seem to be aware of and engaged in the ethics of autonomous driving technology; (ii) scientific literature and industry reports prioritise safety and cybersecurity; (iii) scientific and industry communities agree that AVs will not eliminate the risk of accidents; (iv) scientific literature on AV technology ethics is dominated by discussions about the trolley problem; (v) moral dilemmas resembling trolley cases are not addressed in industry reports but there are nuanced allusions that unravel underlying concerns about these extreme traffic situations; (vi) autonomous driving companies have different approaches with respect to the authority of remote operators; and (vii) companies seem invested in a lowest liability risk design strategy relying on rules and regulations, expedite investigations, and crash/collision avoidance algorithms.

Two keynotes based on BEHAVE-research

This week, two keynotes will showcase BEHAVE-research to the wider community of researchers and practitioners in the fields of moral decision making for humans and artificial intelligence (AI).

First, Caspar will give the opening keynote at the inaugural World Museum Forum hosted by the National Museum of Korea: after the Forum is opened by the Minister of Culture of South Korea, Caspar will present BEHAVE-research into ethical decision making by humans and AI. This talk will build largely on work done by Andreia Martinho, PhD-candidate in BEHAVE. The focus will be on the discrepancy between how industry and academia perceive and deal with ethical issues of AI, and on the variety of views that exist in academia on this topic. Finally, an approach to capture such moral uncertainty in AI will be presented.

Second, Caspar will, together with Prof. Geert Kazemier (director of Cancer Center Amsterdam at the Amsterdam University Medical Center), give the opening keynote of the Fall conference of the Netherlands Surgeons Association. In this talk, Caspar will explore how difficult moral choices made by medical professionals can be supported by moral choice models. This talk will build on BEHAVE-research into moral decision modeling and specifically on use-cases done by spin-off Councyl for several Dutch hospitals.

We are pleased with these opportunities granted to BEHAVE to present our research to the wider community of researchers and practitioners!

New BEHAVE paper: Obfuscation maximization-based decision-making

A new BEHAVE paper on obfuscation has been published in Mathematical Social Sciences. The paper proposes a novel decision-making model based on obfuscation maximization. An earlier version of the paper was presented at several conferences and colloquiums. The paper is a joint effort of previous, current, and new members and friends of the team, including Caspar, Sander, Aemiro, Erlend, Anae, and Teodora. Let’s celebrate!

The abstract is cited here:

Theories of decision-making are routinely based on the notion that decision-makers choose alternatives which align with their underlying preferences – and hence that their preferences can be inferred from their choices. In some situations, however, a decision-maker may wish to hide his or her preferences from an onlooker. This paper argues that such obfuscation-based choice behavior is likely to be relevant in various situations, such as political decision-making. This paper puts forward a simple and tractable discrete choice model of obfuscation-based choice behavior, by combining the well-known concepts of Bayesian inference and information entropy. After deriving the model and illustrating some key properties, the paper presents the results of an obfuscation game that was designed to explore whether decision-makers, when properly incentivized, would be able to obfuscate effectively, and which heuristics they employ to do so. Together, the analyses presented in this paper provide stepping stones towards a more profound understanding of obfuscation-based decision-making.
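The combination of Bayesian inference and information entropy at the heart of the model can be sketched as follows. This is a stylized toy example, not the paper’s actual specification: an onlooker holds a prior over two hypothetical preference types, updates it by Bayes’ rule after observing a choice, and the obfuscating decision-maker selects the alternative that maximizes the entropy of that posterior, i.e. keeps the onlooker maximally uninformed:

```python
import numpy as np

def softmax(v):
    """Logit choice probabilities from a vector of utilities."""
    e = np.exp(v - v.max())
    return e / e.sum()

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical setup: two preference types, three alternatives.
# Each row gives P(choice | type) via a logit over type-specific utilities.
utilities = np.array([[3.0, 1.0, 2.0],   # type A
                      [1.0, 3.0, 2.0]])  # type B
likelihood = np.vstack([softmax(u) for u in utilities])
prior = np.array([0.5, 0.5])  # onlooker's prior over types

def posterior(choice):
    """Onlooker's Bayesian update after observing a choice."""
    post = prior * likelihood[:, choice]
    return post / post.sum()

# The obfuscating decision-maker picks the alternative that leaves the
# onlooker's posterior over types as uninformative as possible.
post_entropies = [entropy(posterior(c)) for c in range(3)]
obfuscating_choice = int(np.argmax(post_entropies))
```

In this toy case the third alternative is equally likely under both types, so choosing it leaves the onlooker’s posterior at its prior – the entropy-maximizing, preference-hiding choice.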


Aemiro Melkamu Daniel joins BEHAVE-program as a Postdoc

On October 1st, Aemiro joined the BEHAVE research program as a postdoc fellow. He holds a PhD in economics from Umeå University in Sweden, and has particular expertise in choice behavior analysis. His research combines microeconomic theories with insights from behavioral sciences to understand individual decision-making using choice modelling as a core analytical tool. In his PhD, Aemiro studied choice behavior related to household energy mainly using choice experiments. His main research responsibility in the BEHAVE program is to develop and apply discrete choice models to analyze decisions involving moral behavior in transportation, public health and other domains. Welcome to BEHAVE, Aemiro – we are looking forward to a fruitful collaboration!

New publication: how do people weigh fatalities caused by self-driving cars?

In a new paper which has been published today in the European Journal of Transport and Infrastructure Research (EJTIR), we study how people weigh fatalities caused by self-driving cars differently from fatalities caused by human drivers, and we show how part of this difference relates to a reference-level effect that is often seen in moral choice contexts. Lead author of the paper is Bing Huang; Caspar Chorus and Sander van Cranenburgh are co-authors. Here is the abstract:

Although automated vehicles (AVs) are expected to have a major and positive effect on road safety, recent accidents caused by AVs tend to generate a powerful negative impact on public opinion regarding the safety aspects of AVs. Triggered by such incidents, many experts and policy makers now believe that, paradoxically, safety perceptions may well prohibit or delay the rollout of AVs in society, in the sense that AVs will need to become much safer than conventional vehicles (CVs) before being accepted by the public. In this study, we provide empirical insights to investigate and explain this safety paradox. Using stated choice experiments, we show that there is indeed a difference between the weight that individuals implicitly attach to an AV-fatality and to a CV-fatality. However, the degree of overweighting of AV-fatalities, compared to CV-fatalities, is considerably smaller than what has been suggested in public opinions and policy reports. We also find that the difference in weighting between AV-fatalities and CV-fatalities is (partly) related to a reference level effect: simply because the current number of fatalities caused by AVs is extremely low, each additional fatality carries extra weight. Our findings suggest that indeed, AVs have to become safer—but not orders of magnitude safer—than CVs, before the general public will develop a positive perception of AVs in terms of road safety. Ironically, our findings also suggest that the inevitable occurrence of more AV-related road accidents will in time lead to a diminishing degree of overweighting of safety issues surrounding the AV.