Tom van den Berg presents a poster at the Conference of the Society for Judgment and Decision Making

On Friday 11 February, Tom of the BEHAVE group presented a poster on his paper “Why general moral values cannot predict moral behavior in daily life”, co-authored by Maarten Kroesen and Caspar Chorus, at the Conference of the Society for Judgment and Decision Making. The conference is a leading annual meeting in the field of judgment and decision making research, attracting scholars from all parts of the world. Tom presented a conceptual and exploratory empirical study on the relationship between general moral values and specific moral behavior in daily life. The study concludes that general moral values are poor predictors of moral behavior, and that the questionnaires usually used to measure people’s values, such as the Moral Foundations Questionnaire, are too general to capture the context-sensitivity of moral decision making. Our poster is available via this link; for more information on our study, see the abstract below.

Abstract

Throughout the behavioral sciences, it is routinely assumed that general, basic moral values have a considerable influence on people’s behavior in daily life. Although this assumption underpins major strands of research, a solid theoretical and empirical foundation for it is lacking. In this study, we explore the relationship between general moral values and daily-life behavior through a conceptual analysis and an empirical study. Our conceptual analysis of the moral value-moral behavior relationship suggests that the effect of a generally endorsed moral value on moral behavior is highly context-dependent: it requires the materialization of several phases of moral decision making, each influenced by many contextual factors. We expect that this renders the relationship between generic moral values and people’s specific moral behavior indeterminate. Subsequently, we explore this relationship empirically in three studies, relating two measures of general moral values (the Moral Foundations Questionnaire and the Morality as a Compass Questionnaire) to a broad set of self-reported, morally relevant daily-life behaviors (including adherence to Covid-19 measures and participation in voluntary work). Our empirical results are in line with the expectations derived from our conceptual analysis: the considered general moral values prove to be rather poor predictors of the selected daily-life behaviors. Furthermore, moral values tailored to the specific context of the behavior turned out to be somewhat stronger predictors. Together with the insights from our conceptual analysis, this points to the contextual nature of moral decision making as a possible explanation for the poor predictive value of general moral values. Accordingly, our conceptual and empirical findings suggest that the implicit assumption that general moral values are predictive of specific moral behavior, a key underpinning of empirical moral theories such as Moral Foundations Theory, lacks foundation and may need revision.
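
The paper reports the full analyses; purely to illustrate the kind of test involved (predicting a specific self-reported behavior from general questionnaire scores, reading predictive power off the R²), a minimal sketch with synthetic data might look as follows. The variable names and effect sizes are placeholders, not the study’s data:

```python
# Illustration only: a synthetic stand-in for regressing one specific
# self-reported behavior on general moral-value scores. Variable names
# and effect sizes are invented, not taken from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
# Placeholder scores on five moral foundations (care, fairness,
# loyalty, authority, purity), standardized:
mfq = rng.normal(size=(n, 5))
# A context-dependent behavior only weakly driven by general values:
behavior = 0.1 * mfq[:, 0] + rng.normal(size=n)

fit = sm.OLS(behavior, sm.add_constant(mfq)).fit()
print(round(fit.rsquared, 3))  # small R^2: general values predict poorly
```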

New Publication: A Group-based Polarization Measurement

Polarization is everywhere and everyone is talking about it, but do we really know what polarization is and how to measure it? Our paper “Together alone: a group-based polarization measurement”, published in the journal Quality & Quantity: International Journal of Methodology, offers a distinctive viewpoint. The centerpiece of the paper is the idea that groups, rather than individuals, are the key actor in understanding and measuring polarization. Based on this idea, we propose a new polarization measurement that satisfies a range of desirable properties.

The paper is the outcome of a collaboration between two BEHAVEs: our BEHAVE team at TU Delft (Caspar Chorus, Amineh Ghorbani, and Tanzhe Tang) and the BEHAVE Lab at the University of Milan (Flaminio Squazzoni). We look forward to future collaborations.

Here is the abstract of the paper:

The growing polarization of our societies and economies has been extensively studied in various disciplines and is subject to public controversy. Yet, measuring polarization is hampered by the discrepancy between how polarization is conceptualized and measured. For instance, the notion of group, especially groups that are identified based on similarities between individuals, is key to conceptualizing polarization but is usually neglected when measuring polarization. To address the issue, this paper presents a new polarization measurement based on a grouping method called “Equal Size Binary Grouping” (ESBG) for both uni- and multi-dimensional discrete data, which satisfies a range of desired properties. Inspired by techniques of clustering, ESBG divides the population into two groups of equal sizes based on similarities between individuals, while overcoming certain theoretical and practical problems afflicting other grouping methods, such as discontinuity and contradiction of reasoning. Our new polarization measurement and the grouping method are illustrated by applying them to a two-dimensional synthetic data set. By means of a so-called “squeezing-and-moving” framework, we show that our measurement is closely related to bipolarization and could help stimulate further empirical research.
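
To give a flavor of the approach (the paper defines ESBG and the measurement precisely; the grouping rule and score below are simplified stand-ins, not the published formulas), here is a minimal sketch that splits a population into two equal-sized groups by similarity and scores polarization as between-group separation net of within-group spread:

```python
# Illustrative sketch only: NOT the paper's ESBG algorithm or its
# polarization formula, just the general idea of equal-sized,
# similarity-based binary grouping plus a bipolarization-style score.
import numpy as np

def equal_size_binary_grouping(X):
    """Split the rows of X into two equal-sized groups by similarity.

    Here we project onto the first principal component and cut at the
    median -- one simple way to obtain equal-sized groups; the
    published ESBG method may differ.
    """
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes
    order = np.argsort(Xc @ vt[0])
    half = len(X) // 2
    return order[:half], order[half:]

def polarization(X):
    """Toy score: between-group distance minus average within-group
    spread (higher = more polarized)."""
    g1, g2 = equal_size_binary_grouping(X)
    c1, c2 = X[g1].mean(axis=0), X[g2].mean(axis=0)
    between = np.linalg.norm(c1 - c2)
    within = (np.linalg.norm(X[g1] - c1, axis=1).mean()
              + np.linalg.norm(X[g2] - c2, axis=1).mean()) / 2
    return between - within

# Two-dimensional synthetic data, loosely mirroring the paper's setup:
rng = np.random.default_rng(0)
clustered = np.vstack([rng.normal(-1, 0.2, (50, 2)),
                       rng.normal(+1, 0.2, (50, 2))])
uniform = rng.uniform(-1, 1, (100, 2))
print(polarization(clustered) > polarization(uniform))  # True
```

The paper itself applies the real ESBG measurement to a two-dimensional synthetic data set in a similar spirit.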

New Publication: A healthy debate: Exploring the views of medical doctors on the ethics of artificial intelligence

The paper “A healthy debate: Exploring the views of medical doctors on the ethics of artificial intelligence” by Andreia Martinho, Maarten Kroesen, and Caspar Chorus is now published in the journal Artificial Intelligence in Medicine. In this study, we explored the views of medical doctors from three different countries and a mix of specializations on the ethics of Health AI. By surveying medical practitioners, we take yet another step towards contextualization in AI Ethics.

Abstract:

Artificial Intelligence (AI) is moving towards the health space. It is generally acknowledged that, while there is great promise in the implementation of AI technologies in healthcare, it also raises important ethical issues. In this study we surveyed medical doctors based in The Netherlands, Portugal, and the U.S. from a diverse mix of medical specializations about the ethics surrounding Health AI. Four main perspectives emerged from the data, representing different views on this matter. The first perspective (AI is a helpful tool: Let physicians do what they were trained for) highlights the efficiency associated with automation, which will allow doctors the time to focus on expanding their medical knowledge and skills. The second perspective (Rules & Regulations are crucial: Private companies only think about money) shows strong distrust in private tech companies and emphasizes the need for regulatory oversight. The third perspective (Ethics is enough: Private companies can be trusted) puts more trust in private tech companies and maintains that ethics is sufficient to ground these corporations. Finally, the fourth perspective (Explainable AI tools: Learning is necessary and inevitable) emphasizes the importance of the explainability of AI tools, in order to ensure that doctors are engaged in the technological progress. Each perspective provides valuable and often contrasting insights about ethical issues that should be operationalized and accounted for in the design and development of Health AI.

New publication: Hiding opinions by minimizing disclosed information

Our paper has been published in The Journal of Mathematical Sociology. The paper introduces obfuscation, a strategy that minimizes disclosed information, into opinion dynamics and provides two illustrative examples. Obfuscation lies between honesty (transparency) and deception, and has received wide attention recently (see a recent publication from the BEHAVE team and the obfuscation workshop held in 2021). The abstract of the paper is as follows:

In the field of opinion dynamics, the hiding of opinions is routinely modeled as staying silent. However, staying silent is not always feasible. In situations where opinions are indirectly expressed by one’s observable actions, people may instead try to hide their opinions via a more complex and intelligent strategy called obfuscation, which minimizes the information disclosed to others. This study proposes a formal opinion dynamics model to study the hitherto unexplored effect of obfuscation on public opinion formation, based on the recently developed Action-Opinion Inference Model. For illustration purposes, we use our model to simulate two cases with different levels of complexity, highlighting that the effect of obfuscation largely depends on the subtle relations between actions and opinions.
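
As a rough illustration of obfuscation in an action-opinion setting (this toy is not the paper’s Action-Opinion Inference Model; the compatibility matrix and strategies are invented), consider agents who must express an opinion through an observable action. An honest agent picks the most revealing action consistent with its opinion; an obfuscator picks the least revealing one:

```python
# Toy illustration of obfuscation, NOT the paper's Action-Opinion
# Inference Model: the compatibility matrix and decision rules below
# are invented for illustration only.
import numpy as np

# compat[a, o] = 1 if action a is consistent with opinion o.
compat = np.array([
    [1, 0, 0],   # action 0 reveals opinion 0 unambiguously
    [0, 1, 0],   # action 1 reveals opinion 1 unambiguously
    [1, 1, 1],   # action 2 is consistent with every opinion
])

def choose_action(opinion, obfuscate):
    # Actions consistent with the agent's own opinion:
    feasible = np.flatnonzero(compat[:, opinion])
    # Honest agents pick the most revealing feasible action (fewest
    # compatible opinions); obfuscators pick the least revealing one.
    ambiguity = compat[feasible].sum(axis=1)
    return feasible[ambiguity.argmax()] if obfuscate else feasible[ambiguity.argmin()]

def observer_posterior(action):
    """Uniform-prior inference: which opinions could explain the action?"""
    consistent = compat[action]
    return consistent / consistent.sum()

for obfuscate in (False, True):
    a = choose_action(opinion=0, obfuscate=obfuscate)
    print(obfuscate, a, observer_posterior(a))
# Honest agent: action 0, posterior [1, 0, 0] -- opinion fully disclosed.
# Obfuscator:   action 2, posterior [1/3, 1/3, 1/3] -- nothing disclosed.
```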

New publication: Modeling and supporting triage for the ICU in the midst of the Covid pandemic

A paper was published online in the journal Intensive Care Medicine, the leading outlet in the field of intensive care. In the paper, we describe how the morally difficult decision of which Covid patients to admit (or not) to the ICU can be supported using choice experiments and choice models. The paper is authored by intensive care doctors from the Amsterdam University Medical Center and the Onze Lieve Vrouwe Gasthuis, and by Caspar, whose spin-off Councyl built the experiment and the models together with local medical professionals. Results show that the expertise and considerations of ICU professionals can be effectively codified using choice modeling methods; we also highlight minor differences between the two hospitals. In view of the ongoing Covid crisis and shortages of ICU staff, we hope that this decision support may eventually be used to help the healthcare sector accommodate a potential new wave of Covid patients.
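
The experimental design and estimates are in the paper itself; as a generic sketch of how expert triage decisions can be codified with a choice model, one can fit a binary logit to admit/reject choices over hypothetical patient profiles. The attributes, coefficients, and data below are simulated for illustration, not the paper’s:

```python
# Generic sketch of codifying expert triage decisions with a binary
# logit choice model, in the spirit of the approach described above.
# Attributes and coefficients are hypothetical, not the paper's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
# Hypothetical patient attributes shown to clinicians in vignettes:
age = rng.uniform(30, 90, n)             # years
frailty = rng.integers(1, 10, n)         # clinical frailty scale
expected_icu_days = rng.uniform(2, 30, n)

# Simulated expert choices (admit = 1) from an assumed utility:
utility = 6.0 - 0.05 * age - 0.3 * frailty - 0.05 * expected_icu_days
admit = (utility + rng.logistic(0, 1, n) > 0).astype(int)

# Estimating the choice model recovers (codifies) the weight experts
# place on each attribute.
X = sm.add_constant(np.column_stack([age, frailty, expected_icu_days]))
model = sm.Logit(admit, X).fit(disp=0)
print(model.params)  # approximately [6.0, -0.05, -0.3, -0.05]
```

In the actual study, such a model is estimated from clinicians’ responses to choice experiments rather than simulated data.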

Opinion article in de Volkskrant on compromises in political negotiations

On May 25, de Volkskrant, a well-read newspaper with national circulation, published an op-ed by Caspar about compromises in political negotiations. Currently, Dutch political parties are engaged in a difficult process of coalition exploration. Whether or not these negotiations lead to a successful government depends on the ability of political parties to find common ground. The article posits that it is better to search for a compromise on every theme (climate, immigration, etc.) than to opt for a strategy of extremes, in which every party gets a big win on some themes (say, climate) and suffers big sacrifices on others (say, immigration). Why are these compromise outcomes more sustainable than trade-off outcomes? This is related to the well-known compromise effect, one of the most robust findings in behavioral economics, which has been replicated in all sorts of contexts. The source of this effect lies in loss aversion and taboo trade-off aversion: ultimately, in a strategy of extremes, the gains on one theme do not compensate for the losses felt on others, and the trade-off itself feels taboo (see earlier work on this by BEHAVE). Especially in polarized environments, and in cases where deep values and moral dilemmas are at stake and decisions are perceived as very difficult, loss aversion and taboo trade-off aversion are deeply felt, and hence compromises carry the day. A nice way to show the relevance of our work to the wider community!
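
A quick back-of-the-envelope illustration of the loss-aversion argument (the numbers are ours, with the loss-aversion parameter set to the classic Tversky-Kahneman estimate of 2.25, not figures from the op-ed):

```python
# Why a strategy of extremes can feel worse than compromise under
# loss aversion. Numbers are illustrative; LAMBDA = 2.25 is the
# classic Tversky & Kahneman (1992) loss-aversion estimate.
LAMBDA = 2.25

def felt_value(gains_and_losses):
    """Prospect-theory-style evaluation: losses loom larger than gains."""
    return sum(x if x >= 0 else LAMBDA * x for x in gains_and_losses)

extremes   = [+10, -10]  # big win on climate, big sacrifice on immigration
compromise = [+3, -3]    # modest give-and-take on both themes

print(felt_value(extremes))    # 10 - 2.25*10 = -12.5
print(felt_value(compromise))  #  3 - 2.25*3  = -3.75 (far less painful)
```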

Aemiro presents at ICMC

On 24 May 2021, Aemiro from BEHAVE gave a presentation at the International Choice Modelling Conference (ICMC) online mini-event. The event drew participants from all corners of the world and focused in particular on using choice modelling to understand behaviour in a Covid-19 context. In his presentation, Aemiro discussed work in progress on self-interest motives, positional concerns, and distributional considerations in healthcare choices. In particular, he presented a flexible new choice model that captures conventional preferences, positional concerns, and distributional preferences. He discussed the properties and identification issues of the model and outlined the data requirements for empirical application.
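
The model itself is work in progress, so the following is only a generic sketch of how such ingredients can be combined in a utility function; it uses Fehr-Schmidt-style distributional terms from the fairness literature and is not Aemiro’s specification:

```python
# Illustrative utility with self-interest, positional, and
# distributional components -- a generic sketch, NOT the model
# presented at ICMC. Parameter values are arbitrary.
import numpy as np

def utility(own, others, beta=1.0, gamma=0.5, alpha=0.3, delta=0.2):
    others = np.asarray(others, dtype=float)
    self_interest = beta * own                  # conventional preference
    positional = gamma * np.mean(own > others)  # rank relative to others
    # Fehr-Schmidt-style inequity aversion:
    envy = alpha * np.mean(np.maximum(others - own, 0))   # disadvantageous
    guilt = delta * np.mean(np.maximum(own - others, 0))  # advantageous
    return self_interest + positional - envy - guilt

# Example: one's healthcare outcome relative to three others:
print(utility(own=5.0, others=[3.0, 4.0, 6.0]))
```

Embedded in a logit choice model, a utility of this kind would yield choice probabilities over healthcare alternatives with different distributional consequences.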

The conference website is https://cmc.leeds.ac.uk/news/icmc-2021-mini-online-conference/

Presentation at the Netherlands Parliament

Last week, Caspar gave a presentation for a delegation from the Netherlands Parliament (Tweede en Eerste Kamer) on the potential and pitfalls of using artificial intelligence for digital decision support in government. He discussed the moral autonomy of decision makers who interact with AI-based decision support, and also touched upon the moral aspects of the government-citizen relationship and how these are affected by the use of AI systems. Building on previous BEHAVE work, e.g. a decision support system for moral decisions in a healthcare context, he gave recommendations on how to uphold moral values when applying advanced decision support in morally sensitive contexts.

Andreia’s paper on Artificial Moral Agents published

The paper “Perspectives about artificial moral agents” by Andreia Martinho, Adam Poulsen, Maarten Kroesen, and Caspar Chorus is now published in AI and Ethics.

Artificial Moral Agents (AMAs) have long been featured in science fiction, but recent advances in AI have made a scientific debate about these systems urgent. In this article, ethicists were surveyed on key issues associated with AMAs. In particular, we aimed to gain insights into four fundamental questions regarding these systems: (i) Should AMAs be developed? (ii) How should AMAs be developed? (iii) Do AMAs have moral agency? and (iv) What will be the moral and societal role of these systems? Five main perspectives were identified.

Perspectives about Artificial Moral Agents

The pursuit of AMAs is complicated. Disputes about the development, design, moral agency, and future projections for these systems have been reported in the literature. This empirical study explores these controversial matters by surveying (AI) Ethics scholars with the aim of establishing a more coherent and informed debate. Using Q-methodology, we show the wide breadth of viewpoints and approaches to artificial morality. Five main perspectives about AMAs emerged from our data and were subsequently interpreted and discussed: (i) Machine Ethics: The Way Forward; (ii) Ethical Verification: Safe and Sufficient; (iii) Morally Uncertain Machines: Human Values to Avoid Moral Dystopia; (iv) Human Exceptionalism: Machines Cannot Moralize; and (v) Machine Objectivism: Machines as Superior Moral Agents. A potential source of these differing perspectives is the failure of Machine Ethics to be widely observed or explored as an applied ethic and more than a futuristic end. Our study helps improve the foundations for an informed debate about AMAs, where contrasting views and agreements are disclosed and appreciated. Such debate is crucial to realize an interdisciplinary approach to artificial morality, which allows us to gain insights into morality while also engaging practitioners.
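
Q-methodology, used in the study above, extracts shared viewpoints by factor-analyzing correlations between persons (their rank-ordered Q-sorts of statements) rather than between variables. Below is a minimal sketch of that pipeline on synthetic data; the study’s actual factor extraction, rotation, and flagging choices may differ:

```python
# Minimal sketch of the Q-methodology pipeline: factor-analyze
# correlations BETWEEN PERSONS (not variables) to find shared
# viewpoints. Synthetic data; the published analysis may differ.
import numpy as np

rng = np.random.default_rng(2)
n_statements, n_participants = 30, 20

# Two underlying viewpoints; each participant's Q-sort is a noisy,
# rank-ordered copy of one of them (as in a forced Q-sort).
viewpoints = rng.normal(size=(2, n_statements))
membership = rng.integers(0, 2, n_participants)
noisy = viewpoints[membership] + 0.5 * rng.normal(size=(n_participants, n_statements))
sorts = np.argsort(np.argsort(noisy, axis=1), axis=1)  # ranks 0..29

# Correlate participants with each other and extract principal factors.
corr = np.corrcoef(sorts)               # person-by-person correlations
eigvals, eigvecs = np.linalg.eigh(corr)
loadings = eigvecs[:, ::-1][:, :2]      # loadings on the top two factors

# Participants loading on the same factor share a perspective
# (factor labels are arbitrary, so 0/1 may be swapped):
print(membership)
print(np.abs(loadings).argmax(axis=1))
```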

The paper can be found at:

https://link.springer.com/article/10.1007/s43681-021-00055-2