Moral decisions and the corona crisis – insights from the BEHAVE-research program

Caspar Chorus – Professor of choice behavior modeling at TU Delft – 28 March 2020

During the past few weeks – and in China, months – it has become clear that the coronavirus generates a flurry of moral challenges for human decision-makers to solve. From panic-buying and social distancing to triage and determining the monetary ‘value’ of a human life: citizens, health professionals and politicians around the globe are faced with decisions whose moral salience makes them qualitatively different from the average types of choices most of us make in our daily lives.

What insights can we draw from academic research into moral decision making, to help us navigate the storm? In this blogpost, I will draw from my team’s* publicly funded (by the European Research Council) work on this topic, in an attempt to contribute to public understanding of these important moral dimensions of the choices we make during the corona crisis.

Moral values barely influence concrete moral decisions

The current crisis demands that individuals make personal sacrifices in order to create a social good: we are asked not to hoard toilet paper, so that others don’t face empty shelves; healthy citizens are asked to stay at home and practice social distancing, to prevent vulnerable people from catching the virus and overwhelming our health-care system. As expected, behavioral responses vary widely, ranging from careless or even anti-social behaviors to many acts of prudence and care.

An often-heard opinion is that this variety in responses reflects a variety of moral values: there are people who value fairness, care and loyalty – and there are people who do not. The former behave well, the latter don’t. But things are not that simple. Research performed by our team shows empirically that the echo of moral values in concrete moral actions is only very faint, and often cannot even be detected in data on human choice behavior.

Why is that? Moral psychologists have long argued that before a moral value (e.g. fairness or loyalty) can lead to a concrete moral action, several conditions must be met: first, the decision-maker must be aware that a moral value she endorses is at stake in a particular situation; second, she must make the moral judgment that this value actually dictates that she choose a certain action; third, she must then decide to act in the way prescribed by her value; and finally, she has to actually execute her decision. Our empirical research shows, in the context of a variety of decisions with a moral dimension, that this road from deep-seated moral values to concrete moral actions is ‘long and winding’ indeed. At each stage of the causal chain between values and actions, disruption is lurking; think of the roles of emotion and fear, misinformation, social pressure (“my best friends go out to the pub, so I have to join them”), conflicting moral values (“my family needs toilet paper and it is my moral duty to provide for them”), and rational expectations (“if everyone starts hoarding, then hoarding is the rational thing to do for me, too”).
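The attenuation this chain implies can be made concrete with a toy simulation. This is only a sketch: the four stages and their success probability are invented for illustration, not estimates from our data. Each agent endorses a value to some degree, but the corresponding action materializes only if it survives all four noisy stages.

```python
import random
import statistics

def simulate(n=10000, stage_pass=0.7, seed=42):
    """Toy model: a moral action occurs only if the agent's value is
    'activated' AND the impulse survives four noisy stages (awareness,
    judgment, decision, execution), each passed with probability
    stage_pass. Returns the value-action Pearson correlation."""
    rng = random.Random(seed)
    values, actions = [], []
    for _ in range(n):
        value = rng.random()  # strength of endorsement, 0..1
        survives = all(rng.random() < stage_pass for _ in range(4))
        acts = 1 if (rng.random() < value and survives) else 0
        values.append(value)
        actions.append(acts)
    mv, ma = statistics.mean(values), statistics.mean(actions)
    cov = sum((v - mv) * (a - ma) for v, a in zip(values, actions)) / n
    return cov / (statistics.pstdev(values) * statistics.pstdev(actions))

# With no disruption the correlation is sizeable; with four leaky
# stages it drops sharply, even though values still drive actions.
print(round(simulate(stage_pass=1.0), 2))
print(round(simulate(stage_pass=0.7), 2))
```

Even in this caricature, where moral values are by construction the only systematic driver of action, a few leaky stages are enough to make the observable value-action correlation look weak – which is exactly the pattern described above.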

As a result, we were not surprised to find that the empirical correlation between a person’s moral decisions and her moral values is very low. In other words, don’t be surprised if many of the people hoarding toilet paper or flouting social distancing advice strongly endorse values such as fairness, care, and loyalty.

What can a government do to ensure citizens behave according to moral values such as fairness, care, and loyalty? Actually, most governments have been fairly effective in this regard (which should not come as a surprise, given the behavioral insights teams active in the highest circles of government): they explicitly target the steps in the causal chain described above. Take the Dutch government: our prime minister has repeatedly and forcefully argued that moral values are at stake (raising moral awareness); that there is no doubt about what is the right thing to do (making a moral judgment); and that citizens must take responsibility and actually do the right thing (calling for moral action). When even such ‘decision aids’ did not lead to full compliance, most governments resorted to stronger measures such as punishment (e.g. fines) of anti-social behaviors.

Taken together, this mix of actions seems to have established a strong moral norm in society, especially regarding social distancing, which appears to be working in terms of ‘flattening the curve’. This is one of the welcome by-products of forcefully establishing moral behaviors: once they are in place, the underlying attitudes or moral values change accordingly. Similar results have been found in the context of racism and school segregation in the American South: only after the government forced schools to integrate did moral norms regarding racism start to change gradually, as a consequence of people of different ethnicities meeting each other in the classroom. Our research shows that this reverse effect (concrete behavior influencing values) is actually stronger than the usually considered effect of moral values on concrete behaviors. A positive reinforcement cycle thus seems to be taking hold, partly due to the forceful and determined action of governments.

But how long can we maintain this collective ‘good’ behavior? This depends on whether or not society is willing to make a taboo trade-off…

 

Taboo trade-offs: human lives and economic costs

Moral psychologists make a distinction between conventional trade-offs, tragic trade-offs and taboo trade-offs. The first are the types of trade-offs we make on a daily basis, often without much contemplation (e.g. balancing the price and quality of a consumer product in a supermarket). The second is a trade-off between two ‘sacred’ entities, such as the one currently faced by health professionals who must choose which corona-infected patient to assist, and which not to. Such triage may become inevitable in many hospitals, given the acute scarcity of resources such as intensive-care beds and ventilators. The name says it all – this type of trade-off carries an enormous emotional toll for the decision-maker. But at least health professionals know that society will not wag a finger at them for making these kinds of life-and-death calls.

This is different for taboo trade-offs: trade-offs involving a sacred entity and a so-called secular entity. Although such taboo trade-offs are very rare, a particularly salient one is surfacing in the corona crisis. The current situation, in which many societies are in full or near-complete lockdown, carries large economic costs. This raises the question of how much economic loss a society is willing to accept in an attempt to limit the loss of life caused by corona. And this is where the taboo comes in: ultimately, that question implies that a human life is compared with, and traded against, an amount of money. A wealth of studies has shown that people find the mere idea of contemplating such a trade-off morally reprehensible. Governor Cuomo of the state of New York is one of them. President Trump of the United States is clearly not.

When Trump suggested that the cure (lockdown and economic standstill) cannot be worse than the problem (people dying from corona), Cuomo responded by saying: “If you ask the American people to choose between public health and the economy, then it’s no contest. No American is going to say, ‘accelerate the economy at the cost of human life.’ Because no American is going to say how much a life is worth”. Our research shows that within society there is great variation in people’s willingness to accept a taboo trade-off. For many people such a trade-off is indeed taboo. They argue, like Governor Cuomo, that you simply cannot put a dollar or euro amount on a human life, and that everything must be done to save a life. Many others, however, argue that in a world of scarce resources, trade-offs simply have to be made, even when they are morally uncomfortable. We cannot, as a society, pay billions of dollars or euros for every single life saved. We cannot allow our entire economy to collapse in the process.

In fact, most Western societies have procedures in place to make such trade-offs, without much of the public knowing about them. Healthcare policies are routinely based on the notion of the monetary value of a ‘quality adjusted life year’. This value, which typically ranges in the tens of thousands of dollars or euros per healthy year of human life saved, has a long history in helping governments assess the cost-effectiveness of policies in domains as diverse as healthcare, traffic safety, and climate change.
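The mechanics of such an assessment are simple, even if the underlying judgment is not. A minimal sketch of the arithmetic follows; all figures, including the threshold, are hypothetical and chosen only to show how a cost-per-QALY comparison works, not the values used by any particular government.

```python
# Hypothetical willingness to pay per quality-adjusted life year (QALY).
QALY_THRESHOLD_EUR = 50_000

def cost_per_qaly(policy_cost_eur, life_years_gained, quality_weight=1.0):
    """Cost of a policy divided by the quality-adjusted life years it buys.
    quality_weight discounts years lived in less-than-full health."""
    return policy_cost_eur / (life_years_gained * quality_weight)

# Hypothetical intervention: EUR 2 billion, 50,000 healthy life years gained.
ratio = cost_per_qaly(2e9, 50_000)
print(f"EUR {ratio:,.0f} per QALY")  # EUR 40,000 per QALY
print("cost-effective" if ratio <= QALY_THRESHOLD_EUR else "above threshold")
```

The point of the example is not the numbers but the structure: once a threshold exists, any policy with a cost-per-QALY below it is deemed cost-effective – which is precisely the implicit monetization of life that makes the trade-off feel taboo when stated out loud.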

When the inevitable moment comes when governments need to make this taboo trade-off, society’s acceptance will depend a lot on how it is communicated. One classical approach is to reframe the taboo trade-off as a tragic trade-off. In this case that could be done by expressing economic costs not in terms of a decrease in GDP, but in terms of the real pain it inflicts on households and children, especially those with limited means. Another approach is to try to avoid the trade-off altogether, for example by explaining that a healthy economy is needed to sustain the healthcare system that can effectively deal with this and future pandemics. This is not merely a matter of clever or cynical politics; ultimately society needs to be informed about the terrible choices that need to be made in a time like this. Being open about the trade-offs these decisions imply will help us weather future storms.

 

*The BEHAVE-project (http://behave.tbm.tudelft.nl/), funded by the European Research Council (https://erc.europa.eu/)  brings together scholars from fields as diverse as philosophy & ethics, sociology, econometrics, behavioral science, and artificial intelligence. Together we strive to develop and empirically test quantitative, mathematical models of moral decision-making. Note that the views expressed in this post do not necessarily reflect those of all members of our team. For readability, I did not refer to specific academic papers in the post itself; below is a list of sources that interested readers may want to look at for more background information.

 

Literature on moral values and moral actions:

https://www.sciencedirect.com/science/article/abs/pii/S0965856416307418

https://link.springer.com/article/10.1007/s10551-015-2886-8

(We are preparing additional manuscripts for submission to academic journals based on our research into this topic. Please contact us if you would like to receive draft versions.)

 

Literature on taboo trade-offs:

https://twitter.com/arjenUSA/status/1242486964953255936

https://theincidentaleconomist.com/wordpress/economic-cost-of-flattening-the-curve/

https://www.scientificamerican.com/article/psychology-of-taboo-tradeoff/

https://www.sciencedirect.com/science/article/pii/S1755534517300684

Save the Date: 3rd Obfuscation Workshop, 11 and 12 May 2020 @ TU Delft

The 3rd iteration of the Obfuscation Workshop is coming to Europe! It will take place at TU Delft, in the Netherlands, 11 and 12 May 2020.

Obfuscation can be seen as the art and science of protecting your privacy in contexts where your actions are being monitored and analyzed by other humans, organizations, or AI-powered technology. Obfuscation models are an important topic in the BEHAVE program, as obfuscation is considered a useful strategy to mask one’s true moral motivations in contexts where giving them away might lead to contempt or feelings of shame.

This interdisciplinary workshop convenes researchers, scientists, policy makers, developers, and artists to discuss a broad range of technical, theoretical, and policy approaches to obfuscation, including tools, simulations and experimental methods that people and artificial agents use to obfuscate themselves and their environments in asymmetries of power and information.

You can read more about the last iteration of the workshop at http://www.obfuscationworkshop.org/report/ .

The organizing committee, consisting of Helen Nissenbaum @ Cornell University, Caspar Chorus & Seda Gurses @ TU Delft and Ero Balsa @ KU Leuven, will soon be sending out invitations and an open call for submissions. For now, please save the dates and feel free to forward this announcement to anyone who could find it of interest.

Presentation at the AAAI/ACM conference

Our research on moral uncertainty for ethical AI will be featured at the AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, which takes place on February 7-8 in New York. PhD candidate Andreia Martinho will present our reconceptualization of a metanormative framework for decision-making under moral uncertainty using Discrete Choice Analysis techniques, and its operationalization using a latent class choice model. The relevance of moral uncertainty is illustrated in a proof of concept in which we conceptualize a society where AI systems are in charge of making policy choices, and investigate whether the choices of a morally uncertain AI contrast with those of a morally certain AI.

Three BEHAVE-presentations within and beyond the campus’ walls

In November, three presentations based on BEHAVE-work were given to enthusiastic audiences. On November 6, the TU Delft Tracks in Transport conference hosted a special session organized by Andreia and Caspar, in which Andreia, Tom and Bing Huang presented work on the topic “On moral men and machines: Real ethical issues on the road”. This special session presented highlights of recent work done in the BEHAVE project, including new theoretical insights underpinned by empirical evidence on three thought-provoking questions: Can we predict aggressive driving behavior based on drivers’ moral values? Why do people find accidents caused by autonomous vehicles more unacceptable than those caused by human drivers? And what can bioethics and automotive-industry reports teach us about dealing with moral issues surrounding autonomous vehicles?

On 20 November, Caspar gave a lunch lecture for the aiTech community, which brings together scholars with diverse AI-related disciplinary perspectives ranging from computer science to ethics and the behavioral sciences. Caspar’s talk, titled “Morality and taboos for men and machines”, started with a quick overview of how the BEHAVE-team integrates the newest insights from moral psychology into tractable mathematical formulations of moral choice behavior. Furthermore, it was shown how, building on such a mathematical representation of human morality, a human-inspired moral compass can be designed for a ‘morally uncertain’ Artificial Intelligence (AI).

On 28 November, as part of the so-called Dag van het Gedrag (‘Day of Behavior’) in The Hague, Tom, Maarten and Caspar co-hosted a special session on the difficulties associated with steering human behavior through influencing moral values and attitudes. The session was oversubscribed, and participants (mostly civil servants) engaged with the speakers in a lively discussion about the pros and cons of tapping into moral values when trying to steer behavior towards more societally beneficial outcomes.

Two new BEHAVE-publications

During the past couple of weeks, two new BEHAVE-research manuscripts have been published. The first is a chapter about the moral dimension of regret and its implications for choice set design. The design, or architecture, of choice sets is an increasingly established marketing tool to steer choices towards products with, e.g., a particularly high profit margin, in a subtle way that is often not detectable by consumers. This manuscript shows, using the random regret minimization model developed at TU Delft, how some of these choice architectures generate disproportional levels of regret and hence should be considered morally problematic. Food for thought for marketing professionals! The chapter has appeared in the book The Moral Psychology of Regret, published by Rowman & Littlefield International.

Another manuscript, with Ahmad Alwosheel as first author and Sander van Cranenburgh as second author, was published in the Journal of Choice Modelling. It reconceptualizes techniques from the computer vision field into a procedure for building trust in the use of so-called Artificial Neural Networks (ANNs) for choice behavior analysis. ANNs are increasingly used to analyze and predict choice behavior, but their ‘black box’ nature causes problems in terms of explainability and interpretability. Especially in morally sensitive choice situations, this hampers the use of these models, as it precludes learning what moral values and trade-offs were at stake. Having the trained neural network generate so-called prototypical examples helps analysts gain trust that a particular network has learned the intuitive relations and behavioral processes underlying the observed choice data. This in turn will help pave the way towards using these machine learning approaches for the analysis of moral decision-making.

The call for papers is out! Special issue Models of moral decision-making

The Journal of Choice Modelling has published the call for papers for the special issue on Models of moral decision-making, which is connected to the special session of the International Choice Modelling Conference which we discussed further below. We welcome submissions on a range of fascinating topics, such as: Norm formation and its effect on choices; Altruistic and pro-social behaviour; Anti-social behaviour, deceit, obfuscation, taboos; Guilt, shame, remorse as determinants of choice behaviour; Decision-making in moral dilemmas; (social) Context effects on moral choice behaviour. Submission deadline: 30 September.

MSc theses on moral choice behavior

During the past semester, three MSc students supervised by BEHAVE-members wrote theses on moral choice behavior. Nienke Pieters, hosted by Goudappel Coffeng where she was supervised by Dr. Matthijs Dicke-Ogenia, studied to what extent moral considerations influence consumers’ decisions for safety-enhancing Advanced Driver Assistance Systems (ADAS); her thesis can be found here. Belle Visee, hosted by Mobycon where she was supervised by Babet Hendriks, studied the views of road users on (im-)polite behaviors by automated vehicles; her thesis can be found here. Anne-Fleur Tjon Joe Gin, hosted by Accenture where she was supervised by Rozemarijn de Koomen, studied to what extent consumers are willing to pay for more environmentally friendly package delivery services, and how this relates to their innate morality; her thesis can be found here. Congratulations to all three (former) MSc students on obtaining their degrees!

 

New BEHAVE-research published

A paper written by Tanzhe Tang (PhD candidate) and Caspar Chorus has been accepted for publication in the Journal of Artificial Societies and Social Simulation (JASSS), a leading journal in the field of social simulation. The paper presents a new model of opinion dynamics. In contrast with previously proposed models, our so-called AOI (action-opinion inference) model postulates that people learn about each other’s opinions not by observing them directly, but by observing each other’s actions and interpreting those. This added level of behavioral realism has important implications for the predicted population shares of various opinions. As such, the AOI model provides an important stepping stone towards more realistic models of moral norm formation, which are currently being developed in the BEHAVE-program. Congratulations on this publication, Tanzhe!

The full abstract reads as follows: Opinion dynamics models are based on the implicit assumption that people can observe the opinions of others directly, and update their own opinions based on the observation. This assumption significantly reduces the complexity of the process of learning opinions, but seems to be rather unrealistic. Instead, we argue that the opinion itself is unobservable, and that people attempt to infer the opinions of others by observing and interpreting their actions. Building on the notion of Bayesian learning, we introduce an action-opinion inference model (AOI model); this model describes and predicts opinion dynamics where actions are governed by underlying opinions, and each agent changes her opinion according to her inference of others’ opinions from their actions. We study different action-opinion relations in the framework of the AOI model, and show how opinion dynamics are determined by the relations between opinions and actions. We also show that the well-known voter model can be formulated as a special case of the AOI model when adopting a bijective action-opinion relation. Furthermore, we show that a so-called inclusive opinion, which is congruent with more than one action (in contrast with an exclusive opinion which is only congruent with one action), plays a special role in the dynamic process of opinion spreading. Specifically, the system containing an inclusive opinion always ends up with a full consensus of an exclusive opinion that is incompatible with the inclusive opinion, or with a mixed state of other opinions, including the inclusive opinion itself. A mathematical solution is given for some simple action-opinion relations to help better understand and interpret the simulation results. Finally, the AOI model is compared with the constrained voter model and the language competition model; several avenues for further research are discussed at the end of the paper.
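The core idea – agents updating opinions from observed actions rather than observed opinions – can be conveyed with a heavily simplified sketch. To be clear, this is not the published AOI model (which uses Bayesian learning); here the inference step is crudely replaced by picking a random opinion congruent with the observed action. All names and the action-opinion mapping are invented for the example.

```python
import random

# Opinions 'A' and 'B' are exclusive (one congruent action each);
# 'C' is inclusive (congruent with both actions). This mapping is
# illustrative, not taken from the paper.
CONGRUENT_ACTIONS = {'A': ['a'], 'B': ['b'], 'C': ['a', 'b']}

def step(population, rng):
    """One interaction: agent i observes a random action by agent j
    (governed by j's opinion) and adopts a random opinion that is
    congruent with that action -- a crude stand-in for inference."""
    i, j = rng.sample(range(len(population)), 2)
    observed_action = rng.choice(CONGRUENT_ACTIONS[population[j]])
    candidates = [op for op, acts in CONGRUENT_ACTIONS.items()
                  if observed_action in acts]
    population[i] = rng.choice(candidates)

def run(n=60, steps=20000, seed=1):
    rng = random.Random(seed)
    pop = [rng.choice('ABC') for _ in range(n)]
    for _ in range(steps):
        step(pop, rng)
    return {op: pop.count(op) for op in 'ABC'}

print(run())
```

Note that with a bijective mapping such as `{'A': ['a'], 'B': ['b']}`, the observed action identifies the opinion uniquely and the update collapses to copying a random neighbor’s opinion – the classic voter model, consistent with the special case mentioned in the abstract.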

BEHAVE-research presentations

During the first half of 2019, various BEHAVE-studies were presented at symposia, workshops, and conferences. Tanzhe Tang presented his newest work on obfuscation and its effects on moral norm formation at the INAS symposium in St Petersburg, which brought together experts on Agent-Based Modelling for Theory Building in Social Sciences. Nicolas Cointe presented his work on how obfuscation influences coalition formation in multi-agent systems at the EXTRAAMAS workshop in Montreal, which focused on Explainable Transparent Autonomous Agents and Multi-Agent Systems. His paper was nominated for the best paper award! Caspar Chorus presented BEHAVE-research on various occasions, including an opening keynote at the National Econometricians Day, a seminar at Leeds University’s Choice Modelling Centre, a plenary talk at a workshop on Collective Decision-Making at the University of Amsterdam, and a guest lecture at the Netherlands Defence Academy. Are you interested in receiving slides? Please send us an email. Note that much of our most recent work will be presented at the forthcoming International Choice Modelling Conference in Kobe, Japan, where we are also hosting two special sessions on moral choice models (featuring work from scholars outside the BEHAVE-team). More updates to follow!

Special session at ICMC and special issue in JOCM – Models of moral decision making

The International Choice Modelling Conference 2019, the premier venue for choice modelers worldwide, will feature a special session on Models of moral decision making. The session, which is sponsored by the BEHAVE-program and organized by Caspar Chorus, Jürgen Meyerhoff and Ulf Liebe, will present novel work concerning the (theoretical) development and (empirical) testing of mathematical representations of human decision-making in morally sensitive contexts. Authors of accepted abstracts will be invited to submit full papers to a corresponding special issue, to be published in the JCR-listed Journal of Choice Modelling. The call for abstracts can be found here. We look forward to receiving your submission! Note that submission to this special session does not count against the ‘one presentation per registration’ limit of the conference.