New BEHAVE-research: Model of obfuscation-based decision-making

On Monday 9 April, Caspar presented a new model of decision-making at his section’s colloquium series, with relevance for the moral choice behaviour of humans and artificial agents. The so-called obfuscation-based model postulates that in some (moral) choice situations, agents may wish to hide the motivations (e.g. moral rules) underlying their choices from onlookers. Here are the slides of the presentation.

The abstract of the talk is as follows: Formal models of decision-making are routinely founded on the assumption that agents base their choices on underlying motivations (also called preferences, goals, decision rules, desires, etc.); this talk presents a new perspective on modelling decision-making by assuming that agents, when making choices, aim to obfuscate (hide) their underlying motivations. In other words, where decision models usually assume that motivations echo through in choices, this model postulates that decision-makers may want to suppress that echo.

Such obfuscation behaviour is likely to occur in various situations: think of a person facing a moral dilemma who is unsure which moral principle to apply and afraid that an onlooker (which may be her own ‘moral persona’) will punish her with contempt or feelings of guilt if the ‘wrong’ moral principle is applied. Or think of an Artificial Agent that is trained using a penalty system to avoid implicit moral biases underlying its choices. In such situations, the agent benefits from choosing actions that, while being in line with her motivations, at the same time hide those motivations from onlookers or prosecutors.

Combining notions of Bayesian inference and information entropy, I present a mathematical representation of such obfuscating agent behaviour, and I illustrate how the actions chosen by obfuscators differ from those chosen by agents that do not attempt to obfuscate. I also show how an onlooker may try to design choice sets that maximize the information that can be extracted from choices made by (non-)obfuscating agents.
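To give a flavour of how such a model can be operationalized, here is a minimal sketch, not the formalism from the talk: an onlooker holds a Bayesian prior over candidate motivations, and the obfuscating agent trades off its own utility against how uncertain (high-entropy) the onlooker’s posterior remains after observing the chosen action. All names, numbers and the softmax choice rule below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def posterior_entropy(prior, likelihoods, action):
    """Onlooker's Shannon entropy over motivations after observing `action`,
    via Bayes' rule: p(m | a) is proportional to p(a | m) * p(m)."""
    post = prior * likelihoods[:, action]
    post = post / post.sum()
    return -np.sum(post * np.log(post + 1e-12))

def choose(utilities, prior, likelihoods, obfuscation_weight=0.0):
    """Pick the action maximising own utility plus a bonus for leaving the
    onlooker's posterior over motivations as uncertain as possible.
    obfuscation_weight = 0 recovers a plain utility maximiser."""
    scores = [utilities[a] + obfuscation_weight * posterior_entropy(prior, likelihoods, a)
              for a in range(len(utilities))]
    return int(np.argmax(scores))

# Illustrative example: 2 candidate motivations (moral rules), 3 actions.
prior = np.array([0.5, 0.5])                                   # onlooker's prior over motivations
likelihoods = np.vstack([softmax(np.array([3.0, 1.0, 0.5])),   # rule 1's action probabilities
                         softmax(np.array([0.5, 1.0, 3.0]))])  # rule 2's action probabilities
utilities = np.array([3.0, 2.5, 0.5])                          # agent's own payoffs (it follows rule 1)

print(choose(utilities, prior, likelihoods, obfuscation_weight=0.0))  # non-obfuscator -> action 0
print(choose(utilities, prior, likelihoods, obfuscation_weight=5.0))  # obfuscator -> less revealing action 1
```

With the obfuscation weight set to zero the agent simply maximizes its utility; raising the weight makes it prefer the action that is least diagnostic of its underlying rule, which is the kind of behavioural difference the talk illustrates.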

March 2018: Joint workshop with CoreSAEP

On March 22nd, the BEHAVE-team held a joint workshop with the CoreSAEP-team. The CoreSAEP-research program is chaired by Birna van Riemsdijk (Faculty of Electrical Engineering, Mathematics, and Computer Science @ TU Delft) and sponsored by a Vidi grant from NWO. It aims to develop a new computational reasoning framework for Socially Adaptive Electronic Partners that support people in their daily lives. The PI and three researchers currently active in that program pitched their research, and so did the core researchers and PI of BEHAVE. The workshop was most useful and inspirational, as it turned out that the two programs have enough overlap to ensure a meaningful discussion, yet enough difference in perspectives and backgrounds to ensure a mutual learning experience. For example, the experience in the BEHAVE-program with extracting moral decision rules and preferences from observed behaviours is likely to be of use for the design of socially adaptive technologies in CoreSAEP. Vice versa, the multi-agent perspective developed in that program is likely to be very relevant for modules 3 and 4 of the BEHAVE-program. Several plans for future collaboration were discussed – to be continued!

January and February 2018: Several seminars and pitches of the BEHAVE-program

In a series of seminars and pitches, Caspar introduced the BEHAVE-program to a variety of audiences:

  • On February 26, Caspar presented the BEHAVE-program in a seminar at Ben-Gurion University, as part of a two-day research visit to Be’er Sheva, Israel. A collaboration was started with Eran Ben-Elia and his co-workers on the topic of ‘Moral Automated Vehicles’. Specifically, we will attempt to study, using agent-based methods, how different moral rules embedded in automated vehicles impact traffic flows and aggregate travel times in transport networks where automated vehicles and conventional vehicles interact.
  • On February 15, invited by Esther de Bekker-Grob (Health economics & policy) and Bas Donkers (Quantitative marketing & econometrics), Caspar presented the BEHAVE-program at Erasmus University Rotterdam. The seminar was part of the Choice Modeling Center seminar series. Afterwards, potential avenues for collaboration were explored, particularly to study moral aspects of health-related decision-making and policy design. Even more so than in transportation contexts, many health-related choices have a clear moral dimension, making the development of moral choice models all the more important in that field.
  • On Friday 2 February, Caspar pitched the BEHAVE-program to a team of civil servants from the Amsterdam municipality, all working on smart mobility. The meeting was organized by the AMS-institute, a joint initiative of TU Delft, Wageningen University, MIT, the Amsterdam municipality and many other partners; it aims to translate academic results into real-life applications that benefit liveability in dense urban areas such as Amsterdam. In his presentation, Caspar emphasized the opportunity for collaboration between BEHAVE-researchers and Amsterdam, to identify and work on crucial moral aspects of topics such as co-operative driving and automated vehicles (‘moral machines’).
  • On January 16, for a diverse audience of philosophers, economists, psychologists and sociologists, Caspar presented the BEHAVE-program and its first output: a study into taboo trade-off aversion (see here for an announcement of the seminar). The seminar was organized by Professor Andreas Flache of Groningen University, professor of Sociology and a leading expert in the field of social simulation, and took place in the beautifully renovated “de Gadourekzaal”. Afterwards, potential avenues for collaboration were explored – to be continued! The slides of Caspar’s talk can be found here.

February 2018: Hiring processes finished – full BEHAVE-team complete!

Having received a large number of applications, and after several selection rounds, we are pleased to inform you that our team of full-time researchers is now complete. We welcome all our researchers – some of whom started in Fall 2017 and some of whom are yet to start – and we wish them a very fruitful and enjoyable research experience at the BEHAVE-quarters @TU Delft. To get an idea of the diversity of perspectives that we aim to tap into, take a look at the educational backgrounds of our newly hired team members (their full profiles can be found here):

  • Tom van den Berg: MSc Philosophy, MSc Criminology; both @ VU University Amsterdam
  • Teodóra Szép: MSc Economics (Public policy specialization) @ VU University Amsterdam
  • Anae Sobhani: PhD Transportation Engineering @ McGill University (Canada)
  • Andreia Martinho: MSc Bioethics @ New York University (USA)
  • Tanzhe Tang: MSc Applied Mathematics @ King’s College London (UK)
  • Nicolas Cointe: PhD Artificial Intelligence @ Ecole Nat. Sup. des Mines Saint Etienne (France)

October 2017: First BEHAVE-publication online – “Taboo trade-off aversion: A discrete choice model and empirical analysis”

We are pleased to announce the first journal-paper output of our research program: a publication in the multidisciplinary Journal of Choice Modelling (in a special issue containing selected contributions from the 5th International Choice Modelling Conference). I would like to acknowledge the contributions of co-authors Niek Mouter (TU Delft), Baiba Pudane (TU Delft), and Danny Campbell (Stirling U.) – all from outside the BEHAVE team.

Here is the abstract: An influential body of literature in moral psychology suggests that decision makers consider trade-offs morally problematic, or taboo, when the attributes traded off against each other belong to different ‘spheres’, such as friendship versus market transactions. This study is the first to model and empirically explore taboo trade-off aversion in a discrete choice context. To capture possible taboo trade-off aversion, we propose to extend the conventional linear-in-parameters logit model by including penalties for taboo trade-offs. Using this model, we then explore the presence (and size) of taboo trade-off aversion in a data set specifically collected for this purpose. Results, based on the estimation of a variety of (Mixed) Logit models with and without taboo trade-off penalties, suggest that there is indeed a significant and sizeable taboo trade-off aversion underlying the choice behaviour of respondents.
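For readers curious how such a penalty enters the model, below is a minimal sketch of a linear-in-parameters logit with a taboo trade-off penalty. This is not the estimation code used in the paper; the attribute values, taste parameters and penalty size are purely illustrative assumptions.

```python
import numpy as np

def logit_probabilities(X, beta, taboo_dummies, taboo_penalty):
    """Choice probabilities for one choice set under a linear-in-parameters
    logit model, extended with a penalty for alternatives whose selection
    would imply a taboo trade-off.

    X             : (n_alternatives, n_attributes) attribute matrix
    beta          : (n_attributes,) taste parameters
    taboo_dummies : (n_alternatives,) 1 if choosing that alternative implies
                    a taboo trade-off, else 0
    taboo_penalty : scalar >= 0, subtracted from the utility of taboo alternatives
    """
    v = X @ beta - taboo_penalty * taboo_dummies   # systematic utilities with penalty
    v = v - v.max()                                # numerical stability
    expv = np.exp(v)
    return expv / expv.sum()

# Illustrative example with two attributes (e.g. cost and a 'sacred' attribute):
X = np.array([[1.0, 0.0],
              [0.4, 0.6]])
beta = np.array([-1.0, 2.0])
taboo = np.array([0.0, 1.0])   # alternative 2 trades the sacred attribute against money
print(logit_probabilities(X, beta, taboo, taboo_penalty=0.0))  # conventional logit
print(logit_probabilities(X, beta, taboo, taboo_penalty=1.5))  # penalty shifts probability away from alternative 2
```

With the penalty set to zero the model collapses to the conventional logit; estimating the penalty from choice data, as the paper does with (Mixed) Logit specifications, reveals whether and how strongly respondents avoid taboo trade-offs.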