[Ole22] Maciej Olejnik. Modelling Human-Like Decision Making and Social Trust Using Probabilistic Programming. Ph.D. thesis, Department of Computer Science, University of Oxford. 2022. [pdf] [bib]
Downloads: pdf (3.07 MB) · bib
Abstract. Effective collaborations between humans and machines necessitate the modelling of human cognitive processes and complex social attitudes such as trust, guilt or shame. Robots can take an active role in reducing misuse if they are able to detect human biases, inaccurate beliefs or overtrust, and make accurate predictions of human behaviour.

To that end, we propose cognitive stochastic multiplayer games, a novel parametric framework for multi-agent human-like decision making, which aims to capture human motivation through mental as well as physical goals. Our framework enables cognitive notions such as trust to be expressed in terms of beliefs, whose dynamics are driven by an agent's observations of interactions and by its own preferences. Agents are modelled as soft expected utility maximisers, which allows us to capture the full spectrum of rationality, from the imperfect reasoning characteristic of humans to fully rational robots. A key contribution is a novel formulation of the utility function, which incorporates an agent's own emotions as well as those of other agents, and takes into account its preferences over different goals. Heuristics and mental shortcuts that people use to approximate what they cannot observe are captured in the framework as mental state estimation functions.
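To give a flavour of soft expected utility maximisation, the following WebPPL sketch shows a single-shot agent whose choice probabilities are weighted softly by utility; the action set, the toy utilities and the rationality parameter alpha are hypothetical illustrations, not values from the thesis. As alpha tends to zero the agent chooses at random; as alpha grows it approaches a fully rational maximiser.

```js
// Soft expected-utility maximiser (illustrative sketch).
// alpha is a hypothetical rationality parameter:
// alpha -> 0 gives uniform random choice; large alpha
// approaches strict utility maximisation.
var alpha = 2;

// Toy utilities over a two-action choice.
var utility = function(action) {
  return action === 'cooperate' ? 3 : 2;
};

var agent = Infer({method: 'enumerate'}, function() {
  var action = uniformDraw(['cooperate', 'defect']);
  factor(alpha * utility(action));  // soft-max weighting
  return action;
});

display(agent);  // P(cooperate) = e^6 / (e^6 + e^4), about 0.88
```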

We implement the model in the probabilistic programming language WebPPL. Our tool supports encoding cognitive models and simulating their execution, based on the stochastic behavioural predictions it generates. Conversely, given data, the tool can learn characteristics of agents using Bayesian techniques. The software has been designed to be modular, so that probabilistic models of affect developed by others may be integrated. We validate our tool on a number of synthetic case studies, demonstrating that cognitive reasoning can explain experimentally observed human behaviour that standard, equilibrium-based analysis often overlooks.
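As an illustration of the Bayesian learning direction, the sketch below inverts the soft-max agent from the previous example to infer a hypothetical rationality parameter alpha from a sequence of observed actions; the prior, the observation data and the utilities are invented for illustration and are not taken from the thesis.

```js
// Illustrative sketch: Bayesian inference of a hypothetical
// rationality parameter alpha from observed actions.
var utility = function(action) {
  return action === 'cooperate' ? 3 : 2;
};

// Soft-max action distribution for a given alpha.
var act = function(alpha) {
  return Infer({method: 'enumerate'}, function() {
    var action = uniformDraw(['cooperate', 'defect']);
    factor(alpha * utility(action));
    return action;
  });
};

// Invented observation sequence.
var observedActions = ['cooperate', 'cooperate', 'defect'];

var posterior = Infer({method: 'MCMC', samples: 1000}, function() {
  var alpha = uniform(0, 5);  // prior over rationality
  var actionDist = act(alpha);
  map(function(a) { observe(actionDist, a); }, observedActions);
  return alpha;
});

display(expectation(posterior));  // posterior mean of alpha
```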

To evaluate the framework in a human-robot interaction setting, we designed and conducted an experiment in which human participants play the Trust Game against a custom bot we developed. Participants in the game (human or bot) are randomly assigned the role of investor or investee. Our results show that the predictions of human behaviour generated by our tool are on par with, and in some circumstances superior to, the state of the art. Unlike other approaches, our model integrates behavioural observations with prior beliefs and captures how one agent's behaviour affects the actions of its opponent.
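To make the setting concrete: in the Trust Game the investor receives an endowment, any amount they invest is multiplied (typically tripled) and passed to the investee, who then decides how much to return. The sketch below models one round with soft-max players and a toy guilt-like mental utility; all numbers (endowment, multiplier, alpha, the guilt penalty, the return fractions) are hypothetical and are not the parameters used in our study.

```js
// One round of the Trust Game (illustrative sketch).
var endowment = 4;   // hypothetical investor endowment
var multiplier = 3;  // invested amount is tripled
var alpha = 1;       // hypothetical soft rationality

// Investee softly maximises own payoff minus a toy
// guilt-like penalty for returning nothing.
var investee = function(received) {
  return Infer({method: 'enumerate'}, function() {
    var frac = uniformDraw([0, 0.25, 0.5]);
    var returned = frac * received;
    var guilt = (frac === 0 && received > 0) ? 2 : 0;
    factor(alpha * (received - returned - guilt));
    return returned;
  });
};

// Investor softly maximises expected final payoff,
// reasoning about the investee's stochastic response.
var investor = Infer({method: 'enumerate'}, function() {
  var invested = uniformDraw([0, 1, 2, 3, 4]);
  var expectedReturn = expectation(investee(multiplier * invested));
  factor(alpha * (endowment - invested + expectedReturn));
  return invested;
});

display(investor);  // distribution over investment amounts
```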