Evaluating the Effect of Theory of Mind on People's Trust in a Faulty Robot


The success of human-robot interaction is strongly affected by people's ability to infer others' intentions and behaviours, and by the degree to which people trust that others will abide by the same principles and social conventions in pursuit of a common goal. The ability to understand and reason about other agents' mental states is known as Theory of Mind (ToM). ToM and trust, therefore, are key factors in the positive outcome of human-robot interaction. We believe that a robot endowed with a ToM is able to gain people's trust, even when it occasionally makes errors.
In this work, we present a user study in the field in which participants (N=123) interacted with a robot that may or may not have had a ToM, and may or may not have exhibited erroneous behaviour. Our findings indicate that participants perceived a robot with ToM as more reliable, and trusted it more than a robot without a ToM, even when the robot made errors. Finally, ToM proved to be a key driver in tuning people's trust in the robot even when the initial conditions of the interaction changed (i.e., loss and regain of trust over a longer relationship).

International Symposium on Robot and Human Interactive Communication