Children’s trust towards erroneous robot informants
Abstract
As social robotics continues to grow and develop, robots are increasingly finding their way into more areas of society, including hospitals, homes, daycare centres, and schools. It is essential that these robots behave in ways that are appropriate for interacting with children, especially when they need to elicit trust. As part of this thesis, we conducted two experiments, with a total of 115 participants, investigating preschool-aged children’s trust towards robots that make human-like informational errors (Experiment 1, Chapter 5) and robot-typical speech-recognition errors (Experiment 2, Chapter 6). Our findings suggest that children trust a robot that makes informational errors less than one that does not, but may trust a robot that exhibits speech-recognition errors more than one that does not. This suggests that children may perceive robot errors differently from those of other entities, such as humans or puppets, and may therefore trust robots differently. We contribute the findings from these two experiments, as well as an initial framework of child-robot trust. This thesis provides a starting point for robot designers to consider trust when designing robots for children, and for researchers to further investigate young children’s trust towards robots.