This work was presented at HAIDM 2015, the 2015 workshop on Human-Agent Interaction Design and Models, which was co-organised by SmartSociety.
Abstract: With improvements in agent technology and agent capabilities, we foresee increasing use of agents in social contexts and, in particular, in human-agent team applications. To be effective in such team contexts, agents need to understand and adapt to the expectations of human team members. This paper presents our study of how the behavioral strategies of agents affect humans' trust in those agents, and the concomitant performance expectations, in virtual team environments. We developed a virtual teamwork problem that involves repeated interaction between a human and several agent types over multiple episodes. The domain involves transcribing spoken words and was chosen so that no specialized knowledge beyond language expertise is required of the human participants. The problem requires humans and agents to independently choose a subset of tasks to complete, without consulting their partner; the utility obtained is the payment for each completed task minus the effort spent on it. We implemented several agent types, which vary in how much of the teamwork they perform over the interactions in an episode. Experiments were conducted with subjects recruited from Amazon Mechanical Turk (MTurk). We collected both teamwork performance data and survey responses to gauge participants' trust in their agent partners. We trained a regression model on the collected game data to identify distinct behavioral traits. By integrating a model that predicts the player's task choices, we constructed a learning agent that significantly improves social welfare, reducing redundant work without sacrificing the task completion rate, as well as both agent and human utilities.
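The abstract's utility rule (payment for each completed task minus the effort spent on it) can be sketched as follows. This is a minimal illustration under assumed names; the task identifiers, payments, and effort costs are invented for the example and are not values from the study:

```python
# Illustrative sketch of the episode utility described in the abstract.
# Each partner independently attempts a subset of tasks; their utility is
# the payment for each attempted task that the team completed, minus the
# effort cost of every task they attempted.

def episode_utility(attempted, team_completed, payment, effort):
    """Utility for one partner in one episode (all names are assumptions)."""
    pay = sum(payment[t] for t in attempted if t in team_completed)
    cost = sum(effort[t] for t in attempted)
    return pay - cost

# Hypothetical payments and effort costs per task.
payment = {"t1": 5.0, "t2": 3.0, "t3": 4.0}
effort = {"t1": 2.0, "t2": 1.0, "t3": 2.5}

# The human attempts t1 and t2; suppose both end up completed.
print(episode_utility({"t1", "t2"}, {"t1", "t2"}, payment, effort))  # 5.0
```

Note that if the agent also attempts t1, the team's completion rate is unchanged but the agent pays t1's effort cost for redundant work, which is the loss of social welfare the learning agent is designed to reduce.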
Keywords: Human-agent interaction, teamwork, trust, adaptation.
Citation: Feyza Hafizoglu and Sandip Sen. Evaluating Trust Levels in Human-agent Teamwork in Virtual Environments.