Learning to trust in the competence and commitment of agents

Title: Learning to trust in the competence and commitment of agents
Publication Type: Journal Article
Year of Publication: 2009
Authors: Smith M, desJardins M
Journal: Autonomous Agents and Multi-Agent Systems
Volume: 18
Issue: 1
Pagination: 36–82
Date Published: 2009
ISSN: 1387-2532
Abstract

For agents to collaborate in open multi-agent systems, each agent must trust in the other agents' ability to complete tasks and willingness to cooperate. Agents need to decide between cooperative and opportunistic behavior based on their assessment of another agent's trustworthiness. In particular, an agent can have two beliefs about a potential partner that tend to indicate trustworthiness: that the partner is competent and that the partner expects to engage in future interactions. This paper explores an approach that models competence as an agent's probability of successfully performing an action, and models belief in future interactions as a discount factor. We evaluate the underlying decision framework's performance given accurate knowledge of the model's parameters in an evolutionary game setting. We then introduce a game-theoretic framework in which an agent can learn a model of another agent online, using the Harsanyi transformation. The learning agents evaluate a set of competing hypotheses about another agent during the simulated play of an indefinitely repeated game. The Harsanyi strategy is shown to demonstrate robust and successful online play against a variety of static, classic, and learning strategies in a variable-payoff Iterated Prisoner's Dilemma setting.
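To illustrate the kind of decision framework the abstract describes, the sketch below computes when cooperation is rational in an indefinitely repeated Prisoner's Dilemma, given a partner's competence (modeled as a success probability p) and a discount factor gamma standing in for the expectation of future interactions. This is a minimal, hypothetical sketch against a reciprocating (grim-trigger) partner with standard PD payoffs; the function names, payoff values, and the specific cooperation condition are illustrative assumptions, not the paper's actual model.

```python
def cooperation_value(p, gamma, R=3.0, S=0.0):
    """Expected discounted payoff of cooperating every round with a
    reciprocating partner who completes the joint task with probability p.
    R: reward for mutual cooperation; S: sucker payoff when the partner fails.
    (Illustrative payoff values, not taken from the paper.)"""
    per_round = p * R + (1 - p) * S
    return per_round / (1 - gamma)  # geometric sum of discounted payoffs

def defection_value(gamma, T=5.0, P=1.0):
    """Payoff of defecting once (temptation T), after which the partner
    retaliates and both receive the punishment payoff P forever."""
    return T + gamma * P / (1 - gamma)

def trust_to_cooperate(p, gamma):
    """Cooperate only if the partner seems competent enough (high p) and
    future interactions matter enough (high gamma)."""
    return cooperation_value(p, gamma) >= defection_value(gamma)

# A competent partner and a high expectation of future play favor cooperation;
# low competence or a short horizon favor opportunistic defection.
print(trust_to_cooperate(0.9, 0.9))  # competent, long horizon -> True
print(trust_to_cooperate(0.3, 0.9))  # incompetent partner -> False
print(trust_to_cooperate(1.0, 0.1))  # short horizon -> False
```

The sketch makes the abstract's two trust ingredients explicit: competence enters through the expected per-round payoff, while belief in future interactions enters through the discount factor that weights the long-run value of sustained cooperation.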

URL: http://dx.doi.org/10.1007/s10458-008-9055-8