Seminar • Waterloo AI Institute — Fair Reward Division

Friday, March 13, 2020, 10:30 am EDT (GMT -04:00)

Kate Larson
David R. Cheriton School of Computer Science

Axiomatic approaches are an appealing method for designing fair algorithms, as they provide a formal structure for reasoning about and rationalizing individual decisions. However, to make these algorithms useful in practice, their axioms must appropriately capture social norms.

We explore this tension between fairness axioms and socially acceptable decisions in the context of cooperative game theory for the fair division of rewards.
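For readers new to the area, the standard definitions (background, not part of the abstract) may help: a cooperative game on a player set N is a characteristic function v that assigns a worth v(S) to every coalition S of players, with v(∅) = 0, and a reward division is a vector x with one entry per player whose entries sum to v(N). The Shapley value, discussed below, pays each player her expected marginal contribution when players join the coalition in a uniformly random order, which can be written as

    \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr)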

We use two crowdsourced experiments to study people’s impartial reward divisions in cooperative games, focusing on games that systematically vary the values of the single-player coalitions.

Our results show that people select rewards that are remarkably consistent, but place much more emphasis on the single-player coalitions than the Shapley value does. Further, their reward divisions violate both the null player and additivity axioms but support weaker axioms.
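To make the comparison concrete, here is a minimal Python sketch (illustrative only; the game below is hypothetical and not taken from the experiments) that computes the Shapley value by averaging each player's marginal contribution over all orderings of the players. The null player axiom requires that a player who adds nothing to any coalition receives nothing; additivity requires that the division of the sum of two games equals the sum of their divisions.

    import math
    from itertools import permutations

    def shapley_value(players, v):
        # v maps each coalition (a frozenset of players) to its worth; v[frozenset()] must be 0.
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = frozenset()
            for p in order:
                # Marginal contribution of p when it joins the players preceding it in this order.
                phi[p] += v[coalition | {p}] - v[coalition]
                coalition = coalition | {p}
        n_orders = math.factorial(len(players))
        return {p: total / n_orders for p, total in phi.items()}

    # Hypothetical 3-player game in which only the single-player coalition values differ.
    players = ["a", "b", "c"]
    v = {frozenset(): 0,
         frozenset("a"): 60, frozenset("b"): 20, frozenset("c"): 0,
         frozenset("ab"): 90, frozenset("ac"): 90, frozenset("bc"): 90,
         frozenset("abc"): 120}
    print(shapley_value(players, v))  # roughly {'a': 56.7, 'b': 36.7, 'c': 26.7}

In this example, the 60-point gap between the singleton values of players a and c shrinks to a 30-point gap in their Shapley payoffs, the kind of pattern the abstract contrasts with people's stronger emphasis on single-player coalitions.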

We argue for a more general methodology of testing axioms against experimental data, retaining some of the conceptual simplicity of the axiomatic approach while still using people’s opinions to drive the design of algorithms.


Bio: Professor Larson’s research interests fall in the area of artificial intelligence with an emphasis on self-interested multiagent systems and how agents interact. The overarching theme of her research is strategic reasoning in computational settings. She is interested in understanding how ideas from game theory, mechanism design and microeconomics can be used to model and design systems for intelligent agents, as well as in studying the effect that computational limitations have on strategic behaviour, with the aim of reconciling some of the conflicts that arise between computational and game-theoretic constraints. 

For example, she has developed models for settings where agents’ interactions are constrained by an underlying network, has designed robust mechanisms and algorithms for multiagent settings where agents deviate from classic rationality assumptions, and has investigated multiagent models where computational and information-gathering abilities are limited.

Applications of Professor Larson’s work are wide-ranging. She has studied resource sharing and allocation for wildfire control; proposed market mechanisms for crowdsourcing applications; and examined electronic auction and market design, the design and implementation of software agents for negotiation settings, and the use of economic methodologies in computational systems such as cloud computing.