
Bikramjit Banerjee's Publications


Action Discovery for Single and Multi-agent Reinforcement Learning

Bikramjit Banerjee and Landon Kraemer. Action Discovery for Single and Multi-agent Reinforcement Learning. Advances in Complex Systems, 14(2):279–305, World Scientific Publishing, 2011.

Download

[PDF] 

Abstract

The design of reinforcement learning solutions to many problems artificially constrains the action set available to an agent in order to limit the exploration/sample complexity. While exploring, if an agent can discover new actions that break through the constraints of its basic/atomic action set, then the quality of the learned decision policy could improve. On the flip side, considering all possible non-atomic actions might explode the exploration complexity. We present a novel heuristic solution to this dilemma and empirically evaluate it in grid navigation tasks. In particular, we show that both the solution quality and the sample complexity improve significantly when basic reinforcement learning is coupled with action discovery. Our approach relies on reducing the number of decision points, which is particularly suited to multi-agent coordination learning, since agents tend to learn more easily with fewer coordination problems (CPs). To demonstrate this, we extend action discovery to multi-agent reinforcement learning and show that Joint Action Learners (JALs) indeed learn coordination policies of higher quality with lower sample complexity when coupled with action discovery in a multi-agent box-pushing task.
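
Illustrative Sketch

The abstract describes the approach only at a high level, so the sketch below is an illustrative stand-in rather than a reproduction of the paper's heuristic. It is a minimal Q-learning loop on a toy 5x5 grid in which the action set starts atomic and grows by fusing the most frequent consecutive action pair of each episode into a composite action; the grid, the fusion rule, the action-set cap, and all identifiers are assumptions made for this sketch. What it demonstrates is the trade-off the abstract names: composite actions remove decision points along routes the agent already favours, while the cap keeps the exploration complexity from exploding.

import random
from collections import defaultdict

# Toy 5x5 grid navigation task: start at (0, 0), goal at (4, 4).
SIZE, GOAL = 5, (4, 4)
ATOMIC = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state, moves):
    # Apply a (possibly composite) action: a sequence of atomic moves.
    # Each atomic move costs 1; reaching the goal pays +10.
    x, y = state
    cost = 0
    for dx, dy in moves:
        x = min(max(x + dx, 0), SIZE - 1)
        y = min(max(y + dy, 0), SIZE - 1)
        cost += 1
        if (x, y) == GOAL:
            break
    return (x, y), (10.0 if (x, y) == GOAL else 0.0) - cost

def q_learning(episodes=300, alpha=0.5, gamma=0.95, eps=0.1, max_actions=12):
    # The action set starts atomic and grows as composites are "discovered".
    actions = {name: [delta] for name, delta in ATOMIC.items()}
    Q = defaultdict(float)
    for _ in range(episodes):
        state, trace = (0, 0), []
        for _ in range(100):
            if random.random() < eps:
                a = random.choice(list(actions))
            else:
                a = max(actions, key=lambda b: Q[(state, b)])
            nxt, r = step(state, actions[a])
            # For simplicity the discount is applied once per (possibly
            # multi-step) action; a full treatment would discount per
            # atomic step.
            best = max(Q[(nxt, b)] for b in actions)
            Q[(state, a)] += alpha * (r + gamma * best - Q[(state, a)])
            trace.append(a)
            state = nxt
            if state == GOAL:
                break
        # Hypothetical discovery rule (a stand-in, NOT the paper's heuristic):
        # fuse the most frequent consecutive action pair of the episode into
        # one composite action, removing a decision point along routes the
        # agent already favours. The cap on the action set bounds the
        # exploration blow-up the abstract warns about.
        if len(trace) > 1 and len(actions) < max_actions:
            pairs = list(zip(trace, trace[1:]))
            a1, a2 = max(set(pairs), key=pairs.count)
            name = a1 + "+" + a2
            if name not in actions:
                actions[name] = actions[a1] + actions[a2]
    return Q, actions

Q, actions = q_learning()
print(sorted(actions))  # atomic moves plus discovered composites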

BibTeX

@Article{Banerjee11:Action,
  author =       {Bikramjit Banerjee and Landon Kraemer},
  title =        {Action Discovery for Single and Multi-agent Reinforcement Learning},
  journal =      {Advances in Complex Systems},
  year =         {2011},
  volume =       {14},
  number =       {2},
  pages =        {279--305},
  publisher = {World Scientific Publishing},
  abstract = {The design of reinforcement learning solutions to many
   problems artificially constrains the action set available to an agent
   in order to limit the exploration/sample complexity. While exploring,
   if an agent can discover new actions that break through the
   constraints of its basic/atomic action set, then the quality of the
   learned decision policy could improve. On the flip side, considering
   all possible non-atomic actions might explode the exploration
   complexity. We present a novel heuristic solution to this dilemma and
   empirically evaluate it in grid navigation tasks. In particular, we
   show that both the solution quality and the sample complexity improve
   significantly when basic reinforcement learning is coupled with action
   discovery. Our approach relies on reducing the number of decision
   points, which is particularly suited to multi-agent coordination
   learning, since agents tend to learn more easily with fewer
   coordination problems (CPs). To demonstrate this, we extend action
   discovery to multi-agent reinforcement learning and show that Joint
   Action Learners (JALs) indeed learn coordination policies of higher
   quality with lower sample complexity when coupled with action
   discovery in a multi-agent box-pushing task.},
}

Generated by bib2html.pl (written by Patrick Riley) on Sat May 29, 2021 15:48:22