
Bikramjit Banerjee's Publications


Human-Agent Transfer from Observations

Bikramjit Banerjee and Sneha Racharla. Human-Agent Transfer from Observations. The Knowledge Engineering Review, 36, e2, Cambridge University Press, 2021.

Download

[PDF] 

Abstract

Learning from human demonstration (LfD), among many speedup techniques for reinforcement learning (RL), has seen many successful applications. We consider one LfD technique called Human Agent Transfer (HAT), where a model of the human demonstrator’s decision function is induced via supervised learning and used as an initial bias for RL. Some recent work in LfD has investigated learning from observations only, i.e., when only the demonstrator’s states (and not its actions) are available to the learner. Since the demonstrator’s actions are treated as labels for HAT, supervised learning becomes untenable in their absence. We adapt the idea of learning an inverse dynamics model from the data acquired by the learner’s interactions with the environment, and deploy it to fill in the missing actions of the demonstrator. The resulting version of HAT, called State-only HAT (SoHAT), is experimentally shown to preserve some advantages of HAT in benchmark domains with both discrete and continuous actions. This paper also establishes principled modifications of an existing baseline algorithm, A3C, to create its HAT and SoHAT variants that are used in our experiments.
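The SoHAT pipeline described in the abstract can be illustrated in four steps: gather transitions through the learner's own interaction with the environment, fit an inverse dynamics model on them, use that model to label the demonstrator's state-only trajectory with inferred actions, and behavior-clone a policy from the inferred pairs to serve as the initial bias for RL. The sketch below is an illustrative reconstruction, not the paper's code: the PyTorch networks, the dimensions, and the random stand-in data are all assumptions, and a real implementation would use actual environment transitions and human demonstration states.

    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 4, 3  # illustrative dimensions, not from the paper

    class InverseDynamics(nn.Module):
        """Predicts the (discrete) action that caused the transition s -> s'."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, N_ACTIONS))

        def forward(self, s, s_next):
            return self.net(torch.cat([s, s_next], dim=-1))

    # (1) Transitions gathered by the learner's own interaction with the
    #     environment; random tensors stand in for real (s, a, s') data here.
    s = torch.randn(512, STATE_DIM)
    a = torch.randint(0, N_ACTIONS, (512,))
    s_next = torch.randn(512, STATE_DIM)

    # (2) Fit the inverse dynamics model by supervised learning.
    idm = InverseDynamics()
    opt = torch.optim.Adam(idm.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):
        opt.zero_grad()
        loss_fn(idm(s, s_next), a).backward()
        opt.step()

    # (3) Label the demonstrator's state-only trajectory with inferred actions;
    #     consecutive state pairs (s_t, s_{t+1}) are fed to the model.
    demo_states = torch.randn(100, STATE_DIM)  # placeholder for human demo states
    with torch.no_grad():
        inferred = idm(demo_states[:-1], demo_states[1:]).argmax(dim=-1)

    # (4) Behavior-clone a policy on the (state, inferred action) pairs; in
    #     HAT/SoHAT this policy then serves as the initial bias for RL.
    policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_ACTIONS))
    popt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(200):
        popt.zero_grad()
        loss_fn(policy(demo_states[:-1]), inferred).backward()
        popt.step()

How the cloned policy biases the RL learner is method-specific; per the abstract, the paper develops principled HAT and SoHAT variants of A3C for this step.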

BibTeX

@Article{Banerjee20:Human,
  author = 	 {Bikramjit Banerjee and Sneha Racharla},
  title = 	 {Human-Agent Transfer from Observations},
  journal = 	 {The Knowledge Engineering Review},
  year = 	 {2021},
  volume = 	 {36},
  number = 	 {e2},
  publisher =    {Cambridge University Press},
  abstract =     {Learning from human demonstration (LfD), among many
  speedup techniques for reinforcement learning (RL), has seen many
  successful applications. We consider one LfD technique called Human
  Agent Transfer (HAT), where a model of the human demonstrator’s
  decision function is induced via supervised learning, and used as
  an initial bias for RL. Some recent work in LfD has investigated
  learning from observations only, i.e., when only the demonstrator’s
  states (and not its actions) are available to the learner. Since
  the demonstrator’s actions are treated as labels for HAT, supervised
  learning becomes untenable in their absence. We adapt the idea of
  learning an inverse dynamics model from the data acquired by the
  learner’s interactions with the environment, and deploy it to fill
  in the missing actions of the demonstrator. The resulting version
  of HAT—called State-only HAT (SoHAT)—is experimentally shown to
  preserve some advantages of HAT in benchmark domains with both
  discrete and continuous actions. This paper also establishes
  principled modifications of an existing baseline algorithm—called
  A3C—to create its HAT and SoHAT variants that are used in our
  experiments.},
}
