Efficient PAC-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digraphs.
The first part of this research program concerns the development of customized and easily implementable Probably Approximately Correct (PAC)-learning algorithms for episodic tasks over acyclic state spaces. The defining characteristic of our algorithms is that they explicitly take into consideration the acyclic structure of the underlying state space and the episodic nature of the considered learning task. The first of these two attributes enables a very straightforward and efficient resolution of the ``exploration vs. exploitation'' dilemma, while the second provides a natural regenerating mechanism that is instrumental in the dynamics of our algorithms. Two additional characteristics distinguish our algorithms from those developed in the past literature: (i) their direct nature, which eliminates the need for a complete specification of the underlying Markov decision process (MDP) model and reduces their execution to a very simple computation, and (ii) the unique emphasis that they place on the efficient implementation of the sampling process that is defined by their PAC property.
More specifically, the aforementioned PAC-learning algorithms complete their learning task by implementing a systematic episodic sampling schedule on the underlying acyclic state space. This sampling schedule, combined with the stochastic nature of the underlying transitions, defines the need for efficient routing policies that will help the algorithms complete their exploration program while minimizing, in expectation, the number of executed episodes.
The design of an optimal policy that satisfies a specified pattern of arc visitation requirements in an acyclic stochastic graph, while minimizing the expected number of required episodes, is a challenging problem, even under the assumption that all the involved branching probabilities are known a priori. Hence, the sampling process that takes place in the proposed PAC-learning algorithms gives rise to a novel and very interesting stochastic control/scheduling problem, which is characterized as the problem of Optimal Node Visitation (ONV) in acyclic stochastic digraphs. The second part of the work presented herein seeks the systematic modelling and analysis of the ONV problem.
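The flavor of the ONV problem can be conveyed by a small simulation. The following Python sketch is purely illustrative and is not the thesis's formulation: it assumes a hypothetical toy acyclic digraph with known branching probabilities, a made-up set of arc visitation requirements, and a naive greedy routing heuristic that repeats episodes until every requirement is met.

```python
import random

# Illustrative toy only (all names and numbers below are hypothetical):
# each episode starts at root "s"; at every non-terminal node the agent
# picks one control action, and the action's branching probabilities
# (assumed known a priori) select the arc actually traversed. The goal
# is to satisfy a pattern of arc visitation requirements over repeated
# episodes while keeping the episode count low.

random.seed(0)

# node -> action -> list of (successor, probability)
ACTIONS = {
    "s": {"a": [("u", 0.7), ("v", 0.3)],
          "b": [("v", 0.6), ("w", 0.4)]},
    "u": {"a": [("t", 1.0)]},
    "v": {"a": [("t", 1.0)]},
    "w": {"a": [("t", 1.0)]},
}

def run_episodes(requirements):
    """Greedy heuristic: at each node, pick the action whose expected
    coverage of still-unmet arc requirements is largest; repeat
    episodes until every requirement is satisfied."""
    remaining = dict(requirements)   # arc -> residual visit count
    episodes = 0
    while any(r > 0 for r in remaining.values()):
        episodes += 1
        node = "s"
        while node in ACTIONS:       # terminal node "t" has no actions
            acts = ACTIONS[node]

            # expected coverage of residual requirements by each action
            def score(a):
                return sum(p for succ, p in acts[a]
                           if remaining.get((node, succ), 0) > 0)

            action = max(acts, key=score)
            # sample the successor from the branching probabilities
            r, acc = random.random(), 0.0
            for succ, p in acts[action]:
                acc += p
                if r <= acc:
                    break
            arc = (node, succ)
            if remaining.get(arc, 0) > 0:
                remaining[arc] -= 1
            node = succ
    return episodes

reqs = {("s", "u"): 2, ("s", "w"): 1, ("v", "t"): 1}
print(run_episodes(reqs))   # number of episodes the greedy policy used
```

An optimal ONV policy would minimize the expected episode count; the greedy rule above is only a plausible baseline, and the challenge addressed in this part of the work is precisely that such myopic routing can be far from optimal even when all branching probabilities are known.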
The last part of this research program explores the computational merits obtained by heuristic implementations that result from the integration of the ONV-related developments into the PAC-learning algorithms developed in the first part of this work. We study, through numerical experimentation, the performance of these heuristic implementations relative to (i) the initial version of the PAC-learning algorithms presented in the first part of the research program, and (ii) standard Q-learning variations provided in the reinforcement learning (RL) literature. The work presented in this last part reinforces and confirms the driving assumption of this research, namely, that one can design customized RL algorithms of enhanced performance by taking the underlying problem structure into account.
Advisor: Reveliotis, Spyros; Shamma, Jeff; Zwart, Bert; Ayhan, Hayriye; Goldsman, Dave
School: Georgia Institute of Technology
School Location: USA - Georgia
Source Type: Master's Thesis
Keywords: industrial systems engineering
Date of Publication: 12/19/2008