Abstract (EN):
Stochastic search algorithms are black-box optimizers of an objective function. They have recently gained considerable attention in operations research, machine learning and policy search of robot motor skills due to their ease of use and their generality. However, when the task or objective function changes slightly, many stochastic search algorithms require complete re-learning to adapt the solution to the new objective function or the new context. We therefore consider the contextual stochastic search paradigm, in which we want to find good parameter vectors for multiple related tasks, where each task is described by a continuous context vector. Hence, the objective function might change slightly for each parameter vector evaluation. Contextual algorithms have been investigated in the field of policy search. However, contextual policy search algorithms typically suffer from premature convergence and perform unfavourably in comparison with state-of-the-art stochastic search methods. In this paper, we investigate a contextual stochastic search algorithm known as Contextual Relative Entropy Policy Search (CREPS), an information-theoretic algorithm that can learn multiple tasks simultaneously. We extend this algorithm with a covariance matrix adaptation technique that alleviates the premature convergence problem. We call the new algorithm Contextual Relative Entropy Policy Search with Covariance Matrix Adaptation (CREPS-CMA). We show that CREPS-CMA outperforms the original CREPS by orders of magnitude. We illustrate the performance of CREPS-CMA on several contextual tasks, including a complex simulated robot kick task.
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
6