Abstract (EN):
We investigate the convergence of policy iteration and value iteration algorithms in reinforcement learning through the lens of fixed-point theory, focusing on mappings that exhibit only weak contractive behavior. Unlike traditional analyses, which rely on strong contraction properties such as those required by the Banach contraction principle, we consider a more general class of mappings that includes weak contractions. Employing Zamfirescu's fixed-point theorem, we establish sufficient conditions for norm convergence in infinite-dimensional policy spaces under broad assumptions. Our approach extends the applicability of these algorithms to feedback control problems in reinforcement learning where standard contraction conditions may fail to hold. Through illustrative examples, we demonstrate that this framework encompasses a wider class of operators, offering new insight into the robustness and flexibility of iterative methods in dynamic programming.
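For context, Zamfirescu's theorem weakens Banach's hypothesis as follows (this is the standard 1972 statement, not necessarily the paper's exact formulation): a self-map T of a complete metric space (X, d) has a unique fixed point, to which Picard iteration converges from any starting point, provided there exist constants 0 \le a < 1 and 0 \le b, c < 1/2 such that for every pair x, y \in X at least one of

\[ d(Tx, Ty) \le a\, d(x, y), \]
\[ d(Tx, Ty) \le b\, \bigl[ d(x, Tx) + d(y, Ty) \bigr], \]
\[ d(Tx, Ty) \le c\, \bigl[ d(x, Ty) + d(y, Tx) \bigr] \]

holds. Classical dynamic-programming analyses assume only the first (Banach) condition for the Bellman operator; the abstract's claim is that convergence of policy and value iteration survives under this weaker disjunction.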
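As a baseline for the standard Banach case that the paper generalizes, the following minimal sketch implements value iteration as Picard iteration of the Bellman optimality operator, which is a gamma-contraction in the sup norm. The MDP arrays P and R, the discount gamma, and the tolerance are illustrative assumptions, not taken from the paper.

    import numpy as np

    def bellman_operator(V, P, R, gamma):
        """Apply (T V)(s) = max_a sum_{s'} P[a, s, s'] * (R[a, s] + gamma * V(s')).

        P: (A, S, S) transition tensor; R: (A, S) expected rewards; V: (S,) values.
        """
        Q = R + gamma * (P @ V)   # Q[a, s] = R[a, s] + gamma * E[V(s') | s, a]
        return Q.max(axis=0)      # greedy maximization over actions

    def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=10_000):
        """Iterate V <- T V until the sup-norm residual ||T V - V|| drops below tol."""
        V = np.zeros(P.shape[1])
        for _ in range(max_iter):
            V_next = bellman_operator(V, P, R, gamma)
            if np.max(np.abs(V_next - V)) < tol:
                return V_next     # approximate fixed point of T
            V = V_next
        return V

Under the Banach condition, the sup-norm error contracts by a factor gamma per sweep; the paper's contribution, per the abstract, is to recover convergence guarantees for iterations of this kind when only the weaker Zamfirescu-type conditions above are available.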
Language:
English
Type (Professor's evaluation):
Scientific