Abstract (EN):
The performance of Deep Learning agents is known to be highly sensitive to the chosen parameterization. The additional hyperparameters associated with the computation of the attention function used in Attention-based Deep Reinforcement Learning agents further complicate this issue. One example concerns the number of attention heads to use in multi-head attention-based agents. Usually, this hyperparameter is set manually and remains fixed throughout training, which may be neither optimal nor efficient. This work addresses this issue by endowing the agent with a policy whose purpose is to dynamically choose, at each timestep, the number of attention heads to use, based on the current game state and the agent's contextual memory. The results suggest that, in some cases, this dynamic parameterization can improve the agent's performance compared to a baseline agent with a static parameterization. All experiments were performed and validated on the Atari 2600 videogame benchmark. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
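A minimal sketch of the mechanism the abstract describes, not the authors' implementation: a small policy network looks at a pooled state/context embedding and picks, at each timestep, how many attention heads the multi-head attention layer actually uses, with the unused heads masked out. All names (HeadCountPolicy, DynamicHeadAttention, candidate_heads), the candidate head counts, and the masking strategy are illustrative assumptions.

```python
# Hypothetical sketch of per-timestep head-count selection (not the paper's code).
import torch
import torch.nn as nn


class HeadCountPolicy(nn.Module):
    """Maps a state/context embedding to a distribution over candidate head counts."""

    def __init__(self, d_model: int, num_choices: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, num_choices)
        )

    def forward(self, context: torch.Tensor):
        logits = self.net(context)                        # (batch, num_choices)
        dist = torch.distributions.Categorical(logits=logits)
        choice = dist.sample()                            # index of chosen head count
        # log-prob kept so the choice can be trained with a policy-gradient term
        return choice, dist.log_prob(choice)


class DynamicHeadAttention(nn.Module):
    """Multi-head self-attention whose active head count is chosen per timestep."""

    def __init__(self, d_model: int = 64, max_heads: int = 8):
        super().__init__()
        assert d_model % max_heads == 0
        self.max_heads = max_heads
        self.head_dim = d_model // max_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, n_heads: int) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(z):  # (b, t, d) -> (b, max_heads, t, head_dim)
            return z.view(b, t, self.max_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        heads = attn @ v                                  # (b, max_heads, t, head_dim)
        # zero out the heads beyond the dynamically chosen count
        mask = torch.zeros(self.max_heads, device=x.device)
        mask[:n_heads] = 1.0
        heads = heads * mask.view(1, -1, 1, 1)
        return self.out(heads.transpose(1, 2).reshape(b, t, d))


if __name__ == "__main__":
    d_model, candidates = 64, [1, 2, 4, 8]
    policy = HeadCountPolicy(d_model, len(candidates))
    attention = DynamicHeadAttention(d_model, max_heads=8)
    state = torch.randn(1, 16, d_model)                   # e.g. encoded game frames
    choice, logp = policy(state.mean(dim=1))              # context = pooled state
    n_heads = candidates[choice.item()]
    out = attention(state, n_heads)
    print(f"chose {n_heads} heads, output shape {tuple(out.shape)}")
```

Masking unused heads (rather than re-instantiating smaller projection matrices) is just one way to realize a variable head count; it keeps the parameter count fixed while letting the policy's choice affect the forward pass.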
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
17