Dynamically Choosing the Number of Heads in Multi-Head Attention

Title
Dynamically Choosing the Number of Heads in Multi-Head Attention
Type
Article in International Conference Proceedings Book
Year
2024
Authors
Duarte, FF (Author), Other
The person does not belong to the institution. Without AUTHENTICUS ID or ORCID.
Lau, N (Author), Other
The person does not belong to the institution. Without AUTHENTICUS ID or ORCID.
Pereira, A (Author), Other
The person does not belong to the institution. Without AUTHENTICUS ID or ORCID.
Indexing
Indexed in Scopus: 0 citations
Other information
Authenticus ID: P-010-A06
Abstract (EN): Deep Learning agents are known to be very sensitive to their parameterization values. Attention-based Deep Reinforcement Learning agents further complicate this issue due to the additional parameterization associated with computing their attention function. One example concerns the number of attention heads to use in multi-head attention-based agents. Usually, these hyperparameters are set manually, which may be neither optimal nor efficient. This work addresses the issue of choosing the appropriate number of attention heads dynamically, by endowing the agent with a policy h trained with policy gradient. At each timestep of agent-environment interaction, h is responsible for choosing the most suitable number of attention heads according to the contextual memory of the agent. This dynamic parameterization is compared to a static parameterization in terms of performance. The role of h is further assessed through additional analysis of the distribution of the number of attention heads throughout the training procedure and the course of the game. The Atari 2600 video game benchmark was used to perform and validate all experiments. © 2024 by SCITEPRESS - Science and Technology Publications, Lda.
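The mechanism described in the abstract — a policy that picks a head count per timestep, after which attention runs with that many heads — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `multi_head_attention`, `choose_heads`, the candidate list `head_choices`, and the linear policy weights `W` are all hypothetical names, and the real agent would train the policy with policy gradient over its contextual memory rather than use fixed random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(q, k, v, n_heads):
    """Scaled dot-product attention with the model dimension split
    into n_heads slices (d_model must be divisible by n_heads)."""
    d_model = q.shape[-1]
    d_head = d_model // n_heads
    out = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)
        out.append(softmax(scores) @ v[:, s])
    return np.concatenate(out, axis=-1)

# Hypothetical head-selection policy: a linear map over a memory
# summary yields logits over candidate head counts, and one count is
# sampled per timestep (the paper trains such a policy with policy
# gradient; here the weights are just random for illustration).
head_choices = [1, 2, 4, 8]
W = rng.normal(size=(16, len(head_choices)))  # toy policy weights

def choose_heads(memory_summary):
    probs = softmax(memory_summary @ W)
    return rng.choice(head_choices, p=probs)

memory = rng.normal(size=16)       # stand-in for the agent's contextual memory
n = choose_heads(memory)           # sampled head count for this timestep
q = k = v = rng.normal(size=(5, 16))  # 5 timesteps, d_model = 16
attended = multi_head_attention(q, k, v, n)
print(attended.shape)  # (5, 16) regardless of the sampled head count
```

Note that because each head operates on a slice of the model dimension, the output shape is the same for every head count, so the policy can switch counts freely between timesteps without changing downstream layer shapes.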
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 9
Documents
No documents are associated with this publication.
Related Publications

Of the same authors

A Survey of Planning and Learning in Games (2020)
Another Publication in an International Scientific Journal
Duarte, FF; Lau, N; Pereira, A; Reis, LP
Study on LSTM and ConvLSTM Memory-Based Deep Reinforcement Learning (2023)
Article in International Conference Proceedings Book
Duarte, FF; Lau, N; Pereira, A; Reis, LP
Revisiting Deep Attention Recurrent Networks (2023)
Article in International Conference Proceedings Book
Duarte, FF; Lau, N; Pereira, A; Reis, LP
LSTM, ConvLSTM, MDN-RNN and GridLSTM Memory-based Deep Reinforcement Learning (2023)
Article in International Conference Proceedings Book
Duarte, FF; Lau, N; Pereira, A; Reis, LP
Copyright 1996-2025 © Faculdade de Direito da Universidade do Porto | Terms and Conditions | Accessibility | Index A-Z
Page created on: 2025-08-12 at 10:33:02 | Privacy Policy | Personal Data Protection Policy | Whistleblowing