Abstract (EN):
The Vision Transformer (ViT) architecture has emerged as a potential game-changer in computer vision, offering scalability and global attention mechanisms that have attracted considerable interest in recent years, and its adaptability has fueled enthusiasm for its application. This work investigates the boundaries of the architecture, developing new techniques explicitly targeting complex tasks such as medical imaging datasets, which often exhibit high variability, class imbalance, and limited sample sizes. We propose a set of combined regularisation and augmentation techniques to enhance model performance, including a novel loss function and a smoothly differentiable activation function that lead to more stable training. The results show that incorporating these techniques improves both model performance and training convergence.
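The abstract mentions a smoothly differentiable activation function but does not define it; the paper's actual function is not given here. Purely as an illustration of what "smoothly differentiable" means in this context, the sketch below implements the well-known Mish activation (an assumption, not the authors' proposed function), which is smooth everywhere, unlike ReLU:

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

def mish(x):
    # Mish: smooth, non-monotonic activation, x * tanh(softplus(x)).
    # Illustrative stand-in only; NOT the activation proposed in this paper.
    return x * np.tanh(softplus(x))

x = np.linspace(-3.0, 3.0, 7)
print(mish(x))  # smooth curve, near-identity for large positive x
```

Smooth activations of this kind have continuous derivatives everywhere, which is one common route to the more stable training behaviour the abstract describes.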
Language:
English
Type (Faculty Evaluation):
Scientific
No. of pages:
6