Abstract (EN):
Multimodal models, such as vision-language models, offer unique possibilities by seamlessly integrating different information media for data generation. However, these models mostly act as black boxes, which limits their transparency and explainability. Reliable results require accountable and trustworthy Artificial Intelligence (AI), particularly when used for critical tasks such as the automatic generation of medical imaging reports for healthcare diagnosis. Stress-testing techniques can make multimodal generative models more transparent by exposing their shortcomings, further supporting their responsible use in the medical field.

Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
1