Abstract (EN):
This work introduces Gen-JEMA, a generative approach based on joint embedding with multimodal alignment (JEMA), designed to enhance feature extraction in the embedding space and to improve the explainability of its predictions. Gen-JEMA addresses these goals by leveraging multimodal data, including multi-view images and metadata such as process parameters, to learn transferable semantic representations, and it enables more explainable and enriched predictions by learning a decoder from the embedding. This novel co-learning framework, tailored for directed energy deposition (DED), integrates multiple data sources to learn a unified data representation and to predict melt pool images from the primary sensor. The proposed approach enables real-time process monitoring using only the primary modality, simplifying hardware requirements and reducing computational overhead. The effectiveness of Gen-JEMA for DED process monitoring was evaluated with a focus on its generalization to downstream tasks, namely melt pool geometry prediction and the generation of external melt pool representations using off-axis sensor data. To generate these external representations, autoencoder (AE) and variational autoencoder (VAE) architectures were optimized using Bayesian optimization. The AE outperformed the other approaches, achieving a 38% improvement in melt pool geometry prediction over the baseline and an 88% improvement in data generation compared with the VAE. The proposed framework establishes a foundation for integrating multisensor data with metadata through a generative approach, enabling various downstream tasks within the DED domain while producing a compact embedding that allows efficient process control based on model predictions and embeddings.
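Illustrative sketch (not the authors' implementation): the abstract describes a joint embedding learned from the primary melt pool images and process-parameter metadata, with a decoder that generates the external (off-axis) view from that embedding. The minimal PyTorch example below shows one way such a co-learning step could look; all module names, layer sizes, the cosine alignment loss, and the loss weighting are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class PrimaryEncoder(nn.Module):
    """Encodes the primary (coaxial) melt pool image into a compact embedding."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

class MetadataEncoder(nn.Module):
    """Projects process parameters (hypothetical: power, speed, feed rate) into the shared space."""
    def __init__(self, n_params: int = 3, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(), nn.Linear(64, embed_dim))

    def forward(self, p):
        return self.net(p)

class Decoder(nn.Module):
    """Reconstructs an external (off-axis) melt pool view from the joint embedding."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 32 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 8, 8)
        return self.net(h)

# One co-learning step: align image and metadata embeddings (cosine loss) while
# reconstructing the off-axis view from the primary-image embedding alone, so that
# at inference time only the primary sensor is needed.
enc_img, enc_meta, dec = PrimaryEncoder(), MetadataEncoder(), Decoder()
opt = torch.optim.Adam(
    [*enc_img.parameters(), *enc_meta.parameters(), *dec.parameters()], lr=1e-3
)

coaxial = torch.rand(4, 1, 64, 64)   # dummy batch of primary sensor images
params = torch.rand(4, 3)            # dummy batch of process parameters
off_axis = torch.rand(4, 1, 32, 32)  # dummy batch of off-axis target views

z_img, z_meta = enc_img(coaxial), enc_meta(params)
align_loss = 1 - nn.functional.cosine_similarity(z_img, z_meta).mean()
recon_loss = nn.functional.mse_loss(dec(z_img), off_axis)
loss = recon_loss + 0.1 * align_loss  # weighting chosen arbitrarily for the sketch
loss.backward()
opt.step()
```

In this sketch the compact embedding (32 dimensions here, purely illustrative) is what would feed downstream tasks such as melt pool geometry prediction or process control; the alignment term plays the role of the multimodal alignment described above, and the reconstruction term plays the role of the generative decoder.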
Language:
English
Type (Professor's evaluation):
Scientific