Special Session: Robust and Uncertainty-Aware Machine Learning with Real-World Imperfect Biomedical Data

Federico Cabitza
Andrea Campagner
Mauro Dragoni

Machine learning deployed in real-world contexts routinely operates under non-ideal conditions, including imperfect data and supervision. This Special Session invites contributions that advance robustness and uncertainty awareness in ML systems, enabling reliable, transparent, and safe operation across diverse and challenging environments. We welcome original research, methodological innovations, and in-depth reviews addressing data imperfections (such as measurement noise, missing values, artifacts, outliers, and distributional shifts) as well as supervisory uncertainty (including ambiguous or noisy labels, partial annotations, soft or probabilistic labels, and annotator disagreement). A central focus is on principled approaches to model, estimate, and propagate uncertainty across the learning pipeline, including methods with formal reliability guarantees, calibration strategies, and robust evaluation under adverse conditions. We particularly value work that combines theoretical rigor with practical relevance in the critical domain of healthcare. A strong requirement is the use of real-world data: submissions relying solely on toy problems or synthetic datasets without a clear connection to real-world applications are discouraged. The session also welcomes work at the intersection of robust learning and related areas, including data-centric AI, causality-aware learning, semi- and self-supervised learning under weak supervision, algorithmic fairness, and approaches that enhance the reproducibility and transparency of ML systems.

Topics of interest include, but are not limited to:

Robust machine learning under noisy, incomplete, or corrupted medical data
Learning with distributional shifts; covariate shift adaptation
Uncertainty quantification and uncertainty propagation (supervised/unsupervised)
Learning from noisy, soft, probabilistic, or partially labeled data
Modeling and aggregation of annotator disagreement
Calibration, reliability analysis, and principled evaluation metrics
Out-of-distribution detection and robust generalization
Trustworthy, safe, or interpretable ML under uncertainty
Real-world digital health applications using real-world data
Benchmarks, datasets, and tools for robust and uncertainty-aware learning

Contacts