Federico Cabitza
Andrea Campagner
Mauro Dragoni

Machine learning deployed in real-world contexts routinely operates under non-ideal conditions, including imperfect data and supervision. This Special Session invites contributions that advance robustness and uncertainty-awareness in ML systems, enabling reliable, transparent, and safe operation across diverse and challenging environments.

We welcome original research, methodological innovations, and in-depth reviews addressing data imperfections (such as measurement noise, missing values, artifacts, outliers, and distributional shifts) as well as supervisory uncertainty (including ambiguous or noisy labels, partial annotations, soft or probabilistic labels, and annotator disagreement). A central focus is on principled approaches for modeling, estimating, and propagating uncertainty across the learning pipeline, including methods with formal reliability guarantees, calibration strategies, and robust evaluation under adverse conditions. We particularly value work that combines theoretical rigor with practical relevance, with special attention to the critical domain of healthcare.

A strong requirement is the use of real-world data: submissions relying solely on toy problems or synthetic datasets, without a clear connection to real-world applications, are discouraged. The session also welcomes work at the intersection of robust learning and related areas, including data-centric AI, causality-aware learning, semi-/self-supervised learning under weak supervision, algorithmic fairness, and approaches that enhance the reproducibility and transparency of ML systems.