Calibrating ensembles for scalable uncertainty quantification in deep learning-based medical image segmentation.
- Abstract:
- Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models in classification or segmentation are developed to provide only binary answers; however, quantifying the uncertainty of the models can play a critical role, for example, in active learning or human-machine interaction. Uncertainty quantification is especially difficult when using deep learning-based models, which are the state of the art in many imaging applications. Current uncertainty quantification approaches do not scale well to high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as applying dropout during inference or training ensembles of identical models with different random seeds, to obtain a posterior distribution. In this paper, we present the following contributions. First, we show that the classical approaches fail to approximate the classification probability. Second, we propose a scalable and intuitive framework for uncertainty quantification in medical image segmentation that yields measurements that approximate the classification probability. Third, we suggest the use of k-fold cross-validation to overcome the need for held-out calibration data. Lastly, we motivate the adoption of our method in active learning, in creating pseudo-labels to learn from unlabeled images, and in human-machine collaboration.
- Authors:
- T Buddenkotte, L Escudero Sanchez, M Crispin-Ortuzar, R Woitek, C McCague, JD Brenton, O Öktem, E Sala, L Rundo
- Journal:
- Comput Biol Med
- Citation info:
- 163:107096
- Publication date:
- 1st Sep 2023
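The abstract's central ideas, averaging an ensemble's voxel-wise probabilities and calibrating them against observed frequencies, with cross-validated (out-of-fold) predictions standing in for a held-out calibration set, can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the authors' published method: the histogram-binning calibration scheme, the function names, and the toy data are all hypothetical.

```python
# Illustrative sketch only (assumed approach, not the paper's implementation):
# average ensemble foreground probabilities, then calibrate them with
# histogram binning fitted on cross-validated (out-of-fold) predictions.
import numpy as np

def ensemble_foreground_probability(member_probs: np.ndarray) -> np.ndarray:
    """Average voxel-wise foreground probabilities over K ensemble members.

    member_probs: array of shape (K, n_voxels) with values in [0, 1].
    """
    return member_probs.mean(axis=0)

def fit_histogram_calibration(oof_probs, oof_labels, n_bins=20):
    """Map each probability bin to the observed foreground frequency.

    oof_probs / oof_labels: out-of-fold predictions and binary ground-truth
    masks, flattened to 1-D arrays of equal length.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(oof_probs, edges) - 1, 0, n_bins - 1)
    calib = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = bin_idx == b
        if in_bin.any():
            calib[b] = oof_labels[in_bin].mean()  # empirical frequency in bin
    # fall back to the bin centre where a bin received no voxels
    centres = 0.5 * (edges[:-1] + edges[1:])
    return edges, np.where(np.isnan(calib), centres, calib)

def apply_calibration(probs, edges, calib):
    """Replace raw ensemble probabilities with calibrated estimates."""
    bin_idx = np.clip(np.digitize(probs, edges) - 1, 0, len(calib) - 1)
    return calib[bin_idx]

# Toy usage with random data standing in for segmentation outputs.
rng = np.random.default_rng(0)
member_probs = rng.random((5, 10_000))                 # K=5 members, flattened voxels
labels = (rng.random(10_000) < member_probs[0]).astype(float)
p = ensemble_foreground_probability(member_probs)
edges, calib = fit_histogram_calibration(p, labels)
p_calibrated = apply_calibration(p, edges, calib)
```

In this sketch the calibrated value for a voxel is simply the empirical foreground frequency of its probability bin, which is one common way to make predicted probabilities track observed accuracy; fitting the bins on out-of-fold predictions mirrors the abstract's suggestion of using k-fold cross-validation instead of a separate held-out calibration set.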