Image Processing
Uncertainty quantification for vision regression tasks
This work focuses on uncertainty quantification for deep neural networks, which is vital for the reliability and accuracy of deep learning. However, complex network designs and limited training data make estimating uncertainties challenging. Moreover, uncertainty quantification for regression tasks has received less attention than for classification tasks, whose standardized output makes it more straightforward, even though regression problems arise in a wide range of computer vision applications. Our main research direction is post-hoc methods, in particular auxiliary networks, which are among the most effective means of estimating the uncertainty of main-task predictions without modifying the main-task model. Our application scenarios focus on visual regression tasks. In addition, we provide an uncertainty quantification method based on a modified main-task model, as well as a dataset for evaluating the quality and robustness of uncertainty estimates.

We first propose Side Learning Uncertainty for Regression Problems (SLURP), a generic approach for regression uncertainty estimation via an auxiliary network that exploits the output and the intermediate representations generated by the main-task model. This auxiliary network effectively captures prediction errors and competes with ensemble methods on pixel-wise regression tasks.

To be considered robust, an auxiliary uncertainty estimator must maintain its performance and trigger higher uncertainties when encountering Out-of-Distribution (OOD) inputs, i.e., it must provide robust aleatoric and epistemic uncertainty. We consider SLURP to be mainly suited to aleatoric uncertainty estimation; moreover, the robustness of auxiliary uncertainty estimators had not previously been explored.
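To make the auxiliary-network idea concrete, here is a minimal, hypothetical sketch of a SLURP-style side learner: a small head that fuses the main model's prediction with one of its intermediate feature maps and regresses a per-pixel uncertainty. All shapes, layer sizes, and weights below are illustrative placeholders (a real estimator is trained on the main model's prediction errors), not the actual SLURP architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def auxiliary_uncertainty(pred, feat, w1, w2):
    """Toy forward pass of an auxiliary uncertainty head: it consumes
    the main model's prediction together with an intermediate feature
    map and regresses a per-pixel log-variance. The weights here are
    random placeholders standing in for trained parameters."""
    x = np.concatenate([pred, feat], axis=-1)   # fuse prediction + features
    h = relu(x @ w1)                            # hidden layer
    log_var = h @ w2                            # unconstrained log-variance
    return np.exp(log_var)                      # strictly positive uncertainty map

# Hypothetical shapes: a 4x4 depth prediction plus an 8-channel feature map.
H, W, C = 4, 4, 8
pred = rng.standard_normal((H, W, 1))
feat = rng.standard_normal((H, W, C))
w1 = rng.standard_normal((1 + C, 16)) * 0.1
w2 = rng.standard_normal((16, 1)) * 0.1

sigma2 = auxiliary_uncertainty(pred, feat, w1, w2)
print(sigma2.shape)        # one uncertainty value per pixel: (4, 4, 1)
print(bool((sigma2 > 0).all()))  # exponentiation keeps the estimate positive
```

Because the head is separate from the main model, it can be trained after the fact on frozen predictions, which is what makes the approach post-hoc.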
Our second work presents a generalized auxiliary uncertainty estimator scheme, introducing the Laplace distribution for robust aleatoric uncertainty estimation and the Discretization-Induced Dirichlet pOsterior (DIDO) for epistemic uncertainty. Extensive experiments confirm its robustness across various tasks.

Furthermore, to introduce DIDO, we provide a survey on regression with discretization strategies and develop a post-hoc uncertainty quantification solution, dubbed Expectation of Distance (E-Dist), which outperforms the other post-hoc solutions under the same settings.

Additionally, we investigate single-pass uncertainty quantification methods, introducing Latent Discriminant deterministic Uncertainty (LDU), which advances scalable deterministic uncertainty estimation and competes with Deep Ensembles on monocular depth estimation.

For evaluating uncertainty quantification, we offer the Multiple Uncertainty Autonomous Driving dataset (MUAD), which supports diverse computer vision tasks in varying urban scenarios with challenging out-of-distribution examples.

In summary, we contribute new solutions and benchmarks for uncertainty quantification in deep learning, including SLURP, E-Dist, DIDO, and LDU. In addition, we propose the MUAD dataset to provide a more comprehensive evaluation of autonomous driving scenarios with different uncertainty sources.
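The discretization-based E-Dist score mentioned above can be illustrated with a small sketch. Assuming the regression output is discretized into K bins with known centers and the network produces a softmax over those bins, one natural expectation-of-distance style score takes the probability-weighted mean of the centers as the prediction and the expected absolute distance of the bins to that prediction as the uncertainty. The exact formulation in the thesis may differ; this is an illustrative assumption.

```python
import numpy as np

def e_dist(probs, bin_centers):
    """Illustrative expectation-of-distance score for discretized
    regression: prediction = probability-weighted mean of bin centers,
    uncertainty = expected absolute distance of bins to that prediction."""
    pred = float(np.sum(probs * bin_centers))
    unc = float(np.sum(probs * np.abs(bin_centers - pred)))
    return pred, unc

bins = np.array([0.0, 1.0, 2.0, 3.0])

# Confident case: nearly all mass on one bin -> low expected distance.
p_sharp = np.array([0.0, 1.0, 0.0, 0.0])
# Ambiguous case: mass split between distant bins -> high expected distance.
p_flat = np.array([0.5, 0.0, 0.0, 0.5])

print(e_dist(p_sharp, bins))  # (1.0, 0.0)
print(e_dist(p_flat, bins))   # (1.5, 1.5)
```

The score is post-hoc in the same sense as above: it needs only the bin probabilities already produced by a discretized regression model, with no retraining or extra forward passes.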