PhD Thesis - Fabian Küppers
The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations
Visual counterfactual explanation (CF) methods modify image concepts, e.g., shape, to change a prediction to a predefined outcome while closely resembling the original query image. Unlike self-explainable models (SEMs) and heatmap techniques, they grant users…
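A hedged sketch of the general latent-counterfactual idea (not the GdVAE architecture itself): if a model has class-conditional prototypes in a learned latent space, a counterfactual can be produced by shifting the query's latent code toward the target-class prototype and decoding. The toy linear encoder/decoder and the prototypes below are illustrative placeholders only.

```python
# Illustrative sketch of counterfactual generation by latent traversal.
# The linear "encoder"/"decoder" and class prototypes are toy placeholders,
# not the GdVAE model; they only show the prototype-shift idea.
import numpy as np

rng = np.random.default_rng(0)
d_img, d_lat = 64, 8

W_enc = rng.normal(size=(d_lat, d_img)) / np.sqrt(d_img)   # toy encoder weights
W_dec = np.linalg.pinv(W_enc)                               # toy decoder (pseudo-inverse)

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

# Assumed class prototypes (latent means), e.g. estimated from training data.
mu_source = rng.normal(size=d_lat)
mu_target = rng.normal(size=d_lat)

x_query = rng.normal(size=d_img)
z_query = encode(x_query)

# Move the latent code along the direction between class prototypes;
# alpha controls how strongly the prediction should flip.
alpha = 0.7
z_cf = z_query + alpha * (mu_target - mu_source)
x_cf = decode(z_cf)

print("L2 distance between query and counterfactual:", np.linalg.norm(x_query - x_cf))
```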
Quantifying Local Model Validity using Active Learning
Machine learning models in real-world applications must often meet regulatory standards, requiring low approximation errors. Global metrics are too insensitive, and local validity checks are costly. This method learns the model error to estimate local validity…
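A minimal active-learning sketch of this idea, under stated assumptions: a surrogate classifier is trained to predict whether the model's local error stays below a tolerance, and new labels are queried where that prediction is most uncertain. The one-dimensional toy problem, the tolerance, and the use of scikit-learn's GaussianProcessClassifier are illustrative stand-ins, not the thesis method.

```python
# Active learning for a local validity estimate (illustrative sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)
tol = 0.15                                        # required approximation accuracy

def true_fn(x):           # ground truth, only available where we query labels
    return np.sin(3 * x)

def model_fn(x):          # the (imperfect) ML model under inspection
    return np.sin(3 * x) * (x < 0.5) + 0.5 * x * (x >= 0.5)

pool = np.linspace(0, 1, 400).reshape(-1, 1)      # candidate query locations
X = rng.uniform(0, 1, size=(10, 1))               # initial labelled set

for _ in range(15):
    valid = (np.abs(model_fn(X[:, 0]) - true_fn(X[:, 0])) < tol).astype(int)
    if valid.min() == valid.max():                # classifier needs both classes
        X = np.vstack([X, rng.uniform(0, 1, size=(1, 1))])
        continue
    clf = GaussianProcessClassifier().fit(X, valid)
    p = clf.predict_proba(pool)[:, 1]
    x_next = pool[np.argmin(np.abs(p - 0.5))]     # most uncertain validity estimate
    X = np.vstack([X, x_next.reshape(1, -1)])

print("queried locations:", np.round(np.sort(X[:, 0]), 3))
```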
Parametric and Multivariate Uncertainty Calibration for Regression and Object Detection
We inspect the calibration properties of common detection networks and extend state-of-the-art recalibration methods. Our methods use a Gaussian process (GP) recalibration scheme that yields parametric distributions as output (e.g. Gaussian or Cauchy). The usage…
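A hedged sketch of GP-based regression recalibration with a parametric (Gaussian) output, using scikit-learn as a stand-in for the actual recalibration scheme: on a held-out calibration set, a GP learns how observed absolute residuals relate to the model's predicted standard deviation, and the rescaled spread is reported as a Gaussian. The variable names and toy data are illustrative assumptions.

```python
# GP recalibration sketch: map predicted std -> observed error scale -> Gaussian output.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Calibration set: model predictions (mean, std) and ground truth.
mu = rng.normal(size=200)
sigma = np.abs(rng.normal(loc=1.0, scale=0.2, size=200))   # model is overconfident
y = mu + 1.6 * sigma * rng.normal(size=200)                # true spread is larger

# Regress observed absolute residuals on the predicted std.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(sigma.reshape(-1, 1), np.abs(y - mu))

# Recalibrated Gaussian output for a new prediction:
mu_new, sigma_new = 0.3, 1.1
scale = gp.predict(np.array([[sigma_new]]))[0] * np.sqrt(np.pi / 2)  # E|e| -> std
print(f"recalibrated distribution: N({mu_new:.2f}, {max(scale, 1e-6):.2f}^2)")
```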
Segmentation-guided Domain Adaptation for Efficient Depth Completion
Complete depth information and efficient estimators have become vital ingredients in scene understanding for automated driving tasks. A major problem for LiDAR-based depth completion is the inefficient utilization of convolutions due to the lack of…
Confidence calibration for object detection and segmentation
Calibrated confidence estimates obtained from neural networks are crucial, particularly for safety-critical applications such as autonomous driving or medical image diagnosis. However, although the task of confidence calibration has been investigated on classification problems, thorough…
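To make the underlying notion of confidence calibration concrete, the standard expected calibration error (ECE) can serve as a minimal reference point: confidences are binned and the gap between mean confidence and observed accuracy is averaged over bins. The random toy data below is illustrative only.

```python
# Expected calibration error (ECE) over equally spaced confidence bins.
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap          # weight by fraction of samples in bin
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=5000)                        # predicted confidences
correct = (rng.uniform(size=5000) < conf * 0.8).astype(float)  # overconfident model
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")
```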
Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation
In this paper, we propose a method for post-hoc explainability of black-box models. The key component of the semantic and quantitative local explanation is a knowledge distillation (KD) process which is used to mimic…
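A hedged sketch of the distillation idea: a Gaussian discriminant surrogate is fitted to feature embeddings using the black-box model's own predictions as targets, so the surrogate's class-conditional Gaussians can serve as a local, quantitative explanation. The black box, the embeddings, and the use of scikit-learn's QDA are illustrative stand-ins, not the paper's implementation.

```python
# Knowledge distillation into a Gaussian discriminant surrogate (illustrative sketch).
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)

# Toy "embeddings" and a toy black-box decision rule standing in for a deep model.
Z = rng.normal(size=(1000, 5))
black_box_pred = (Z[:, 0] + 0.5 * Z[:, 1] > 0).astype(int)

# Distillation: fit the surrogate on the black box's outputs, not on ground truth.
surrogate = QuadraticDiscriminantAnalysis(store_covariance=True)
surrogate.fit(Z, black_box_pred)

fidelity = (surrogate.predict(Z) == black_box_pred).mean()
print(f"fidelity to the black box: {fidelity:.3f}")
print("class means of the surrogate:", np.round(surrogate.means_, 2))
```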
Bayesian Confidence Calibration for Epistemic Uncertainty Modelling
Modern neural networks have been found to be miscalibrated in terms of confidence calibration, i.e., their predicted confidence scores do not reflect the observed accuracy or precision. Recent work has introduced methods for post-hoc confidence…
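As a simple hedged illustration of what epistemic uncertainty in post-hoc calibration means (not the paper's Bayesian treatment): bootstrap resampling of the calibration set yields a distribution of temperature-scaled confidences instead of a single calibrated value. All names and the toy data are assumptions for illustration.

```python
# Epistemic uncertainty of calibration via bootstrapped temperature scaling (sketch).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

rng = np.random.default_rng(0)

# Toy calibration set: logits and binary correctness of an overconfident classifier.
logits = rng.normal(loc=2.0, scale=1.5, size=2000)
labels = (rng.uniform(size=2000) < expit(logits / 2.5)).astype(float)

def fit_temperature(lgt, lab):
    """Minimize the negative log-likelihood of temperature-scaled confidences."""
    def nll(t):
        p = np.clip(expit(lgt / t), 1e-7, 1 - 1e-7)
        return -np.mean(lab * np.log(p) + (1 - lab) * np.log(1 - p))
    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

# Bootstrap the calibration set to obtain a distribution over temperatures.
temps = []
for _ in range(50):
    idx = rng.integers(0, len(logits), size=len(logits))
    temps.append(fit_temperature(logits[idx], labels[idx]))

query_logit = 3.0
calibrated = expit(query_logit / np.array(temps))
print(f"calibrated confidence: {calibrated.mean():.3f} "
      f"(epistemic std {calibrated.std():.3f})")
```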
From Black-box to White-box: Examining Confidence Calibration under different Conditions
Confidence calibration is a major concern when applying artificial neural networks in safety-critical applications. Since most research in this area has focused on classification in the past, confidence calibration in the scope of object detection…
Multivariate Confidence Calibration for Object Detection
Unbiased confidence estimates of neural networks are crucial, especially for safety-critical applications. Many methods have been developed to calibrate biased confidence estimates. Though there is a variety of methods for classification, the field of object…