Glossary of Terms
Data point
A single instance, for example an input on which the model makes a prediction.
Feature
An attribute containing information for predicting the target variable.
Target
A value associated with a data point that is either provided or needs to be predicted from the features. This value can be categorical or continuous.
Regression
When the target values are continuous, the prediction algorithm is called a regressor. The prediction task is called regression.
Classification
When the target values are categorical, the prediction algorithm is called a classifier. The prediction task is called classification.
Predictive Uncertainty
The uncertainty in the model predictions.
Uncertainty Quantification (UQ)
The process of obtaining the uncertainty in the predictions of a model and/or in the model parameters.
Intrinsic UQ Algorithm
An algorithm that is explicitly designed to produce uncertainty estimates along with predictions.
Extrinsic UQ Algorithm
An algorithm for extracting post-hoc uncertainty from a trained model.
Data Uncertainty
Data uncertainty refers to the inherent variability in the data instances and targets.
Model Uncertainty
Multiple models (each model is characterized by a set of parameters) may be consistent with the observed data. The lack of knowledge about a single appropriate model gives rise to model uncertainty.
Prediction Interval
The uncertainty in predictions expressed as an interval in which the true value is expected to fall with a pre-specified probability. This is typically used for characterizing the uncertainty in regression tasks, but can also be used to characterize uncertainty in class probabilities.
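For example, under the assumption (made here purely for illustration) that a regressor outputs a predictive mean and standard deviation and that the predictive distribution is Gaussian, a two-sided interval at a chosen confidence level can be sketched as:

    import numpy as np
    from scipy.stats import norm

    def gaussian_prediction_interval(y_mean, y_std, confidence=0.95):
        # Two-sided interval under an assumed Gaussian predictive distribution.
        z = norm.ppf(0.5 + confidence / 2.0)  # ~1.96 for a 95% interval
        return y_mean - z * y_std, y_mean + z * y_std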
Prediction Interval Width
Width of the prediction interval.
Prediction Confidence
The uncertainty in predictions for a classification task expressed as a probability distribution over the target classes.
Mean Prediction Interval Width (MPIW)
The average width of the prediction intervals across several samples.
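As a minimal sketch (the array names lower and upper are hypothetical and assumed to hold the per-sample interval bounds):

    import numpy as np

    def mean_prediction_interval_width(lower, upper):
        # Average width of the per-sample prediction intervals.
        return float(np.mean(np.asarray(upper) - np.asarray(lower)))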
Uncertainty Calibration Evaluation
Checking agreement between the predicted uncertainty estimates and the relative frequency of the ground truth targets.
Prediction Interval Coverage Probability (PICP)
The fraction of samples for which the prediction interval covers the true value.
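A minimal sketch, assuming y_true holds the ground truth values and lower/upper the per-sample interval bounds (all names are illustrative):

    import numpy as np

    def prediction_interval_coverage_probability(y_true, lower, upper):
        # Fraction of samples whose true value lies inside [lower, upper].
        y_true = np.asarray(y_true)
        covered = (y_true >= np.asarray(lower)) & (y_true <= np.asarray(upper))
        return float(np.mean(covered))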
Uncertainty Re-calibration
Procedure to improve the agreement between the predicted uncertainty estimates and the relative frequency of the ground truth targets.
Expected Calibration Error (ECE)
ECE is a metric for measuring the calibration of the uncertainties produced by a classifier. It is defined as the expected absolute difference between the classifier's accuracy and its confidence.
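As a sketch of the commonly used binned estimate (the bin count and argument names are assumptions for illustration), the predicted confidences are grouped into bins, and the absolute gap between average confidence and accuracy in each bin is weighted by the fraction of samples falling in that bin:

    import numpy as np

    def expected_calibration_error(confidences, predictions, labels, n_bins=10):
        # Binned ECE: size-weighted average of |accuracy - confidence| per bin.
        confidences = np.asarray(confidences)
        predictions = np.asarray(predictions)
        labels = np.asarray(labels)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for low, high in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > low) & (confidences <= high)
            if in_bin.any():
                accuracy = np.mean(predictions[in_bin] == labels[in_bin])
                confidence = np.mean(confidences[in_bin])
                ece += in_bin.mean() * abs(accuracy - confidence)
        return float(ece)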