Introduction to Uncertainty Quantification 360


There are inherent uncertainties in the predictions of machine learning (ML) models. Knowing how uncertain a prediction is influences how people act on it. For example, you may have encountered a weather-forecasting model telling you that there will be no rain tomorrow with 60% confidence. Given the remaining 40% chance of rain, you may still want to bring an umbrella when you go to work.

There is more to uncertainty in ML than the example above. Uncertainty is an important area of ML research, which has produced an abundance of uncertainty quantification (UQ) algorithms, metrics, and ways to communicate uncertainty to end users. Researchers at IBM Research have been actively working on these topics to bring much-needed transparency to ML models and engender trust in AI; you can read more in our papers.

We present Uncertainty Quantification 360 (UQ360), an extensive open-source toolkit with a Python package that gives data science practitioners and developers access to state-of-the-art algorithms. UQ360 streamlines the process of estimating, evaluating, improving, and communicating the uncertainty of machine learning models, making these steps common practice for AI transparency.
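To make "estimating uncertainty" concrete, here is a minimal sketch of the kind of output a UQ algorithm produces: a point prediction together with an uncertainty estimate. This example uses a plain NumPy bootstrap ensemble rather than the UQ360 API, so the technique and all names here are illustrative assumptions, not the toolkit's own code.

```python
import numpy as np

# Illustrative sketch (NOT the UQ360 API): quantify predictive
# uncertainty with a bootstrap ensemble of simple linear models.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 50)
y = 2.0 * X + rng.normal(0.0, 0.1, size=X.shape)  # noisy synthetic data

preds = []
for _ in range(100):
    idx = rng.integers(0, len(X), size=len(X))     # bootstrap resample
    coeffs = np.polyfit(X[idx], y[idx], deg=1)     # refit a linear model
    preds.append(np.polyval(coeffs, X))
preds = np.array(preds)

mean = preds.mean(axis=0)                 # point prediction
std = preds.std(axis=0)                   # uncertainty estimate
lower, upper = mean - 2 * std, mean + 2 * std  # approx. 95% interval
```

The spread of the ensemble's predictions serves as the uncertainty estimate; UQ360 packages many more principled algorithms for producing, evaluating, and improving such estimates.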

Figure: Flowchart for using UQ360.

In this overview, we introduce some core concepts in uncertainty quantification. You can use the navigation bar above to jump to the two questions we will answer:

  • What capabilities are included in UQ360 and why?
  • How can you use the uncertainty information produced using UQ360?

An interactive demo lets you further explore these concepts and the capabilities offered by UQ360 by walking through a use case that performs several UQ tasks. The tutorials and notebooks offer a deeper, data scientist-oriented introduction. A complete API reference is also available.