Visualizing Uncertainty in Predictive Models

Published: 2014

Predictive models are used in many fields to characterize relationships between the attributes of an instance and its classification. Although these models can provide valuable support to decision-making, they can be challenging to understand and evaluate: they produce predicted classifications but generally give no indication of confidence in those predictions. A typical quality measure for a predictive model is the percentage of predictions it makes correctly. Such measures give some insight into how often the model is correct, but offer little help in understanding the conditions under which it performs well (or poorly). We present a framework for improving understanding of predictive models that draws on methods from both machine learning and data visualization, and we demonstrate this framework on models that use attributes of individuals in a census data set to predict other attributes of those same individuals.
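The contrast the abstract draws, between an aggregate accuracy score and per-prediction confidence, can be illustrated with a small sketch. This is not the paper's framework; it is a minimal example, assuming scikit-learn and the OpenML "adult" census data set (which requires a network connection to fetch), showing how a single accuracy number hides the variation in confidence across individual predictions that the paper's visualizations are designed to expose.

```python
# Minimal illustrative sketch (not the authors' method): contrast an aggregate
# accuracy score with per-instance prediction confidence on census-style data.
# Assumes scikit-learn is installed and the OpenML "adult" data set is reachable.
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# UCI Adult census data: predict whether income exceeds $50K from attributes.
X, y = fetch_openml("adult", version=2, as_frame=True, return_X_y=True)
X = X.select_dtypes(include="number")  # keep numeric attributes for simplicity

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Aggregate quality measure: fraction of test predictions that are correct.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Per-instance confidence: probability assigned to the predicted class.
# How these confidences vary with attribute values is the kind of question
# the visualization framework in the paper is meant to help answer.
proba = model.predict_proba(X_test)
confidence = proba.max(axis=1)
print("mean confidence:", confidence.mean())
print("lowest confidence:", confidence.min())
```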

Download paper here

Recommended citation: Penny Rheingans, Marie desJardins, Wallace Brown, Alex Morrow, Doug Stull, and Kevin Winner. Visualizing Uncertainty in Predictive Models, pages 61–69. Springer London, London, 2014. https://doi.org/10.1007/978-1-4471-6497-5_6