Confidence Elicitation & Presentation Tool

Confidence is increasingly an integral part of uncertainty reporting. Bodies from the IPCC to US defense agencies report using confidence, often to qualify assessments of probability or likelihood. Despite ongoing work on the theoretical foundations, interpretations and consequences of confidence, practical challenges remain. This tool is meant to help with two such challenges: the elicitation of confidence reports, and the accessible presentation of confidence evaluations and their implications.

Elicitation

Confidence reporting poses the question of the language used to formulate and communicate confidence. The verbal phrases of everyday language—low, medium, high confidence—are the most widespread, but they are not without problems. Interpretation and use of such terms tend to depend heavily on context, and in some cases even on background beliefs and culture. Propagating uncertainty—drawing conclusions about confidence in some issues from confidence evaluations concerning others—risks being muddied and more prone to fallacies when based on verbal judgements.

Similar issues have long plagued everyday talk about likelihoods or chances, which relies on phrases such as likely or unlikely. They have often been cited by those championing the more rigorous, less ambiguous language of numerical probability values.

Besides allowing confidence to be reported using verbal phrases, this tool introduces a specifically designed, calibrated numerical confidence language for reporting, explained here. (For its theoretical foundations, see here and here.) Several formats are offered for evaluating confidence in this language, affording varying levels of precision and offering different amounts of support. They are set out here.

Presentation

Confidence presentation involves not only presenting, in an accessible way, the confidence evaluations reported, but also drawing some of their consequences for related issues (e.g. for other ranges of the parameter under examination). In particular, inconsistencies in the confidence evaluations should be detected at this point: it is not possible to plot an inconsistent set of evaluations.

For likelihood judgements, the probability calculus regulates the conclusions that can be drawn from evaluations, as well as the presentation formats: for instance as probability distributions over variables of interest.
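To illustrate how the probability calculus regulates such conclusions, here is a minimal sketch (the numerical values are made up for illustration, not drawn from the tool):

```python
# A probability judged for one range of a parameter determines the
# probability of the complementary range, since complements sum to 1:
p_below = 0.7            # judged probability the parameter lies below a threshold
p_above = 1.0 - p_below  # forced by the calculus

# Probabilities of disjoint ranges simply add:
p_ranges = [0.2, 0.5]    # judged probabilities for two disjoint ranges
p_union = sum(p_ranges)  # probability the parameter lies in either range
```

These constraints are what make it possible to present a set of likelihood judgements as a single probability distribution over the variable of interest.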

Confidence evaluations concerning likelihood judgements demand a generalisation: for each confidence level, confidence evaluations generate a set of probability distributions. (See here, here or here for some theoretical background.) These sets capture the consequences of the confidence evaluations: in particular, if empty, they reveal that the set of evaluations reported is inconsistent. They generate upper and lower probability bounds at each confidence level, which can be plotted.
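The step from a set of distributions to plottable bounds can be sketched as follows. This is an illustrative reconstruction, not the tool's implementation: distributions are represented as lists of probabilities over ranges of the parameter, and the example distributions are made up.

```python
def bounds(distributions):
    """Pointwise lower and upper probability bounds over parameter ranges,
    taken across a set of candidate distributions.
    An empty set signals that the reported evaluations are inconsistent."""
    if not distributions:
        return None  # no distribution satisfies all evaluations
    n = len(distributions[0])
    lower = [min(d[i] for d in distributions) for i in range(n)]
    upper = [max(d[i] for d in distributions) for i in range(n)]
    return lower, upper

# Two hypothetical distributions over three ranges of the parameter:
dists = [[0.2, 0.5, 0.3], [0.1, 0.6, 0.3]]
lo, hi = bounds(dists)
# lo and hi are the envelopes that would be plotted at this confidence level
```

In the tool, such envelopes are computed for each selected confidence level, giving nested bands of upper and lower probabilities.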

The tool plots the upper and lower probability bounds generated by the confidence evaluations, for selected confidence levels. (See here for details of the options available.) It thus affords a graphical representation of the confidence reported. Moreover, it checks for and flags inconsistencies. An option allowing the user to “suspend” a given confidence evaluation in the calculation of these bounds allows exploration of the role of the different evaluations in driving consequences, or inconsistencies.
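The "suspend" option can be illustrated with a simplified consistency check. The representation below (each evaluation as an interval constraint on the probability of one range, with feasibility tested via the interval sums) is an assumption for the sketch, not the tool's actual method; the names and example values are hypothetical.

```python
def consistent(constraints, n_bins):
    """Do the interval constraints [lo, hi] on each range's probability
    admit at least one probability distribution?
    constraints: dict mapping range index -> (lo, hi);
    unconstrained ranges default to (0, 1)."""
    lows = [constraints.get(i, (0.0, 1.0))[0] for i in range(n_bins)]
    highs = [constraints.get(i, (0.0, 1.0))[1] for i in range(n_bins)]
    if any(l > h for l, h in zip(lows, highs)):
        return False
    # A distribution exists iff the lows can be topped up to total 1
    # without exceeding the highs:
    return sum(lows) <= 1.0 <= sum(highs)

def suspend(constraints, range_to_drop):
    """Drop one evaluation to explore its role in driving an inconsistency."""
    return {r: c for r, c in constraints.items() if r != range_to_drop}

# Jointly infeasible: the two lower bounds alone sum to more than 1.
evals = {0: (0.6, 0.8), 1: (0.5, 0.7)}
consistent(evals, 2)              # inconsistent as reported
consistent(suspend(evals, 1), 2)  # consistent once one evaluation is suspended
```

Suspending evaluations one at a time in this way reveals which of them is responsible for an inconsistency, or how much each one tightens the plotted bounds.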

Feedback

This tool has been developed by researchers working on uncertainty reporting, supported solely by national research funding. The site has been made available and is maintained purely for public use.