Monday, 11 September 2017

One of the greatest impediments to the use of probabilistic reasoning in legal arguments is the difficulty of agreeing on an appropriate prior probability that the defendant is guilty. The 'innocent until proven guilty' assumption technically means a prior probability of 0 - a figure that (by Bayesian reasoning) can never be overturned no matter how much evidence follows. Some have suggested a prior of 1/N, where N is the number of people in the world; but this probability is clearly too low, since N includes many people who could not physically have committed the crime. On the other hand, the often-suggested prior of 0.5 is too high, as it stacks the odds too heavily against the defendant.
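To see why a prior of exactly 0 can never be overturned, here is a minimal sketch (my own illustration, not from the post) of Bayes' rule applied to a prior of 0 versus a tiny but non-zero prior; the likelihood ratio value is chosen arbitrarily:

```python
def posterior(prior, lr):
    """Posterior P(guilty | evidence) from a prior probability and a
    likelihood ratio lr = P(evidence | guilty) / P(evidence | innocent)."""
    num = prior * lr
    return num / (num + (1 - prior))

# With prior 0, even overwhelming evidence (LR = 10^12) changes nothing:
print(posterior(0.0, 1e12))             # 0.0
# With a tiny non-zero prior (roughly 1 in 7 billion), the same evidence
# produces a posterior close to 1:
print(round(posterior(1 / 7e9, 1e12), 3))
```

The multiplication by the prior in the numerator is what makes a zero prior absorbing: no finite likelihood ratio can move it.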

New work - presented at the 2017 International Conference on Artificial Intelligence and the Law (ICAIL 2017) - shows that, in a large class of cases, it is possible to arrive at a realistic prior that is also as consistent as possible with the legal notion of 'innocent until proven guilty'. The approach is based on first identifying the smallest time window and location, extending out from the actual crime scene, within which the defendant was definitely present, and then estimating the number of people - other than the suspect - who were also within that time/area. If there were n people in total, then before any other evidence is considered each person, including the suspect, has an equal prior probability of 1/n of having carried out the crime.

The
method applies to cases where we assume a crime has definitely taken
place and that it was committed by one person against one other person
(e.g. murder, assault, robbery). The work considers both the practical
and legal implications of the approach and demonstrates how the prior
probability is naturally incorporated into a generic Bayesian network
model that allows us to integrate other evidence about the case.
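As a rough sketch of how such a prior combines with other evidence (my own illustration using the odds form of Bayes' rule, not the paper's Bayesian network model; the numbers are made up):

```python
def opportunity_prior(n):
    """Prior that the suspect is guilty, given that n people
    (including the suspect) had the opportunity to commit the crime."""
    return 1.0 / n

def update(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence with the given
    likelihood ratio P(evidence | guilty) / P(evidence | innocent)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

p = opportunity_prior(100)   # say 100 people had the opportunity -> 0.01
p = update(p, 1000)          # hypothetical forensic match with LR = 1000
print(round(p, 3))           # roughly 0.91
```

In a full Bayesian network the evidence items and their dependencies are modelled explicitly, but the arithmetic above captures the basic role the opportunity prior plays as the starting point for updating.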

Full details:

Fenton, N. E., Lagnado, D. A., Dahlman, C., & Neil, M. (2017). "The
Opportunity Prior: A Simple and Practical Solution to the Prior
Probability Problem for Legal Cases". In International Conference on
Artificial Intelligence and the Law (ICAIL 2017). Published by ACM.
Pre-publication draft.

Thursday, 7 September 2017

From July to December 2016 the Isaac Newton Institute Programme on Probability and Statistics in Forensic Science in Cambridge hosted many of the world's leading figures from law, statistics and forensics: a mixture of academics (including mathematicians and legal scholars), forensic practitioners, and practising lawyers (including judges and eminent QCs). Videos of many of the seminars and presentations from the Programme are available online.

A key output of the Programme has now been published. It is a very simple set of twelve guiding principles and recommendations for dealing with quantitative evidence in criminal law, for the use of statisticians, forensic scientists and legal professionals. The layout consists of one principle per page.

Martin Neil

About Me

Norman's experience in risk assessment covers application domains such as legal reasoning (he has been an expert witness in major criminal and civil cases), software project risk, medical decision-making, vehicle reliability, football prediction, transport systems, and financial services. Norman has published over 130 articles and 5 books on these subjects.