In response to the 2009 US National Research Council (NRC) report on strengthening forensic science in the United States, the 2010 England & Wales Court of Appeal ruling in R v T, and the 2012 US National Institute of Standards and Technology and National Institute of Justice (NIST/NIJ) report on latent fingerprint analysis, there is increasing pressure across all branches of forensic science to adopt a paradigm consisting of the following three elements:
1. a logically correct framework for the evaluation and interpretation of forensic evidence
2. approaches based on relevant data, quantitative measurements, and statistical models
3. empirical testing of the degree of validity and reliability of forensic-evaluation systems under conditions reflecting those of the case under investigation
This lecture provides an introduction to the likelihood-ratio framework as the first element in this paradigm, and begins to touch on the second element. The assigned reading also includes some discussion of the third element. There is a great deal of misunderstanding and confusion about the likelihood-ratio framework among lawyers, judges, and forensic scientists. The likelihood-ratio framework is about logic, not mathematics or databases, and it makes explicit the questions which must logically be addressed by the forensic scientist and considered by the trier of fact in assessing the work of the forensic scientist. This lecture explains the logic of the likelihood-ratio framework in a way that is accessible and requires no prior knowledge of the framework. It uses intuitive examples and student-participation exercises to gradually build a fuller understanding of the likelihood-ratio framework. The lecture also includes discussion of common logical fallacies.
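The logic described above can be sketched numerically. In the odds form of Bayes' theorem, the likelihood ratio reported by the forensic scientist multiplies the trier of fact's prior odds to give posterior odds. The following is a minimal illustrative sketch, not material from the lecture; all numbers and function names are hypothetical:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' theorem: posterior odds = likelihood ratio x prior odds.

    The likelihood ratio answers the forensic scientist's question
    (how much more probable is the evidence under the same-source
    hypothesis than under the different-source hypothesis); the prior
    and posterior odds belong to the trier of fact.
    """
    return likelihood_ratio * prior_odds

# Hypothetical values: the scientist reports LR = 1000, meaning the
# evidence is 1000 times more probable if the two samples share a
# source than if they have different sources.
lr = 1000.0

# The trier of fact holds prior odds of 1:100 in favour of same source.
prior = 1 / 100

print(posterior_odds(prior, lr))  # 10.0 -> posterior odds of 10:1
```

Note that the scientist's output (the likelihood ratio) and the trier of fact's conclusion (the posterior odds) are distinct quantities; conflating them is the transposed-conditional ("prosecutor's") fallacy the lecture warns against.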