Evidence Management

I have been providing electronic courtroom presentation and database services since the field was first developed by Litigation Sciences, Inc. Introduced in 1991, the “Catalyst 2000” presentation system was first used in the Ralston Purina case. The equipment was primitive by today’s standards: a 286 computer with a whopping 8 MB of RAM! The PC ran a hand-coded database that made the exhibits accessible by bar code. Video was handled by a laser disc player connected to the system with a large wiring harness that fed the audio, video, text, and charts to the monitors and speakers.

Since that time, I have worked extensively with Trial Director, Trial Pro, Visionary and Summation to organize and present evidence of every possible kind in over two hundred electronic trials, mediations and arbitrations.

2010 – State of the Art: Predictive Coding
Posted by: Craig Carpenter on March 10, 2010

A great deal of discussion has taken place recently about a new form of document review that is taking the eDiscovery industry by storm: Predictive Coding. The reasons for this surge in interest are several – as discussed below – but the timing is not coincidental, as two major trends are colliding:

The economics of traditional, linear review have become unsustainable.

The early returns from those employing Predictive Coding are nothing short of phenomenal and have given such early adopters a significant competitive advantage.

Given the nascent stage of the Predictive Coding world, we thought the timing was right for a quick primer on what Predictive Coding is, what it isn’t, how it came to be and the problems it seeks to address.

Linear document review – where individual reviewers manually review and “code” documents ordered by date, keyword, custodian or some other simple fashion – has been the accepted standard within the legal industry for decades. This was not a big deal when ESI volumes were measured in megabytes or even a few gigabytes; the explosion of data volumes over the past decade, however, has exposed traditional linear review as an exceedingly inefficient, costly and inconsistent approach to document review (which accounts for 60-70% of the costs of eDiscovery). There is simply so much data to be coded that the old model has become too slow and too expensive to keep up. [Back in the early ’90s, we saw huge rooms full of coders, sometimes as many as one hundred, poring over documents and hand-keying every single document; not the greatest jobs in the world either.]

The Problems with Linear Document Review:

False positives – documents that are irrelevant, unresponsive, or both – make up a large share of the review set, yet they are still reviewed by an attorney, which racks up huge amounts of unnecessary cost.

Documents are typically not organized by topic, which forces reviewers to jump from topic to topic, slowing down the process and leading to inaccurate results.

Documents aren’t prioritized in any way (i.e. from most important to least important), so reviewers can miss key documents.

Finally, because individual reviewing attorneys typically know little about a case’s substance, multiple “passes” must be made over the same documents, each based on the substance of a particular review question (i.e. a first pass for relevance, a second for responsiveness, a third for relationship to a substantive category, etc.). Add it all up and one is left with a woefully outdated and extremely expensive approach that is rapidly falling out of favor with clients and outside counsel.
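
To make the contrast concrete, below is a minimal, hypothetical sketch of the general idea behind Predictive Coding tools: a text classifier is trained on a small seed set that attorneys have already coded, and the remaining documents are then ranked by predicted responsiveness so reviewers start with the most likely relevant material instead of working in simple date or custodian order. This is only an illustration of the underlying technique (here using scikit-learn’s TfidfVectorizer and LogisticRegression); actual Predictive Coding products use their own models and workflows, and all of the document text, labels and names below are made up for the example.

```python
# Hypothetical predictive-coding-style sketch: train a classifier on an
# attorney-coded seed set, then rank unreviewed documents by predicted
# responsiveness so the most likely relevant material is reviewed first.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up seed set: document text plus an attorney's responsive call.
seed_docs = [
    "Q3 pricing agreement between the parties ...",   # responsive
    "Lunch menu for the holiday party ...",           # non-responsive
    "Draft amendment to the supply contract ...",     # responsive
    "IT notice about password resets ...",            # non-responsive
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = non-responsive

# Unreviewed population to be prioritized (also made up).
unreviewed_docs = [
    "Email thread negotiating contract penalty clauses ...",
    "Company newsletter, March edition ...",
]

# Vectorize the text and fit a simple classifier on the seed set.
vectorizer = TfidfVectorizer(stop_words="english")
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression()
model.fit(X_seed, seed_labels)

# Score the unreviewed documents and sort from most to least likely responsive.
X_unreviewed = vectorizer.transform(unreviewed_docs)
scores = model.predict_proba(X_unreviewed)[:, 1]
ranked = sorted(zip(scores, unreviewed_docs), reverse=True)

for score, doc in ranked:
    print(f"{score:.2f}  {doc[:60]}")
```

Even in this toy form, the sketch shows why the approach collapses the problems listed above: the ranking puts likely key documents first, groups similar material together, and a single scored pass can replace several manual passes over the same population.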