
The proliferation of data is a common refrain and problem for many organizations today. That growth bears directly on eDiscovery: as organizations struggle to keep up with data repositories shared among multiple law firms and discovery vendors, they look for a simpler way to manage their data and leverage prior work product.

Take, for example, the following common scenario. A products liability case was just filed in federal court in Florida, so to prepare for early case assessment and eDiscovery, you ship your collected data to your local counsel and its data vendor. Then another matter pops up in New York on similar issues, so you copy the data again for that firm and its cloud-based discovery repository. This pattern continues, and eventually you have 10 cases around the country, each with its own copy of the data. Unfortunately, you have also multiplied the risks and costs of eDiscovery:

Data transfer: Every time you send data to a new vendor, you create a new access point that, if breached, could cause additional litigation and financial and reputational losses.

Incompatible formats: When multiple firms and vendors are involved, data may end up in irreconcilable file types unless each agrees to use the same eDiscovery software and production format.

Inconsistent coding decisions: Different review teams may make different coding calls on documents containing trade secrets or protected by the attorney-client privilege or work-product doctrine, which can expose private or other sensitive data.

Rework: Nonresponsive documents, or relevant but nonprivileged ones, may be reviewed numerous times across multiple cases, needlessly raising the cost of discovery.

No insight: One-and-done reviews prevent the transfer of learnings from one case to the next, so organizations cannot see their data in context or spot troubling trends.

What if you could eliminate the need for all of that rework—and the associated costs—and reduce risk at the same time by consolidating all of your data collections in a single hub?

Now you can. A new breed of next-generation analytics can consolidate information from multiple sources, including disparate document review platforms; analyze it using multiple individualized algorithms; and deliver diagnostic and predictive analytics that reveal insights and trends across cases. These tools, which include machine learning, statistical learning, text analytics, natural language processing, sentiment analysis, audio analytics, and anomaly detection, deliver unparalleled insight into an organization's entire data history.

With these analytics, clients can staff projects more appropriately. For example, one large law firm found that some documents it planned to send to outside counsel for review had already been reviewed an average of seven times, with some reviewed 30 or more times. Eliminating the need for further review slashed the projected review cost from $2.8 million to $1.1 million. Clients can also see their prior work product in context; the analytics generated can serve as an early warning system that flags documents whose language indicates potential liability.
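The duplicate-review problem described above can be illustrated with a simple sketch: a consolidated hub can fingerprint each document by a content hash and look it up against a log of prior review work before sending it out for another pass. The data structures and function names below are assumptions for illustration only, not the workings of any particular eDiscovery product.

```python
import hashlib

def content_hash(text: str) -> str:
    """Fingerprint a document's text so identical copies match across matters."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def find_prior_reviews(new_docs: dict, prior_review_log: dict) -> dict:
    """For each new document, report how many times an identical document
    was already coded in earlier cases (0 means it still needs review)."""
    counts = {}
    for doc_id, text in new_docs.items():
        counts[doc_id] = prior_review_log.get(content_hash(text), 0)
    return counts

# Hypothetical example: two of three "new" documents were reviewed before.
prior_log = {
    content_hash("quarterly sales memo"): 7,
    content_hash("design spec v2"): 30,
}
new_batch = {
    "DOC-001": "quarterly sales memo",
    "DOC-002": "design spec v2",
    "DOC-003": "board minutes",
}
print(find_prior_reviews(new_batch, prior_log))
# → {'DOC-001': 7, 'DOC-002': 30, 'DOC-003': 0}
```

In this sketch, only DOC-003 would be queued for fresh review; the other two documents' prior coding could be reused, which is the mechanism behind the cost reduction described above.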
