Error of the Day & Maintaining Integrity of Algorithmic Results

Earlier this week, the folks at The Algorithm asked "what is AI, exactly?" The answer is reproduced below.

The question may seem basic, but the answer is kind of complicated.

In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.

As it currently stands, the vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning. These algorithms use statistics to find patterns in massive amounts of data. They then use those patterns to make predictions on things like what shows you might like on Netflix, what you're saying when you speak to Alexa, or whether you have cancer based on your MRI.

Machine learning, and its subset deep learning (basically machine learning on steroids), is incredibly powerful. It is the basis of many major breakthroughs, including facial recognition, hyper-realistic photo and voice synthesis, and AlphaGo, the program that beat the best human player in the complex game of Go. But it is also just a tiny fraction of what AI could be.

The grand idea is to develop something resembling human intelligence, which is often referred to as "artificial general intelligence," or "AGI." Some experts believe that machine learning and deep learning will eventually get us to AGI with enough data, but most would agree there are big missing pieces and it's still a long way off. AI may have mastered Go, but in other ways it is still much dumber than a toddler.

In that sense, AI is also aspirational, and its definition is constantly evolving. What would have been considered AI in the past may not be considered AI today. Because of this, the boundaries of AI can get really confusing, and the term often gets mangled to include any kind of algorithm or computer program. We can thank Silicon Valley for constantly inflating the capabilities of AI for its own convenience.
It's good to be reminded of this definition as we contend with the latest releases of the legal research databases, which continuously tweak their underlying algorithms -- the latest being Westlaw Edge.

With Westlaw Edge comes a revised "WestSearch Plus."

Introducing the next generation of legal search. Get superior predictive research suggestions as you start typing your legal query in the global search bar.

WestSearch Plus applies state-of-the-art AI technologies to help you quickly address legal questions for thousands of legal topics without needing to drill into a results list.
We're at a point when the Google Generation is already predisposed not to drill into a results list, and now the databases are actively encouraging users to rely blindly on the top result.

From October 30, 2018:
Error of the Day: A Lexis typo (possibly a scanning error) appears in Excessiveness of Bail in State Cases, 7 A.L.R.6th 487. The following group of letters is used six times throughout the document: CocainesepBail. A quick look at the Westlaw version shows that it should be Cocaine – Bail.
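Errors like "CocainesepBail" -- where a separator character was apparently misread during scanning or conversion -- follow a recognizable pattern: an unexpected capital letter in the middle of a word. As a rough illustration (not any tool the databases actually use), a simple script could flag such run-together tokens for human review:

```python
import re

def flag_run_together_tokens(text):
    """Flag words containing an internal capital letter, a pattern typical
    of OCR/conversion artifacts where a separator (e.g., a dash) between
    two words was dropped or misread.

    Known limitation: legitimate names like "McDonald" or "LexisNexis"
    would also be flagged, so results need human review or a whitelist.
    """
    return re.findall(r"\b[A-Za-z]*[a-z][A-Z][A-Za-z]*\b", text)

sample = "The annotation discusses CocainesepBail six times."
print(flag_run_together_tokens(sample))  # ['CocainesepBail']
```

This is only a sketch of the idea; real quality control over millions of scanned documents would need far more sophisticated checks, which is precisely why human-spotted errors like these keep surfacing.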

From November 5, 2018:

In the case People v. Kindell, 148 AD3d 456 (1st Dept 2017), Susan Axelrod is listed as counsel for both the Appellant and the Respondent. The official version, the print, does not list the attorneys.


I confirmed with ADA Axelrod that she did not represent the defendant and opposing counsel was not someone with the same name. I also checked the defendant’s brief and it lists Ms. Moser as counsel.

While these errors may seem minute individually, their consequences are greater in the aggregate.
My own mentor, a law librarian who had been in the profession for 40 years, kept a print file of the errors that he found in the databases while performing legal research. The file was overflowing by the time I saw it roughly 3 years before his retirement.

Because an algorithm's results are only as good as the underlying data, as we move toward an algorithmic society that relies heavily on algorithmic decision making, these errors could have consequences on the development of the law.

The current version of Standard 601(3)(a) was developed during the Comprehensive Review as a method of involving a law library in the process of strategic planning required of a law school. It was envisioned that the planning and assessment taking place for a law school (under what was then Standard 203) would incorporate the work done by the library under this new Standard. To ensure that incorporation, it was decided that a written assessment should be completed by the library. However, when the requirement for strategic planning for a law school was removed during a later phase of the Comprehensive Review, no change was made to the new Standard 601. As a result, the library community has been left…

Law libraries are in the information business. To act as superior guides to this information, we must also be in the people business. We must be concerned with the people who seek our information. And we must be concerned with the people who guide those seekers to the information (i.e., our staff).

Contrary to popular belief, it's not easy to be a staff person in the rigid hierarchy of an academic law library, particularly at a time when law libraries are facing increased budget pressures that require staff to do much more with much less. This is especially challenging for longtime staff who have seen their jobs change dramatically since they were hired. Many of these folks were not formally trained in librarianship, and they may be resistant to the flexibility needed in today's law library.

Given these challenges, how do we motivate our staff to be the very best guides to our information?

To that end, there was an enlightening program at the AALL Annual Conference in 2013 t…

As we further consider how to train future lawyers for the Algorithmic Society and develop the quality of thinking, listening, relating, collaborating, and learning that will define smartness in this new age, law schools must reach beyond their storied walls.

In law, we must go beyond talking about algorithmic implications to actually helping shape algorithmic performance. We need lawyers and programmers to work together to create a sound "machine learning corpus." There's potential for an entirely new subfield to emerge if given the right support. With many law schools attached to major research universities, they are a natural place to start this cross-pollination and interdisciplinary work.

This type of interdisciplinary work would not only help satisfy the career aspirations of advanced-degree seekers but also the wishes of many college presidents, deans, and faculty members who see an interdisciplinary professional education as a path to greater relevance, higher enrollments,…