More and more academic libraries have invested in discovery layers, the “Google-like” search tools that return results from different services and providers by querying a single central index. The move to discovery has been driven by the ascendancy of Google as well as libraries' increasing focus on user experience. Unlike the vendor-specific search tools or federated searches of the previous decade, discovery presents a simplified picture of the library research process. It has the familiar single search box, and the results are not broken out by provider or format but are shown together in one list, aping the Google model for search results.

The potential for bias is particularly troublesome for library discovery layers because libraries are seen as highly reputable institutions whose tools users inherently trust.

As Reidsma notes, that our perception of search tools’ trustworthiness should be so uncritical has been a boon to the industry. Over the past few decades, librarianship has had to grapple with the perception that computers are better at finding relevant information than people. On the technical services side of the profession, we have responded to this perception by pushing for more integration with our various search tools. Over the past decade, discovery tools, which search a unified index of providers from a single starting point, have changed the way that many library users do research. As our discovery tools have become more complex, much of the discussion and critique has centered on the simplification of the search process, the effectiveness of user interface elements, and the integration with other library systems and services. [He] ha[s] found no substantive evaluation of the search algorithms of commercial library discovery platforms in the literature. The task of determining how well our library discovery tools present good results is thus stymied by user perceptions of what the tools are capable of, the opacity of our search engine providers' business models, and the fact that underlying everything is a series of instructions written by people with a particular point of view.

Reidsma put a discovery layer to the test through a series of searches designed to ferret out bias. Ultimately, Reidsma found that the discovery layer's algorithm showed bias in the following areas:

Women

The LGBT Community

Islam

Race

Mental Illness

The biased results appear in areas already burdened by stereotypes and social bias. What we find, time and again, is that algorithms are only as unbiased as the programmers who code them.

Why is this an issue in the library world? Since the goal of the Topic Explorer is to identify the underlying topic behind a user's search, incorrect or biased results can have a great impact on a user's perception of a topic. By showing results that exploit stereotypes or bias, the Topic Explorer is telling the user, “this is what you are looking for.” The purpose of [Reidsma's] examination was to bring these anomalies to light and start a discussion within the library community about how to improve our search tools for everyone.
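The mechanism behind this kind of bias can be illustrated with a deliberately tiny, hypothetical sketch (this is not the actual Topic Explorer algorithm, and the corpus, function name, and scoring are all invented for illustration): a naive topic suggester that pairs a query with whatever word co-occurs with it most often in the indexed text. If the indexed material over-represents a stereotype, the "suggested topic" simply repeats that stereotype back to the user.

```python
from collections import Counter

# Hypothetical miniature corpus: the indexed titles skew toward one framing.
corpus = [
    "mental illness treatment options",
    "mental illness violence myth debunked",
    "mental illness violence crime link",
]

def suggest(query: str, docs: list[str]) -> str:
    """Return the word that most often co-occurs with the query terms.

    A crude stand-in for a topic-suggestion algorithm: whatever the
    corpus says most often about the query becomes the 'topic'.
    """
    query_words = set(query.split())
    counts = Counter()
    for doc in docs:
        words = doc.split()
        # Only count documents that contain every query term.
        if query_words <= set(words):
            counts.update(w for w in words if w not in query_words)
    return counts.most_common(1)[0][0]

print(suggest("mental illness", corpus))  # -> "violence"
```

Because "violence" co-occurs with the query twice and every other word only once, the suggester presents a stereotyped association as the user's topic. Nothing in the code is malicious; the bias enters entirely through what was indexed and how the ranking was defined.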

The current version of Standard 601(3)(a) was developed during the Comprehensive Review as a method of involving a law library in the process of strategic planning required of a law school. It was envisioned that the planning and assessment taking place for a law school (under what was then Standard 203) would incorporate the work done by the library under this new Standard. To ensure that incorporation, it was decided that a written assessment should be completed by the library. However, when the requirement for strategic planning for a law school was removed during a later phase of the Comprehensive Review, no change was made to the new Standard 601. As a result, the library community has been left…

Law libraries are in the information business. To act as superior guides to this information, we must also be in the people business. We must be concerned with the people who seek our information. And we must be concerned with the people who guide those seekers to the information (i.e., our staff).

Contrary to popular belief, it's not easy to be a staff person in the rigid hierarchy of an academic law library, particularly at a time when law libraries face increased budget pressures that require staff to do much more with much less. This is especially challenging for longtime staff who have seen their jobs change dramatically since they were hired. Many of these folks were not formally trained in librarianship, and they may be resistant to the flexibility today's law library demands.

Given these challenges, how do we motivate our staff to be the very best guides to our information?

To that end, there was an enlightening program at the AALL Annual Conference in 2013 t…

As we further consider how to train future lawyers for the Algorithmic Society and develop the quality of thinking, listening, relating, collaborating, and learning that will define smartness in this new age, law schools must reach beyond their storied walls.

In law, we must go beyond talking about algorithmic implications to actually helping shape algorithmic performance. We need lawyers and programmers to work together to create a sound "machine learning corpus." There's potential for an entirely new subfield to emerge if given the right support. Because many law schools are attached to major research universities, they are well positioned to start this cross-pollination and interdisciplinary work.

This type of interdisciplinary work would satisfy not only the career aspirations of advanced-degree seekers but also the wishes of many college presidents, deans, and faculty members who see an interdisciplinary professional education as a path to greater relevance, higher enrollments,…