The sixteenth-century Villa La Rotonda is the masterwork of the architect Palladio, who changed the way we think about architecture. He used all of his knowledge of architecture to design this space for his client. With his design, he reached back to ancient Rome and embraced the way the Romans built their buildings to embody the virtues and ideals important to their society. The proportions, the scale, the materials, all related in some way to the values that Palladio thought were important. And just in case you hadn't taken a course in architecture and didn't know, he covered the Villa with statuary embodying particular virtues and ideals. He had the goddess of wisdom, the goddess of justice, and others. So when you're out for a stroll, not only does the building embody these virtues, but you see them standing before you.

For Palladio, the idea of embedding values into his work was so integral to his approach to architecture that even hundreds of years after his death, his influential book, The Four Books of Architecture, was still printed with a frontispiece depicting the maidens of architecture bowing before the Queen of Virtue. The image was to give you an idea of what the book was about, in case you thought it would just be about how to design a functional building. Instead, it was a book about values, about ethics. Incidentally, it is also a book about architecture.

Palladio was very intentional about explicitly building values into his buildings. But this is not a unique thing, using one's values to help shape a building. Many architects and designers intentionally build their values into their work.

This is Albert Speer's German Pavilion for the 1937 World's Fair, representing Germany's fascist government. It is a monstrous, imposing, threatening tower of might. It doesn't have statuary embodying those values, but the building itself represents them.

I think we can best see the values built into Speer's building by comparing it to the German Pavilion done 21 years later, for the 1958 World's Fair, by Egon Eiermann. It is a three-story, flat, horizontal building. It feels tranquil and calm. It is all glass, representing transparency and democracy. The World's Fair is a place where you represent what you believe is right in the things you make. So these were very intentional choices by the architects, to encode these values into their buildings.

Political and ethical ideas can be written into window frames and door handles.
— Alain de Botton1

And this is not unusual. When we build things, we want them to be expressions of us. When we build things, we want to help people feel cared for, supported, or intimidated, depending on our values.

But this is not just buildings. Gerrymandering, for instance, is also a design that reflects the values of its creators. This graphic is an example of how the redrawing of political boundaries works.

If you have, for instance, 100 voting precincts, and 60 of them vote for the Democratic candidate while 40 vote for the Republican candidate, then splitting them evenly into 5 districts gives you 5 districts sending Democratic representatives to Congress. But if you redraw those districts and pack as many blue voters as you can into just two of them, you can send 3 Republican representatives to Congress, even though Republican voters are outnumbered by 20 points.
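
The arithmetic of that example can be sketched in a few lines of code. The precinct counts below are the hypothetical numbers from above, not real election data:

```python
# Toy gerrymandering sketch: 100 precincts, 60 voting Democratic ("D")
# and 40 Republican ("R"), drawn into 5 districts of 20 precincts each.

def winners(districts):
    """Return the winning party for each district (list of 'D'/'R')."""
    return ["D" if d.count("D") > d.count("R") else "R" for d in districts]

# Proportional draw: each district mirrors the overall 60/40 split.
proportional = [["D"] * 12 + ["R"] * 8 for _ in range(5)]
print(winners(proportional))  # all 5 districts elect Democrats

# Packed draw: cram Democratic precincts into 2 districts,
# leaving Republican majorities in the other 3 (still 60 D / 40 R overall).
packed = [["D"] * 20, ["D"] * 20,
          ["D"] * 7 + ["R"] * 13,
          ["D"] * 7 + ["R"] * 13,
          ["D"] * 6 + ["R"] * 14]
print(winners(packed))  # 2 Democratic districts, 3 Republican
```

Same voters, different lines on the map, and the majority party loses the delegation. That is a design decision.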

This is a design, and it represents the values of the people who created it. If you want to see what this design looks like in real life, above is North Carolina's 12th Congressional District. Doesn't it seem like a really obvious shape for a congressional district? The District has three cities in it, that tend to vote Democratic. The party that was in control, the Republicans, redrew this district to load it up with the opposition party, potentially thwarting the will of the people, with design.

The late social critic Paul Goodman wrote that "Technology is a branch of moral philosophy, not of science."2 And when he spoke of technology, he was speaking broadly. By technology, he meant knowledge applied to practical ends, rather than the narrower definition we often gravitate towards: gizmos and gadgets.

We take what we know, and we make something with it. That's technology.

The process of making technology is called design. It's not just architecture, it's not just gerrymandering, but rather it is design. And design is a branch of ethics, because every decision you make as you create something is going to limit and constrain the possibilities for the people who use your tools, your services, and your designs.

UX Libs II alum Andreas Orphanides says in his talk Architecture is Politics that all of our designs reflect our values and the culture that we are in. This is true not just when we're intentional about designing something, like a user interface or a poster. All systems that we design, everything that we create, including our policies, our workflows, our buildings, our websites, and our services, reflect our values.

What Are Our Values?

So, what are our values? As librarians, we could all go to our professional organization's website and download a handy PDF of our code of professional ethics. (IFLA has a handy list of professional codes of ethics, actually.) Then we could point to it and say, here you go! Access, privacy, equity, our values are all here in black and white.

When was the last time you taped your organization's code of ethics to your wall, next to an affinity map? That's okay, we don't necessarily have to have it visible all the time to know what our values are, right?

If I'm a transgender woman and I encounter this question, how do I answer it? Do I answer for who I am, or do I answer what it says on my birth certificate? After all, they don't say what they will do with this information, or who will see it. If I haven't come out to my friends and family, do I dare answer this truthfully?

If libraries are supposed to be a safe place, why would we ask this question, and make someone justify themselves just to use our services?

What are our values? Equity? Privacy? Do we value our users more than our fetish for data collection?

If we search for information about stress in the workplace, and our tools tell us that stress is probably related to women in the workplace, what are our values? Equity?

If our tools will only work on the newest technology, what are our values? Access? Equity?

This is how bad design makes it out into the world. Not due to malicious intent, but with no intent at all.
— Mike Monteiro3

The difference between libraries and Palladio, Speer, and Eiermann, is that the architects intentionally encoded their values into their designs. Your values will be encoded in your work whether you want them to be or not. So be conscious of your values and what you want your work to say.

Last year, my friend Cody Hanson gave a talk called Libraries are Software.4 According to Cody, when you look at what we think about libraries, the one constant as we change is that our values underlie what we do. They are the most important part of libraries.

If we want to do the kinds of things we talk about here at UX Libs, we have to build those things into the software and services that we make for our users. This is intentionally encoding our values into the things we design.

When I hear Cody say that we should be encoding our values into our services, I hear him saying that we need ethical design.

What is Ethical Design?

Ethical design is thinking about what happens in the world if we make this thing. You have to think about how the people who use your design will be affected. How do the choices we make when creating something help or hinder those who will use it? Does that sound familiar to you? That sounds like UX.

This should be our bread and butter. Because a graphic designer doesn't necessarily worry about how someone will interpret a poster (even if they should). An industrial designer doesn't necessarily have to worry about how people will react to what they've created (even if they should). But an experience designer thinks about the experience of the people who use our tools. It's in the title! We have the tools.

If you read only one book about design in your life, it should be Victor Papanek's Design for the Real World. It's a classic 1970s screed against the state of things. The first line of the book is "There are professions more harmful than industrial design, but only a few of them."

Papanek argues that designers need to engage their moral and ethical judgment before creating prototypes, before drawing sketches. He urges us to think about ethics from the beginning.

Analytics

Let's talk about analytics.

Libraries love analytics! And I'm thinking of analytics broadly. We can include qualitative and quantitative data about our users under the heading of analytics. We love usage data, website data, and even gate counts for some reason.

Why do we collect data? Design giant IDEO reminds us that "the goal of design research isn’t to collect data; it’s to synthesize information and provide insight and guidance that leads to action."5

Now, in the States, things are a bit different, because we're less rational about data collection than most. Many US libraries have to provide large data reports to organizations like ACRL and ALA, because they need to know how many people walked through the gates of our libraries. It doesn't matter that gate counts offer absolutely no qualitative information or context for how to improve your library or services; the point is that you have a number you can share with these organizations. So we collect the data.

But data has a particular purpose, which is to guide you as you work through the design process. It's to help inform your design. But that's not how libraries tend to use it. We mostly seem interested in collecting it, just in case.

How many of you have Google Analytics installed on your websites? Almost all of you. How many of you made a conscious choice when installing Google Analytics, understanding you were making a trade-off? How many thought what they learned from the data would be so valuable that it would be worth the risk to our users' privacy? No one here had that conversation.

At GVSU, we didn't have that conversation either. I have 14 different instances of Google Analytics running on all our various web tools. (This is starting to feel like a 12-step meeting: My name is Matthew, and I use Google Analytics.)

But beyond the privacy concerns (which are super real, and you should read more about them from Eric Hellman), I have other real issues with website analytics. We love analytics and data so much that often it's the only thing we see.

Remove a person’s humanity, and she is just a curiosity, a pinpoint on a map, a line in a list, an entry in a database. A person turns into a granular bit of information.
— Frank Chimero6

But what do you see in a spreadsheet, or a database, or a map? Can you see people behind the rows and rows of data?

We always talk about designing for people. But, if the people are only represented by columns and rows of numbers, it is a bit easier to forget that those are real, complicated people using our libraries. Analytics reassure us that people are predictable, that their behavior will be reasonable and methodical. But what tells us more about human nature, a spreadsheet of catalog searches, or Whitman, when he writes "Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes."?

This is a screenshot from the marketing website for Analytics on Demand from Gale. It's an analytics service provided by the company that will give you "household-level data" about your users. That means that instead of anonymous numbers in the rows of a spreadsheet, you can know the "age, race, and ethnicity" of the people behind those numbers. But what good is that information? Are demographics, our racial backgrounds, our genders, really what cause us to make decisions, to want things, to run searches in the library? How can this information possibly help a library?

They also claim they can provide you with voting information about your patrons which, I suppose, can be valuable if you are a public library with an upcoming millage, but it scares the hell out of me. Look at that map: do you see people behind the pinpoints highlighting where all of the library's patrons are?

The way this service works is that you upload all of your own information to their friendly servers, and then they crunch your numbers and spit out some dehumanizing maps and charts for you as the last vestiges of your patrons' privacy are flushed away. This service provides them with a lot of value, too, of course. They want the data more than they want to provide you with this analytics service.

I think we have to be very careful with this kind of analytics usage, because we're not thinking up-front about what is right. Our moral judgment has not been invoked before we start designing.

Let me be clear: I am not condemning analytics. I am not saying we shouldn't use data about how our services are used. (I am coming down strongly against creating a map of where library patrons live.) But we're designing for people, and we need to make our design process more rich, and move beyond analytics and data.

Personas

"But Matt!" you're saying. "We have personas! They help us take that data, and they humanize it."

How many people here use personas? About half. Personas can be a useful tool, especially for communicating patterns of behavior within your organization, or for providing context for the analytics data you've collected as you design. Some people prefer to see a face instead of data, to "trick" themselves into designing for a person who is an amalgamation of the behaviors of particular types of users. I use personas in my work, as do my colleagues at GVSU.

And we have good intentions, because you need to have a way to design for people who are not you or the other library staff. As Erika Hall states, designing for yourself will likely lead to "building discrimination right into your product."7

Personas begin with us trying to better understand our users. Below are some examples of personas from Montana State University Libraries. They've done a terrific job of documenting and researching their personas, and they've been useful to me as I created and refined my own.

They have an undergraduate persona, a graduate student persona, and they also have faculty, including an adjunct (whose salary is approximately half of what a barista at Starbucks makes.)

But the longer I've used personas, and the more I've talked with others who use them, I've begun to feel uneasy about them as a design tool, at least in the way they are commonly used. Do you see anything strange about the personas above?

Everyone is smiling. Even the adjunct professor is smiling! (I used to be an adjunct at GVSU, and I can tell you that I didn't have time to smile, because I had to work 2 other jobs to make rent.)

The question is, when our personas all seem to be happy, perfect individuals, then who are we designing for? We're designing for smiling people, happy people! People who love being at the library! What amazing opportunities they have in the library, and they're so happy about it.

But people aren't really like that. Karen McGrane asks us to remember that we're not designing for "expert automaton, programmed to complete each task flawlessly." We're instead designing for "the messy, error-prone, distracted human."8 These "patterns" of behavior we've encoded into personas seem like nothing more than the same biases we read into our raw data, but with a smiling, CC-zero licensed portrait tacked on top.

Full disclosure: below are the smiling people on the GVSU personas. We're not immune from this.

[Image: Student personas from Grand Valley State University Libraries]

How can we make these design tools and our research reflect the complexity of our users? Think about situations that might bring someone to a library. Because your users might not have a choice about coming to you. They might be in crisis, afraid, numb, bored, angry, sad, or some combination of these things. Are we ready to help them when they come?

Deirdre Costello from EBSCO's User Research team shared Indi Young's article on crafting personas without demographics or photos (https://medium.com/@indiyoung/describing-personas-af992e3fc527). Indi proposes that instead of leading with demographics, you lead with needs, and use description and narrative instead of "facts". Demographics aren't necessary for a persona to be useful, but they might trigger biases about particular groups of people in those who use them to design.

Algorithms

Now everyone's favorite topic, algorithms.

We interact with algorithms all day, every day. The library is no exception. Of course, we're ahead of the game. Before we had computers, the library ran on algorithms. An algorithm is a series of choices, often based on conditions, like the steps to catalog a book. (Impress your friends by referring to AACR2 or RDA as algorithms about semi-colon placement.)
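
To make that concrete, here's a toy sketch of cataloging-as-algorithm. The rules are invented for illustration (they are not actual AACR2 or RDA rules); the point is the shape: a series of choices based on conditions.

```python
# A toy "cataloging algorithm." The rules below are invented for
# illustration, not actual AACR2 or RDA; the point is that cataloging,
# like any algorithm, is a series of choices based on conditions.

def main_entry(work):
    """Choose the main access point for a work, one condition at a time."""
    authors = work["authors"]
    if len(authors) == 1:
        return authors[0]        # single author: enter under the author
    elif len(authors) <= 3:
        return authors[0]        # small group: enter under the first author
    else:
        return work["title"]     # many contributors: enter under the title

print(main_entry({"authors": ["Papanek, Victor"],
                  "title": "Design for the Real World"}))
# prints "Papanek, Victor"
```

Every branch here is a decision somebody made. Encode a different rule, and different works surface under different names.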

We think of algorithms as neutral systems that we build into computers. And so, if the computer is doing the choosing or the recommending, then there can't be a problem of bias or discrimination.

But all of those algorithms were written by messy people, with ideas and opinions and biases they might not even know they have. And often these people don't even understand what they are designing for.

Relevance is one of those things. Libraries don't even have a clear idea of what relevance is anymore. Yet, we base many of our decisions on the effectiveness of one set of relevancy algorithms over another.

But what is relevance? How many people here have a particular search you do when you encounter a new library search tool? (I once built a small social network at a bar after a conference called This is My Search, which allowed librarians to simultaneously share "their search" while testing it on a random, new system, so I know you all have one.)

My search is "batman," which gives you a nice mix of medieval special collections material as well as contemporary books and media from both adult and juvenile collections. UX Libs' own Matt Borg uses "ethical tourism." John Chapman of OCLC told me years ago that his was "space law." And my colleague Jeff uses "stress in the workplace," since you can see how well a tool handles synonyms by looking for engineering articles. (Full props to Pete Coco of Boston Public Library, who also uses batman. I shamelessly stole his search years ago when I realized how useful it was.)

But we like to use the same search across different tools because we think we're also evaluating the relevance algorithm of a tool. We might say, upon seeing a strange set of results in a new system, "I don't like the relevance ranking of this system."

But what is the assumption behind that statement? Is relevance actually a property of a list of items? Or is relevance actually a property of the relationship between the items and the person who needs information? There has to be a "for whom" for something to actually be relevant.

If two different patrons search for "fetal cell research" in your discovery tool, you may get varying reports as to the relevance of the results. Imagine the first user is a second-year college student writing a research paper, while another is a late-career academic who was just diagnosed with cancer and was told that these experimental fetal cell treatments would be her only hope of survival. These two people will have wildly different ideas of what results are relevant.
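
The argument can even be put in code: a relevance score needs a "for whom" argument. This is a sketch with invented names and toy scoring, not how any discovery layer actually works:

```python
# Sketch of the argument in code: relevance is not a property of a result
# alone, but of the relationship between a result and a person's need.
# All names and the scoring scheme here are invented for illustration.

def relevance(result, user_need):
    """Score a result against a need: fraction of the need's topics it covers."""
    return len(set(result["topics"]) & set(user_need)) / len(user_need)

paper = {"topics": ["fetal cells", "experimental treatments", "oncology"]}

student_need = ["fetal cells", "research ethics"]        # writing a paper
patient_need = ["experimental treatments", "oncology"]   # seeking a therapy

print(relevance(paper, student_need))  # 0.5: partially relevant to the student
print(relevance(paper, patient_need))  # 1.0: highly relevant to the patient
```

Remove the `user_need` parameter and the function can no longer be written. That is the point: a ranked list with no "for whom" is measuring something else.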

Tarleton Gillespie suggests that this makes relevance all but unknowable, and so "quick clicks and no follow-up searches [are] an approximation, not of relevance exactly, but of satisfaction."9 What we instead are designing for is satisfaction, that things "look right." I don't know what the answer is to this problem, but we have to start by being honest with ourselves about what our tools are capable of.

The Topic Explorer is one of my favorite features of Summon, the discovery layer we use at GVSU. It is designed to give contextual information about broad searches (or what we often call crummy searches, 1-2 words at most). In the example given with Summon's marketing material, searching for "heart attack" will expand the search and bring in an encyclopedia article about myocardial infarction, which is the medical term for heart attacks. Because medical literature is going to use this terminology, the user gets more academic materials and learns the proper terminology for their field of research.

I did an analysis of the Topic Explorer last year, because I really love this feature and wanted to make it better. 93% of the time, the suggested topic was at least related to the user's search. That's a pretty good success rate for an algorithm.

But the more important question isn't about the tool's success, but what happens when it fails. What about the other 7% of the time, when it shows some off-topic entry?

We don't know how the Topic Explorer works, because the algorithm is proprietary. So we don't know what makes it decide to choose one topic over another. We don't know, for instance, what makes it choose "Sexual abstinence" if you are searching for virginity. These are not synonyms, as you can be sexually abstinent and not be a virgin. There are, in fact, strong religious and political connotations to those two terms, and conflating them in this way makes it appear that ProQuest, and, as far as our users are concerned, the library, are taking a political stance.

But no one sat down and consciously connected sexual abstinence and virginity together at ProQuest, so how did they get matched? What happened?

What happened in this discovery tool to connect a search for "new york city waste" to "new york city women"?

What happened to connect the idea of rape with hearsay testimony?

It's not a coincidence that the problems we find in our tools reflect the same biases we fight outside the library in our larger society every day.

None of these were intentionally added to these systems. But as Mike Ananny has said, "Reckless associations—made by humans or computers—can do very real harm especially when they appear in supposedly neutral environments."10 We have to be very careful when we provide complex tools and services to complex, messy people. Because if we lose sight of the fact that we are dealing with real people, not smiling, 2-dimensional buckets of data, we can do real harm.

A lot of this is systemic. The library world is overwhelmingly white, and those making software for libraries are overwhelmingly male.

This lack of diversity is a user experience issue! Who you hire to build your tools is a user experience issue! Who you hire to work your desk is a user experience issue! Who you hire to choose the collections is a user experience issue! Who gets a voice in your staff meetings is a user experience issue! These all affect your users' experience of the library, and must be treated accordingly with your moral judgment in play from the beginning.

Your library is people

This is the best profession to be in. Because our mission is genuinely to help people.

But we need to keep in mind the whole humanity of those we are designing for. And that might mean some changes to the way we do things. But our users will thank us for it.

A fire alarm went off at this point in my talk, and the 175+ attendees and I tromped down 9 flights of stairs and then back up 9 flights of stairs between this sentence and the next. The gap in the video recording was pretty interesting. ↩

In the mid-1970s, a young East German historian named Hannah Marburger found a reference to a pact signed by Hitler and Stalin in 1939.1 You'd think that as an historian, Marburger would have been familiar with the Molotov-Ribbentrop Pact, but she had never heard of it. In fact, most of the people her age and younger in communist East Germany hadn't heard of it. The idea that the great communist leader Stalin would have aligned himself with the fascist Hitler was unthinkable.

She found that there was a copy of this pact at the State Library. It was incredibly difficult to access. She sent multiple requests but was constantly rebuffed. However, she was persistent. She kept at it, trying to find out about the pact, pestering the librarians with her request. Finally, she was told that the document had disappeared.

When she went in to visit the library, she was taken to a service the library provided that she hadn't known existed until that moment, despite the library having offered it for over 400 years. She was taken to the Giftschrank.

Giftschrank

Giftschrank translates literally as "poison cabinet," and it's used to describe the small cabinets that pharmacists use to keep their medications under lock and key. But in the 16th century, it became a tool of the library to hold dangerous texts. A poison cabinet.

While one way of dealing with texts you don't agree with is to burn them, the Holy Roman Emperor also saw value in knowing what his enemies, and the enemies of the Church, were saying in their texts. And so the Giftschrank rose up in the 16th century as a way to keep watch over these suspicious religious and political texts. It evolved through the years, and eventually became a useful tool in Nazi Germany and later in the German Democratic Republic (DDR). In the DDR, the Giftschrank became the perfect place to quarantine Western literature, or, as Hannah Marburger learned, to hide the evidence of history that the party found disagreeable.

What I love about the Giftschrank is that you needed permission to get into the room to see your material, what they called a "poison card." But more importantly, this is a library service that is unabashedly about two things that we think we've erased from libraries in the 21st century: it's intentionally super hard to access materials, and it's tied to a physical space. There is, after all, no online Giftschrank.

As I listened to an episode of 99 Percent Invisible about the Giftschrank, the producer detailed the long and winding path down stairs and through dark tunnels that a user has to take to access the Giftschrank at the Bavarian State Library today. And then the host Roman Mars landed a low blow at libraries:

The experience should be familiar to anyone who has ever tried to get a book from an academic library.
— Roman Mars

My first reaction was to be defensive: "Not all academic libraries!" I thought. "I've seen many shiny buttons on library websites!" But the more I thought about it, the more I saw value in Mars' critique. Because we, as librarians, know how to request books from the library. We understand what a "hold" is. But it can be a challenge for folks who do not have the same mental model of how a library works.

Libraries have changed a lot over the past decade. The thing that is all the rage right now is to talk about changing your physical library space. The video above shows North Carolina State University's amazing Hunt Library. (I could easily have put up a video of the library I work in, GVSU's Mary Idema Pew Library.) And what you'll notice is that it is really hard to find anything in these libraries that would have been standard library fare 20 years ago. If you watch closely, you'll see a bookshelf in the Hunt Library, somewhere, but don't blink, or you might miss it.

One of the things that's changed is that we've changed our orientation to our library spaces. A few decades ago, you couldn't even make an intellectual distinction between a library space and a library service. You walked into a library, and you saw the services laid out in front of you. But as we started moving things online, we've had to rethink what that space means. And one of the reasons we've transformed our libraries into wide-open spaces for study, for group work, and cafés, is that our physical spaces are now just one of our many services, rather than being the foundation of all of our services.

My colleague Kathy Kosinski at the University of Michigan iSchool worked on a survey of 80 undergraduates about why they came to the University of Michigan undergraduate library.2 And the students instinctively understand how the space fits into their lives. What's most important to them, is that the library is a place to study. Students see our spaces as one of our services. (Faculty, it seems, have different reasons.)

Last month Cody Hanson gave a talk at the Library Technology Conference in Minneapolis called "Libraries are Software." He didn't really like that title, actually, he wanted to say "Libraries are being eaten by software" but he wasn't sure anyone would come to a talk with that title. But one of his arguments was just this: our spaces have become just one service among many.

We're moving all of our services online into software, so that our users don't need to come to our physical spaces anymore, unless they need a place to interact with our software. And I think this is true, we're trying to move our services online, shedding the age-old idea that the library's space defines what we do. We're moving from space to screen. But the question is, have we really left the physical spaces behind?

Let's look at one common online service: Ask a Librarian. The problem with most of these services is that users have to choose to Ask a Librarian in order to use it. We don't have these links stationed at common points of failure, unless you count having a ubiquitous link to Ask a Librarian on every single page of your website as the same thing.

The day before I gave this talk, I attended a session by Stefanie Buck of Oregon State University, where she recounted her students telling her, "I don't want to ask a librarian! I just want to know how to do it!"

This is Harvard's Ask a Librarian service. When they moved their reference service online, you'll notice that it defaults to email. But let's ask ourselves, what's the most face-to-face-like experience of communication for our undergraduate students? Is it email? No. Is it the phone? Absolutely not. It's chat, interacting with text. And Harvard has a chat service! And it's a big, blue button so it's really prominent!

However, you'll notice that the chat service is still in beta. (How do you put a service that has existed for decades into beta?) Harvard's beta likely stems from the limited hours of the service, which is available Monday through Friday, from 10am to 5pm, EST. The chat service is restricted by the physical space of the library despite the fact that it is online. The hours that librarians work have shaped the online service.

At GVSU we use Sierra for our ILS, and our OPAC has a number of hard-coded error messages that instruct users to "see a librarian." I only found these by following up on a number of service desk and chat interactions where the user mentioned that our website had told them to "see a librarian." Since these error messages are rare and hard-coded, they're hard to change.

What's the implication of this error message? That you are in the physical library when you are using the online catalog.

It is more and more common to see library websites that are "brochure-ware," describing what the library does rather than letting the users actually do it. The assumption here is that the website is not, in fact, the library, but rather is just a place to put a lot of words about all the things you can do in the physical library.

Finally, when we move our services online, we tend to let our organizational structures, or the ways we've organized our physical resources for the past century, dictate how we organize our online services. Or, as my colleague Erin White from Virginia Commonwealth University says, "Our silos are showing!"

These are just a few examples, but I think the effect that library physical spaces have had on our online services goes deeper than this. We are still fundamentally trapped in the resource warehouse mindset of libraries when we think about online services, even as we've managed to separate that thinking out of our minds as we build and refurbish our physical spaces.

Don't believe me? Let's try an experiment.

Let's say we're going to build a library that Hannah Marburger might have gone to in the mid-1970s.

First, we'll put a sign out in front, as well as some landscaping. We want it to look nice when folks arrive so they know where they are and that we care.

We'll locate our card catalog and circulation desk right up front, but then we'll fill up most of the space with resources. Rows, and rows, and rows, and rows of resources. Since navigating those resources can be tough, we'll put some wayfinding tips right up by the card catalog. So when you find what you want, you can write it down with your little golf pencil on the back of some scrap paper we chopped up for you, and then you can look at the wayfinding to see where you need to go to find that call number.

Just in case the wayfinding doesn't help, we'll put a reference desk right back in the stacks. And then we'll put some exhibits, and some new books displays and a bulletin board with important news where everyone who walks in will see it. And finally, we'll post our hours, and some maps of the building, and some helpful tips (probably on a passive-aggressive sign) right up by the front door.

No one would build a library like this now, would they?

Here's the thing: we still build libraries like this. This is how we think about our libraries, as physical spaces full of resources. Even though we might not consciously realize it, that connection between the physical library and the service is coming out in our online tools.

When we move items from the physical world into software, the new digital versions are often just replicas of their tangible counterparts. In design, this is called skeuomorphic design.

Think of the old desk blotter. Online calendars have the benefit of being portable, but they display the month in exactly the same way as the paper calendar (although sometimes they appear to be leather-bound). Yet the reason paper calendars are designed the way they are is a limitation of the medium, paper. Paper cannot change on its own. Once you get to the end of a month, the information of the previous 3 or 4 weeks isn't useful to you, yet it takes up most of the space. On a computer, this could easily be fixed, but most digital calendars don't offer this option. They stick to the paper-based layout because that is what we understand calendars to be.3

The library is full of skeuomorphs ... a set of arcane rules and conventions to learn for those who come to an academic library with only the web and their cellphones as their cultural templates.
— Barbara Fister

We've moved all these services from space to screen, but we've left the old shapes embedded in the new. The structure of the OPAC was dictated by the limitations of the 3 by 5 card. Ask a Librarian is often tied to the same hours as the reference desk.

We've changed the media by which we conduct our services, moving from space to screen. And although it's cliché to bring up Marshall McLuhan every time we talk about media, it's instructive here to help us understand what changed when we brought our services online.

For any medium has the power of imposing its own assumption on the unwary.
— Marshall McLuhan4

Here McLuhan is telling us that when you change communication mediums, the medium itself has a very strong impact on how the message is relayed. We often assume that if the content of our message stays the same, moving say between television and radio, the message still has the same meaning. But McLuhan says that in fact, changing mediums will shape both the message sent and the message received. The content, he claims, is "like the juicy piece of meat carried by the burglar to distract the watchdog of the mind."5 That is, we're tricked by the content into thinking that the medium doesn't have any effect on our communications. But the content is shaped by the medium.

Have you ever had a face-to-face conversation with someone, and then later used instant messaging or text messages to talk with that same person? The experiences of communicating in these two different ways clearly shape the messages that we send and receive. And so, too, when we move, for instance, reference into chat.

The chat service strangles the entire conversation into something very different from a face-to-face conversation. But we didn't think about this when we moved reference online. Despite a wealth of research on virtual reference, not much has been said about this change: plenty of articles look at virtual chat, but most approach it from a budget or staffing perspective rather than a communication or philosophical standpoint.

The Medium Is The Message

We've moved from an analog medium to a digital one. In face-to-face interaction, there is a wealth of information being communicated through many different senses. Non-verbal cues help give richness to the interaction. Moving to the phone cuts off many of those non-verbal cues, and then we lose the richness of intonation as we move from phone to chat or email. But there is more than this change that happens when we move to a digital medium: something about the way that computers work shapes the nature of our communication.

Analog communication is a "continuous, contiguous series of differences." Think of a volume control. Even at low volumes, information is still getting through. Digital communication is binary: it's either on or off. A light switch is digital. In digital communication, you either understand or you don't.6

When we're working with someone in person, we can use the rich spectrum of analog clues to see if someone understands what we are saying. As we move more and more into a digital medium, we strip those clues away, and the binary nature of the medium routes us into binary modes of understanding: yes or no, understanding or not, for or against.

I think the best way to see how digital media shapes the way we understand the world is by watching a bit of the British show, Little Britain. Go ahead, I'll wait for you.

Now, we're thinking hard about making things easier to use and making library online tools not quite so ugly, but we're still missing something. And that has to do with thinking about how computers shape our interactions online.

Computer Says No

The designer Don Norman talks about three aspects of successful design in his book Emotional Design: Why We Love (or Hate) Everyday Things. The first aspect is visceral design, which covers all of the gut-level reactions to the aesthetics and layout of a design. One way to understand visceral design in libraries is to show a new student the Lexis-Nexis homepage. The reaction you get will be quite visceral.

The second aspect of design is behavioral design, where we talk about functionality and usability. This is where we focus on how something works, rather than how it looks or feels. In libraries over the past decade, we've worked hard to get a lot better at the first two aspects of design.

The last level of design Norman identifies, reflective design, is the most important. This is where people tell stories to themselves about how an object or service fits into their lives. And this is because Norman understands that we use technologies not for the sake of the technologies themselves, but rather to help us along in the bigger project of our lives.

Reflective design is ... about service, about providing a personal touch and a warm interaction.
— Don Norman7

One of my favorite examples of the importance of reflective design was the introduction of Microsoft's Zune media player.8 The Zune has become a laughing stock, but the thing is: the Zune was exceptionally well-designed on both the visceral and behavioral levels. The Zune was beautiful, and it was quite easy to use. It even offered more functionality than the iPod, as well as the ability to share songs with other Zune owners.

Yet the Zune was a failure, because Microsoft didn't understand the reflective aspects of design. iPods, if you remember, were almost all reflective design. Think of the ad with the dancing silhouettes with just the white ear buds visible. That ad doesn't really show the product, so it's not about visceral design. And it doesn't tell you anything about how the device worked, so it wasn't about behavioral design. That ad was all about the reflective aspect of design, and it worked. Nearly everyone who bought a personal media player in the first 6 or 7 years after the iPod was released bought an iPod, because they wanted to be one of those dancing silhouettes.

The Zune was available in a gorgeous color that was sort of anodized bronze. And what tells you that they didn't understand reflective design is that Microsoft called the color "brown." Who wants to buy a personal media player that is "brown"? It wasn't even brown, it was a shiny kind of bronze. Anil Dash suggested that they could have called it "chocolate," which would have been great: a little bit sexy, a little bit sinful. But they didn't. They called it brown. And it was a flop.

We tend to talk about transportation as if the ultimate goal were mere movement, measured in speed, time and capacity. The ultimate goal of transportation, though, isn't really to move us. It's to connect us—to jobs, to schools, to the supermarket.
— Emily Badger

We get really focused on the parts of transportation that make sense universally, and we lose sight of the real reason that people go places: the stories they tell themselves every day about their own lives.

Your members don't come to the library to find books, or magazines, journals, films, or musical recordings. They come to hide from reality or understand its true nature. They come to find solace or excitement, companionship or solitude.
— Hugh Rundle

People use libraries because they are in the midst of living their lives. We're around to help our users tell their stories, not to hijack their interactions with our own library-inspired narratives.

A good science fiction story should be able to predict not just the automobile, but the traffic jam.
— Frederik Pohl9

When we think of our services, we need to move beyond focusing on just the visceral and behavioral aspects of the technology. Because it's in the interaction with our users, the people, where we need to do the most work. If we can focus on studying and understanding what happens when our services and tools are used by people, we'll have a better shot at success. After all, what's a library without people?

And with all of this talk about services and spaces, we sometimes lose sight of that. But if we can truly get our services divorced from our spaces, we won't just make things better for the true distance students, those that cannot enter our buildings. We'll make things better for all of our users.

Your Library Is People

"But we do offer help!" you say. Do we? I have seen the ways that we offer help to users. We overload them. We send them links to LibGuides. And the thing is: this is really good stuff! If only they would read these help documents we have written, things would likely be better for our users.

I struggled for a long time trying to find an example of an industry that faced a similar issue: having important information that they needed to convey where the users actively ignored any attempt at communication.

I couldn't think of much, and while I was sitting on the runway on my way to the conference, I tuned out something the flight attendant was saying and really tried to get some thinking done. I wanted to know who else was trying to communicate information that was important to everyone, and that users would really want to know if something went wrong. And then I remembered that on an Iceland Air flight last year, I had been enamored by a safety video unlike any I'd ever seen before.

The video starts off in a fairly conventional way, showing travelers on an airplane. But I don't care much at this point, because I'm already on an airplane, and being on an airplane isn't why I am on an airplane.

As I watched, I realized that they had come up with a really smart way to get me to pay attention: they connected the information they were trying to get across to my whole reason for being on the plane: I was going to Iceland. They found a way to keep me in my story, my adventure in the Icelandic landscape, and bring the information they thought I should know right there.

If you watch the video above, you'll see that they manage to incorporate not just beautiful vistas of Iceland into the backdrop: they physically work the actual safety instructions into the landscape.

When I got on the plane, I was already in Iceland in my head. Iceland Air realized that to get me to pay attention, they had to understand my story, my motivation. And they met me where I was.

How do we connect people with their stories in libraries? How do we go to where the students are, online, shedding our dependence on our physical spaces?

What we've done at GVSU is to target small micro-interactions where we commonly lose students. If we want to keep people in their stories, making our services easy to use is a first step. But making sure that help is available when and where they need it, without having to hunt down an "Ask a Librarian" button, is also key.

One common failure point in our catalog is the author search. WebPAC Pro wants you to enter the last name followed by the first name. This is not how people search, though. WebPAC's solution is to let you fail the first time, and then offer to switch the order of the terms and do a whole other search. Yet no one told the user that they needed to do last name first! (Some libraries do, of course, by putting red text in all caps somewhere near the search box.)

At GVSU, we tried something easier. We just watch the drop-down menu to see if someone chooses the author search. If they do, we change the placeholder text to say "Last Name, First Name." Before this change, nearly 50% of our author searches were done with the first name first. After implementing this one function, we're down to about half a percent.
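The placeholder swap described above can be sketched roughly like this. The function name and the commented-out wiring to WebPAC's markup (element IDs, menu values) are illustrative assumptions, not GVSU's production code.

```javascript
// Return the placeholder hint appropriate for the selected search index.
// The 'author' value and the default wording are assumptions for illustration.
function placeholderFor(searchType) {
  // When the user picks the author index, hint at the required format.
  return searchType === 'author' ? 'Last Name, First Name' : 'Search the catalog';
}

// In the live catalog, this would be wired to the search-type drop-down,
// something like (selectors hypothetical):
// document.querySelector('#searchtype').addEventListener('change', (e) => {
//   document.querySelector('#searcharg').placeholder = placeholderFor(e.target.value);
// });
```

The point of the design is that the hint appears only at the moment it is needed, instead of shouting at every user in red capital letters.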

We also get a lot of the same questions over and over through our chat service. Since we use LibChat and LibAnswers from Springshare, I built a little tool that integrates them. You start typing your question, and the box gives you up to 5 possible answers that might help you immediately.
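A rough sketch of how such a suggestion box might match a typed question against a list of FAQ entries. The matching logic and the data shape are assumptions for illustration; the real tool draws its answers from LibAnswers, whose API details aren't reproduced here.

```javascript
// Given the user's in-progress question and a list of FAQ entries
// ({ question: '...' } objects, shape assumed for this sketch), return up to
// `limit` entries ranked by simple keyword overlap.
function suggestAnswers(question, faqs, limit = 5) {
  // Ignore very short words like "a", "do", "my".
  const words = question.toLowerCase().split(/\s+/).filter(w => w.length > 2);
  return faqs
    .map(faq => ({
      faq,
      // Score: how many of the typed words appear in the FAQ question.
      score: words.filter(w => faq.question.toLowerCase().includes(w)).length,
    }))
    .filter(item => item.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map(item => item.faq);
}
```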

One other issue that caused a lot of cognitive overhead was filling out our request forms for Interlibrary Loan and Course Reserves. Instead, we now have one request form with a check box that toggles between digital copies and physical books, simplifying the process of getting something from either service. The underlying metadata was actually the same for nearly every item type that Atlas Systems created for both of these services, so I have no idea why they made their options so complicated. But by getting out in front of these common roadblocks, we were able to help our users be successful.

This past week, we made a small change to our Interlibrary Loan request form. For many reasons, we want folks to first search GVSU holdings for books, then search MeLCat, our consortial catalog, and finally, if they can't get what they need from those two sources, to submit an Interlibrary loan request. The problem is that MeLCat is a separate website that I have no control over. Once I send someone there, I can't get them back unless they navigate back to our website.

So we were thinking of all kinds of ways to control the flow of our users as they searched for books when it hit me: stop trying to make the user come to some page we've created that will set them straight. Go to where the users are. And every user who submits an ILL request instead of a MeLCat request comes to the Interlibrary Loan form. I wrote a simple script that does a title search for your item and lets you know if it's available in MeLCat, even giving you a button that will take you right to the request page.
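The intercept might look something like this in outline. The title matching is deliberately simplified, the function and message are hypothetical, and the actual MeLCat search the real script performs is not reproduced here.

```javascript
// Before an ILL request is submitted, check whether the requested title
// appears in a list of consortial catalog titles and, if so, return a
// suggestion to request it through MeLCat instead. Everything here is a
// sketch: real matching would be fuzzier and would query MeLCat itself.
function melcatSuggestion(requestTitle, melcatTitles) {
  // Normalize case and punctuation so "Moby-Dick" matches "Moby Dick".
  const normalize = t => t.toLowerCase().replace(/[^a-z0-9]+/g, ' ').trim();
  const wanted = normalize(requestTitle);
  const match = melcatTitles.find(t => normalize(t) === wanted);
  if (!match) return null; // no match: let the ILL request proceed as usual
  return {
    title: match,
    message: 'This item is available through MeLCat and will arrive faster.',
  };
}
```

When a suggestion comes back, the form can show the message alongside a button linking straight to the MeLCat request page.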

(Every month we have between 75 and 85 users who request items from ILL that could have been requested from MeLCat. Not only will the users get things faster, but MeLCat requests are effectively free to us, while ILL costs us money per request.)

Rather than focus on really big issues, I work to find those areas of our online services where we can make things easier for all of our users, by removing roadblocks and the traces of our physical spaces.

Good technology makes us feel like we are inching closer to who we truly want to be.
— Frank Chimero10

Like our services that were grounded in our physical libraries, our online services should help make our users better versions of themselves. But we can only do this by being conscious of the changes that took place when we moved our services from space to screen, and by recovering the reflective aspects of our designs. By focusing on how people interact with our services, and how they use our tools to live out their goals and stories, we can all succeed, together.

9. Try as I might, I've never found a reputable citation for this, although I've seen the quote several times, always attributed to Pohl.

10. Chimero, F. The Space Between You and Me. The Manual, Issue #1, 2012. p. 26.

reidsmam@gvsu.edu (Matthew Reidsma), Mon, 19 Feb 2018 12:00:00 -0500
http://www.matthew.reidsrow.com/articles/175

Algorithmic Bias in Library Discovery Systems
http://www.matthew.reidsrow.com/?b=173
More and more academic libraries have invested in discovery layers, the centralized "Google-like" search tool that returns results from different services and providers by searching a centralized index. The move to discovery has been driven by the ascendancy of Google as well as libraries' increasing focus on user experience. Unlike the vendor-specific search tools or federated searches of the previous decade, discovery presents a simplified picture of the library research process. It has the familiar single search box, and the results are not broken out by provider or format but are all shown together in a list, aping the Google model for search results.

Discovery's promise of a simple search experience works for users, more often than not. But discovery's external simplicity hides a complex system running in the background, making decisions for our users. And it is the rare user that questions these decisions. As Sherry Turkle (1997) observed, users approach complex systems like search engines at “interface value.” Since the interface is simple, they are content to assume that the underlying mechanism that makes them work is also simple. They are often unaware that complex algorithms help determine what results are shown and what results are excluded. As library search in particular has become simpler, the complex workings of our search tools have, like Google's, receded into a black box. As Tim Sherratt (2016) reminds us “it's not just the simplicity of that single search box, it's our faith that search will just work.”

Even within libraries, algorithms are not well understood. A common definition is that an algorithm is a set of instructions that takes an input and produces an output. But translating this simple definition into a model that explains how a single search box can produce millions of possible results is a challenge. Describing an algorithm in these terms is another attempt to put a simple interface on a complex idea. Frankly, with algorithms, the input and output aren't the most important parts. The instructions are what matter, since they help to determine how the input is interpreted and how the output will be generated. Perhaps a better way to define algorithms is by following Christopher Steiner (2012), who compared them to "decision trees, wherein the resolution to a complex problem, requiring consideration of a large set of variables, can be broken down to a long string of binary choices" (p. 6). An algorithm, then, can be thought of as a series of if/then statements, where a number of parameters are examined and the results can affect how the input is transformed along the way. Algorithms don't have to be computer code, either. As a child I loved the Choose Your Own Adventure series, which was in its own way a literary algorithm. Today, what stories appear on your Facebook news feed and what ads you're shown as you move about the web are all determined by algorithms.
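To make the if/then framing concrete, here is a toy "relevance" scorer written as a string of binary choices. This is not any vendor's actual algorithm; every branch is an invented judgment, which is precisely the point: a person decided what to privilege.

```javascript
// A toy illustration of Steiner's decision-tree definition: "relevance"
// reduced to a chain of binary choices. Each `if` encodes a human judgment
// (is a title match worth more? is recency worth privileging?), not an
// objective measurement.
function toyRelevance(record, query) {
  let score = 0;
  const q = query.toLowerCase();
  if (record.title.toLowerCase().includes(q)) score += 10;    // does the title match?
  if (record.abstract.toLowerCase().includes(q)) score += 5;  // does the abstract match?
  if (record.peerReviewed) score += 2;                        // privilege peer review?
  if (record.year >= 2010) score += 1;                        // privilege recency?
  return score;
}
```

The weights here (10, 5, 2, 1) are arbitrary, and that arbitrariness is exactly the "discarded logic" the surrounding discussion is about: a different engineer would pick different branches and different weights, and users would see different results.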

But rather than objective methods of analysis, algorithms are systematic instructions created by humans for computers to follow. Ian Bogost (2015), writing in The Atlantic, claims that algorithms are closer to caricatures than deities. Algorithms, he writes, "take a complex system from the world and abstract it into processes that capture some of that system's logic and discard others." This is important to remember because our view of algorithms as objective processes depends on the absence of human interference. But as Andreas Ekström (2015) reminds us in his TED talk, The Moral Bias Behind Your Search Results, "behind every algorithm is always a person, a person with a set of personal beliefs that no code can ever completely eradicate." We might add that behind every algorithm is also a company, with obligations to its business model and shareholders.

To the extent that we understand algorithms, they take on a more significant role than simply a set of instructions a computer uses to weight the "relevance" of possible results. As Tarleton Gillespie (2014) has noted, "more than mere tools, algorithms are also stabilizers of trust, practical and symbolic assurances that their evaluations are fair and accurate, free from subjectivity, error, or attempted influence" (p. 179). Users' faith in search algorithms frees us from having to evaluate the inner workings and possible biases of tools that have become fixtures in our daily lives. The French philosopher Bruno Latour has said that "in so far as they consider all the black boxes well sealed, people do not, any more than scientists, live in a world of fiction, representation, symbol, approximation, convention… they are simply right" (qtd. in Dormehl, 2014, p. 236).

Providers also capitalize on this faith. Marissa Mayer, while at Google, said, "It's very, very complicated technology, but behind a very simple interface. Our users don't need to understand how complicated the technology and the development work that happens behind us is. What they do need to understand is that they can just go to a box, type what they want, and get answers" (Vaidhyanathan, 2011, p. 54). That our perception of search tools' trustworthiness should be so uncritical has been a boon to the industry. In librarianship over the past few decades, the profession has had to grapple with the perception that computers are better at finding relevant information than people. On the technical services side of the profession, we have responded to this perception by pushing for more integration with our various search tools. Over the past decade discovery tools, which search a unified index of providers from a single search point, have changed the way that many library users do research. As our discovery tools have become more complex, much of the discussion and critique has centered on the simplification of the search process, the effectiveness of user interface elements, and the integration with other library systems and services. I have found no substantive evaluation of the search algorithms of commercial library discovery platforms in the literature.

The task of determining how successful our library discovery tools are at presenting good results is thus stymied by user perceptions of what the tools are capable of, the opacity of the business models of our search engine providers, and the fact that underlying everything is a series of instructions written by people with a particular point of view. Yet as academic libraries providing our users with search tools, we need a way to evaluate the contents of the results. In Safiya Noble's (2012) research on commercial search engines like Google, she highlights how the results Google shows users are understood to be a reflection of the truth. When students in Noble's classes search for "black girls" and Google returns porn websites (p. 38), what are African-American girls to think about their own identity? In situations like this, we can make a clear argument that the results are not only not relevant, but that they appear to be biased. But Google's results for "black girls" also highlight the moral question underneath: if indeed algorithms are created by people with biases and perspectives, who should be accountable for these kinds of search results? (Google often defers to the algorithm, refusing to accept blame since the computer has done the selecting. They fail to mention that engineers programmed the computer to do the selecting in the first place (Dormehl, 2014, p. 226).)

In academia, we like to assume that our results do not necessarily send the same message to users as those of commercial search engines. While Google is showing you “answers,” academic discovery tools are reflecting the scholarly conversation around a subject, and so wide ranges of opinions are to be expected. (It is doubtful that our users are as confident in this distinction as library folk are.) With the number of results that are returned from our commercial discovery services (easily over a hundred thousand on many of the most common searches) along with the trouble in ascertaining the meaning of “relevance,” it can be a challenge to make a solid case for what constitutes a “good” algorithm. But just because this kind of analysis is challenging, or that doing it might call into question our own faith in the objectivity of our discovery service's algorithms, doesn't mean we should abandon it. As the law professor Danielle Citron has argued, “we trust algorithms because we think of them as objective, whereas the reality is that humans craft those algorithms and can embed in them all sorts of biases and perspectives” (as cited in Dormehl, 2014, p. 150). We need to evaluate these algorithms for accuracy and bias.

From following the discussions at professional conferences, on list-servs, and in the professional literature over the past few years, I've seen a sort of acceptance from librarians of the black-box nature of our discovery search algorithms. Like our users, by not questioning how the black box works, we can continue to have full trust in its effectiveness. Of course, it is also difficult to test the "effectiveness" of many of our search tools, since the results are often sorted by "relevance." The problem with evaluating relevance is that once you start to measure the relevance of a result, you bump up against an important question that is not part of the library discovery service: relevance to whom?

Introducing the concept of relevance automatically brings the supposed objectivity of algorithms into question. In Tarleton Gillespie's (2014) analysis of the “political valence” of algorithms, he cites six dimensions of algorithms that we can examine that have political or moral dimensions. The third and fourth dimensions are “the evaluation of relevance” and “the promise of objectivity” (p.168). As Introna and Nissenbaum (2000) have noted in their work on the politics of search engines, “experts must struggle with the challenge of approximating a complex human value (relevancy) with a computer algorithm” (p.174). Gillespie's (2014) analysis is less optimistic about the engineering attempts at relevancy: “'relevant' is a fluid and loaded judgment … engineers must decide what looks 'right' and tweak their algorithm to attain that result… or make changes based on evidence from their users, treating quick clicks and no follow-up searches as an approximation, not of relevance exactly, but of satisfaction” (p.175).

At Grand Valley State University, we have used the Summon Discovery service from ProQuest since 2009. In 2013, Summon rolled out its new “2.0” product, with a number of features that were meant to move it beyond just the traditional rows and rows of relevance-ranked results. One of those new features was similar to new work coming out of Google around the same time. In Summon 2.0, the right hand sidebar was reserved on new searches for something called the “Topic Explorer.” Topic Explorer was designed to “dynamically display background information for more than 50,000 topics” (ProQuest, 2013) by showing reference articles for the topic the user was searching for. The reference articles would “provide users with valuable contextual information to improve research outcomes” (Ibid). ProQuest did this by “analyzing global Summon usage data and leveraging commercial and open access reference content, as well as librarian expertise” to establish “large-scale, data-driven contextual guidance” (Ibid).

Behind the corporate lingo, the goal of Topic Explorer was to show reference material for broad searches by drawing on entries from Wikipedia, Credo Reference, Gale Virtual Reference Library, and more. At GVSU, we’ve had Wikipedia and Gale Virtual Reference Library enabled since we went live in early 2014.

The Topic Explorer also benefitted from another Summon 2.0 feature: query expansion. Since users don't always know the best keywords or controlled vocabulary for their particular academic search, Summon 2.0 would automatically match many common keywords with additional terms that would return results from more technical literature relevant to the user's search. The example given in the press release is a search for "heart attack." Summon 2.0 will also return results that match "myocardial infarction," the technical term for a heart attack (ProQuest, 2013). Doing a search on GVSU's instance of Summon for "heart attack" also returns the Wikipedia article for "Myocardial Infarction" (Figure 1).

Figure 1
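Query expansion of this general kind can be sketched as a simple synonym lookup that runs before the search is issued. The synonym table and the matching below are illustrative assumptions, not ProQuest's implementation, which is not public.

```javascript
// A minimal sketch of query expansion: common lay keywords are mapped to
// technical equivalents before searching. The table is a stand-in; a real
// system would derive these mappings from usage logs and vocabularies.
const synonyms = {
  'heart attack': ['myocardial infarction'],
  'high blood pressure': ['hypertension'],
};

// Return the user's query plus any technical equivalents to search for.
function expandQuery(query) {
  const q = query.toLowerCase().trim();
  const extra = synonyms[q] || [];
  return [q, ...extra];
}
```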

While the exact algorithm that chooses the terms remains a bit of a mystery, Brent Cook (2016), the current Project Manager for Summon, explained a bit about how the tool works in an email to the Summon Clients list-serv in January 2016:

"We use our log files to identify topics and synonyms for these topics. We index these topic names and synonyms using Solr. We then match the Wikipedia and encyclopedia articles to topics. In the case of Credo, we match through their topic taxonomy, which has links to their encyclopedia articles. Because of this, you will see differences between searches done on the individual platforms as compared to the Topic Explorer in Summon."

When I first began sketching out tests to measure the effectiveness of algorithms, I was drawn to the one behind the Topic Explorer because it professed to do something different from all other discovery search result sets. By returning only a single result and placing it on the right side on wider screens (mirroring Google's Knowledge Graph), the Topic Explorer is meant to say, "this is what you are searching for." Because there is only a single result, there is a confidence inherent in the design that made it a good test subject for algorithm effectiveness.

What's more, as I was in the planning stages for a project to examine the Topic Explorer, a colleague reminded me of a common search he uses to examine new search tools: "stress in the workplace." Run in our instance of Summon, the Topic Explorer returned the Wikipedia article for "Women in the workforce" (Figure 2). This seemed to be something else worth examining, since rather than mere inaccuracy, this particular result contained what appeared to be gender bias. The result was equating stress at work with working women. I tweeted about the search (Reidsma, 2015), and ProQuest made a change in the algorithm to fix that particular search (ProQuest, 2015). However, I quickly discovered that they had merely blocked the keywords "stress in the workplace" from returning a topic. A search for "stress in the workforce" still returns "Women in the workforce," more than three months after they made their change.

Figure 2

I became interested in testing not just for accuracy in the Topic Explorer, but also for situations where inaccuracy crosses the line into bias. The study of bias in search engine results has a fairly long history. In 2002, Mowshowitz and Kawaguchi emphasized the need for this kind of analysis, since “the role played by retrieval systems as gateways to information coupled with the absence of mechanisms to insure fairness makes bias in such systems an important social issue” (p.143). Indeed, many such studies have looked for anomalies in results. According to Eslami et al. (2015), “researchers have paid particular attention to algorithms when outputs are unexpected or when the risk exists that the algorithm might promote antisocial political, economic, geographic, racial, or other discrimination” (p.154). Examining problematic results in the Topic Explorer thus felt like not just a first step in building a toolkit for examining library discovery algorithms, but also a way of highlighting the moral responsibility borne by those of us who create technologies for libraries. The Topic Explorer itself is designed to change a user's behavior, to offer “contextual information to improve research outcomes” (ProQuest, 2013). As Luke Dormehl (2014) reminds us, because technology aims not to describe but to change the world, it is “a discipline that is inextricably tied in with a sense of morality regardless of how much certain individuals might try to deny it” (p.132).

Once I understood the scope of my project, I knew I needed a way to examine a large sample of searches that included Topic Explorer entries. Since not all searches have a Topic Explorer, I couldn't just wade through the ProQuest search logs. Rather, I decided to specifically capture the searches that returned Topic Explorer results. I wanted to know what the search terms were, what Topic Explorer entry came up, what the entry said, and which provider it was from. Following on Cook's comment that the Summon Topic Explorer algorithm will return different results than the native tools, I also wanted to examine what results I would have gotten from Wikipedia or Gale Virtual Reference Library if I had searched there directly using the same search terms.

First, I added a small jQuery function to our custom JavaScript file that runs in Summon. The JavaScript looks for Topic Explorer entries on search result pages, and then sends the search terms, the entry title, source, and text to a simple PHP file that saves the data into a MySQL database. I then created a web page that returned and sorted the results. The tool first groups all of the search queries together, counting the number of times the exact same keywords have been used, and then displays the results in chronological order.
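The grouping step on that reporting page can be sketched as a small pure function. This is an illustrative sketch, not the actual code running at GVSU; the record fields (`query`, `topic`, `timestamp`) are hypothetical stand-ins for the MySQL columns described above.

```javascript
// Sketch of the reporting page's grouping logic (hypothetical field names):
// group captured searches by exact query string, count repetitions, and
// order the groups chronologically by their earliest appearance.
function groupQueries(records) {
  const groups = new Map();
  for (const r of records) {
    if (!groups.has(r.query)) {
      groups.set(r.query, { query: r.query, count: 0, first: r.timestamp, topics: new Set() });
    }
    const g = groups.get(r.query);
    g.count += 1;
    g.topics.add(r.topic);
    if (r.timestamp < g.first) g.first = r.timestamp;
  }
  // Display groups in chronological order of first occurrence
  return [...groups.values()].sort((a, b) => a.first - b.first);
}
```

Grouping by the exact query string is what makes repeated searches (and their repeated Topic Explorer matches) stand out in the review interface.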

I then collected data for a few months, and reviewed the first 8,000 search queries that produced a Topic Explorer entry. In most cases, the Topic Explorer showed an entry that matched the text entered by the user exactly or very closely (or at least used the same words, if not in the same order). In one grouping on the page, you'll see a search for “transactional analysis” return a Wikipedia article of the same name, a search for “french revolutionary wars” return the Wikipedia article “French Revolutionary Wars,” and a search for “chinese herbal medicine” return the Wikipedia entry for “Chinese Herbology.” These examples are numerous, and show how well the Topic Explorer algorithm matches these general searches with relevant reference material.

The algorithm also proved to be quite good at picking up related terms in some cases, regardless of what keywords were entered. At times, this was because of query expansion, as in the “heart attack” search example. Other times, however, the Topic Explorer algorithm was able to find relevant related entries without relying on query expansion. For instance, a search for “ends justify the means” returned the Gale Virtual Reference Library entry for “Consequentialism,” a philosophical outlook in which actions are judged by their outcomes. Searching the Gale Virtual Reference Library for “ends justify the means” returns over 2,800 results. The first hit is the “New Testament,” and nowhere on the first page of results is “Consequentialism.” Here is a case where Summon's custom indexing has surfaced a potentially useful reference article in a user's search.

However, searches for specific topics often returned an overly generalized reference article, such as a search for “fetal tissue research,” which returned the Wikipedia article on “Fetus.” The topic of “fetus” isn't going to be particularly relevant to a search on fetal tissue, especially since the Wikipedia article focuses on fetal development during pregnancy. Searching Wikipedia for “fetal tissue research” returned over 800 results, none of which matched the search directly, either. The majority of the results on the first page were political entries, covering politician or political party positions on fetal tissue research. The ninth result was “Fetus.” In these situations, it seemed that Summon wasn't improving on the native search, but it wasn't hurting things, either.

However, another group of searches returned Topic Explorer entries that didn't match the search terms at all. Of the 8,000 search queries I examined, I flagged 561 as inaccurate, usually for matching an incorrect Topic Explorer entry with the search. Many of these errors fell into predictable categories, although not all. There were topical searches that returned known-item entries, like the search for “skin to skin contact” that returned the Wikipedia page for an Australian pop song called “Skin to Skin.” Often these were subject searches that returned a particular journal, such as the search for “marriage and family” that returns the Wikipedia article on the “Journal of Marriage and Family.” These are at least understandable, if we assume that the algorithm is weighing the keywords in the entry's title more heavily than other factors. More puzzling is the search for “poems” (granted, a terrible search) that returns the Gale Virtual Reference Library's article on Ralph Waldo Emerson. While Emerson was a poet, it hardly seems helpful to introduce the curious user to the entire breadth and history of poetry by offering up a biography of one American writer.

Another group of problems concerned searches for known items. Of these, many returned the wrong known item, usually due to a similar pattern of words in the titles. Some examples of this were the searches for “return of the king,” where users expected information on the Tolkien novel but instead were shown the Wikipedia entry for a documentary called “The King of Kong: A Fistful of Quarters.” Searches for the “city of god” (which could be either a topical or a known-item search) were shown the page about a non-fiction book called “Farm City: The Education of an Urban Farmer.” Many of these known-item searches that returned the wrong item concerned journals. Searching for the “american journal of transplantation” will get you the “American Journal of Sociology,” and the “journal of aquatic sciences” will offer up the “British Journal of Psychiatry.” No matter how you search, looking for “the prince” or “the prince machiavelli” will get you research on “Prince, 1958-,” the “exciting live performer and prolific singer-songwriter,” courtesy of Gale Virtual Reference Library.

Sometimes a topical search returns topical information, but on the wrong topic. While some combinations seem downright nonsensical (“women are homemakers” returns “sociology”), some can be inadvertently humorous, depending on your political or moral leanings. For instance, searching for “united states healthcare system” returns the Wikipedia article on “United States patent law,” which may or may not be a comment on the hold that Big Pharma has on our current healthcare. A search for “princess diana” returns the Gale Virtual Reference Library article on “Wonder Woman.” The search for “creation of patriarchy” perhaps rightly introduces the reader to Michelangelo's “Creation of Adam” painting, for if there were to be a starting point of the patriarchy, why not at the beginning? A sad commentary on the rising cost of college tuition might be the search for “united states egg price,” which gives us Wikipedia's entry on “Student financial aid in the United States.” Those concerned with the influence of sports in our culture will no doubt be pleased to see a search for the “culture of sports” return “The Culture of Narcissism,” and it does seem appropriate to learn more about “legal drugs in the united states” by studying “Public holidays in the United States.” Perhaps my favorite: “branding” returns “BDSM,” the Wikipedia article on bondage, discipline, sadism, and masochism fantasy role play. It's hard not to read that as a statement about corporate image creation.

While the sample of incorrect Topic Explorer associations I shared above might be read as sly commentary on our political or educational systems, other results in the Topic Explorer presented a darker perspective. This is not uncommon when dealing with algorithms. Mike Ananny (2011), writing in The Atlantic, told of installing Grindr, a location-based dating and socializing app for gay men, and looking at the algorithmically generated “related” apps. Included was a “Sex Offender Search” app. In teasing out how the algorithm decided that gay men and sex offenders were topically related, he notes that “reckless associations—made by humans or computers—can do very real harm especially when they appear in supposedly neutral environments.”

In 2013, Harvard professor Latanya Sweeney published an article showing that searching for “racially associated” names on Google and other services had a significant effect on whether you were shown an advertisement implying that the person you were searching for had been arrested. Searches for “Black-identifying names” like DeShawn and Trevon were 25% more likely to display an ad suggesting that the person had been arrested than searches for names associated with whites, like Jill or Emma. These results are not connected to whether the names actually have an arrest record, but appear to be algorithmically biased.

Of the 561 Summon queries I judged as returning incorrect Topic Explorer results, I flagged 54 as displaying potential bias (including the BDSM entry I just mentioned). That is 9.63% of the problematic search queries, or 0.68% of the 8,000 I reviewed.
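As a quick sanity check, those proportions follow directly from the counts reported above:

```javascript
// Verify the reported proportions from the study's counts.
const biased = 54;        // queries flagged as potentially biased
const inaccurate = 561;   // queries flagged as inaccurate
const reviewed = 8000;    // total queries reviewed

const shareOfProblematic = (biased / inaccurate) * 100; // ≈ 9.63%
const shareOfTotal = (biased / reviewed) * 100;         // ≈ 0.68%

console.log(shareOfProblematic.toFixed(2), shareOfTotal.toFixed(2));
```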

Mowshowitz and Kawaguchi proposed two kinds of bias that can come from matching algorithms. The first is “indexical bias,” where bias is evident in the selection of items. “Content bias,” however, comes from the content of what was selected (2002, p.143). Of the 54 potentially biased results, the majority were instances of indexical bias, where the matched subject term alone was enough to imply bias in the matching algorithm.

Indexical bias most often occurred where a search query was matched with an article that implied a social, moral, or political comment on the original search terms. For instance, a search for “corrupt government in united states” returned the Wikipedia entry for “Government procurement in the United States,” making the implication that the Government's supply chain is inherently corrupt. Another example was a search for “pollution levels in the usa” which offered the user the Gale Virtual Reference Library article on “Education in the United States.” Say what you will, we've had vigorous debates about education in America, but I don't think it's quite reached the point where we can classify it as smog.

Bias cut the other way, at times, such as a search for “history of human trafficking” that returned Wikipedia's entry for the “History of human sexuality,” an entry that may be related to trafficking but by no means subsumes the entire topic.

This isn't to say that bias was put into the algorithm intentionally. However, the end result is the same, regardless of intent. The important thing is to understand how the results were mapped to specific search terms and address the problem.

The majority of the 54 biased search results were related to women, the LGBT community, race, Islam, and mental illness. I’ve reproduced each of these categories below, including any spelling errors present in the original search queries.

Biased Results About Women

Search terms                        Topic Explorer Result
women in the holocaust              Women in the military
stress in the workforce             Women in the workforce
stress in the workplace             Women in the workforce
women in the middle east            Women in the Middle Ages
rape in united states               Hearsay in United States law
women roles in household            Gender roles in Christianity
virginity                           Sexual abstinence
history of women's rights           History of far-right movements in France
indulgences in the middle ages      Women in the Middle Ages
women in the governement [sic]      Women in the military
the birth of feminism               The Birth of Tragedy
corruption in the army              Women in the military
united states rape culture          Culture of the United States

By far, the most common problems were related to gender. Here we see the comparison that started my examination of the Topic Explorer, the result that says “women” are the same as “stress” when found in the workplace. Perhaps one of the most offensive entries suggests that researchers looking into “rape in united states” take a look at the Wikipedia article on “Hearsay in United States law,” which describes unverified statements made about an event while not under oath (Figure 3). Implying that a search for information on rape in this country is tantamount to searching for unverifiable claims goes beyond merely being incorrect; it's offensive. A search through the text of the Wikipedia entry on hearsay shows that the word “rape” never appears.

Figure 3

Another match is a bit perplexing since it seems to reflect one side of a contemporary moral debate: a search for “virginity” that produces the entry for “Sexual abstinence” implies a certain moral or political stance these days, especially since Wikipedia has a perfectly useful entry on virginity. (The word “virginity” appears on the Sexual abstinence page only three times, and two of those are in see-also references. In contrast, it appears 121 times in the Virginity entry, including in the title.) Of course, the Summon results page indicates that the default search for virginity has been expanded to include “sexual abstinence.” Removing that query expansion will return the Wikipedia “Virginity” entry. While query expansion works well with true synonyms, this expansion is a different matter. One can be sexually abstinent without being a virgin, and so including the query expansion makes the Topic Explorer entry look like a synonym for virginity.

At least one biased result has a technical explanation: when a search for “the birth of feminism” shows you “The Birth of Tragedy,” hopefully you'll understand that the algorithm was matching a pattern of words, not comparing feminism with tragedy. Likewise, the “corruption in the army” search that returns the entry on “Women in the military” is matching a pattern in the title and using a synonym: you can try this experiment yourself by putting whatever you like in place of the “X” when visiting GVSU's Summon: “X in the military” will almost always return the Wikipedia entry for “Women in the military,” as if the Topic Explorer has become a sort of Mad Libs. The same pattern is at work with many searches for “women in the,” as evidenced by the suggestion that the only role for women in the government is in the military. Matching “history of women's rights” with French far-right movements is latching on to the indexical term “right,” while missing the different context and meaning, and implying that women's rights are a form of extremism. The word “women” never appears in the entry on France's far-right movements, indicating that the algorithm is overemphasizing the keyword matching in the title.
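This “Mad Libs” behavior is consistent with a matcher that weights title keywords far more heavily than body text. The sketch below is a toy illustration only (ProQuest's actual algorithm is not public, and the weights and function names here are invented), but it reproduces the pattern described above:

```javascript
// Toy topic matcher, NOT ProQuest's algorithm. It shows how over-weighting
// title words can make "X in the military" match "Women in the military"
// for almost any X.
function scoreTopic(queryWords, topic) {
  const titleWords = topic.title.toLowerCase().split(/\s+/);
  const bodyWords = topic.body.toLowerCase().split(/\s+/);
  let score = 0;
  for (const w of queryWords) {
    if (titleWords.includes(w)) score += 10; // heavy title weight (assumed)
    else if (bodyWords.includes(w)) score += 1;
  }
  return score;
}

function bestTopic(query, topics) {
  const words = query.toLowerCase().split(/\s+/);
  return topics.reduce((best, t) =>
    scoreTopic(words, t) > scoreTopic(words, best) ? t : best
  );
}
```

With entries for “Women in the military” and “Snow” in the candidate pool, a query like “snow in the military” scores three title-word hits against the former but only one against the latter, so the women's entry wins no matter what noun is substituted for X.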

Another pattern-matching error that appears biased is the suggestion that, like “indulgences in the middle ages,” women during that time were tools to be handed out by the church to reduce the time someone spent in purgatory. As for the “stress in the workplace” search that ProQuest fixed, you can still find the biased result by switching “workplace” to “workforce.” Nearly any noun can be substituted into a search for “X in the workforce” and you'll still get the entry about women.

Biased Results About the LGBT Community

Search terms                              Topic Explorer Result
domestic violence in the united states    Domestic partnership in the United States
lesbian literature                        Lesbian fiction
crimes of the community                   Healthcare and the LGBT community

Here we have domestic violence equated with domestic partnership (Figure 4). While both of these subjects apply to both heterosexual and homosexual relationships, domestic partnership began as a way to recognize same-sex unions. With the amount of controversy surrounding same-sex marriage in the United States, this kind of “reckless association” can add fuel to an already contentious debate.

Figure 4

Another match implied that the only literature available for or about lesbians is fiction. When searching for “lesbian literature,” the Topic Explorer shows an entry for “Lesbian fiction,” as well as the “Related topics” of “science fiction” and “speculative fiction.” This likely stems from a decontextualized association between the term literature and the term fiction. However, a search for “gay literature” in Summon returns the Wikipedia article “Gay literature,” and Wikipedia has an entry entitled “Lesbian literature.” Even more perplexing is the fact that the Wikipedia entry for “Lesbian fiction” was redirected to the broader entry “Lesbian literature” in October 2015. I am writing this four months later, and the Wikipedia entry in Summon's index still has not been updated.

The final match in this category equates “crimes of the community” with “Healthcare and the LGBT community.” Implying that providing healthcare to a particular population is a crime, especially a community that already suffers from overt prejudice and discrimination, is not a stance that libraries want to be associated with.

One other topic was initially flagged, although on investigation I decided to remove it from the examination. Searching for “lgbt youth” in Summon will return the Wikipedia entry on “Suicide among LGBT youth.” There is no Wikipedia entry for “lgbt youth,” and in fact a search on Wikipedia for that topic returns the entry for “Homelessness among LGBT youth in the United States” and “Suicide among LGBT youth” as the first two results. This is an instance where the algorithm seems to be zeroing in on an entry that includes the search keywords frequently, because sadly, much of the discussion around LGBT youth today centers around suicide and homelessness.

Biased Results About Islam

Search terms                             Topic Explorer Result
muslim terrorist in the united states    Islam in the United States
islamic day of judgment                  List of modern-day Muslim scholars of Islam

In searches about Islam that returned a Topic Explorer result, Summon did consistently well. This is largely due to a number of topic-specific Wikipedia articles on different aspects of Islam, ranging from the religion's historical roots and its importance to the intellectual development of medieval Europe to its more contemporary roles in the West and its relationship with extremist groups.

However, some searches were clearly problematic, especially searching for “muslim terrorist in the united states,” which returned a Wikipedia article on the religion itself. Matching a search for information about terrorists with the entire religion only adds to the stereotypes perpetuated in the aftermath of terrorist attacks in Paris and California. This is especially problematic given that Wikipedia itself has an entire entry on Islamic Terrorism (there is also a Christian Terrorism entry). Why not bring up the page specifically about terrorism related to that particular religion? The word “terrorist” only appears once in the Islam article, while it appears almost a hundred times in the Islamic Terrorism article. Even “united states” appears 24 times in the Islamic Terrorism entry, while only appearing 5 times in the article on Islam the religion.

What's more, returning a list of individuals for a search on the “islamic day of judgment” is perplexing. Nowhere in the article does the word “judgment” appear, and except for its appearance after “modern” in the title, the word “day” is absent as well. If someone was looking for information on the Islamic “day of judgment,” what would they do with a list of modern Muslim scholars?

Searches for “muslim schools in the us” or “islamic schools in the us” return the general entry for “Education in the United States,” perhaps suggesting to some that all schools in the United States are muslim. But this is again an example of naive pattern matching, it seems. Like the earlier Mad Libs searches, you can enter nearly any noun you like into the pattern “X schools in the US” and get this result. This would seem to be more evidence for the additional weight of the title of the topic entry.

Biased Results About Race

Search terms                                          Topic Explorer Result
white slavery                                         Moral panic
the black woman                                       The Woman in Black
early childhood development policies in south africa  Apartheid in South Africa
forced marriage united states                         Desegregation busing in the United States
race in the media                                     Race and crime in the United States
crime definition in the united states                 Race and crime in the United States
race in the world                                     Race and crime in the United States
teaching about race                                   Race and crime in the United States
history of poverty in america                         History of slavery

If you search in Summon for “white slavery,” the Topic Explorer will present you with a Wikipedia entry for “Moral Panic.” (Figure 5) Searching for “black slavery,” meanwhile, returns a blank topic explorer pane, perhaps because Wikipedia itself will offer you nearly 10,000 possible entries related to black slavery. But why is white slavery related to moral panic, while black slavery is not? It’s true that the Moral Panic article uses the phrase twice (although one of those is in the references), but Wikipedia itself also has a perfectly suitable entry on “White Slavery.” How could the algorithm make this kind of a judgment about a topic?

Figure 5

Most of the results so far have concerned indexical bias, but there are times when Topic Explorer results show both indexical as well as content bias. Searching for “the black woman,” a user will be presented with a Wikipedia entry for The Woman in Black, a horror film from the early 1980s. Strangely, Summon’s autocomplete algorithm seems to recognize that this search is probably a known-item search for The Black Woman: An Anthology, but the algorithm for autocomplete must not share what it has learned with the Topic Explorer. The content of the Wikipedia article describing the film, however, suggests that what you are looking for is a “menacing spectre that haunts a small English town.” Of course, what is throwing off the search is the addition of the definite article (common in known item searches). A search in Summon for “black woman” brings up the Wikipedia entry on “Black people,” which is redirected from the “Black woman” entry.

Making associations between business practices or child-rearing techniques and Apartheid, the institutionalized segregation in South Africa that lasted until the mid-1990s, is morally problematic. Likely this is because the algorithm again latches on to a pattern of words, including “in south africa,” and uses the Apartheid article as a go-to result. In fact, nearly any noun you add to the phrase “X in south africa” will return this result. From “freedom” to “snow” to “soccer,” nearly any search seems to tell the algorithm, “you must be interested in apartheid.”

A further association equates forced marriage in the US with desegregation busing during the Civil Rights movement. Since “marriage” doesn't appear anywhere in the busing article, I'm unsure of what to make of this connection. The title of the article used in Summon was actually redirected to the “Desegregation busing” entry in January of 2013, before Summon 2.0 launched to the public, which raises another question: how often are these entries updated?

Many searches that involve crime or race on their own will show Wikipedia's entry on “Race and crime in the United States.” I've shown a selection of four such searches from the 8,000 entries I examined. In the case of a search for a definition of “crime,” I'm especially perplexed as to why the Topic Explorer would choose the Race and Crime entry, since Wikipedia's entry “Crime” contains the word “definition” 14 times (although some of those uses concede that it is hard to agree on a consistent definition). But why bring race into question if the user only asked about crime? And why bring crime into the conversation if the user is asking about race alone? The Wikipedia entry on Race (human categorization) contains a total of six uses of the word crime, although four of them are in “See also” notes. (Race (biology) has no mentions of crime.) This seems to be a case where the algorithm is suggesting possible cross-disciplinary connections, but it does so in a way that amplifies stereotypes. The same can be said of suggesting that poverty is a result of slavery, which is heavily tied up with race. Are we to assume that poor people are still slaves, or that only those whose lineage includes being enslaved are poor? The connection is not helpful from a research standpoint, and so we are left trying to make moral or political associations between the topics.

What's more, searches for items of importance to Africa, the continent, tend to default to topics about African Americans. For instance, “african history” and “african culture” both return articles about African-American history and culture. This isn't dependent on GVSU being an American university, either. The result also appears at the University of Huddersfield in the UK as well as at the National University of Singapore, both of which use Wikipedia as a Topic Explorer source.[1] This effectively implies that Africa itself doesn't have a history or culture of its own. Wikipedia has perfectly serviceable articles on the “History of Africa” and the “Culture of Africa,” each of which can be found by searching Wikipedia natively with the same keywords as the Summon search.

Biased Results About Mental Illness

Search terms                    Topic Explorer Result
mental illness                  The Myth of Mental Illness
stigmas of mental illness       The Myth of Mental Illness
the stigma of mental illness    The Myth of Mental Illness
the suffering of illness        The Myth of Mental Illness
signs of mental illness         The Myth of Mental Illness
symptoms of mental illness      The Myth of Mental Illness

Figure 6

Nearly every search I've conducted in Summon that includes the words “of Mental Illness” shows me the Wikipedia entry for a controversial book by the psychiatrist Thomas Szasz entitled The Myth of Mental Illness. Szasz's quarrel was with the way that psychiatry and psychoanalysis were practiced in the mid-twentieth century; the title of his book was not intended to suggest that mental illness does not exist. Yet Summon consistently returns this strongly suggestive result whenever users search for topics related to mental illness. Wikipedia itself has a very long and in-depth entry on Mental Disorder, redirected from Mental Illness, which even includes a sentence about the role Szasz played in the development of the legal and psychological understanding of mental illness in the twentieth century. Yet none of this is presented to our Summon users. Rather, any search for mental illness (or, in some cases, just “illness”) gives a headline that suggests that the topic they are studying is nothing more than a myth.

Next steps

Since the goal of the Topic Explorer is to identify the underlying topic behind a user's search, incorrect or biased results can have a great impact on a user's perception of a topic. By showing results that exploit stereotypes or bias, the Topic Explorer is saying to the user, “this is what you are looking for.” The purpose of my examination was to bring these anomalies to light and start a discussion within the library community about how to improve our search tools for everyone.

Figure 7

After examining the data, I took several steps to improve Summon’s Topic Explorer for my users. First, I made some modifications to the user interface of the Topic Explorer to give users the ability to report incorrect or biased Topic Explorer results, as well as offering some contextual information on how the Topic Explorer result was chosen (Figure 7). Second, I shared the analyzed data with ProQuest’s Summon team, so that they had data on search queries done by users and the Topic Explorer results that were returned. These two steps were meant to address the results that were either incorrect or biased. By sharing the analyzed data with ProQuest, I hoped that the algorithm could be improved so that biased results would be less common. And by offering contextual information about the Topic Explorer as well as a button to report a problem, I hoped to not only encourage reports but also to remind our users that these tools are not oracles, and the truths they claim can and should be questioned.

The third change I proposed was more of a byproduct of the analysis. When Summon first launched, we left the default Topic Explorer sources in place: Wikipedia and Gale Virtual Reference Library. When the Topic Explorer first launched, much of the discussion on the Summon list-serv was about the “reliability” of the reference sources. Wikipedia was scorned as a source, and some academic libraries still only provide Topic Explorer results from subscription reference sources like Credo and Gale Virtual Reference Library.

While analyzing the data, I discovered something that I wouldn't have suspected before wading into the details of how the Topic Explorer works. The Topic Explorer attempts to give a quick and dirty summary of a topic rather than a lengthy treatise. Since Wikipedia has a style guide that requires authors to start their entries with a short summary, Wikipedia entries are perfectly suited to being broken up and shown in bits and pieces. Gale Virtual Reference Library, however, uses a more typical encyclopedic style that assumes the reader will take in the entire article at once. When Summon pulled out the lead paragraph of a Gale Virtual Reference article, then, it often seemed out of place with the search terms and the Topic Explorer heading.

For instance, a search for “curiosity in children” in Summon returns the Gale Virtual Reference Library entry on “Parenting styles,” which begins:

The study of human development is centrally concerned with understanding the processes that lead adults to function adequately within their cultures. These skills include an understanding of and adherence to the moral standards, conventional rules, and customs of the society. They also include maintaining close relationships with others, developing the skills to work productively, and becoming self-reliant and able to function independently. All of these may be important to successfully rear the next generation.

This paragraph hardly gives a summary of either children's curiosity or parenting styles! Searching for the “freedom of religion act” returned a Gale article on “Freedom of Religion,” which begins, “The most authoritative statement of Catholic teaching on religious freedom is the Declaratio de Libertate Religiosa of VATICAN COUNCIL II” before wandering off into a description of Vatican II's role in changing the face of the Catholic Church in the twentieth century.[2] I'm not sure that helps a novice researcher looking for information on the US Government's 1993 Religious Freedom Restoration Act, which, incidentally, Wikipedia has a terrific entry on.

In addition, because GVSU doesn’t subscribe directly to Gale Virtual Reference Library, we found that our access to the materials didn’t always give our users the most up-to-date information. We get most of our GVRL content from the Michigan eLibrary, which purchased content in 2005 but hasn’t updated it since. As a result, the entry shown for a search on Osama bin Laden details his life up until 2005, but includes nothing of the later developments in the War on Terror or of his killing in 2011.

I've presented my findings to our electronic resources and collection development librarians, and hope they will agree to switch from Gale Virtual Reference Library to Credo Reference as our second source, since Credo lends itself to breaking out summary articles and we have a current subscription, allowing us access to up-to-date content.

The Topic Explorer was a good first step in analyzing discovery algorithms, since its message was different from the rest of our search results. Moving forward, I hope that ProQuest will take steps to improve the algorithm that matches search results to Topic Explorer entries, while I continue to examine these “black box” algorithms for accuracy and, most importantly, bias.

Thank you to Annette Bailey, Hazel McClure, Kyle Felker, and Patrick Roth for conversations and suggestions while I worked on this project. Thanks also to Brent Cook of ProQuest for taking the research seriously, and bringing the results to the development team.

Notes

Of course, it’s possible that Summon is checking my IP address when doing a search and localizing. This would be common with a Google search, but I wouldn’t think it would be a good feature for a library discovery platform, since students and researchers may be living or traveling abroad, and their current location does not change the nature of their research needs. ↩

It also seems that Summon’s tool for scraping content from Gale drops any text set in italics or in capitals, so the actual entry shown in the Topic Explorer reads: “The most authoritative statement of Catholic teaching on religious freedom is the of .” ↩

Turkle, S. (1997). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.

Vaidhyanathan, S. (2011). The googlization of everything (and why we should worry). Berkeley: University of California Press.

Edited 1/24/2017 to remove links to web page showing all results, as this tool was removed from the GVSU Libraries server. Analyzed results are still available at http://dx.doi.org/10.5281/zenodo.47723.

Matthew Reidsma, Mon, 19 Feb 2018

Improving the Catalog, a Retrospective
I get asked from time to time about the changes I've made to our OPAC, and I usually have to dig around to find the Worknotes I've written about them or the code that helps make the changes.

This week I collected all the entries and code snippets I could find for a colleague, and thought I'd post them here in case they are useful for others. If you use Sierra and the WebPAC Pro, check out these resources, since they might be useful for you.

Live code for GVSU:

If you have any questions about modifying WebPAC Pro, drop me a line. I also have a book coming out in May that walks you through creating JavaScript modifications for vendor tools, and I use the OPAC as an example quite often, walking through the changes and the code step by step.
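To give a flavor of what these JavaScript modifications to a vendor OPAC look like, here is a minimal sketch of the general pattern: rewrite confusing vendor-supplied labels into plain language after the page loads. The selector (`a.navLink`) and the label text are invented for illustration, not taken from GVSU's live code; real WebPAC Pro markup varies by installation and release.

```javascript
// Sketch of a common OPAC customization: replace catalog jargon in link
// labels with plain language. The class name and labels are hypothetical.

// Map vendor jargon to friendlier wording; unknown labels pass through.
function friendlyLabel(label) {
  const replacements = {
    "Request Item": "Get this for me",
    "Marc Display": "Librarian view",
  };
  return replacements[label.trim()] || label;
}

// In the browser, rewrite every matching link after the page has loaded.
// The guard lets the same file load harmlessly outside a browser.
if (typeof document !== "undefined") {
  document.querySelectorAll("a.navLink").forEach(function (link) {
    link.textContent = friendlyLabel(link.textContent);
  });
}
```

The point of the pattern is that the vendor's HTML is never edited directly; the script runs on top of whatever the WebPAC serves, so it survives most vendor updates (and fails soft, leaving the original label, when a string it doesn't know about appears).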

In March 2015, I gave the final-day keynote at the UXLibs Conference in Cambridge, UK. This is an adaptation of my talk. You can also watch the video, if you prefer.

Let me tell you a story.

In February of 1962, John Glenn became the first American to orbit the earth. He trained for three years for the five-hour flight, and he spent most of his time in the space capsule working through a series of tasks that the NASA engineers had set for him: experiments that would help them better understand space travel for future flights.

While the engineers thought of the space mission as a series of tasks, the American public was mostly ignorant of these experiments. If you ask my father what he remembers most from this mission, he’ll talk about a sweeping sense of American identity, an opportunity to redefine ourselves by reaching out into the stars. In fact, for the American public the most memorable part of this journey was a phrase that Glenn’s fellow astronaut Scott Carpenter said as Glenn prepared to take off. It immediately became part of the American lexicon: “Godspeed, John Glenn.” That emotional phrase resonated with the regular Americans at home, watching Glenn reach out for the heavens.

But you won’t encounter that phrase anywhere in the official record of this mission, because it wasn’t important to NASA. In fact, Glenn himself never heard the phrase, and only learned about Carpenter’s sendoff when he returned to Earth and read about it in the newspaper.

As a kid, I used to read the flight transcripts at the Herrick Public Library while my grandmother worked in the genealogy room. And the part of this mission that has always interested me wasn’t the dull and boring experiments, but rather one moment a few hours into the flight, when Glenn begins to notice something unusual happening outside the cockpit of the Friendship 7. I want you to hear him talk to Ground Control:

Glenn: I still have some of these particles that I cannot identify coming around the capsule occasionally. Over.

Ground Control: Roger. How big are these particles?

Glenn: Very small, I would indicate they are on the order of a 16th of an inch or smaller. They drift by the window and I can see them against the dark sky. Just at sunrise there were literally thousands of them. It looked just like a myriad of stars. Over.

The experience of these particles outside his space capsule would stay with Glenn for the rest of his life. But Ground Control was only interested in how these particles might relate to the tasks they had set forth for Glenn: would they damage the spacecraft? Would they affect the conclusions of the experiments? Once they realized that the particles posed no danger to Glenn, they started thinking of them as a task. They said, “tell you what, ignore them. We’ll study them on a subsequent mission.”

I don’t know about you, but if I’m flying a space craft for the first time, and there are a billion luminescent sparkles floating around me, I’d have a hard time ignoring it. And Glenn’s curiosity for what he experienced was far more than simple scientific observation. He would later say that, as a man of faith, his first thought was that perhaps these were guardian angels or other heavenly bodies sent to guide him through the darkness of space. And 35 years after the orbit, as Glenn prepared to go into space again for the last time in his life, he recounted that no one could experience the stars, or “fireflies” as he was fond of calling them, without believing in a god or higher power. And that was 35 years after NASA figured out what these fireflies were. They weren’t guardian angels, they weren’t heavenly bodies. They weren’t even tiny stars.

It was pee. It was John Glenn’s urine, ejected from the space capsule, frozen into thousands of tiny drops, catching the light of the rising sun.

A Myriad of Stars

John Glenn’s experience of flying through a starfield of his own urine became the defining, transcendent moment of his life.

We don’t talk about this particular episode in Glenn’s life very much, and I think it’s because we’re a little embarrassed for the great pilot. He accomplished so much, and yet decades after this fairly disgusting experience, he was still moved by it.

But I think that this story is helpful for us when we try to talk about what we do as User Experience designers, because it highlights the distinction between thinking in tasks and thinking about experience. They are not the same thing.

Tasks ≠ Experience

Now in libraries we’ve been thinking a lot about User Experience, and that’s great. We’ve stopped designing single screens and individual steps in a process, and we now think in user journeys. We understand how people move from one system to another and from one place in the library to another, across silos. But like Ground Control, we still often think in terms of tasks. We think only about what steps each user must take along the way to complete a task. But the thing is, experience is different than just working our way through tasks. There is more to it.

We often think that experience is like tasks plus emotion. You see this in a lot of writing about UX and “emotional design.” For Aarron Walter in Designing for Emotion and Don Norman in Emotional Design, emotion exists in a causal relationship with tasks.

If we make something hard to use, then people will get mad. If we make it easy, then they will love us.

There is some truth to that. But it makes an assumption that all of our users come to us from an emotionally neutral place. Anger arises only because the button is the wrong color blue.

But that is not how we experience the world. We’re not just moving through the world checking off tasks.

Now, I’m not saying that we shouldn’t focus on tasks. Focusing on tasks is essential to what we do. You cannot understand how someone goes through the actual steps of requesting a book, renewing something, or using one of your services without thinking in tasks. But we like tasks because they are easy to quantify: how many people successfully renewed a book? Have the numbers gone up since the last redesign? Those numbers can tell us one kind of story, but our users’ experience tells us another kind of story, one that helps us better understand the full picture of what they go through in the library.

We tend to talk about transportation as if the ultimate goal were mere movement, measured in speed, time and capacity. The ultimate goal of transportation, though, isn’t really to move us. It’s to connect us – to jobs, to schools, to the supermarket.

And she started by defining how we normally think about transportation, and how maps are defined for that kind of thinking. It’s a very task-based way of thinking about transportation. “I’m going to move from one place to another.” But we don’t experience transportation as mere movement; we experience it as something more. Because we’re people that are in the world, caught up in all kinds of activities. We have reasons for wanting to do things, emotions, social and cultural and historical context that helps influence the way that we move through the world and do things.

This is as true about transportation as it is about libraries. Hugh Rundle reminds us that our users do not think in the same task-based silos that we do. This is the level of task thinking: find books, find magazines.

Your members don’t come to the library to find books, or magazines, journals, films or musical recordings. They come to hide from reality or understand its true nature. They come to find solace or excitement, companionship or solitude.

Rather, they come to us from a bigger place. A place that is about experience, and life, and culture, and history. A messy place, that’s harder to nail down on a user journey.

Now most libraries define their missions according to helping people access information. And we are good at doing that, and we need to keep thinking about this, but maybe it’s time we thought a little deeper about our mission. Because our focus on information access keeps us locked into those task-based workflows. If we’re only focused on helping people access something, then we’re always going to be focused on the steps that make access happen.

[Libraries] let people transform themselves through access to information and one another.

We’re about transforming people, and that involves access to information, but it also involves access to people, to community, to librarians, to other users. This is a different way of thinking about our work, one that is on a higher level than task-based thinking. We’re thinking about the experience of the people involved, and what happens when they use the library.

It’s about people.

Do you know the story about the astronomer who gave a lecture about the galaxies and the universe? And afterwards, an old woman came up and said, “that was very clever, what you said. But it was rubbish. The universe sits on the back of a turtle.”

And the astronomer said, “Ah ha! But what does the turtle sit on?”

And she said, “Don’t get smart with me. It’s turtles all the way down.”

The library is people all the way down. Where did the information that we curate and provide access to come from? Did it appear fully formed? People wrote that. People come to us to find that information. We like to complain about library websites, and how many links they have. Do you know where those links came from? People put them there!

The Library is People All the Way Down

When we can shift our thinking, and be able to think about tasks on the micro- level and experience on the macro- level, we can change the way we help people transform themselves.

Now, we’re about 15 minutes in, and I’ve been yammering on about experience for most of this time. But what the hell is experience, anyway? We haven’t defined it. It sounds very touchy-feely. And also, you’re probably asking yourself, “didn’t they invite this guy to talk about usability? What does this have to do with usability?”

Well, I’m glad you asked.

This is the part of the talk where the committee realizes that they inadvertently invited a former Philosophy lecturer to talk to you about design. (The doors have been locked.)

I started thinking about this connection between experience and usability, while I was sitting in a rather depressing Homeland Security Immigration and Naturalization office in Grand Rapids, Michigan, a couple months ago. I was there filling out a lot of paperwork to get my work visa, so I could come and speak to you today. I don’t know if you’ve ever been to an office like this, but they’re pretty much like every other government office. There are about 300 chairs, and 7 people. And while I waited for my number to be called, there was a television playing a loop of information videos, over and over and over. And I tried my best to ignore the TV, but eventually I gave in.

And about a minute after I started watching the first video, I heard a word that I now hear everywhere, since what I do for a living became something everyone is interested in: usability.

The video was about a service called “Self-check.” It’s a website for people in my country to verify their eligibility for employment. The video said, “Self check is available in Spanish to improve usability.”

I looked around the room, and started wondering who the target audience was for Self-check. I was born approximately 35 miles from where I was sitting. I’ve lived in the United States my whole life. This is not a website that I would consider using when I was applying for a job. I’m not the target audience.

The target audience is people who may not be able to work in the United States. Maybe they are in the country illegally. So let’s think about the process that the designers went through, when they built this website.

They said, it’s going to be simple. All you need to do to find out if you are eligible to work in the United States, is go to this website, tell us what your name is, tell us where you live, tell us your phone number, tell us all the people that live with you, and then we’ll tell you whether or not you can be in the country and have a job.

And, just so that it’s usable, we’re going to make sure to offer it in Spanish.

So I thought to myself, if you’re using this system, can you remember the address you lived at five years ago, if you’re worrying about your children being deported? Can you click the links if your hands are shaking? Is usability all about tasks? Is that all we have for people?

Usability

When we think about usability strictly as task-based, it becomes an attribute of whatever it is we are designing. We think about it in terms of removing friction. We think about it in terms of intuitiveness. Those are fine things to think about, but thinking only about reducing friction leads us to the idea that making something usable is a matter of making the words we use understood by the users.

And let’s be honest, the world is a bit more complex than that. The problem with thinking of things as experience is that we have to think beyond the tool. We need to understand that the important part happens only when people are using the tool or service, and that the experience goes beyond the steps they take to accomplish a goal.

When we evaluate our services and tools, we have to take that into account. It’s helpful to have a framework to understand what we mean by experience. So let me see if we can get closer to defining experience.

In our work, in the literature, today at this conference, we use the word experience over and over, but we never stop to ask what we might mean by that. Like usability, we tend to make assumptions about the meaning, and build our work around them.

One framework I like that helps us think about the two levels I’ve talked about, the task-based micro level and the experiential macro level, was introduced by two researchers, Bonnie Nardi and Vicki O’Day, in their 1999 book Information Ecologies: Using Technology with Heart (MIT Press). They write about how people use metaphor when they interpret the world: we think of new things as being like other things. This tendency helps us frame and illuminate certain aspects of new things, but it also obscures other aspects.

Frog legs taste like chicken. Well, what if they taste like something else, too?

Arguments are like war. We actually understand the argument as a war. (I’m told that arguments can also be like diplomacy.)

Nardi and O’Day focused on technology, and the metaphors that we use to understand it. Their definition of technology is quite broad, as is mine here in this talk. We’re not talking about just “gizmos,” gadgets, or things that plug in. We mean “knowledge applied to practical ends.” This could be an app, but it could also mean a service, a coffee cup, a carpet. Anything we use to try to make experience better is a technology.

The most common metaphor for technology is a tool. This is where we spend most of our time as designers. And this is the land of task-based thinking. The benefits of this approach are that we keep focused on the fact that people will be using our design. If we’re designing a hammer, we know that someone is going to need to grip the hammer, which helps us focus on designing something that can be gripped. But we’re still working through all the steps involved in traditional use. We think about each step involved in using a hammer: picking it up, adjusting it for the right balance, placing the nail, tapping to start, swinging back, and on and on.

Those questions are essential to making things that can be used by people. But they are not necessarily the right questions for creating great experiences. And that’s because when we actually experience technology, in the act of using something, our experience is not the same as it would be if we were consciously working through a task.

We experience most of the different things and technologies that we use as extensions of ourselves. We’re not aware of the hammer as a hammer when we’re hammering a nail in the wall. We are not thinking to ourselves, “Wow! I’m going to swing it like this, and hit the nail, and then swing back again, and...”

If you do that you will have a hole in your wall.

O’Day and Nardi propose another metaphor for thinking about technology, a higher-level metaphor: an ecology. An ecology is a system of people, technologies, values, and culture in a local environment. A library is a great example of an ecology. You have librarians, users, databases, books, indices, newspapers, microfilm, computers, and coffee, all interacting in a big, messy way.

The thing about the ecology metaphor is that it highlights the interconnectedness of all of these different things coming together in one place. It emphasizes the co-evolution of technology and people. It’s about people and tools together.

If we can pull back from task-based thinking and think more in terms of ecologies, then as we’re planning and designing we can move beyond thinking about the next tool. Thinking in tasks obscures how many of those interrelationships play out in our users’ lives.

A good science fiction story should be able to predict not the automobile, but the traffic jam.
Frederik Pohl

We don’t want to just make the next tool. We want to understand the social and cultural changes that introducing the next tool will bring about.

Cameron Tonkinwise has a definition of design that I really like. We’ve heard several definitions of design this week, like ‘design is solving problems.’ And I like that definition, and it can be useful, but I’m not sure that everything that is designed is solving a problem. A lot of things out in the consumer marketplace that have been designed are designed to solve the problem of not enough money in the company’s bank account.

So I think there is another way to think about design. Tonkinwise says, “Design is doing philosophy with your hands.” And what I like about this definition is that while the problem-focused definition of design keeps us focused on the task level, if you’re doing philosophy, you have the ability (and one might say responsibility) to step back and think about your work critically. To gain a broader perspective, to understand more about how people are actually experiencing the things that you make.

Design is Doing Philosophy with Your Hands

So, when we think about how we experience the world, most of us imagine a subject (us) existing in the world and moving around interacting with the things (objects) of the world. I’m a subject, this glass is an object, and I can scientifically observe it. We have Descartes to thank for this. Cogito ergo sum: I think, therefore I am. A person is separate from the world, and our language reinforces this perception: subjects and objects.

But do you experience the world this way, as you go about your daily life?

Apologies, this is the philosophy part.

Martin Heidegger believed that we don’t exist in the world as subjects acting upon objects, rather, he described our existence as being-in-the-world. The hyphens are there because your existence and the world’s existence are not separate. They happen together.

We should also be careful about that word “in.” Being “in” the world is not being contained by the world, but rather, for Heidegger, being involved with the world. So we exist through our involvement with the world. Our existence is dependent on interacting with things in the world.

If we take this framework as a way to understand experience, then by interacting with the world, we’re creating meaning. We’re making sense of the world and ourselves through those interactions. We’re not identifying things that already exist; rather, those things reveal themselves to us through our interactions with them.

The branch of philosophy that deals with experience is called phenomenology, and it’s basically describing what it is like to experience the world. I love how one of my undergrad professors, Corey Anton, tried to describe the way we actually experience the world. He said that we don’t experience the world as a separate body moving around in a world, a subject acting upon objects. Rather, your face is a hole in your neck where the world opens up. That’s how you experience the world.

So now, if we think about existence as meaning making, as interacting with the things of the world, we can start to think differently about usability. If interacting with things is how we make meaning out of our lives and the world, then usability has to be more than just keeping things “easy” to use. It’s about making sure that people can have meaningful interactions with the world.

And when we start thinking about usability as helping us make meaning, from thinking of our existence as interdependent on the existence of the things of the world, we can move beyond the idea that the pinnacle of usability is perfect functionality.

Task-based thinking is very functional. But we experience the things that we use in a very different way than we talk about them as we design them.

I want you to think about how you feel when you leave your phone at home. Is it the same as if you left a blank piece of paper at home? Or a notebook you’ve never used? Does it feel a bit more like you accidentally left the dog outside? Or left one of your children on a bus?

When we think in tasks, we think that this is because we’re emotionally attached to the object. It’s given us good experiences, so the phone has generated an emotional attachment. And that’s probably true to a point.

But when it comes time to get a new phone, do you have a ceremony retiring the old one? While there may be some lingering nostalgia for the way something worked on an old phone, we continue to use the new ones. That’s because our primary concern with the object is in its usefulness to us. This also explains why we keep using things that are hard to use, because the value of what we are able to accomplish makes the struggle worth it.

When Heidegger talks about how we interact with the things of the world, he says that there are basically two modes, and we need to understand them to really understand usability from the perspective of experience design. He says that objects are “present-at-hand” to us when we know them scientifically. This is a glass: I’m looking at it, observing it, noting its size and weight and attributes. But we don’t know most of the things in the world as present-at-hand most of the time.

Most things present themselves to us as usable objects; they are, for Heidegger, “ready-to-hand.” All of you in this lecture hall are sitting in chairs. How many of you have been sitting here for the past 35 minutes saying, “I’m sitting in a chair. I’m sitting in a chair”? You were not aware of the chair as a chair; you were acting through the chair. The chair has effectively become invisible as a separate object, because it has become an extension of your body. For Heidegger, your conscious awareness is not interested in the chair because it has presented itself as something usable, and you are able to act through it.

Martin Heidegger

As we act through technology that has become ready-to-hand, the technology itself disappears from our immediate concerns. We are caught up in the performance of the work.

When you take a phone call, you’re not thinking about how to hold the phone, where the microphone is, where to hold the speaker. Unless, of course, you have a phone like mine that requires you to carefully position it in such a way so that you can hear people on the other end, and they can hear you. For Heidegger, at that moment when you become aware of the phone as an unusable object, it becomes present-at-hand to you. Once you shift back into using the phone and your conscious awareness moves back to what you are doing through the phone, the object again becomes ready-to-hand.

Now these shifts between present-at-hand and ready-to-hand, Heidegger calls “breakdowns.” This is starting to sound a lot like a usability problem. If something is intuitive, we just use it. The chair, for instance, is intuitive. We don’t usually have to think about the chair to use it.1 When the chair breaks, it becomes unusable and we are aware of it as a chair. We’re suddenly thinking about it! Even when I said the word “chair” to you a few minutes ago, when you were all just sitting in your chairs listening, your chairs quickly came to your awareness: “Oh, I am sitting in a chair!” You’re probably thinking about the chair again right now, as a matter of fact.

But this isn’t really a usability problem. Too often we think that anytime we have to think about the tool, then it’s somehow become unusable. We spend a lot of our time trying to keep our users from thinking too much about our tools. And while I agree that we should strive to make all of our tools easy to use and intuitive, we have to acknowledge that breakdowns are inevitable. If we think about how things operate in an ecology, it’s not just a user in isolation using the thing that you built, with no other distractions in their life. There are so many things beyond our control; breakdowns will happen. Because we are designing things for people, who exist in a cultural, social, and historical context.

Now the thing is, breakdowns are often self-correcting, but when we typically think of usability issues, we assume that they are not. Let’s say you are in a cafe, trying to get some work done on your laptop. You’re sitting at a small table, and using a wireless mouse because you hate the trackpad. As you move the mouse across the table, are you aware of the fact that you are using a mouse? No, your body is acting in such a way that the pointer on your screen and the mouse in your hand are extensions of your body.

Now, if the mouse goes off the side of the table, you are suddenly aware of the mouse as a tool that you are using, because its usefulness disappeared once it left the hard surface. In typical usability thinking, we’d want to find a way to stop the mouse from going off the table, because we don’t want it to stop working! But is it really a usability problem? No, because you already understand how mice work, and so you just move the mouse back onto the table and get back to work. For a moment, during the breakdown, the mouse became present-at-hand to you, but you recover without much delay and it again becomes an extension of yourself, a ready-to-hand tool.

Because you understand how the mouse works, it’s easy to recover. When we think in tasks, it is harder to recover from breakdowns, because tasks are generally linear, and they build upon each other, step by step by step. If you have an issue with one of the steps, task-based thinking doesn’t give us an easy way to get back on track and pick up where we left off.

There is a constant movement between present-at-hand and ready-to-hand in everyday life, and designing with that movement in mind is the job of experience designers.
Thomas Wendt

I think that this is the key difference between designing for tasks and designing for experiences. That moment when you shift from ready-to-hand to present-at-hand is unique to thinking about things as experiences. That’s our realm, as experience designers. How do we design things that get people back to using them after a breakdown? If breakdowns are inevitable, how do we empower our users to self-correct when a breakdown occurs?

If breakdowns are inevitable, we need to reconsider our understanding of usability. The goal of usability is generally seen as making our services or tools perfect, eliminating any possibility of a problem or issue. We run usability tests, and watch for the parts where people can’t figure something out, and then we try to design a better label, or link, or button, so that they understand what to do.

I think we should try to make things easier, and eliminate the problems. But that won’t keep people from having issues. There will still be breakdowns. And so usability for us should become more about how to get people back on track. How they recover from a breakdown.

User-friendliness is not merely an issue of the number of errors made per unit of time. It is rooted in the confidence of being able to handle disruptions.
Klaus Krippendorff

How do we do that? You likely can’t walk into work and write up on the whiteboard, “Being-in-the-world,” and then kick back to applause from your coworkers. However, I think there are a few ways that we can shift the way we work.

Rethink Usability

We need to change the way we think about usability. It needs to be more than just perfect task-execution. Usability can be more about helping people better understand our tools and services, so they can recover from the inevitable breakdowns.

Test to Learn, Not Just Perfect

When we test our services, we should think more about testing as a way to uncover coping techniques. For Heidegger, moving from ready-to-hand to present-at-hand and back was called coping. We are always coping with the world. (If you’ve used the doors here at Cambridge University, you know what coping is.) By testing, we can see how people recover from breakdowns, and learn how to design those ways of recovery right into the tools.

Design for Breakdowns

Last, I think we need to design for these breakdowns, rather than designing to avoid them. By acknowledging that they will happen, and making sure that there are plenty of ways to cope and recover, we’ll make our tools better for everyone.

Experience design is more than just making cool new things that make people happy, it’s more than just making effective services that help people get things done. We’re helping people become the best versions of themselves. And I think we have a unique responsibility as experience designers and researchers, to take that seriously. The things we design are not just dumb control panels with switches and buttons, they are an extra way for us to help people transform themselves.

We make so many connections here on earth. Look at us—I’ve just met you, but I’m investing in who you are, and who you will be, and I can’t help it.
Fred Rogers

UX Lib was unlike any conference I've ever attended, and has been from the start. When I was asked to keynote, I was also put on the organizing committee (along with another of the keynoters, Paul-Jervis Heath). The idea was that we'd have a better understanding of the conference if we helped make it, and so we could create something bespoke for the event. I can say that I've never felt comfortable talking about phenomenology at a library conference before, but by working with the team that created and pulled off this unique event, I knew that this was my chance. (Not that I didn't panic, even up until 15 minutes beforehand, that this talk would be a flop. I rewrote it five times in the weeks before the event.)

Thankfully, this wasn't the case. Afterward, I received the highest compliment I've ever gotten for a talk: Andy Priestner, the committee chair, told me that he was so engrossed in what I was saying that he kept forgetting to tweet about it. (He'll deny that, of course, but Ned Potter made the mistake of putting it in writing in the last paragraph here, so I'm satisfied.)

[Edit: Donna Lanclos later called it "the best damn keynote talk I’ve ever attended," which made me blush and is crazy since she was at her own keynote a few days before.]

I spoke about the difference between designing for tasks and designing for experience, and how the latter often looks remarkably like the former, but with some added emphasis on the types of emotions that doing a task might cause. But life is bigger than that. Emotions aren't just the by-product of doing things, and experience is more than just emotions and tasks. By exploring a few themes—the metaphors by which we understand technology, how phenomenology can help us learn about experience, how Heidegger saw our interactions with the world—I made the case for rethinking usability from the removal of problems to designing things that enable users to recover from the inevitable breakdowns that occur when things are used by people.

The conference was largely about immersing ourselves in UX—ethnographic research, ideation, prototyping, and selling our ideas. But one of the most amazing things that came out of UX Lib wasn't part of the official conference at all; it was a Twitter account that sprang up during the conference that helped explain UX in terms of stories that most folks are already familiar with: the world of Harry Potter. Cambridge's UX Lib was indeed a lot like @HogwartsUXLib, with the medieval buildings and grand halls and traditions. Sometimes it can be tough to talk about how UX plays out in our day-to-day work, but the genius behind @HogwartsUXLib manages to explain UX concepts like personas and ethnographic research by taking us out of our everyday and into something fantastic but familiar: the world of Hogwarts School of Witchcraft and Wizardry.

The account is so good at staying in character, and not just with clever things to tweet to the world. When my pal Matt Borg and I got lost in Cambridge walking between venue spots, the best help we got was from @HogwartsUXLibs:

I'm still digesting the experience of being at, participating in, speaking at, and downright living UX Lib for a week, so there will be more to say. If you're interested in finding out more now, I have seen several wonderful write-ups for the conference. It was an experience that most of us who attended won't soon forget. (In fact, I've seen nearly as much Twitter traffic today, 4 days after the conference ended, as when the conference was in full swing.) Check out these roundups (more will be added as I see them):

——

Weave Journal of Library UX: Issue 1
Matthew Reidsma (reidsmam@gvsu.edu) · Mon, 19 Feb 2018 · http://www.matthew.reidsrow.com/?b=115
A year and a half ago, I announced that some colleagues and I had started an open-access, peer-reviewed journal dedicated to library user experience. WeaveUX was born out of the frustration of not having a place for library user experience practitioners to have rigorous discussions about the theory and practice of our work. Of course, UX articles have appeared in much of the library literature's pantheon, but because the venue is never UX-related, the articles always spent a lot of time just trying to explain what user experience is, and rarely seemed to get beyond usability testing.

We're already hard at work on our second issue, which will be published in March, 2015. We have a few articles making their way through the peer reviewers and editing stations, but we could always use more. If you have something to share with your UX colleagues, please submit an article or a pitch, even if you think no one else will care about it. I guarantee we will (or we'll help you craft it into something folks will love).

——

Holistic UX
http://www.matthew.reidsrow.com/?b=72
This afternoon I gave a talk at the 2014 Library Technology Conference on Pizza Hut architecture, Bill & Ted’s Excellent Adventure, and using data for UX work in libraries.

I recorded my session in case you want to watch it (a transcript is included below). You can also find my slides on Speakerdeck, or listen to the talk on Huffduffer. Thanks to the Library Technology Conference folks for having me, and for the awesome folks who came to my talk and laughed at my dumb jokes.

I grew up in a town known for manufacturing. Furniture, automobile parts, canned soda, and pickles were what put food on my friends’ tables. But by the early eighties factories were closing and left empty. In turn, small businesses that relied on the laid-off workers began to close. Riding through town with my Grandfather or father, both lifelong residents, I learned about these empty buildings and what they had once been, how much they had meant to the city. My Grandfather told me about the old Baker furniture plant, and how my great-Grandfather would show up every morning during the Depression to see if they needed any extra workers that day. He showed me the building that saved his life when he was ejected from his car in an accident. The building stopped the car that was rolling over behind him, while he slipped past the empty space where a corner should have been. We drove past the empty building that had been his childhood home, the Reidsma IGA Grocery, as he told me about the day they removed the horse troughs after paving the last dirt road into town.

The town has had a remarkable comeback. Now the Baker furniture plant and the Reidsma IGA are apartments and condos. The building with the odd shape that saved my grandfather’s life has been torn down, along with my old elementary school and the gas station shaped like a windmill, where I bought baseball cards. The house I grew up in was recently gutted, and is unrecognizable to me. The stories of what these buildings used to be are stored away only in the memories of the people who knew them.

But there was always one building in town, my Grandfather and I joked, that we would recognize no matter who the tenant was. A building that encoded its own history and origins right in the roofline and the trapezoidal windows. And that was Pizza Hut.

For a long time, we needed to rely on the memories of our fellow staff members, our institutional memory, to know what our patrons did in the library. Despite the greasy fingerprints in the card catalog and the rubber stamped due dates, patron usage didn’t really leave much of a trace behind, other than in the memories of those who witnessed it.

But that changed in the eighties and nineties, when most of us started putting our collections, tools, and resources online. These systems collect gate counts, database COUNTER statistics, book circulation statistics, interlibrary loan requests, website analytics, questions asked. But these systems are often maintained by the people who are responsible for individual tools. Each of us in our own little corner of the library homes in on our little mound of data. Your Interlibrary Loan person can recite article requests and budget numbers from three years ago to the present if you ask. Maybe you see Google Analytics charts or acquisitions by LC Call Number dancing in your head at night.

We each have our one piece of data for our one tool and we focus on it.

This data only comes together for annual reports to accreditation boards or library organizations that ask us to fill out a survey. But these data trails are like those distinctive features on old Pizza Huts, traces of our patrons’ journeys weaving through our various tools and services, moving from one system to another in the attempt to accomplish their goals.

It’s one thing to see an old Pizza Hut in your hometown and recognize its story. It’s another to spend some time browsing the stories of hundreds of former Pizza Huts around the world, seeing the trends and understanding that your own local building is just part of a larger movement. In our work, seeing how our little fiefdom of data fits in with the others throughout our library is a necessity if we are going to make user-centered services. Because where we see individual tools, our patrons see a holistic service platform.

——

I speak and write a lot about user experience research and how libraries should be talking to their patrons, testing their services and tools. But the reality on the ground is that many libraries don't have the time to conduct usability tests and interviews, let alone spend time building a culture of UX in the library. We're each too busy creating our own little fiefdoms of data.

What if we could harness that data to make libraries better? What if we looked up from our individual tools and saw things as a continuum, the way our patrons experience our services? It turns out there is already a great example of how to make this work, a framework for collecting the relevant data from our past and using it to make a better future. A framework you’re probably familiar with.

If you weren’t born yet or weren’t spending 1989 looking for hair metal bands your parents wouldn’t approve of, I’ll give you the short version of the film: The peaceful future of the universe somehow depends on Bill and Ted achieving success as musicians, which they can’t do unless they get an A+ on their high school history presentation. George Carlin comes to the rescue in a time traveling phone booth so that Bill and Ted can learn first hand about history. The duo travel all through time, picking up important historical figures like Socrates, Billy the Kid, and Joan of Arc, and then put on a remarkable presentation where the great figures of history talk about their lives and work.

Here are seven lessons that Bill & Ted’s Excellent Adventure can teach us about getting out of our data silos and making our libraries most excellent for our patrons.

1. There is useful data just sitting in the past. Go get it.

Bill and Ted traveled around, collecting data from many different periods of time, jumping around to different points in the past. Through this they learned that immersing themselves in the past made it easier to see the value in that data (mostly through “historical babes”).

Your library systems have probably been collecting data on how people use your tools for years. Your OPAC records the searches your patrons enter, the books they check out, the items they request. You’ve also got data on your gate counts, your web visits, your database COUNTER statistics, and your interlibrary loan requests (and loans). You might have course reserves or LibGuides data, or maybe data on attendance for events or participants in scavenger hunts. I have three years of transcripts from monthly usability tests. Figure out what you have and look at it. Not just once—make it a habit.

Sometimes you’ll find you don’t have enough, or you don’t have the data you want. That’s okay. You can always start collecting the data you want now. Our discovery service, Summon, doesn’t track things like what number results get clicked on, or whether facets are used. In fact, every time a facet gets clicked, Summon counts it as a new search! So we wrote a new statistics package that sits on top of Summon and gets the data we want. We did the same thing for our OPAC (Sierra’s WebPAC). We now have a good year to a year and a half of every search that has been done on these two systems, as well as how each of our patrons interacted with the results. That kind of data is really useful.
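The heart of that homegrown statistics package is a small distinction: a facet click refines an existing search instead of starting a new one. As a rough illustration only, here is a minimal sketch of such a logging layer in Python; the class and method names are my own invention, not the actual GVSU code:

```python
import sqlite3
from datetime import datetime, timezone

class SearchLog:
    """Hypothetical search/interaction log. Unlike a vendor that counts
    every facet click as a new search, facet clicks and result clicks
    here are tied back to the search they refine."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            """CREATE TABLE IF NOT EXISTS events (
                   session_id TEXT, search_id INTEGER,
                   kind TEXT, detail TEXT, at TEXT)"""
        )
        self._next_search = 0

    def log_search(self, session_id, query):
        """A fresh query starts a new search and returns its id."""
        self._next_search += 1
        self._log(session_id, self._next_search, "search", query)
        return self._next_search

    def log_facet(self, session_id, search_id, facet):
        """A facet click refines the same search; it is not a new search."""
        self._log(session_id, search_id, "facet", facet)

    def log_click(self, session_id, search_id, result_position):
        """Record which numbered result was clicked."""
        self._log(session_id, search_id, "click", str(result_position))

    def _log(self, session_id, search_id, kind, detail):
        self.db.execute(
            "INSERT INTO events VALUES (?, ?, ?, ?, ?)",
            (session_id, search_id, kind, detail,
             datetime.now(timezone.utc).isoformat()),
        )

    def search_count(self):
        # Only "search" events count as searches.
        return self.db.execute(
            "SELECT COUNT(*) FROM events WHERE kind = 'search'"
        ).fetchone()[0]
```

With a layer like this, a search followed by two facet clicks still registers as a single search, which is exactly the behavior the vendor's own counts get wrong.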

2. Think broadly.

The report required Bill and Ted to talk about three figures from different periods of history. They had a little time left after getting their three historical figures, so they went and got a bunch more (extra credit!).

Get as much data as you can handle. Go back farther than you think you need to, and grab more systems than you think you need. You’ll probably find some gaps in what you have, so you might end up supplementing with some new research to learn more about what people are up to.

My favorite way to get more data is during usability tests. Besides asking relevant follow-up questions to each scenario, I always let the student go a minute or two after I think we’ve learned everything we can from them. I always see some new behavior that gives me a better understanding into how students perceive the library website, and it only takes an extra minute.

We’ve recently moved into a new building at GVSU, and are running a lot of studies to learn how our patrons use the new spaces. One of the things we’re really interested in is the density of specific areas in the building. For a long time, we didn’t have a way to get that data directly, so we looked to other measures to help us. I started recording how many computers were in use on each floor to gauge how heavily the collaborative computer areas were used, and I even started collecting the counts recorded by our water bottle refill stations to see which floor had the highest number of refills. (The third floor.)

Eventually, our UX team developed a way to do head counts based on quadrants of each floor to better gauge how students were distributed at different times of the day. But that still only gets you so far, so they are supplementing that with a qualitative “Library Use Journal” survey to learn a little more about how and why patrons use the library in the ways they do.

3. You don’t have to be an expert.

Bill and Ted had very little idea of what they were doing. They knew just enough to know where to find the data (and they had help: the Circuits of Time book). And because they didn’t have a predefined idea of what an A+ presentation looked like, they made something unique that worked.

This can be a tough one, because no one likes to admit they don’t know something. But like Socrates, the best way to move forward is to admit that we don’t have all the answers. The key is wanting to find out what the answer is! In the documentary Eames: The Architect and the Painter, Richard Saul Wurman describes Charles Eames’ drive to learn new things as the key to his success:

You sell your expertise, you have a limited repertoire. You sell your ignorance, it’s an unlimited repertoire. [Eames] was selling his ignorance and his desire to learn about a subject, and the journey of him not knowing to knowing was his work.

4. Keep an open mind.

Bill and Ted didn’t really have any preconceived ideas of what to expect in the past (or how to behave, really). As they met each of the historical figures, they didn’t pigeonhole them into a caricature of themselves. They were open to the unique experience each figure brought. During the presentation, they even let the figures speak for themselves.

Too often, we start with solutions rather than trying to understand problems. And when we already have an idea of what we want to do to make “things” better, we tend to skew everything to fit our preferred solution. I think the best way to explain this is by watching The West Wing:

The Mercator projection was designed for a specific purpose: to aid navigation across the ocean. But that purpose has skewed the value of the map for the rest of us who aren’t navigating ships. As Frank Chimero puts it:

A map’s biases do service one need, but distort everything else.

In libraries, we often have a pretty good idea of how we think things should work, and these assumptions guide us as we try to solve real problems facing our patrons. When we moved into our new library this past summer, we talked a lot about how we hoped the building would be for the students, a place they could use in the way they needed. We avoided creating any policies about how things should run before we actually got in the building (and frankly, we don’t have many policies now). We let students create their own etiquette around our group study rooms rather than imposing policies that restrict how they can use the library’s spaces. We’ve even had students “reserve” public spaces for study by simply writing “This place is reserved tonight from 5-6pm” on white boards and wheeling them into place. Rather than telling students they couldn’t do that, we asked them why they didn’t use one of the reservable group study rooms so we could better understand their needs.

In my work, I use a technique for making connections in the data without letting my preconceived biases guide me. Empathy maps are tools that help you collect a variety of data and sort it into categories of human perception. It’s a way of keeping your data focused on the people you are hoping to serve.

In this map, I took data from three years’ worth of usability tests—my own notes, recordings of the sessions, notes from other observers, blog posts I wrote about the changes—and sorted the data into four quadrants: things that people see, say, think, and feel. The data about thinking and feeling came from watching non-verbal cues, but also from asking people what they were feeling or thinking if they weren’t saying what was on their mind. The empathy map is really a tool to get a lot of data out of a list and into a form that keeps it focused on users.
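As a rough sketch of that sorting step, here is how the four-quadrant structure might look in code. The quadrant names (see, say, think, feel) come from the description above; the sample observations are invented for illustration:

```python
# The four quadrants of an empathy map: things people see, say, think, feel.
QUADRANTS = ("see", "say", "think", "feel")

def build_empathy_map(observations):
    """Sort (quadrant, note) pairs into the four quadrants of perception."""
    empathy_map = {q: [] for q in QUADRANTS}
    for quadrant, note in observations:
        if quadrant not in QUADRANTS:
            raise ValueError(f"unknown quadrant: {quadrant}")
        empathy_map[quadrant].append(note)
    return empathy_map

# Invented examples of the kind of notes that come out of usability tests.
notes = [
    ("see", "a wall of databases with unfamiliar names"),
    ("say", "'I usually just use Google Scholar'"),
    ("think", "the request form probably takes weeks"),
    ("feel", "anxious about asking at the desk"),
]
```

The point of the structure is the same as the physical map: observations stop being an undifferentiated list and become grouped by how the person experienced them.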

5. Tell a story.

The genius of Bill & Ted was that they didn’t just recite dates and facts. They contextualized the historical figures, and presented them as narratives. This is one of the reasons that tools from the quantified self movement like Fitbits are mainstream now. They don’t just present you with raw data and facts. Rather, they help you make the connections between events by creating little narratives.

Nicholas Felton has made a career of this. His Annual Reports are wonderful exercises in converting data into storytelling. In 2010, after the death of his father, Felton used old photographs, receipts, datebooks, and other ephemera to piece together a story, told in data, about his father. If he simply had enumerated a list of appointments, the result wouldn’t have been so compelling.

Inspired by Felton’s Reporter app from his 2012 report, I built a little web app last year with a short survey that I fill out every 90 minutes.3 But I’m not just interested in aggregate data about how many times I see Jon Fink or Cody Hanson every year; I’m interested in the narrative of the context that pulls this data together. The whole structure of the app is based on telling stories about these little moments.

And any of us who have tried to make sense of a bunch of raw data and use it to make our library services better know that it’s hard to do. We need to take the time to see the connections, the stories, that sit inside the data, and bring them to the forefront. Our patrons don’t experience our library as a set of data points, but rather as a narrative playing out in their own lives. And our coworkers are rarely moved by a decontextualized spreadsheet. Aarron Walter, the head of UX at MailChimp, points out that this is the only way your data is going to make a difference in your organization:

Research cannot create change in an organization until it’s turned into a compelling story.

User Journeys

At GVSU, we have a few ways to turn data points into stories. First, we think in user journeys and complete tasks, rather than individual tools. By centering our focus on the tasks patrons do, we get out of the siloed approach to working with our tools. Very few of our patrons ever experience our Document Delivery site by itself. Rather, they do a search in our discovery service or a subject database, find a citation they are interested in, and discover we don’t have the full text. Often, this takes them from their search service to our Link Resolver. From the Link Resolver, many of our Document Delivery users click an OpenURL link that populates the request form in ILLiad for them, but others try another database or search our catalog before committing to the request. By tracking the possible paths patrons can take to get to our tools, we can make sure to spot roadblocks in their way and fix them. And we do this all by breaking down the patron’s journey into its component parts, and then assembling them into a narrative.

Personas

In addition, we use a few personas, or typical users, that were built out of data we’ve collected over the past few years. When I built our most recent empathy map, I was able to distill our student population into three broad groups: undergraduates studying hard or health sciences, undergraduates studying social sciences or humanities, and graduate students. Most of the behaviors of these groups overlapped, but there were some striking differences. Hard and health science students at GVSU are impatient. They don’t have time to request items from Document Delivery, and they aren’t interested in resources that aren’t available in full text. We saw these behaviors and quotes in student after student in this particular group, but never saw them in humanities undergrads or graduate students. That told me that they needed their own group.

But the empathy map is more like a spreadsheet of data: it doesn’t make the context or the connections between different data points explicit. The persona is a way to match the demographic information about your user groups with particular needs and goals. By giving each of our personas a name and a photo, we have an easy shorthand for talking about large user groups. Instead of asking about the behavior of health sciences undergraduates, we can simply ask, “Would Amanda be confused by this?” Personas are a good way to keep your focus on the patrons as people, rather than getting bogged down in demographic data.4
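A persona in this simple style can be as small as a record with a handful of fields. This sketch is illustrative only: the fields follow the description above (a name, a photo for shorthand, a user group, needs, and reminders), but the persona shown is made up, not one of GVSU's actual personas:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A deliberately simple persona: demographics, needs, and reminders
    of how to serve them, plus a name and photo for shorthand."""
    name: str
    photo: str                      # filename of the shorthand photo
    group: str                      # e.g. "health sciences undergraduate"
    needs: list = field(default_factory=list)
    reminders: list = field(default_factory=list)

# An invented example persona in the spirit of the groups described above.
amanda = Persona(
    name="Amanda",
    photo="amanda.jpg",
    group="health sciences undergraduate",
    needs=["full text right now", "no waiting on Document Delivery"],
    reminders=["surface full-text-only filters prominently"],
)
```

The payoff is the shorthand: instead of discussing "health sciences undergraduates" in the abstract, the team can ask what Amanda would do.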

6. Be excellent to each other.

Bill & Ted weren’t just making a presentation for a history class, they were changing themselves, putting in effort to learn to do something. They were changing the culture of the world around them.

A lot of the ideas in this talk require a particular kind of culture to work, and that culture hasn’t been fostered in libraries much. But by putting forth the effort to really make a difference in your own work, you can have a bigger impact on the way things happen around you. It’ll be a slow process, but you might make some headway in building a more user-focused culture if you’re patient.

Since you’ve collected a lot of data, sorted through it, and created some tools and narratives to help make sense of all this information, share it with others! Tell those user stories to your coworkers; don’t hoard all this data in yet another silo. Get it out there, doing some good.

A few years back, my colleague Kyle and a ridiculously talented graduate student set out to build an analytics dashboard that would give library staff real-time data on any of our systems. They made a great product, but the maintenance was a nightmare, since the connections to each system were fragile and broke every time a vendor changed things.

Then last fall, Aarron Walter from MailChimp wrote an article about how they share UX data in their company. At the time, we had just hired a new full-time UX Manager to handle the physical spaces of the building (I handle digital UX), and she and I set about coordinating our efforts. Based on Walter’s plan, we created an Evernote notebook and began dumping a lot of data into it every week, so that anyone in the library who needs data from Google Analytics, or LibAnalytics, or the results of the last round of user interviews can quickly search the notebook. It’s not as elegant as Kyle’s vision for the Analytics Dashboard, but it gets the job done (although it hasn’t been widely adopted in the organization yet).

Since a lot of our data is collected manually, I also started offering an incentive to anyone who collected data as part of their daily work that led me to make an improvement on the website. My Data Bounty has been quite popular, and I’ve bought free coffee for several coworkers and students.

But don’t just stop at sharing with your coworkers. Libraries, as Cody Hanson has reminded me a few times, like to believe in their patrons’ exceptionalism. We all think that our patrons have such radically different needs from everyone else’s that we can’t possibly learn anything from other libraries’ data. But we spend so much time trying to find the things that are different about patrons (and patron groups) that we often miss what is the same, which is most everything.

We built public Github repositories into our workflow to push new projects live. Every bit of code we write is automatically shared publicly. I write up the results of our usability tests every month and post them publicly in my Work Notes. Cody and his team at the University of Minnesota upped the ante by ranking other Committee on Institutional Cooperation Libraries on one crucial web metric: page speed.

7. Focus on outcomes, not outputs.

Bill and Ted spend the majority of the movie putting together a history report, but it isn’t really the report they are working towards. It’s not even the A+ they know they need. Rather, they are working toward an outcome—keeping the band together—and they are flexible in how they get there.

Too often we get focused on the features we think would be good for students: Advanced Searching! Boolean Operators! A single search box! A tabbed search box! But these are outputs, lists of features we have specified we want to add to our services. If prodded, we’ll all admit that these features are ultimately in service of a particular outcome, like increasing the number of patrons who make it through the link resolver to their full-text article, or decreasing the number of citation questions at the service desk.

A while back I was asked to build a page that would provide easy access to all the various places our staff has to enter data. The assumption was that data was not getting entered because folks couldn’t find the different systems. But during my initial interviews, I learned that everyone knew where to enter the data. They had bookmarked the systems and used them at times. But because the purpose of the data collection was never shared, they didn’t see much incentive to put their data in. Because I was able to focus on the outcome—get more and better data—I was able to scrap the original output requested and work to solve the problem another way.

If we focus on those outcomes instead of the outputs we are sure will get us there, then we free ourselves up to find the best solution for the problem. But this is a cultural shift more than a technique you can employ. This requires autonomy, and that might threaten your existing organizational structure. But Jeff Gothelf in Lean UX makes this pretty clear:

[Teams] must be empowered to decide for themselves which features will create the outcomes their organizations require.

Ultimately, the way I approach design at GVSU is that I’m testing hypotheses.5 I look at data and uncover potential problems, and through research, I try to get a holistic view of what is happening (see steps 1-5). Then, I brainstorm, often with coworkers, a bunch of possible solutions. Then we think about which one sounds like it might work. We try that one, and test it. If we get the outcome we hoped for, we move on to the next problem. If not, we try a different one. Sometimes we fail, but that’s okay as long as we learn something and we make it better in the end.

——

Last fall I heard Karl Fast give a talk about data. In it, he shared that now, everything is data. Astronomers, who used to spend time looking at stars, now deal with data. Stars have become data. Biologists now deal more with data than with plants or animals. Even Pizza Huts have become data.

For Fast, our task is not so much to collect the data, but to represent it in such a way that we are able to move forward, to take some sort of action.

I use data a lot in my job, although I’m not data-driven. I’m data-influenced. The data helps us make better decisions at GVSU, and has helped make us a better place to work and a better library for our patrons. I hope it can help you, too.

——

Until this year, we used a heavily-customized version of LibStats. Now we use LibAnalytics from Springshare. ↩

If you have time, Felton’s talk at EYEO in 2013 goes into a lot of detail about the Reporter app he made for collecting data in his 2012 report. Felton’s more polished Reporter app for iOS was recently released to the public, although I was impatient and built mine last summer. Someday I’ll package it up and put it on Github. ↩

Our personas are pretty simple: just a few items of demographic information, needs and goals, and reminders of how we can serve their particular needs. Personas can be really complex, but simple is sometimes better. I was heavily influenced by the persona sections in Lean UX for developing ours. I got the names from a random name generator, and the photos from Greg Peverill-Conti’s Creative Commons Flickr series “1000 Faces.” For more on user journeys and personas, Newfangled.com recently ran a great article on using tools to communicate how the user sees your services. ↩

I got outcomes vs outputs from the book Lean UX by Jeff Gothelf & Josh Seiden, and they also gave me the vocabulary of hypothesis testing to describe the way I practice design work. ↩

reidsmam@gvsu.edu (Matthew Reidsma)
Mon, 19 Feb 2018 12:00:00 -0500
http://www.matthew.reidsrow.com/articles/72

The Library with a Thousand Databases
http://www.matthew.reidsrow.com/?b=58
Earlier this week I gave a short talk as part of NISO's Virtual Conference on Web Scale Discovery. They asked me to speak about the User Experience of discovery systems, and I decided to focus on something that I've been obsessed with over the past few years: user journeys. Specifically, I was interested in the feelings our patrons often experience when doing library research. Last month at the Midwest UX Conference I heard Seth Starner of Amway talk about the experience many customers go through using physical spaces that are not designed with them in mind. His talk pulled out Joseph Campbell's work on myth, equating the difficult experiences we make for customers to the Hero's Journey, and I realized that this difficulty is applicable online as well, especially in libraries.

Here is a video of the talk, followed by an approximate writeup of my notes as I went through the presentation.

Let me tell you a story.

King Minos had a problem. His wife had given birth to a minotaur, a creature with a human body but the head of a bull, hungry for human flesh. He had a huge labyrinth built to contain the creature so that it wouldn't get out and eat the King's subjects. Conveniently, King Minos found a steady food source in the upstart city of Athens, just across the Mediterranean. Each year 14 Athenian youths, 7 men and 7 women, were sent to Crete as tribute to become minotaur food.

Theseus, the son of the Athenian King, decided to put an end to the minotaur, and so he set sail with the other youths to Crete. King Minos' daughter, Ariadne, took a shine to Theseus and gave him a sword and a string that he could use to find his way out of the labyrinth.

So in he went. He wandered around in confusion for a while until he faced his great challenge: the minotaur.

Theseus managed to kill the beast and follow the string back to the labyrinth's entrance, and he sailed for home a hero.

Joseph Campbell, in his book The Hero with a Thousand Faces, notes what he calls the "monomyth" that makes up the structure not only of this story about Theseus, but of most myths, legends, and religious narratives.

A hero ventures forth from the world of common day into a region of supernatural wonder: fabulous forces are there encountered and a decisive victory is won: the hero comes back from this mysterious adventure with the power to bestow boons on his fellow man.1

Campbell's structure of the monomyth, or "hero's journey," helps us not only understand the narrative structure of myth, he says, but also how we understand big challenges we face in our own lives.

We interpret our own difficult experiences as a sort of "hero's journey," as if facing down our challenges was like Theseus in the labyrinth. (Not convinced? Think back to the last time you heard someone talk about an airline delay.)

Now, let me tell you another story.

The Professor, we'll call him Professor Minos, posted an assignment. Each of the fourteen undergraduates in his ancient history class must write a research paper on a topic of his or her choosing.

Ted, a sophomore, heard that Professor Minos gave this assignment every year, and he didn't want to repeat the class. He decided to write the research paper, but first, he needed to do some research.

He entered the Library's online Database A-Z list and wandered around in confusion for a while, until he faced his biggest challenge: the default advanced search screen in Academic Search Premier.

With help from a kind librarian, Ted managed to find a few relevant full-text articles and went on to write his paper. As for whether he was hailed a hero, we'll have to wait for Professor Minos to grade the papers.

This story shares its structure with the tale of Theseus, and both follow Campbell's monomyth. If you talk to some of your patrons, you'll find that it is surprisingly common for folks to experience library tools like Minos's labyrinth.

They leave the normal world for the special realm of library research, crossing the threshold between worlds at the darkened door of the Database A-Z list.

After suffering through the supreme ordeal of looking for relevant items in this or that database, the strongest might emerge with full-text in hand.

This isn't just hyperbole, this is how many people experience using our tools for research. It's unfamiliar, and so the metaphor of the monomyth helps them make sense of their struggle.

At GVSU our website was a confusing labyrinth of library jargon and specific tools for years. Here it is in 2004:

Want a book? Try the "University Libraries Catalog" or "WorldCat" or "Other Library Catalogs," whatever those are. Want a book chapter? Should you look under articles or books? And since everything has "More," more of what?

Our patrons had to know what specific tool they needed to use before they could begin. That's something that many of us went to Library School to learn!

A few years later we hopped on the tabbed search bandwagon, because it seemed like we could fool our patrons into thinking that search was easier if they could only see one search box at a time.

But you still need to know the tool before you start, only now our attempt to make the labels more intuitive has actually made things worse: does the library "own" the article you need? What can you find under "New!"? And what the heck is Nautilus? It might be a minotaur.2

But we kept trying.

In 2009 we became the first customer of Serials Solutions' new web scale discovery tool, Summon. True to form, we treated it like any other tool (with equally suspicious labeling). Now, instead of facing the beast Nautilus, folks would "Start here." We didn't give them any idea where they might end up, of course.

Our patrons still had to know beforehand the right tool for the job, something novice researchers have a hard time with.

By forcing our patrons to wander through a list of specialized tools organized by silo, we were making their search experience difficult. Usually they would wander around in confusion for a while, and if they were lucky, someone would hand them a sword or a piece of string so they could find their way through.

We sent everyone from the familiar search of the normal world to a confusing special realm where the rules of search didn't apply. We took something that people did every day and made it unfamiliar.

But we kept coming back to the same question: how do we make this easier?

This wasn't just about slimming the site down and getting rid of links (although that helped). This was an issue of familiarity, about making the search experience perform the way our patrons expect.

What makes something simple or complex? It's not the number of dials or controls or how many features it has: it is whether the person using the device has a good conceptual model of how it operates. –Don Norman3

There is something to be said for simplifying, but simplicity isn't our end goal. Familiarity is. So we started thinking about how to make this process—not just a screen or one part of the search, but the whole process—more familiar.

So in 2010 GVSU ditched the tabbed search and went with a single search box.

The goal here was not just a single search box, but a single search process that would go from initial search to having results in-hand regardless of how difficult the search was. What we wanted, it seems, was one search to rule them all.

In 2011 we made the search even more prominent4, but kept links to the advanced tools we hope people can learn over time. In the heat map below, you can see that the Database A-Z list gets almost as much traffic as the search box. That's great: as long as folks understand how a tool works, we want them to use it. If you get how individual databases work, they are no longer a "supreme ordeal."

We forget that most of our patrons don't come to us to find things because they want to find things. Their research is always part of a larger story: answering a question, writing a paper or a book, cooking a meal, buying a house. Too often librarians get excited about all of the tools we have to help people, and we end up hijacking their stories and making them about the library. We bring everyone into our special realm out of a genuine desire to help.

Web scale discovery isn't perfect, of course, but it's an effective way to help patrons feel more comfortable and to keep the focus on what they really need to do. It's a way to keep their stories on track, in their own world, and to keep the beasts at bay for a little longer.

I have a short video that shows the evolution of our website over that year based on usability tests, if you're bored: [https://vimeo.com/52561335](https://vimeo.com/52561335) ↩

reidsmam@gvsu.edu (Matthew Reidsma)
Mon, 19 Feb 2018 12:00:00 -0500
http://www.matthew.reidsrow.com/articles/58

Good for Whom? Redux
http://www.matthew.reidsrow.com/?b=47
This morning I gave the keynote at The Library Network's Technology Forum at the Bloomfield Township Library here in Michigan. I gave a version of a talk I've done a few times before, but with a few tweaks and changes. While I continued to focus on rethinking how we choose our tools in libraries, I wanted to emphasize the idea that the tools we choose to sit between us and our patrons—our OPACs, link resolvers, and even databases—are stand-ins for us. When I first became a cataloger, I learned that records were surrogates for the original item, something that had to be a quality replacement for the original. Our library websites, catalogs, and sundry online tools are now surrogates for us, the flesh and blood folks who make the library run. We need to take care to choose those tools wisely so they reflect the care we put into our work.

You can watch the whole talk here, or check out a bunch of other formats below.

Thank you to Andrew Mutch and the rest of the planning committee for inviting me, and the great audience and cool folks I met today. (I finally got to meet Brad Czerniak in the flesh, if only for 35 seconds.)