Adventures in Low Fidelity: Designing Search for Egreetings

One of the dirty little secrets about being an information architect is that most of us only bat .500 at best. We labor and agonize over making recommendations and designing information architectures that are supposed to change the world, but many of our designs never see the light of day. Sometimes our clients or the implementation teams don’t listen to us. Or maybe organizational politics bury our ideas like ancient cities beneath the sands; I once delivered my intranet recommendations to a client just in time for the division to be reorganized out of existence a week later. This article describes one such experience I had while working on a project for Egreetings. Rather than moan about why my designs were not implemented, I want to share my story because it illustrates the value of employing user testing techniques during IA design and applying ideas about facets and controlled vocabularies to creating a search interface.

It all started so well

In the spring of 2000, Egreetings engaged Argus to help with a redesign effort to increase the number of online greetings sent by users. At the time, Egreetings was typically ranked #3 among online greeting sites (behind Blue Mountain and American Greetings). The company’s core content was its collection of online greeting cards, composed of Flash animations, animated GIF images and still images. Some were created in-house by a highly talented team of graphic artists, and the rest were licensed from outside art sources such as cartoons from the New York Times. Anywhere between 40,000 and 200,000 cards were sent each day, depending on the season. Argus was called in by Tim Scheele, the Senior Director of Publishing, to help with a number of goals:

Increase card-sending statistics,

Reorganize the collection (taxonomy/controlled vocabulary),

Improve navigation and searching,

Suggest key places for ads and promotions (need to “monetize”),

Find an approach for organizing the music greeting collection,

Improve the checkout process.

The team consisted of four Argonauts: a Lead Information Architect (myself), an Assisting Information Architect (Michele de la Iglesia), a Project Manager (Shawn Stemen), and a Usability Specialist (Keith Instone) who worked part time on the project to advise us.

We began our work by conducting a strategy and recommendations phase, knowing that Egreetings was hoping for a major look-and-feel redesign with the target of a fall 2000 relaunch. An information architect’s approach should always involve an investigation of the content, the organizational context and the users. Often the user research part of the methodology gets less emphasis than it deserves because of time and budget constraints. However, this project included user testing and research during each phase.

During the first two-thirds of the project, we accomplished a great deal. In the strategy and recommendations phase, we used techniques like card sorting and content analysis to determine facets for categorizing online greetings. (Facets are attributes or aspects used to classify content. For example, fruit facets might include color, region, size and type.) For more on exactly what we did during this phase, please see my article in the June/July 2002 issue of the Bulletin of the American Society for Information Science and Technology. Our deliverable at the end of this phase was a report that contained a number of recommendations on how to reorganize the site. We drafted a revision of the top level of the site’s taxonomy and recommended that it be made more consistent by focusing on “reason to send.” We suggested that the facets “recipient” and “card image content” be used as a means to systematically subdivide lower-level categories in the site. The “emotion” and “format” facets were to be used as additional metadata indexing elements separate from the “reason to send” taxonomy so that users could filter, narrow and search according to these secondary facets.

Furthermore, we encouraged them to create controlled vocabularies for these facets so the cards could be consistently indexed. We also delivered wireframes at this point, including one for the new main page of the site to show how to integrate our taxonomy suggestions into the site.
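To make the facet and controlled-vocabulary idea concrete, here is a minimal sketch in Python. The vocabulary terms and field names are invented for illustration; the actual Egreetings vocabularies are not public. The point is that a controlled vocabulary lets an indexing tool flag inconsistent terms before a card enters the collection:

```python
from dataclasses import dataclass

# Hypothetical controlled vocabularies; illustrative terms only.
REASONS = {"birthday", "anniversary", "thank you", "just because"}
RECIPIENTS = {"brother", "sister", "friend", "coworker"}
EMOTIONS = {"funny", "sentimental", "romantic"}

@dataclass
class Card:
    title: str
    reason: str     # primary taxonomy facet: "reason to send"
    recipient: str  # secondary facet
    emotion: str    # secondary facet
    fmt: str        # e.g. "flash", "animated gif", "still image"

def validate(card: Card) -> list:
    """Return indexing errors for terms outside the controlled vocabularies."""
    errors = []
    if card.reason not in REASONS:
        errors.append(f"unknown reason: {card.reason!r}")
    if card.recipient not in RECIPIENTS:
        errors.append(f"unknown recipient: {card.recipient!r}")
    if card.emotion not in EMOTIONS:
        errors.append(f"unknown emotion: {card.emotion!r}")
    return errors
```

Consistent indexing of this kind is what later makes filtering and narrowing by secondary facets possible.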

Egreetings liked our recommendations enough that some of our interface and category recommendations were implemented right away. They also hired us to help in the next phase in which we delved deeper into the task of redoing the taxonomy and controlled vocabularies. We created a new version of the taxonomy, taken three levels deep, consisting of over 850 terms.

In addition, we drafted lists of controlled terms for the other facets. Then we tested these with users and made changes accordingly. After we delivered the new taxonomy to Egreetings, we worked with their team to provide guidance on how to apply the terms consistently as they reclassified the entire collection of cards. By the middle of summer, the client was busy handling all of the details and issues that go with a major redesign.

The problem of search
At that point we began our work on the search interface, which was planned as a future enhancement to be added after the fall relaunch. From our first meetings with Egreetings, there was controversy about how to best implement search. From the experience of the Egreetings team and our own observations during testing, we knew that users were strongly drawn to browsing rather than searching when selecting cards. This has a lot to do with the mental model formed by shopping for traditional paper cards. However, after talking to several rounds of users, I felt that I had a good idea of what they would want in a search interface. While the majority seemed to enjoy the shopping and browsing process, there was a great opportunity to shorten this process for people in a hurry. Many users came to the site with a particular occasion, recipient or emotion for a card in mind. Some also looked for particular types of subject matter or images.

For a time, the site included a search interface which was intended to allow users to select different card criteria from categories like “collection” and “publisher.” There was a lot of overlap between these categories, and users frequently got zero results when they selected more than a few choices. We didn’t shed many tears when technical changes to the content management system unexpectedly made this search interface disappear. This allowed us to start from scratch.

Most search interfaces offer an open text box for a user to type in a query. In this case, we felt that the ubiquitous search box could be optional. Egreetings was cautious about getting involved with a search engine vendor because of the costs involved. From a practical perspective, any free-text search on a collection of a few thousand objects (rather than hundreds of thousands of objects) would need to be fairly sophisticated in order to avoid offering users null results. We felt we could provide a great deal of utility to users by exposing the choices and controlled vocabularies for selections that would be guaranteed to deliver results. The content management database Egreetings had built could be adapted for fielded searching.
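As a rough illustration of the fielded-search idea, the sketch below (hypothetical Python with an invented schema, not Egreetings’ actual content management database) filters an in-memory card list on exact facet values. Because the interface would only offer terms drawn from the controlled vocabularies, selections are far less likely to come back empty than free-text queries:

```python
# Illustrative card records; the real collection held a few thousand.
CARDS = [
    {"id": 1, "reason": "birthday", "recipient": "brother", "emotion": "funny"},
    {"id": 2, "reason": "birthday", "recipient": "friend", "emotion": "sentimental"},
    {"id": 3, "reason": "thank you", "recipient": "coworker", "emotion": "funny"},
]

def fielded_search(cards, **criteria):
    """Return cards whose fields match every supplied facet value exactly.

    Each value is chosen from a controlled vocabulary shown in the UI,
    so users never have to guess at free-text keywords.
    """
    return [c for c in cards
            if all(c.get(facet) == value for facet, value in criteria.items())]
```

For example, `fielded_search(CARDS, reason="birthday")` returns the two birthday cards, while adding `emotion="funny"` narrows the result to one.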

Lastly, I had some definite opinions and ideas about how search should work:

I felt that search should leverage the work we had done to define the facets and metadata for the cards.

I was inspired by sites like Epicurious and Virtual Vineyards. These sites combine searching and browsing via databases of content objects and products that are well classified with rich metadata.

The new search interface should NOT disappoint users with an empty results page. On an ecommerce and advertising site like Egreetings, it is important to suggest something to the user even if it doesn’t meet all of the criteria.

With these parameters in mind, I set out to create draft wireframes of a design. My philosophy was that by using a step-by-step wizard interface, we would create an interface that would be a shopping assistant to the user, which would allow them to narrow their choices down using faceted criteria. Each page of the wizard would concentrate on a separate facet. This would give users a reasonable number of cards to browse while making it less likely that they would be returned null results.

When we next met with Egreetings they liked many of my ideas, especially the idea of making search an assistant. Their graphic designers came up with a superhero-like figure to be our “Card Finder.” However, they definitely didn’t like the idea of having the search interface separated into multiple wizard screens. Even when I explained that each page/facet could be optional, they felt that the users would be frustrated with too many steps and too many choices. The Egreetings team encouraged me to try creating a search interface with just one page for input and a smaller set of controlled vocabulary choices. We decided that the best way to settle on a design direction would be to create prototypes of both approaches and let the users help us decide.

The test
The test took place over the course of three days with 12 users. We used a market research firm in Southfield, Michigan to recruit a variety of representative users. We were lucky enough to be able to perform the tests in this firm’s well-appointed facilities, complete with a two-way mirror for observation and videotaping equipment so that Egreetings could also review the tests.

While planning this round of user testing, I got really excited about the idea of prototyping and how to get the most out of this kind of test during the design process. A colleague and close friend, Dennis Schleicher, had just returned from the UPA 2000 conference with some great ideas on prototyping. I found that different professionals had diverging opinions on how to create effective prototypes. I learned a great deal by considering the arguments for both low- and high-fidelity prototypes, and came to some of my own conclusions about how to conduct this particular test. (See What IAs Should Know About Prototypes for User Testing for some of my ideas and research on prototyping.) After some pondering, we decided to proceed with testing the search inputs with a low- to medium-fidelity prototype created with Visio printouts that we cut up into pieces users could interact with. This made sense because it meant that we didn’t need much help from the Egreetings technical and creative teams to create high-fidelity interactive prototypes. At the time, they were much too busy with the relaunch to worry about that. However, they did help us by providing some high-fidelity screen comps to use in our test sessions to get reactions from users (we showed one of each style of search interface and another of the main page access points for navigating to the search page). In order to make the test feel as automated as possible, we asked the users to imagine interacting with a computer to perform tasks with both interfaces.

We prepared about eight tasks, such as, “You regularly share jokes with your favorite brother and his birthday is next week. Find a card for him.” We made sure to compose these so that they included multiple facets and there was more than one possible answer. During testing, we varied the order in which we gave the users the tasks and we also alternated the prototype presented first between users to eliminate any first-last bias. For each task, we asked the user to interact with the interface on the tabletop and to pretend that they were using the computer. We had laminated the Visio printouts so that the users could write on them with a thin whiteboard marker. One of us took notes while the other facilitated and simulated the feedback given by the computer. For example, in the wizard interface we wrote down the number of matching cards on a slip of paper as the user made each choice.

The wizard-style prototype was divided into five steps. Each step was presented on a separate page and showed as many choices as possible. Since the “reason to send” taxonomy was so large, we allowed users to drill down from main categories to sub-categories on this page. Any of the steps could be skipped, and users could elect to view the cards in their “bucket” at any point. Each time the user entered choices with the “continue” button, we wrote on a slip of paper to show the choices made so far and the number of cards in the bucket. We felt that this interactive feedback would help them understand the narrowing process. However, it didn’t work quite as well as we had planned.
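The narrowing feedback we simulated with slips of paper can be sketched in code. This hypothetical Python (invented data and facet names) mirrors the wizard’s behavior: each step filters the user’s bucket on one facet, any step can be skipped, and the running card count is reported after each choice:

```python
# Illustrative cards; in the test, counts were written on slips of paper.
CARDS = [
    {"reason": "birthday", "recipient": "brother", "emotion": "funny"},
    {"reason": "birthday", "recipient": "brother", "emotion": "sentimental"},
    {"reason": "birthday", "recipient": "friend", "emotion": "funny"},
]

def wizard(cards, steps):
    """steps: ordered (facet, value) choices; a value of None means the
    user skipped that step of the wizard."""
    bucket = list(cards)
    for facet, value in steps:
        if value is None:       # any step can be skipped
            continue
        bucket = [c for c in bucket if c[facet] == value]
        # Interactive feedback: show how the choice narrowed the bucket.
        print(f"{facet}={value}: {len(bucket)} cards in bucket")
    return bucket
```

The design intent was exactly this kind of progressive narrowing, with the count making the effect of each choice visible.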

For the one-page prototype we cut the interface into horizontal strips, one for each facet or screen element. Since there were only so many things that could be shown on the page, we subdued some of the options and presented only short lists of representative terms for each facet. If a user clicked the “show more reasons” link, we swapped the strip for one that showed the complete set of options for that term. This simulated a screen refresh that would expand the page.

Since it would have been very difficult to show users actual results, we stopped each task when users told us they were ready to hit the “search” button. The best way to get feedback from the users under these circumstances was to determine how confident they felt about the search. So, after each task, we asked a series of questions:

How confident are you that the Card Finder would find cards that match the task?
1 – Not confident at all
2 – Not very confident
3 – Somewhat confident
4 – Confident
5 – Very confident

How many cards do you think the Card Finder might find?

Do you have any comments on this version of the Card Finder?

We devoted the second half of each test session to a separate activity focused on the design of the results page. We asked users to select from cutouts of elements that could appear on a results screen and build their ideal results page.

Facing the music
Nobody likes being wrong. I pride myself on my efforts not to bias tests by leading with my own opinions. I must have done a good job. I was able to hide my pain over the three days of the test as the majority of users chose the one-page search interface over my wizard approach. Our testing and analysis revealed the following:

Users preferred to see multiple criteria on a single page.

They had difficulty noticing “show more” functionality, which expanded their options. Some preferred to see complete lists of options by default.

Users offered opinions on the ordering and priority of criteria. “Reason to send” and “tone” were both high priorities. “Recipient” was more important than we anticipated.

The decision was clear: I may have lost, but the users won. In the aftermath of the test and the subsequent report we delivered to the client, I needed to create an interface based on the one-page paradigm. So I updated my design according to the feedback from the users. In particular, I reorganized the way the facets were presented so that “reason to send” was the most prominent and the other facets were given equal secondary emphasis. I relied heavily on the idea that users would see a relatively short page at first that could expand as needed. We also recommended that search provide “smart” results. Because the one-page search interface presented a high risk of offering null results, we specified that the search engine would need to present best bets if not all criteria matched.
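The “best bets” fallback we specified can be sketched as follows. This is hypothetical Python, not the production logic, which would have run against the content management database: if no card matches every criterion, rank the collection by how many facets do match rather than returning an empty page:

```python
def smart_search(cards, criteria):
    """Return exact matches if any exist; otherwise fall back to best bets,
    ranked by the number of facet criteria each card satisfies."""
    exact = [c for c in cards
             if all(c.get(f) == v for f, v in criteria.items())]
    if exact:
        return exact
    # Best bets: never return nothing on an ecommerce/advertising site.
    scored = sorted(cards,
                    key=lambda c: sum(c.get(f) == v for f, v in criteria.items()),
                    reverse=True)
    return scored[:10]
```

The key design choice is the unconditional fallback: a partial match is always suggested to the user instead of a null result.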

All for naught?
Once I got over my angst about losing the battle over the search interface, I felt really great about the conclusion of the project. I had swallowed my pride and designed a direction for the interface based on what the users wanted. Moreover, I felt good because our months of working with Egreetings finished with a very successful relaunch which happened on time. Even better, the initial statistics after the launch showed a positive impact from the new card categories and navigation that we’d recommended. Transactions (cards sent) and the number of visits went up immediately, a rare achievement in any redesign because it usually takes users some time to adjust when a site undergoes major changes. Egreetings had implemented roughly half of our recommendations and the others were put onto their priority list for future updates. We even received thanks from the CEO and VP.

However, a month or two after our consulting engagement with Egreetings ended, we began getting some disturbing news from them. First came the announcement of layoffs of a portion of the large and talented team that had been assembled to complete the work of the relaunch. By the end of the year, we read the news that the company had been sold to one of their largest competitors, American Greetings. The San Francisco office closed after a transition period and the staff was mostly dispersed as the site’s operations were transferred to AG’s headquarters in Ohio.

This was certainly disturbing news for me. I felt sorry that the Egreetings team would no longer be together. I also worried that the site on which I’d spent such a considerable amount of time and effort would be wiped out. That has not happened so far, and my solace is that the site lives on. Even though it has now adopted a new subscription model as a way to generate revenue, much of the taxonomy and interface recommendations remain. Of course, there have been some changes, but a good information architecture should be flexible enough to adapt with a site’s needs over time. My short list of grievances includes my opinion that the tab navigation is no longer relevant. Also, my recommendations about filtering and searching never saw the light of day; Card Finder never got to fly. Nonetheless, I got to have the experiences of low-fidelity paper prototyping and designing a faceted search interface. And I can always fantasize that, maybe someday, one of the folks at American Greetings will find our report, dust it off and give our ideas a try.

Chris Farnum is an information architect with over four years’ experience, and is currently with Compuware Corporation. Three of those years were spent at Argus Associates working with Lou Rosenfeld and Peter Morville, the authors of Information Architecture for the World Wide Web.


Chris Farnum

Chris Farnum is a Senior Information Architect at Enlighten. His role is to define site structure and navigation based on user needs, strategic objectives, and well-organized content. He also specializes in creating taxonomies and defining the metadata needed for searching, browsing and content management. An essential part of his methodology is to incorporate user research into the design process.
Chris has been an IA for over eight years. His experience prior to joining Enlighten includes working at ProQuest Information and Learning, where he was instrumental in redesigning the ProQuest search interface. He has also worked for a wide array of clients as a consultant with Argus Associates and Compuware. His preparation for being an IA includes working as a professional librarian and earning a Masters in Information and Library Studies from the University of Michigan.

This comment isn’t directly related to the article (you’ve heard that one before haven’t you…).

Reading the article made me wonder: how much empirical research is there on the value of facets, controlled vocabularies, etc., in the domain of websites, intranets, and so forth? I can easily understand the value of these approaches in a library context, but how well do they translate to the tasks that we do on the web? I’d just be interested to know what research has been done.

The main thing that got me thinking about this was the fact you tried to apply theory/knowledge to the search issue, but your informed design proved to be unsuccessful when you did your user testing.

Hey, I don’t mind tangents. To address a couple of your points:
– I didn’t think that the user testing actually discredited the faceted org scheme. Instead it killed the wizard UI approach. I definitely included facets in the alternate design and in the final recommendations.
– I have to admit that I didn’t do a comprehensive lit survey on facet research during the project. I felt that facets were a design pattern that had already been well established (invented by S.R. Ranganathan in the 1930s) in the realm of classification and info retrieval. Of course, part of my not-so-secret librarian agenda has always been to try to apply ideas from LIS to web design.

If anyone knows of some good research studies on applying classification/facets to web design, please feel free to share.

Although I found the case study interesting, I have a few comments about your process and your approach to the results.

First of all, you started the search discussion by describing this as an opportunity to “shorten this process for people in a hurry” — i.e. the goal of search was to make the process of shopping for a card super fast. Right off the bat this argues directly against having a multi-page search wizard. As someone who has designed many a wizard and many a search, the one thing I can say is that even if having multiple pages makes things quicker in the long run, users still have the perception that having multiple pages is slower and more draining when they are trying to do something fast. Wizards seem to work best when there is a redundant process that by its nature will take a while but that always follows the same steps and has a clear finish. This does not really relate to search. As a result, I was SHOCKED that the client agreed to a 12-person user study to compare the two ideas. Not only does 12 people seem like a bit of an overkill for deciding which of these two ideas works better, but having such a formal test with a two-way mirror for observation with videotaping in order to make a decision about two formats that were still in rough paper prototype format seems like a particular waste of money. Note that I am not arguing against user study in this situation…I just think that you would have been able to have the exact same results testing 5 users in person without the fancy facilities and the incredible outlay of time.

However, this takes me to my next concern about the article. In the end, you discuss your pain and angst at “losing” and triumph your decision to swallow your pride. As a fellow designer, this commentary was a bit disappointing to me. I view our role as designers is to be the lone champions of users in a world of people trying to program things that they don’t like or sell them things they don’t want. Growing one’s ego about being the design expert and therefore the one with presumably the best ideas is a negative approach to interaction design. When I work with clients, I always emphasize that I don’t have a monopoly on good ideas, I am just the one they hire to make sure that the good ideas get through and the user always wins. Of course in the end, you were happy with the results, but I am surprised that your article presented your shock at being incorrect.

Ouch! I humbly accept your criticism. Your scolding is well founded. In my defence I’d like to offer a few comments:

– Hindsight is 20/20. At the time, there were a number of design constraints that led me to consider a wizard. In hindsight, I’m glad that I took the time to flesh out an alternate design. In the end I learned from both.
– For the purposes of the article I’m perhaps overstating my angst. I think that everyone should have an experience like this at least once. It’s a wonderful learning opportunity when you are proven wrong, especially when you are really attached to a misguided idea. But I hope you’ll forgive me for the momentary pain I experienced in the process.
– Spending on user testing was a little different in 2000 than now. Even so, one of the outcomes that the client specifically requested throughout the project was to have a nice audio/visual record of the tests. There was actually an advantage in that this is cheaper than travel/hotel costs for moving multiple people between San Francisco and Detroit.

It’s *very* refreshing and important to hear stories about how people other than us IAs can be right about UI issues. You are a brave soul to write an article about how you “disproved” your own best idea (at the time) and had the clarity of thought to see that in the end you DID achieve your goal because the users won. While it’s great to hear IA success stories, I also know that lessons like yours happen more often than not. They are just as valuable for us as individuals and as a collective as the case studies outlining why the IA was right from the get go. Nicely done.

Great story, I really appreciate you sharing the results of your search wizard testing (people often ask me about that).

I’m finding more and more situations where facets apply and avoiding dead ends is a huge plus: all kinds of e-commerce, especially high-ticket items like jewelry or expensive vacations. Even Internet Yellow Pages are taking this approach. So I think you were really on the right track.