Every workshop has improved on the previous year’s, and HCIR 2011, which will take place on Thursday, October 20, will be no exception.

Our venue will be Google’s headquarters in Mountain View, California. We could hardly imagine a more appropriate venue: Google has done more than any other company to contribute to everyday information access. Google has been extremely generous as a host and sponsor (other sponsors include Endeca and Microsoft Research), and its location in the heart of Silicon Valley is ideal for attracting researchers and practitioners building the future of HCIR.

Our keynote speaker will be Gary Marchionini, Dean of the School of Information and Library Science at the University of North Carolina at Chapel Hill. Gary coined the phrase “human–computer information retrieval” in a lecture entitled “Toward Human-Computer Information Retrieval”, in which he asserted that “HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy.” We are honored to have Gary deliver this year’s keynote.

But of course the main attraction is the contributions of the participants. This year we invite three types of papers: position papers, research papers, and challenge reports. Possible topics for discussion and presentation at the workshop include, but are not limited to:


At the risk of sounding a little curmudgeonly, don’t you see the irony in hosting HCIR at Google?

Sure, it’s true as you say: “Google has done more than any other company to contribute to everyday information access.”

But look again at Marchionini’s definition of HCIR (which also struck me the first time I read it; I wholeheartedly agree):

“HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy.”

See in particular that last bit: people taking responsibility and expending more cognitive energy. The HCI can’t just do it for them. It has to be designed to let the user express that effort themselves.

I’m fully on board with that goal. But I can’t think of a company more at odds with it than Google. Most of what Google does is algorithms and interfaces geared toward reducing the need, or even the ability, for users to improve their information seeking by taking responsibility and expending cognitive energy.

For example, “Google Suggest” as an HCI interaction mechanism quickly fills in the most popular, successful queries, steering you away from your own, unique expressions. And Google Universal Search takes away your ability (by default) to express what type of information you are looking for, e.g. a video, a book, etc. Instead, it just blends its best guess of result type into a single universal interface.

Those are all very nice tools. Don’t get me wrong. But they hardly seem in line with Marchionini’s vision, in that they’re all designed to lower the cognitive effort of the user, rather than give the user an interface in which to more powerfully express his or her cognitive information seeking efforts. I’ve been waiting for over a decade now, for example, for explicit relevance feedback. Not an explicit +1, which doesn’t affect my current information seeking task and therefore is not a true HCI expression of my cognitive efforts to find information. But real relevance feedback.
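For readers who haven’t run into it, the “real relevance feedback” the comment above is asking for has a classic textbook form: the Rocchio update, in which the user’s explicit relevant/non-relevant judgments directly reshape the query for the next round of results. Here is a minimal sketch with toy, made-up term weights (not any real search engine’s API):

```python
# Minimal sketch of Rocchio relevance feedback: the user's explicit
# judgments pull the query vector toward marked-relevant documents
# and away from marked-non-relevant ones. All vectors and weights
# below are illustrative toy values.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return an updated query vector (term -> weight)."""
    updated = {term: alpha * w for term, w in query.items()}
    for doc in relevant:
        for term, w in doc.items():
            updated[term] = updated.get(term, 0.0) + beta * w / len(relevant)
    for doc in nonrelevant:
        for term, w in doc.items():
            updated[term] = updated.get(term, 0.0) - gamma * w / len(nonrelevant)
    # Terms driven to non-positive weight are conventionally dropped.
    return {t: w for t, w in updated.items() if w > 0}

query = {"relevance": 1.0, "feedback": 1.0}
marked_relevant = [{"relevance": 0.8, "rocchio": 0.6}]
marked_nonrelevant = [{"plus-one": 0.9}]
print(rocchio(query, marked_relevant, marked_nonrelevant))
```

The point of the sketch is the contrast drawn above: unlike a +1, the judgment feeds back into the *current* search, changing what the user sees next.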

I bounced over to the HCIR website, and became intrigued by the “IP&M Special Issue”. Went to the IP&M site and tried to figure out how to get a subscription… they don’t seem terribly keen to increase readership. HCIR fail, but I’m going to buy it anyways 🙂

Wordle word cloud FTW? Telltale grey line down the right – I’ve created many a screenshot with the same watermark.

Jeremy: you would disappoint me if you weren’t at least a little bit curmudgeonly. 🙂 I’m certainly aware of the irony — I’ve done my share of positioning HCIR relative to the expectations set by Google. That makes me all the more proud of enlisting Google as a sponsor last year (not to mention having Dan Russell as a keynote) and as a host this year. Google may have differences with the HCIR vision, but it’s certainly taking that vision seriously.

molten_tofu: yup, it’s a Wordle. I thought of editing it, but I decided to keep it as is — I borrowed it from Tony Russell-Rose. As for IP&M, I believe that Elsevier sells individual articles and subscriptions through ScienceDirect. You can always ask them.

The challenge task sounds pretty fun! At first I was thinking of a clustering approach, but I kind of think you might have better luck with random walks of the citation graph. On the example task, something like taking the search results for [Latent Semantic Indexing Deerwester], walking papers cited by those results and citations of those results, then filtering for 1988 or before. After all, there’s no real need to do the relevance matching on topic when the authors of the papers have probably already done that for you in their citations. You’d probably need a bit of tuning to make sure you’re sticking to documents that are unusual to be cited (everyone cites the 1983 Information Retrieval textbook, for example, so Deerwester citing that doesn’t indicate anything particularly useful), but I suspect that might get pretty good results pretty easily.

Hmm, I wonder what kind of UI you’d want on top to make it easy for people to walk through the documents and find relevant ones. Beyond just finding the documents, you really need to highlight relevant snippets from each paper or otherwise help focus attention, but it’s not trivial to know what a relevant snippet is, given that these papers don’t really use LSI or terms from the definition of LSI. That part could be a bit tougher. It seems like you could easily go down a rat’s nest of work only to find that just surfacing the abstract and conclusion is more useful for helping people filter than all the attempts you made to have the system highlight snippets deeper in the paper. Well, anyway, sounds like a fun challenge!
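The walk proposed in the comment above could be sketched roughly as follows. The tiny citation graph, paper identifiers, and scoring weights here are entirely made up for illustration; a real system would draw its graph from actual citation data:

```python
# Sketch of the proposed citation-graph walk: start from seed search
# results, expand one hop along both "cites" and "cited by" edges,
# filter by publication year, and downweight papers that everyone
# cites (whose citations therefore carry little signal).
# The tiny graph below is purely hypothetical.

papers = {
    "deerwester1990": {"year": 1990, "cites": ["textbook1983", "furnas1988"]},
    "textbook1983":   {"year": 1983, "cites": []},
    "furnas1988":     {"year": 1988, "cites": ["textbook1983"]},
    "later_lsi":      {"year": 1995, "cites": ["deerwester1990"]},
}

def cited_by(paper_id):
    """Papers in the graph that cite the given paper."""
    return [p for p, meta in papers.items() if paper_id in meta["cites"]]

def expand(seeds, max_year):
    """One hop out from the seed results, filtered and scored."""
    candidates = {}
    for seed in seeds:
        neighbors = papers[seed]["cites"] + cited_by(seed)
        for n in neighbors:
            if papers[n]["year"] <= max_year:
                candidates[n] = candidates.get(n, 0.0) + 1.0
    # Downweight heavily cited papers, per the "everyone cites the
    # 1983 textbook" observation.
    for n in candidates:
        candidates[n] /= 1 + len(cited_by(n))
    return sorted(candidates, key=candidates.get, reverse=True)

print(expand(["deerwester1990"], max_year=1988))
```

In this toy graph the walk surfaces the 1988 paper ahead of the heavily cited 1983 textbook, which is the kind of ranking behavior the tuning described above is after.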