I want an email client that offers a ‘reply to list’ function in addition to ‘reply to author’ and ‘reply to all’.

Even more importantly, I want it to put my message compose window in bright red or something whenever a plain ‘reply’ is going to a list, which it should be able to detect.

Oh, while we’re at it, let me pick my own default reply action on a per-list basis.

Is that too much to ask? From how often I see people making this mistake (sometimes embarrassingly), I’d guess this is one of the biggest email usability issues (right after spam prevention).

Email listservs don’t always use the List-* headers that would make this easy for a client, but I can think of some heuristics that could successfully identify list traffic much of the time: for instance, when the “To:” header matches the “Sender:” header, or when the “To:” header matches the “Reply-To:” header. Most of the time that’ll be a mailing list, and most mailing list messages can probably be caught by rules like these.
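The heuristics above can be sketched in a few lines using Python’s standard `email` package. This is only illustrative: it checks the standard List-* headers first, then falls back to the header-matching rules I described. A real client would want fuzzier address comparison than exact string matching.

```python
import email
from email import policy

def looks_like_list_mail(raw_message: str) -> bool:
    """Heuristically decide whether a message came through a mailing list.

    Checks the standard List-* headers (RFC 2369/2919) first, then falls
    back to comparing To: against Sender: and Reply-To:. The rules here
    are a sketch, not a complete detector.
    """
    msg = email.message_from_string(raw_message, policy=policy.default)

    # Reliable signal: headers set by most mailing list software.
    for header in ("List-Id", "List-Post", "List-Unsubscribe"):
        if msg[header] is not None:
            return True

    # Fallback heuristics: on many lists, To: matches Sender: or Reply-To:.
    to_addr = (msg["To"] or "").strip().lower()
    if to_addr:
        if to_addr == (msg["Sender"] or "").strip().lower():
            return True
        if to_addr == (msg["Reply-To"] or "").strip().lower():
            return True
    return False
```

A client could run a check like this when ‘reply’ is hit and flip the compose window to its warning color if it returns true.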

Or: There’s no such thing as a free lunch, but some meals are still better than others, even on a budget.

The horizon-l list has rather incredibly turned into “open source ILS for newbies” lately.

Someone posted [paraphrased, it’s a closed list]: “I know open source isn’t free, despite what everyone says. We don’t have much in-house technical expertise, and we’re worried that the open source ILSs don’t yet have the features we need from a mature ILS. What can we do?”

I have been thinking lately of a library subject guide system. A really great, utopian library subject guide system. I imagined a system where librarians would list databases and other resources (chosen from Metalib and/or some other central repository of our stuff, when possible; URLs entered manually when not), and also add other narrative text as desired. And organize the whole thing coherently somehow, without knowing any HTML.

1) The new product will be based on Unicorn. Will moving from Horizon to Unicorn be no easier than moving from Horizon to another vendor’s ILS? Will it be harder than moving to Evergreen?

2) Prior to the end of 2008, will we have a few more successful implementations of open source ILSs, in libraries comparable to our own, to give our own timid libraries enough confidence to make that move? Fall 2008 instead of Summer 2007, the previous (well, the latest previous) Corinthian release date, gives us more time.

The biggest winners of this announcement are Evergreen and Koha.

Oh, and all the rest of us too. Sometimes the only way to get off the sinking ship is to be pushed.

Dan Chudnov envisions a scenario for using OpenURL to let a person carry their ‘services’ around with them from website to website, in an automatic way. At least that’s my interpretation of his scenario, I’m sure he or someone else will correct me if I’m mis-characterizing it.

That’s started me thinking in more detail about what the architecture needed to support this scenario would look like. I’m going to make a few posts about this, starting with this one investigating how we should think about the ‘link resolver’. (Note that I’m not sure if this is exactly what dchud was thinking, just stated differently, or expands upon it, or even contradicts it! That’s why we write these things down: to tease out from each other what we mean and build shared mental models and vocabulary, right?)

So, what is a ‘link resolver’? Well, of course, it’s something that takes an OpenURL representing a bibliographic item and tells the user where they can get electronic access to that item. (And yes, the OpenURL includes not just an item citation but the ‘context’ of the request; still, let’s face it, the item requested, the ‘referent’, is the principal payload, and the main thing that ‘link resolvers’ act upon. In practice the extra stuff is just a bonus.) The very name ‘link resolver’ implies this scenario, but let’s consider an alternate, more abstract understanding of the class of services our ‘link resolvers’ fit into.

Richard Wallis of Talis posts on a project that impacts our fantasies of local indexing (rather than cross-search) for scholarly articles.

“By embedding Onix encoded journal article information in to a RSS 2.0 feed it was possible to build a process, capable of being automated, for those articles to be inserted in to a library catalogue without human intervention.”

As I’ve told some people, I have some code to put the info from SFX about which databases (i.e., SFX ‘targets’) have online coverage for a given serial onto the OPAC page for that serial. I think this is a fairly easy improvement you can make with a big impact.

So I finally get around to making a blog to write about library matters. Attending the Code4Lib conference was the final impetus. What a great conference. Nice to spend a week discussing very interesting ideas with very smart people about how to make libraries work better in the digital environment. The way we try to extend and grow this community is with communication, right? Less re-inventing of wheels and isolation, more synergy and collaboration. Not just on code, but sharing of analyses, plans, and experiences, and participation in public discourse to take our collective practice forward. So a blog is one way of doing that public communication. (‘Publishing’ is just a word for ‘public communication’, right? A blog may not be the best way to do it, but it’s better than nothing.)