building the digital branch for libraries

Ever since I watched SNL’s digital short “Lazy Sunday” on YouTube, I’ve recognized the fun potential of this online video thing. In time, YouTube would deliver other priceless moments. Try a Google search for “boom goes the dynamite” or “chocolate rain” to see what I’m talking about. I say “priceless” with tongue firmly planted in cheek. YouTube moments are usually viewed within a certain frame of satire and in the spirit of lampooning the “everyone is a star” meme that this great Internet enables.

So, it was with a high level of interest that I decided to participate in an experiment to use YouTube to promote discourse and civic debate. The idea comes from some local independent filmmakers and several academics from Montana State University who ask a series of questions about the emerging video platform by creating a YouTube video. Among the questions asked: How are people thinking about the effects of YouTube on society? What is unique about this new age of video sharing? Is it just a game that kids play to create alter-egos to insult others? Are hate speech and acts of violence a necessary consequence of allowing free video sharing? Does YouTube have far wider implications for social justice, religion and disenfranchised groups?

But enough with my writing… It’s all very meta, so have a watch and leave a comment if you have something to say.

Finding and replacing strings and characters can be a dicey operation for a web developer. Too much can lead to breaking your web site. Too little can lead to missing that piece of HTML needing to be updated or deleted. Many tools exist that can help you in the process – Dreamweaver, HomeSite, and many text editors have powerful interfaces for searching and replacing. (E.g., use Ctrl-F on Windows or Cmd-F on Mac to see the Find and Replace window in Dreamweaver.) But there’s some really great functionality on the command line for Unix/Linux users that shouldn’t be overlooked. I’ve been experimenting with a procedure for making these global matches and replaces within the Unix shell environment, and I wanted to document the process somewhere. This seems like as good a place as any…

Step 1: Find the pattern needing to be replaced or updated, print out files needing change

find . -exec grep 'ENTER STRING OR TEXT TO SEARCH FOR' '{}' \; -print

*Note: I’m using the “find” and “grep” commands to search for a matching pattern which will print out a list of files and directories that need changes. If I’m at the top level of my web site the “.” in the find command will search for the pattern down through any directories below. On a large site, the process can take some time.

Step 2: Copy the directories or files needing changes into a /test-backup/ directory

*Note: These directories would be named according to the directories or files you need to change based on the results from Step 1. The “-p” will preserve owners, groups, and timestamps in the copied directory. The “-r” will copy recursively down through any associated sub-directories. I do this so I can compare the new directory to the original directory after I’ve test run the global changes.
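The copy itself is a one-liner. A minimal sketch, assuming the files identified in Step 1 live under a directory named “site” (a placeholder – substitute your own path):

```shell
# Demo setup: "site" stands in for the real directory from Step 1.
mkdir -p site
echo 'old text' > site/index.html   # stand-in content for the demo

# The backup step: -p preserves owners/groups/timestamps, -r recurses.
cp -p -r site test-backup

ls test-backup   # the copy now mirrors the original
```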

Step 3: Run test on find and replace expression in /test-backup/ directory

*Note: I’m using the “find .” command to search for .php and .html files in the current working directory, as I only want to target the files that need to be touched (adjust according to your requirements). Next, I’m piping that result to the “grep” command, which keeps only the files that contain the string or text specified. Finally, I’m passing the grep result to the “sed” command, which matches the string or text and replaces it with the new value.
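The pipeline described above can be sketched roughly as follows, with ‘old text’ and ‘new text’ as placeholder values and a throwaway demo directory standing in for the real /test-backup/ copy. (Note: the bare -i flag below is GNU sed; BSD/macOS sed expects sed -i ''.)

```shell
# Demo setup: a throwaway copy with a couple of stand-in files.
mkdir -p test-backup
printf '<p>old text</p>\n'        > test-backup/index.html
printf '<?php echo "old text";\n' > test-backup/page.php
printf 'old text\n'               > test-backup/notes.txt   # wrong extension: left alone

# Find .php/.html files, keep only those grep actually matches,
# then hand that file list to sed for the in-place replacement.
find test-backup \( -name '*.php' -o -name '*.html' \) \
  -exec grep -l 'old text' {} \; \
  | xargs sed -i 's/old text/new text/g'   # GNU sed; use sed -i '' on macOS
```

Once the test run looks right, a recursive diff between the original directory and the test copy (diff -r original test-backup) shows exactly which lines changed before anything live is touched.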

Step 5: After testing, run the expression from Step 3 in ALL the directories or files needing the changes, then delete the /test-backup/ directory

Steps 1 and 3 are the heart of the matter. I’m learning the power of these commands, so I’m pretty cautious about backing up and testing on directories and files that aren’t live. Once I have the expression dialed in, I’ll run it on a more global scale. So, there you have it – my find and replace process in a nutshell. Use at your own discretion and feel free to share your thoughts in the comments.

It’s been just over a week since I returned from Computers in Libraries. The InfoToday crew usually puts together a nice set of speakers and ideas and this year was more of the same. I’m not even going to try and summarize everything – check LibrarianInBlack for some of the best summaries or the Technorati CIL2008 tag to follow along from home. As I mentioned in a previous post, I arrived as the conference was winding down, but I was still able to pick up some tips, teach a couple of workshops, and spot some library trends. I’m going to focus on the trends part as the week has given me some time to collect my thoughts. So, here they are, the library trends I’m seeing (based on CIL’s programming and my interests).

Twitter and Libraries – Microblogging and its associated platforms are starting to be noticed and utilized in some library settings. Right now, the emphasis is on connecting to friends, but as more info gets shared in 140-character bits, the Twitter channel is becoming a resource. It’s all about following keywords and topics and choosing your Twitter network of followers (aka “friends”). Pownce and Tumblr are two similar services to watch.

Web Services for Everybody – When Yahoo Pipes came on the scene about a year ago, I wondered when it might start showing up as a tool for library mashups. It’s actually a pretty simple way to use web services in a graphical user interface. Pipes generally seems limited to simpler formats (RSS and Atom), but it introduces the power and concepts behind web services intuitively. In the long term, it’s still best to learn web services, XML, and some scripting for truly robust mashups, but Pipes lowers the bar for entry in a really nice way.

The Portable Library – At MSU libraries, we’ve been looking at ways to bring library resources into a user’s networked environment. See our widgets and tools for some examples. It was great to meet other developers and libraries pursuing similar efforts. I got to have an extended discussion with Binky Lush, lead web developer at Penn State University Libraries, about her efforts to place library web services into users’ working environments. It’s refreshing to see some of these attempts to move away from the gatekeeping model of web sites as single points of service. Leveraging the network and learning to broadcast bits and pieces of the library into multiple web spaces – iGoogle, iTunes, Course Management Systems – will be an important move for libraries going forward.

Open Source and Learning Outside the Profession – Open source solutions for libraries are becoming easier to implement, and it was nice to see the balanced conversation and practical examples of open source possibilities for libraries that were part of the “Open Source” track moderated by Nicole Engard. I wasn’t able to see the “Beyond Libraries: Industries Using Hot Tech” session, but the idea of looking outside our comfort zone and learning from other industries really resonates with me as an essential trend to follow. Steven Cohen (the track organizer) is onto something here. Innovation frequently happens elsewhere; let’s hear more about it.

I came a bit late to the Computers in Libraries 2008 party by arriving on Tuesday night, but I was still able to catch up with a few people and make some new friends. It was interesting presenting to a group as they were eating lunch (never done that one before), but the presentations went well. I also had a great time teaching the workshops yesterday. For those that are interested, all the files and code samples from my talks are available below.

David Lee King presented on strategies for implementing web 2.0 in the library. It was great to see David address the “if you build it, they will come” myth. He stressed the need to ask questions about: why a new service is needed, how a service will be supported, and how to promote and encourage use of the service. A nice, balanced presentation.

Frank Cervone and Amanda Hollister presented on moving towards a base of evidence for design and development decisions. Frank stressed that the research process is iterative: ongoing and continual. Amanda demoed a software application for tracking user paths (built with ASP, XML, and Visual Basic). She was able to show anticipated paths and actual paths to content, which made for a nice measuring stick. It was a great session helping to frame exactly how to carry out research for development decisions and move away from the anecdotal.

Jeff Wisniewski spoke about the new rules of web design. He took out the magnifying glass to really consider some of the pillars of web design. Among his findings: simplicity rules, but we need to move away from the religion of simplicity; content is king, but design matters; all content is created equal, but some content is more equal than others – eResources content is primary; design for 800×600? – 1024×768 is the new 800×600; RIP web-safe palette – most devices can now display a wide range of colors; how often to redesign? – constantly, iteratively; top of the page is prime real estate – nope, there is banner blindness. (Note to self: Jeff had a slide at the end with all of his citations. I’ll have to get the link.)

Has your library discussed creating a Flickr account? A MySpace teen site? Creating a blog? All these ideas are great, yet all have the potential to fail if not well-implemented. This session provides practical planning and implementation tips for dealing with emerging digital trends, focusing on setting up new Web 2.0 services such as MySpace, blogs, and Flickr to meet client needs.

Delivering services based on evidence, rather than anecdotes, is a growing trend within librarianship. Learn how two libraries have introduced evidence-based practice into the Web design process. The Northwestern case study explores the implementation of research into practice through an examination of the environment and the method of facilitation that led to evidence-based decision making for the library’s Web site. The Memorial Library Web site team collects and analyzes paths that users take through the site to discover what users are doing. Do students use the subject pages? How many links do they click before entering a database? Learn how the library has started to use the information about paths and user groups to create a personalized Web site.

Web design has evolved over the last decade: Do you know what the new rules are? Is less still more? Is scrolling bad? Is Flash still verboten? Learn about which design guidelines are still relevant, which no longer apply, and what you need to know to design a site that’s useful, usable, and attractive in the Web 2.0 world.

My preconference, “AJAX for Libraries”, with Karen Coombs went really well. It’s always great working with Karen. She’s cool, composed, and “wicked” knowledgeable. For the second year in a row, we had a great group of participants. It’s nice to see a growing interest in emerging web programming frameworks and how they might be applied to libraries.

table_name is the database table you need to edit.
column_name is the database table column name to edit.
old_string/text is the original string or text to match and replace.
new_string/text is the new string or text you want to add.
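For reference, here is the shape of the statement those four pieces plug into – a sketch using MySQL’s built-in REPLACE() string function, with placeholder names throughout:

```sql
-- All identifiers and string values below are placeholders.
UPDATE table_name
SET column_name = REPLACE(column_name, 'old_string/text', 'new_string/text')
WHERE column_name LIKE '%old_string/text%';  -- optional: touch only matching rows
```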

It’s short, sweet, and delightfully efficient. The longevity and active development around MySQL and SQL always surprise me. If you have a business problem, chances are there is a function or built-in procedure already in the code ready to answer it.

Tips and tricks just like this can be found pretty regularly by trawling the MySQL manual and forums. Another method for keeping up and learning is “tagwatching” on del.icio.us. Some possibilities for watching the tag “mysql”:

That’s right. With that last RSS URL, you can subscribe to a tag and watch as the latest posts come into your feedreader of choice. (del.icio.us has a feed for all of its tags.) Good stuff for keeping the learning going.