
Monthly Archives: December 2005

Last week I was reading Dorothea Salo’s posting about OCLC’s report on library branding, and it got me to thinking about this a bit.

In particular, I thought about her comment:

I would want to trial-balloon a “Deep Web” play in my next survey, if I were OCLC. I would want to know how many people have heard of the Deep Web, what they think is in it, whether they think information useful to them is in it, whether they would access it through their libraries if they could. This moves away from free-vs.-paid and toward exclusive-vs.-nonexclusive. People like the idea of being privileged. If the library is a place that privileges them, I think they’ll go for it. Special-collections and archives get a boost in this campaign, too; access to rare or unique information is the ultimate in privilege.

I see a tension here. While many push for an “information wants to be free” model, making information free inherently devalues the role of the organization that provides it. In fact, to take her quote even further, this is especially true of special collections and archives.

Allow me to explain.

Users aren’t particularly discriminating about where they get their information. Our students and faculty don’t really care whether the article or research they are looking at comes to them courtesy of Georgia Tech or whether it was found in Citeseer. They are more likely to say they found something in “Google Scholar” than in the institutional repository of the school that actually holds it. The more open the information is, the less exclusive our collection becomes and the less leverage and value we hold (at least under our traditional model).

With special collections, this is especially true. Special collections are “special” because they are “unique”. Libraries spend a lot of money curating these collections. Historically, this has enjoyed a fairly good ROI because it distinguishes the library (and therefore, larger institution) as something “special” itself. These materials are exclusive to that particular institution and give value to the collection.

However, there is pressure to digitize and publish these collections. If all of these collections are digitized and published, we have a bunch of silos strewn about the internet, requiring the user to know about and find them in order to use them. Since it is a lot of work to digitize and mark up these collections, there’s not a terribly good return for the effort.

In an effort to improve findability, the collections need to be aggregated with other similar collections to increase their exposure. The result is improved awareness and accessibility, but diluted exclusivity and branding. Whoever provides the aggregation/discovery service gets the benefit of the content, so some of the content providers (inherently) must lose.

So, what does this mean? It should not prevent us from making our collections more open and accessible. That runs counter to our mission. However, we need to start thinking of ways to generate value when our information is free. There are plenty of ways of doing that, such as tailoring services that aggregate the “free” information for our communities, or building systems that can use the information in unique and specialized ways.

There is a large cultural shift that needs to take place to realize this future, however. We still place a lot of emphasis (way too much, really) on the size and uniqueness of our collections. With a world of information available (or a lot of it, at any rate), it’s not so much an issue of how many books you have in your building, but how you are able to harness all the good data and present it in useful and meaningful ways. There aren’t easy metrics for this. ARL can’t just count book spines and annual budgets. Serious consideration needs to be given to how a library is utilizing the collections outside its walls.

Since my foray into python a couple of months ago, I’ve been enjoying branching out into new languages.

I had pitched the concept of a link resolver router for the state universal catalog to a committee I sit on (this group talks about SFX links in the 856 tag and whatnot). The problem with making links in a publicly available resource point to your institutional resolver is just that: they point to your institutional resolver, despite the fact that your audience could be coming from anywhere. This plays out even more in a venue such as a universal catalog, since there’s not really a “home institution” to point a resolver link at, anyway. OCLC and UKOLN both have resolver routers, and OCLC’s certainly is an option, but I don’t feel comfortable with the possibility that all of our member institutions might have to pay for the service (in the future). My other problem with OCLC’s service is that you can only belong to one institution, and I have never liked that (especially as more and more institutions acquire link resolvers).

So, in this committee I mentioned that it would be pretty simple to make a router, and since I was having trouble getting people to understand what exactly I was talking about, I decided to make a proof-of-concept. And, since I was making a proof-of-concept, I thought it’d be fun to try it in Ruby on Rails.

Now, a resolver router is about the simplest concept possible. It doesn’t really do anything but take requests and pass them off to the appropriate resolver. It’s a resolver abstraction layer, if you will. I thought this was a nice, small project to cut my Ruby teeth on. There’s a little bit of database, a little bit of AJAX. It’s also useful, unlike making a cookbook from a tutorial or something.
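The pass-through at the core of a router can be sketched in a few lines of plain Ruby (no Rails here, and the resolver base URLs and names below are made up for illustration, not taken from the actual proof-of-concept):

```ruby
# Hypothetical mapping of community IDs to resolver base URLs.
# A real router would pull these from its database.
RESOLVERS = {
  "gatech" => "http://resolver.example.edu/gatech",
  "fsu"    => "http://resolver.example.edu/fsu"
}

# The router's whole job: re-attach the incoming OpenURL query string
# to the base URL of each resolver the user has chosen, producing one
# redirect target per community.
def route(openurl_query, community_ids)
  community_ids.map { |id| "#{RESOLVERS.fetch(id)}?#{openurl_query}" }
end

targets = route("genre=article&issn=1082-9873", ["gatech", "fsu"])
targets.each { |url| puts url }
```

The point is that the router never interprets the citation itself; it just relays the same OpenURL to whichever resolvers know the user's communities.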

It took about three days to make this. After you pick your resolver (Choose a couple! Add your own!), you’ll be taken to a page to choose between your various communities for appropriate copy.

I chose this particular citation because it shows the single biggest limitation of link resolvers (if you choose Georgia Tech’s resolver and Emory’s resolver, for instance): despite the fact that this article is freely available, it does not appear in my resolver. That’s not really the use case I envision, though. I am thinking more of a case like my co-worker, Heather, who should have access to Georgia Tech’s collection, Florida State’s resources (she’s in grad school there), Richland County Public Library (she lives in Columbia, SC), and the University of South Carolina (where her husband is a librarian). The resolver router eliminates the need to search for a given citation in the various communities (indeed, even the need to think of or know where to look within those communities).

Sometime later this winter, I’ll have an even better use case. I’ll keep that under wraps for now.

Now, my impression of Ruby on Rails… For a project like this, it is absolutely amazing. I cannot believe I was able to learn the language from scratch and implement something that works (with this amount of functionality) in such a short amount of time. By bypassing the need to create the “framework” for the application, you can just dive into implementation.

In fact, I think my time to implementation would have been even faster if the number of resources/tutorials out there didn’t suck out loud. Most references point to these tutorials to get you started, but they really aren’t terribly helpful: they explain nothing about why they are doing what they are doing. I found this blog posting to be infinitely more useful. Her blog in general is going in my aggregator, I think.

When it comes to learning Ruby, this is a masterful work of art… but… not terribly useful if you just want to look things up. I recommend this for that.

Anyway, I am so impressed with Ruby on Rails that I am (currently) planning on using it for the “alternative opac project”, which is now code-named “Communicat”. More on this shortly (although I did actually develop the database schema today).