In particular, I thought about her comment:
I would want to trial-balloon a Deep Web play in my next survey, if I were OCLC. I would want to know how many people have heard of the Deep Web, what they think is in it, whether they think information useful to them is in it, whether they would access it through their libraries if they could. This moves away from free-vs.-paid and toward exclusive-vs.-nonexclusive. People like the idea of being privileged. If the library is a place that privileges them, I think they’ll go for it. Special-collections and archives get a boost in this campaign, too; access to rare or unique information is the ultimate in privilege.
I see a tension here. While many push for an “information wants to be free” model, such a model inherently devalues the role of the organization that makes the information free. In fact, to take her point even further, this is especially true of special collections and archives.
Allow me to explain.
Users aren’t particularly discriminating about where they get their information. Our students and faculty don’t really care whether the article or research they are looking at comes to them courtesy of Georgia Tech or was found in CiteSeer. They are more likely to say they found something in “Google Scholar” than in the institutional repository of the school actually providing it. The more open the information is, the less exclusive our collection becomes and the less leverage and value we hold (at least under our traditional model).
With special collections, this is especially true. Special collections are “special” because they are unique. Libraries spend a lot of money curating these collections. Historically, this has enjoyed a fairly good return on investment because it distinguishes the library (and, therefore, the larger institution) as something “special” itself. These materials are exclusive to that particular institution and give value to the collection.
However, there is pressure to digitize and publish these collections. If they are all digitized and published independently, we end up with a bunch of silos strewn about the internet, each requiring the user to know about and find it in order to use it. Since it is a lot of work to digitize and mark up these collections, that’s not a terribly good return for the effort.
To improve findability, the collections need to be aggregated with other similar collections to increase their exposure. The result is improved awareness and accessibility, but at the same time aggregation dilutes exclusiveness and branding. Whoever provides the aggregation/discovery service gets the benefit of the content, so some of the content providers must, inherently, lose.
So, what does this mean? It should not prevent us from making our collections more open and accessible; that would run counter to our mission. However, we need to start thinking of ways to generate value when our information is free. There are plenty of ways to do that, such as tailoring services that aggregate the “free” information for our communities, or building systems that can use the information in unique and specialized ways.
A large cultural shift needs to take place to realize this future, however. We still place a lot of emphasis (way too much, really) on the size and uniqueness of our collections. With a world of information available (or a lot of it, at any rate), the issue is not so much how many books you have in your building, but how well you are able to harness all the good data and present it in useful and meaningful ways. There aren’t easy metrics for this. ARL can’t just count book spines and annual budgets. Serious consideration needs to be paid to what a library does with collections outside its own walls, and how.