Archive

unapi

Ariadne #48 includes “Introducing unAPI”, written by Dan, Jeremy, Peter, Michael, Mike, Ed and me. It explains the potential of unAPI and walks through some example implementations.

Like the last article I “wrote” with this group, I didn’t have to do much. On the flipside, publishing has such a minute effect (if any) on my career, I guess the “effort” reflects the “rewards”.

Still, this one was tough. A refactoring of the Umlaut broke the unAPI interface right about the time the editors were going over the example cases in the article. It was extremely difficult to put development cycles into this when I had a ton of other things I desperately needed to get completed before the start of the semester.

And that’s where the struggle is. While I’m completely sold on the “potential” of unAPI, that’s all it is right now. COinS was a much easier concept for me to get behind (although I would definitely say I have invested more in unAPI than COinS) since it’d be so much easier to implement on both the information provider and client ends. UnAPI is a much, much more difficult sell. However, the payoff is so much greater that one has to have faith and sometimes dedicate time that should probably have been spent on more immediate needs.

I’ve been waiting for a while to have this title. Well, actually, not a long while, and that’s testimony to how quickly I’m able to develop things in Rails.

While I think SFX is a fine product and we are completely and utterly dependent upon it for many things, it does still have its shortcomings. It does not have a terribly intuitive interface (no link resolver that I’m aware of does) and there are some items it just doesn’t resolve well, such as conference proceedings. Since conference proceedings and technical reports are huge for us, I decided we needed something that resolved these items better. That’s when the idea of the übeResolver (now mainly known as ‘the umlaut’) was born.

Although I had been working with Ed Summers on the Ruby OpenURL libraries before Code4Lib 2006, I really began working on umlaut earlier this month, when I thought I might have something coherent together in time for the ELUNA proposal submission deadline. Although I barely had anything functional by the 8th (the deadline — 2 days after I had really broken ground), I could see that this was actually feasible.

Three weeks later and it’s really starting to take shape (although it’s really, really slow right now). Here are some examples:

The journal ‘Science’

A book: ‘Advances in Communication Control Networks’

Conference Proceeding

Granted, the conference proceeding is less impressive since IEEE is available via SFX (although, in this case, it’s getting the link from our catalog), and I’m having less luck with SPIE conferences (they’re being found, but I’m having some problems zeroing in on the correct volume — more on that in a bit). Still, given that this is the result of fewer than 14 days of development time, I don’t think it’s a bad start.

Now on to what it’s doing. If the item is a “book”, it queries our catalog by ISBN; asks xISBN for other ISBNs and queries our catalog for those; does a title/author search; and does a conference name/title/year search. If there are matches, it then asks the OPAC for holdings data. If the item is either not held or not available, it does the same against our consortial catalog. Currently it does both, regardless, because I haven’t worried about performance yet.
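
To make that cascade concrete, here is a rough Ruby sketch of the control flow; the Catalog, XISBN and holdings helpers are hypothetical stand-ins for the SRU and xISBN plumbing, not Umlaut’s actual classes:

```ruby
# A sketch (not Umlaut's actual code) of the "book" cascade described above.
# Catalog#search_*, Catalog#holdings and XISBN.related_isbns are hypothetical
# stand-ins for the SRU and xISBN plumbing.

def search_for_holdings(catalog, ctx)
  records = []
  records += catalog.search_isbn(ctx.isbn) if ctx.isbn

  # Ask OCLC's xISBN service for other editions and try those ISBNs, too.
  if records.empty? && ctx.isbn
    XISBN.related_isbns(ctx.isbn).each { |i| records += catalog.search_isbn(i) }
  end

  # Fall back to title/author, then conference name/title/year.
  records = catalog.search_title_author(ctx.title, ctx.author)             if records.empty?
  records = catalog.search_conference(ctx.conference, ctx.title, ctx.year) if records.empty?

  records.flat_map { |rec| catalog.holdings(rec) }
end

def resolve_book(ctx, local_catalog, consortial_catalog)
  holdings = search_for_holdings(local_catalog, ctx)

  # If the item isn't held (or isn't available) locally, repeat against the
  # consortial catalog.  (Right now Umlaut just queries both, regardless.)
  holdings += search_for_holdings(consortial_catalog, ctx) if holdings.none?(&:available?)
  holdings
end
```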

It checks the catalog via SRU and tries to fill out the OpenURL ContextObject with more information (such as publisher and place). This would be useful to then export into a citation manager (which most link resolvers have fairly minimal support for). Since it already has the MODS records in hand, it also grabs LCSH and the table of contents (if they exist). When I find an item with more data, I’ll grab that as well (abstracts, etc.).
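
For the curious, the SRU call and the MODS scraping amount to something like this. It is only a sketch: the endpoint, the bath.isbn index and the ContextObject accessors are assumptions about our setup rather than Umlaut’s real API:

```ruby
# Rough sketch of the SRU lookup and MODS scraping described above.  The base
# URL, index name and ContextObject accessors are assumptions, not Umlaut's API.
require 'net/http'
require 'uri'
require 'rexml/document'

SRU_BASE = 'http://catalog.example.edu/sru'              # hypothetical endpoint
MODS_NS  = { 'mods' => 'http://www.loc.gov/mods/v3' }

def mods_for(isbn)
  params = { 'operation' => 'searchRetrieve', 'version' => '1.1',
             'query' => %(bath.isbn="#{isbn}"),           # index name is a guess
             'recordSchema' => 'mods', 'maximumRecords' => '1' }
  uri = URI("#{SRU_BASE}?#{URI.encode_www_form(params)}")
  REXML::Document.new(Net::HTTP.get(uri))
end

def enrich_context_object!(ctx, mods)
  # Publisher and place go straight into the ContextObject (handy for citation
  # export); LCSH and the table of contents get stashed for later use.
  pub   = REXML::XPath.first(mods, '//mods:originInfo/mods:publisher', MODS_NS)
  place = REXML::XPath.first(mods, '//mods:originInfo/mods:place/mods:placeTerm', MODS_NS)
  ctx.publisher ||= pub.text   if pub
  ctx.place     ||= place.text if place

  ctx.subjects = REXML::XPath.match(mods, "//mods:subject[@authority='lcsh']", MODS_NS)
                             .map { |s| s.elements.to_a.map(&:text).compact.join(' -- ') }
  toc = REXML::XPath.first(mods, '//mods:tableOfContents', MODS_NS)
  ctx.toc = toc.text if toc
end
```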

It then queries Amazon Web Services for more information (editorial content, similar items, etc.).

It still needs to check SFX, but, unfortunately, that would slow it down even more.

For journals, it checks SFX first. If there’s no volume, issue, date or article title, it will try to get coverage information. Unfortunately, SFX’s XML interface doesn’t send this, so I have to get it from elsewhere. When I made our Ejournal Suggest service, I had to create a database of journals and journal titles, and I have since been adding functionality to it: since I am already running title reports from SFX, which include the subject associations, I load those as well, and because the reports include coverage, too, adding that field was trivial.

So when I get the SFX result document back, I parse it for its services (getFullText, getDocumentDelivery, getCitedBy, etc.), and if no article information is sent, I make a web service request to a little PHP/JSON widget I have on the Ejournal Suggest database that returns coverage, subjects and other similar journals based on the ISSN. The ‘other similar journals’ are 10 (an arbitrary number) other journals that appear under the same subject headings, ordered by number of clickthroughs in the last month. This doesn’t appear if there is an article, because I haven’t decided whether it’s useful in that case (plus the user has a link to the ‘journal level’ if they wish).
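
In rough Ruby (the production code lives in Umlaut, and the widget itself is PHP), the journal path looks something like this; the SFX XPath is simplified and the widget URL and JSON field names are assumptions:

```ruby
# Sketch of the journal path: group the SFX response by service type and, if
# no article-level data came in, fall back to the Ejournal Suggest widget for
# coverage, subjects and similar journals.  The SFX XPath is simplified (the
# real response DTD differs) and the widget URL/JSON shape are assumptions.
require 'net/http'
require 'uri'
require 'json'
require 'rexml/document'

SUGGEST_URL = 'http://library.example.edu/ejournals/info.php'   # hypothetical

def services_from_sfx(sfx_xml)
  doc = REXML::Document.new(sfx_xml)
  services = Hash.new { |h, k| h[k] = [] }
  # e.g. "getFullText" => [targets...], "getDocumentDelivery" => [...], ...
  REXML::XPath.each(doc, '//target') do |target|
    type = target.elements['service_type']
    services[type.text] << target if type
  end
  services
end

def coverage_and_suggestions(issn)
  uri = URI("#{SUGGEST_URL}?#{URI.encode_www_form('issn' => issn)}")
  data = JSON.parse(Net::HTTP.get(uri))
  { coverage: data['coverage'],
    subjects: data['subjects'],
    # ten journals from the same subject headings, ordered by clickthroughs
    similar:  data['similar_journals'] }
end
```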

Umlaut then asks the OPAC for holdings and tries to parse the holdings records to determine if a specific issue is held in print (this works well if you know the volume number — I have thought about how to parse just a year, but haven’t implemented it yet). If there are electronic holdings, it attempts to dedupe them.
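
The print check is nothing fancier than pattern matching on the holdings statement. Here is a deliberately naive sketch, assuming statements shaped like ‘v.12 (1998) - v.30 (2006)’:

```ruby
# A naive sketch of checking whether a particular volume falls inside a print
# holdings statement.  Real holdings strings are far messier than this admits.
def volume_held?(holdings_statement, wanted)
  wanted = wanted.to_i
  # Closed ranges: "v.12 (1998) - v.30 (2006)"
  holdings_statement.scan(/v\.\s*(\d+)\s*\([^)]*\)\s*-\s*v\.\s*(\d+)/i).each do |lo, hi|
    return true if (lo.to_i..hi.to_i).cover?(wanted)
  end
  # Open-ended ranges: "v.31 (2007) - "
  holdings_statement.scan(/v\.\s*(\d+)\s*\([^)]*\)\s*-\s*\z/i).each do |(lo)|
    return true if wanted >= lo.to_i
  end
  false
end

volume_held?('v.12 (1998) - v.30 (2006)', 27)   #=> true
```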

There is still a lot more work to do with journals, although I hope to implement the following soon. The getCitedBy options will vary between grad students/faculty and undergrads: since we have very limited seats for Web of Science, undergraduates will instead get getCitedBy links to Google Scholar, while graduate students and faculty will get both Web of Science and Google Scholar. Also, if no fulltext results are found, it will go out to the search engines to try to find something (whether that’s the original item or a postprint in arxiv.org or the like). We will also have getAbstracts and getTOCs services enabled so the user can find other potentially useful databases or table-of-contents services, accordingly.

Further, I plan on associating the subject guides with SFX Subjects and LCC, so we can make recommendations from a specific subject guide (and actually promote the guide a bit) based, contextually, on what the user is already looking at. By including the SFX Target name in the subject items (an existing field that’s currently unused), we could also match on the items themselves.
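
The getCitedBy rule, at least, is simple enough to sketch (the User class and the link builders are made up):

```ruby
# Sketch of the planned getCitedBy behavior: undergrads get Google Scholar
# only (Web of Science seats are scarce); grad students and faculty get both.
# User#status, google_scholar_link and web_of_science_link are hypothetical.
def cited_by_links(user, ctx)
  links = [google_scholar_link(ctx)]
  links.unshift(web_of_science_link(ctx)) if %w[grad faculty].include?(user.status)
  links
end
```
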
The real value in umlaut, however, will come in its unAPI interface. Since we’ll have Z39.88 ContextObjects, MODS records, Amazon Web Services results and who knows what else, umlaut could feed an Atom store (such as unalog) with a whole hell of a lot of data. This would totally up the ante for scholarly social bookmarking services (such as Connotea and Cite-U-Like), letting them behave more like personal libraries that match on a wide variety of metadata, not just URL or title. The associations that users make can also aid umlaut in recommending other items.

The idea here is not to replace the current link resolver but to enhance it. SFX makes excellent middleware, but I think its interface leaves a bit to be desired. By utilizing its strengths, we can layer more useful services on top of it. Also, users can add other affiliations to their profile, so umlaut can check their local public library or, if they are taking classes at another university, that institution’s resources as well.

At this point I can already hear you saying, “But Ross, not everyone uses SFX”. How true! I propose a microformat for link resolver results that umlaut could parse (and, in an ‘eating your own dog food’ fashion, I will eventually add it to umlaut’s own templates), making any link resolver available to umlaut.

There is another problem I’ve encountered while working on this project, though. Last week and the week before, while I was doing the bulk of the SRU development, I kept noticing (and reporting) our catalog (and, more often, its Z39.50 server) going down. Like many times a day. After concluding that, in fact, I was probably causing the problem, I finally got around to doing something that I’ve been meaning to do for months (and that I would recommend to everyone else who wants to actually make useful systems): exporting the bib database into something better. Last week I imported our catalog into Zebra, and sometime this week I will have a system that syncs the database every other hour (we already have the plumbing for this for our consortial catalog). I am also experimenting with Cheshire3 (since I think its potential is greater — it’s possible we may use both for different purposes). The advantage to this (besides not crashing our catalog every half hour) is that I can index it any way I want or need to, as well as store the data any way I need to, in order to make sure that users get the best experience they can.
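
The sync itself shouldn’t amount to much more than this sort of thing; the export command is a placeholder for whatever your ILS provides, and zebraidx ships with Zebra and reads zebra.cfg from the working directory:

```ruby
# Back-of-the-envelope sketch of the every-other-hour sync: dump MARC from the
# ILS and reindex it with Zebra.  EXPORT_CMD is a placeholder, not a real tool.
EXPORT_CMD = '/path/to/ils_marc_export -o records/delta.mrc'   # placeholder

Dir.chdir('/data/zebra') do
  system(EXPORT_CMD) or abort('MARC export failed')
  system('zebraidx', 'update', 'records') or abort('zebraidx update failed')
  system('zebraidx', 'commit')   # only needed if shadow registers are enabled
end
```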

Going back to the SPIE conferences: there is no way in Voyager that I can narrow a search for “SPIE Proceedings” in 2003 below 360+ results. At least, not from the citations I get from Compendex (which is where anyone would get the idea to look for SPIE Proceedings in our catalog, anyway). With an exported database, however, I could index the volume and pinpoint the exact record in our catalog. Or, if that doesn’t scale (for instance, if they’re all cataloged a little differently), I can pound the hell out of our Zebra (or Cheshire3 or whatever) server looking for the proper volume without worrying about impacting all of our other services. I can also ‘game the system’ a bit and store bits in places that I can query when I need them. Certainly this makes umlaut (and other services) more difficult to share with other libraries (at least, other libraries that don’t have setups similar to ours), but I think these sorts of solutions are essential to improving access to our collections.
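
For instance, against the Zebra mirror over SRU it might look like the following. The endpoint and the local.* index names are invented, but that is rather the point, since we control the indexes:

```ruby
# Sketch of pinpointing a proceedings volume against the Zebra mirror via SRU.
# The endpoint and index names (local.conference, local.volume, etc.) are made
# up; the point is that we control what gets indexed.
require 'net/http'
require 'uri'

ZEBRA_SRU = 'http://zebra.example.edu:9999/Default'   # hypothetical

def find_proceedings(conference, year, volume)
  query = %(local.conference="#{conference}" and local.year=#{year} and local.volume=#{volume})
  params = { 'operation' => 'searchRetrieve', 'version' => '1.1',
             'query' => query, 'recordSchema' => 'marcxml', 'maximumRecords' => '5' }
  Net::HTTP.get(URI("#{ZEBRA_SRU}?#{URI.encode_www_form(params)}"))
end

# conference, year and volume would come from the Compendex citation:
find_proceedings('SPIE Proceedings', 2003, 1234)
```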

Oh yeah, and lest you think that mirroring your bib database is too much to maintain: Zebra can import MARC records (so you can use your OPAC’s MARC export utility), and our entire bib database (705,000 records) takes up less than 2GB of storage. The more indexes you add, the larger the database gets, of course, but I am indexing a LOT in that.

After arguing with Dan for hours about how to implement unAPI, I decided to take him up on his request and implement an unAPI service for our OPAC.  This way I would have some real-world experience to back up my bitching.

So, here you go.  This script builds on our SRU-to-OpenSearch services for the OPAC and the Universal Catalog.  I’ve modified the OpenSearch CGI to include the unAPI identifier.

The unAPI service is written in PHP and calls the SRU service and presents the results accordingly.
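
Roughly, the whole service boils down to something like this (sketched here in Ruby rather than the PHP I actually used; the SRU gateway URL, the identifier scheme and the exact JSON shape are just illustrative):

```ruby
# A stripped-down sketch of an unAPI responder over SRU: id + format returns
# the record; otherwise we list the formats we can hand back (as JSON, since
# that is what the draft this post reacts to called for).  Everything touching
# the SRU gateway is specific to our setup or made up.
require 'webrick'
require 'json'
require 'net/http'
require 'uri'

FORMATS  = { 'marcxml' => 'application/xml', 'mods' => 'application/xml' }
SRU_BASE = 'http://catalog.example.edu/sru'   # hypothetical gateway

def fetch_record(id, format)
  params = { 'operation' => 'searchRetrieve', 'version' => '1.1',
             'query' => %(rec.id="#{id}"), 'recordSchema' => format,
             'maximumRecords' => '1' }
  Net::HTTP.get(URI("#{SRU_BASE}?#{URI.encode_www_form(params)}"))
end

server = WEBrick::HTTPServer.new(Port: 8080)
server.mount_proc('/unapi') do |req, res|
  id, format = req.query['id'], req.query['format']
  if id && format && FORMATS.key?(format)
    res.content_type = FORMATS[format]
    res.body = fetch_record(id, format)
  else
    # No (or unknown) format requested: list what we can hand back.
    res.content_type = 'application/json'
    res.body = JSON.generate('identifier' => id,
                             'formats' => FORMATS.keys.map { |f| { 'name' => f } })
  end
end
trap('INT') { server.shutdown }
server.start
```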

So, my reactions:

  1. The JSON requirement doesn’t really bother me.  Whatever is chosen is rather arbitrary, so I’m ok with JSON.  I’d be just as ok with XML or delimited text.  I see no real difference.
  2. I am not entirely convinced that just exposing the metadata (with no wrapper) is scalable.  I’m not saying I can’t be convinced, I’m just saying I’m not currently.
  3. I see no point in sending the status along with the response.
  4. This is, indeed, much simpler than making an OAI-PMH server in PHP over Voyager.
  5. I would really prefer parameters to paths.  I think there are too many assumptions about the implementor (and the future) to use paths.

What I am still not entirely sure about fundamentally, however, is why we need another sharing protocol.  Dan claims it’s because OAI-PMH and ATOM are still too complicated for stupid people and the less technically inclined to pick up, and he wants something simpler.

What I don’t understand is why we don’t just make a “simpler” API to one of those protocols.  Choose a particular syndication protocol (after all, that’s really what OAI-PMH is) and then just make an API to “Gather/Create/Share” with it.

Personally, I am much more interested in making our OAI-PMH archives and our SRW/U servers available via ATOM, much like we made SRU available via OpenSearch.  That way we pick up on the wide variety of tools already out there.
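
As a sketch of the kind of thin shim I have in mind, turning an SRU response into an ATOM feed is only a few lines. The record parsing here is hand-waved (it just pulls Dublin Core titles) and the URLs and entry ids are placeholders:

```ruby
# Sketch of re-serving an SRU search as an ATOM feed so ordinary feed tools
# can consume it.  The endpoint is assumed to return Dublin Core records.
require 'net/http'
require 'uri'
require 'rexml/document'
require 'time'

def sru_to_atom(sru_base, cql_query)
  params = { 'operation' => 'searchRetrieve', 'version' => '1.1',
             'query' => cql_query, 'recordSchema' => 'dc', 'maximumRecords' => '20' }
  sru = REXML::Document.new(Net::HTTP.get(URI("#{sru_base}?#{URI.encode_www_form(params)}")))

  feed = REXML::Document.new
  root = feed.add_element('feed', 'xmlns' => 'http://www.w3.org/2005/Atom')
  root.add_element('title').text   = "Search: #{cql_query}"
  root.add_element('updated').text = Time.now.utc.iso8601
  root.add_element('id').text      = "#{sru_base}?#{URI.encode_www_form(params)}"

  REXML::XPath.each(sru, '//dc:title', 'dc' => 'http://purl.org/dc/elements/1.1/') do |title|
    entry = root.add_element('entry')
    entry.add_element('title').text   = title.text
    entry.add_element('id').text      = "urn:example:#{title.text.hash.abs}"  # placeholder id
    entry.add_element('updated').text = Time.now.utc.iso8601
  end
  feed
end
```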

This interests me on other levels, since I’m really starting to picture the Communicat as a sort of ATOM store.

I see a lot of potential with unAPI (a whole hell of a lot of potential), but I would rather utilize an existing protocol (especially one that promises to have a lot of users, read: ATOM) than build another library system that is largely ignored outside our community.

On a related note, I want to point out what I see as two wastes of library developers’ time:

  • Walt is right:  DON’T MAKE LIBRARY TOOLBARS (it’s in there, trust me).  It’s asking too much to expect people to install and use something that has such a limited scope.
  • The OPAC jabberbot follows the same line of thinking.  In #code4lib, our resident bot (Panizzi) employs an OpenSearch client.  Honestly, this makes tons more sense since it’s easy (honest!) to make your OPAC speak OpenSearch and there are going to be a lot more useful things available via OpenSearch than a handful of library catalogs.
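
For a sense of how little an OpenSearch client in a bot actually involves, here is a sketch; the URL template and the bot’s say method are placeholders:

```ruby
# Roughly all an OpenSearch client in a bot has to do: plug the query into the
# URL template and read back RSS titles and links.  The template URL stands in
# for whatever the OPAC's OpenSearch description document advertises.
require 'net/http'
require 'uri'
require 'rexml/document'
require 'cgi'

TEMPLATE = 'http://catalog.example.edu/opensearch?q={searchTerms}'  # placeholder

def opensearch(query, limit = 3)
  url = TEMPLATE.sub('{searchTerms}', CGI.escape(query))
  doc = REXML::Document.new(Net::HTTP.get(URI(url)))
  REXML::XPath.match(doc, '//item').first(limit).map do |item|
    "#{item.elements['title'].text} :: #{item.elements['link'].text}"
  end
end

# e.g. in the bot's message handler (channel.say is hypothetical):
# opensearch('ruby programming').each { |line| channel.say(line) }
```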

No, I think we’re better served making our data work contextually in a user’s information sphere.  Push our data out to common aggregators rather than replicate the services to handle our arcane practices.