For a couple of months this year, the library world was aflame with rage at the proposed OCLC licensing policy regarding bibliographic records. It was a justifiable complaint, although I basically stayed out of it: it just didn’t affect me very much. After much gnashing of teeth, petitions, open letters from consortia, etc., OCLC eventually rescinded the proposal.
Righteous indignation: 1, “the man”: 0.
While this could certainly be counted as a success (I think, although it means we default to the much more ambiguous 1987 guidelines), there is a bit of a mixed message here about where the library community’s priorities lie. It’s great that you now have the right to share your data, but, really, how do you expect to do it?
It has been a little over a year since the Jangle 1.0 specification was released; 15 months or so since all of the major library vendors (with one exception) agreed to the Digital Library Federation’s “Berkeley Accord”; and we’re at the anniversary of the workshop where the vendors actually agreed on how we would implement a “level 1” DLF API.
So far, not a single vendor at the table has honored their commitment, and I have seen no sign of any intention to do so, with the exception of Koha (although, interestingly, the support comes not from the company represented in the Accord).
I am going to focus here on the DLF ILS-DI API, rather than Jangle, because it is something we all agreed to. For all intents and purposes, Jangle and the ILS-DI are interchangeable: I think anybody who has invested any energy in either project would be thrilled if either one actually caught on and was implemented in a major ILMS. Both specifications share the same scope and purpose, and the resources required to support one would be the same as for the other; the only difference between the two is the client-side interface. Jangle technically meets all of the recommendations of the ILS-DI, but not through the bindings that we, the vendors, agreed to (although there is an ‘adapter’ to bridge that gap). Despite having spent the last two years of my life working on Jangle, I would be thrilled to no end if the ILS-DI saw broad uptake. I couldn’t care less about the serialization; I only care about the access.
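To make the stakes concrete, here is a rough sketch of what a “level 1” ILS-DI exchange might look like from the client side. The endpoint URL, the parameter names (`service`, `id`, `id_type`), and the XML namespace are my assumptions, modeled loosely on how Koha exposes its `ilsdi.pl` services, not a definitive rendering of the agreed bindings:

```python
# Hedged sketch of an ILS-DI GetAvailability request and response.
# Everything concrete here (endpoint, parameter names, namespace URI)
# is an assumption for illustration, not the canonical binding.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

BASE = "http://opac.example.edu/ilsdi"  # hypothetical endpoint
DLF_NS = "http://diglib.org/ilsdi/1.1"  # namespace assumed


def availability_url(record_ids, id_type="bib"):
    """Build a GetAvailability request URL for one or more record ids."""
    query = urlencode({
        "service": "GetAvailability",  # parameter name is an assumption
        "id": " ".join(record_ids),
        "id_type": id_type,
    })
    return f"{BASE}?{query}"


# A response shaped like the simple-availability schema described in the
# ILS-DI recommendation (structure assumed for illustration).
SAMPLE = """<?xml version="1.0"?>
<dlf:collection xmlns:dlf="http://diglib.org/ilsdi/1.1">
  <dlf:record>
    <dlf:bibliographic id="1234"/>
    <dlf:items>
      <dlf:item id="1234-1">
        <dlf:simpleavailability>
          <dlf:identifier>1234-1</dlf:identifier>
          <dlf:availabilitystatus>available</dlf:availabilitystatus>
          <dlf:location>Main Stacks</dlf:location>
        </dlf:simpleavailability>
      </dlf:item>
    </dlf:items>
  </dlf:record>
</dlf:collection>"""


def parse_availability(xml_text):
    """Return (item id, status) pairs from a simple-availability response."""
    ns = {"dlf": DLF_NS}
    root = ET.fromstring(xml_text)
    return [
        (sa.findtext("dlf:identifier", namespaces=ns),
         sa.findtext("dlf:availabilitystatus", namespaces=ns))
        for sa in root.iter(f"{{{DLF_NS}}}simpleavailability")
    ]
```

The point isn’t this particular serialization; it’s that a discovery layer could ask the ILS a simple question over HTTP and get a machine-readable answer, instead of scraping OPAC screens.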
There is only one reason that the vendors are not honoring their commitment: libraries aren’t demanding that they do.
Why is this? Why the rally to ensure that our bibliographic data is free for us to share when we lack the technology to actually do the sharing?
Look at the open source OPAC replacements (I’m only going to refer to the OSS ones here, because they are transparent, as opposed to their commercial counterparts): VuFind, Blacklight, Scriblio, etc. Take stock of the hoops that have to be jumped through to populate their indexes and check availability, and most libraries would throw their hands in the air and walk away. There are batch dumps of MARC records. Rsync jobs to get the data to the OPAC server. Cron jobs to get the MARC into the discovery system. Screen scrapers and one-off “drivers” to parse holdings and status. It is a complete mess.
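That nightly plumbing might look something like the crontab below. Every path, hostname, and script name here is invented for illustration; the point is how many moving parts stand between the ILS and the discovery index:

```shell
# Hypothetical glue holding an OSS discovery layer together.
# All paths, hosts, and script names are invented for illustration.

# 1. Nightly batch dump of MARC records on the ILS server
#    (vendor-specific export utility assumed):
15 1 * * *  /opt/ils/bin/marc_export --all > /exports/full_dump.mrc

# 2. Rsync the dump over to the discovery server:
45 1 * * *  rsync -az /exports/full_dump.mrc opac.example.edu:/data/marc/

# --- on the discovery server ---
# 3. Cron job to re-index the discovery layer from the flat file:
30 2 * * *  /opt/discovery/import-marc.sh /data/marc/full_dump.mrc

# 4. Real-time availability still comes from a screen-scraping "driver"
#    against the OPAC's HTML, because there is no API to ask the ILS.
```

Four fragile steps, a day of latency, and a scraper at the end, all to move a library’s own data a few feet.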
The same is true for every Primo, Encore, WorldCat Local, AquaBrowser, etc. that isn’t sold to an internal customer.
If you’ve ever wondered why the third party integration and enrichment services are ultimately somewhat unsatisfying (think BookSite.com or how LibraryThing for Libraries is really only useful when you can actually find something), this is it. The vendors have made it nearly impossible for a viable ecosystem to exist because there is no good way to access the library’s own data.
And it has got to stop.
For the OCLC withdrawal to mean anything, libraries have to either put pressure on their vendors to support one of the two open APIs, migrate to a vendor that does support them, or circumvent the vendors entirely by implementing the specifications themselves (and sharing the results with their peers). This cartel of closed access is stifling innovation and, ultimately, hurting library users.
Hold your vendor’s feet to the fire and insist they uphold their commitment.