Bootstrapping Jangle

After several months of trying, Jangle.org is finally starting to take off.  I set up a Drupal instance yesterday on our new web host.

When I was still at Georgia Tech, one of the things I was trying to work on was a framework to consistently and easily expose the library’s data from its various silos into external services. In that case, my initial focus was the Sakai implementation that we were rolling into production, but the intention was to make it as generic as possible (i.e. the opposite of a “Blackboard Building Block”) so it could be consumed and reconstituted into as many applications as we wanted.

Coincidentally (and, for me, conveniently), Talis was also thinking about such a framework that would supply a generic SOA layer to libraries (and potentially beyond) and contacted me about possibly collaborating with them on it as an open source project. Obviously that relationship changed a bit when they hired me and put me and my colleague Elliot Smith (reports of his demise have been greatly exaggerated) in charge of trying to get this project off the ground. Thankfully, Elliot is the other Talis malcontent who prefers Ruby, so our early prototypes all focused on Rails (the Java that originally seeded the project, like all Java, made my eyes glaze over).

We had a hard time getting anywhere at first. Even setting aside the fact that he and I were an ocean apart, we really had no idea what it was that we should be building or why it would be useful to Talis (after all, they are paying the bills), since they already have an SOA product, Keystone. Also, we didn’t want to recreate Apache Synapse or Kuali Rice. In essence, we were trying to find a solution to a problem we hadn’t really defined yet.

In December and early January, I drove across town for a couple of meetings with Mike Rylander, Bill Erickson and Jason Etheridge from Equinox to try to generate interest in Jangle and, at the same time, solicit ideas from them as to what this project should look like and do. Thankfully, they gave me both.

Jangle still foundered a bit through February. We were waiting for the DLF’s ILS and Discovery Systems API recommendation to come out (since we had targeted that as a goal), and Elliot produced a prototype in JRuby (we had long abandoned Rails for this) that effectively consumed the Java classes used for Keystone and rewrote them for Jangle. The problem we were still facing, though, was that we were, effectively, just creating another niche library interface from scratch, and there were too many possible avenues to take to accomplish that. Our freedom was paralyzing us.

I gave a lightning talk on Jangle at Code4lib 2008 that was big on rah-rah rhetoric (free your data!) and short on details (since we hadn’t really come up with any yet); it generated some interest and a few more subscriptions to our Google Group. A week later, the DLF met with the vendors to talk about their recommendation. I attended by phone. While in many ways I feel the meeting was a wash, it did help define for me what Jangle needed to do.

At the end of my first meeting with Equinox, Mike Rylander asked me if we had considered supporting the Atom Publishing Protocol in Jangle. At the time, I hadn’t. In fact, I didn’t until I sat on the phone for 8 hours listening to the vendors hem and haw over the DLF’s recommendation. The more I sat there (with my ear getting sore), the more I realized that AtomPub might be a good constraint to get things moving (as well as useful for appealing to non-library developers).

We are just now starting to sketch out how this spec might work. Basically, there are two parts. First is the Jangle “core,” which is an AtomPub interface for external clients. It’s at this level that we need to model how library resources map to Atom (and other common web data structures, like vCard) and where we need to extend Atom to include data like MARC (when necessary). The Jangle core also proxies these requests to the service “connectors” and translates their responses back to the AtomPub client. The connectors are service-specific applications that take the particular schema and values in, say, an ILS’s RDBMS and put them into a more generic syntax to send back to the Jangle core. Right now, the proposal is that all communication between the core and connectors would be JSON over HTTP (again, to help forward momentum).
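To make that a little more concrete, here is a very rough sketch (in Ruby, since that’s where our prototyping has been) of the sort of round trip I have in mind: a connector hands the core a blob of JSON, and the core re-wraps it as Atom. Every field name, URN prefix and content type below is made up for illustration; none of this is spec.

    require 'json'
    require 'time'

    # What a hypothetical ILS connector might hand back to the Jangle core
    # for a "recently changed bib records" request. The field names are
    # purely illustrative; nothing has been agreed on yet.
    connector_response = {
      'records' => [
        {
          'identifier' => 'bib:1234',
          'updated'    => Time.now.utc.iso8601,
          'payload'    => '<record>MARCXML elided</record>'
        }
      ]
    }.to_json

    # The core's job would then be to re-wrap each record as an Atom entry
    # before handing it to the AtomPub client. The urn prefix and content
    # type are placeholders, not part of any spec.
    JSON.parse(connector_response)['records'].each do |rec|
      puts <<-ENTRY
    <entry xmlns="http://www.w3.org/2005/Atom">
      <id>urn:x-jangle:#{rec['identifier']}</id>
      <updated>#{rec['updated']}</updated>
      <content type="application/xml">#{rec['payload']}</content>
    </entry>
      ENTRY
    end

Again, this is only to illustrate the division of labour between connector and core; the actual payload format is exactly the sort of thing we need help defining.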

So at this point you may be asking: why AtomPub rather than implementing the recommendations of the DLF directly? The recommendation assumes the vendors will be compliant, uniform and timely in implementing the API, and I cynically feel that is unrealistic. I also think a common, consistent interface helps build interoperability (like the kind the DLF group is advocating), since you’d only have to write one, say, NCIP adapter and it would work for all services that have a Jangle connector. Also, by leveraging non-library technologies, it opens up our data to groups outside our walls.
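As a sketch of that “write it once” point, here is roughly what a consumer of the Jangle core might look like. The endpoint is invented, and so is the feed layout, but the idea is that the adapter never needs to know which ILS (or reserves system, or whatever) is sitting behind the connector.

    require 'net/http'
    require 'uri'
    require 'rexml/document'

    # Whatever sits behind this URL -- Evergreen, Koha, a reserves system --
    # is invisible to the adapter; it only ever speaks Atom to the core.
    # The host and path are made up for illustration.
    JANGLE_CORE = URI('http://localhost:3000/resources/')

    # Pull the identifiers out of the core's Atom feed. An NCIP (or OpenURL,
    # or OAI-PMH) shim would call something like this regardless of which
    # backend the connector fronts.
    def item_identifiers
      feed = Net::HTTP.get(JANGLE_CORE)
      doc  = REXML::Document.new(feed)
      doc.get_elements('//entry/id').map(&:text)
    end

In other words, the NCIP adapter (or whatever) gets written once, against the core, rather than once per vendor.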

So, if you’re interested in freeing your data (rah-rah!), come help us build this spec. We’re trying to conform to the Rogue ’05 process that Dan Chudnov came up with for developing this, so while it will still be a painful process, at least it won’t be painful and long. 🙂 In other words, this ain’t NISO.

4 comments
  1. dchud said:

    The ROGUE process requires that the spec be developed on a public list and begin with a public draft. Unless I’m missing something, you have neither.

  2. Emily Lynema said:

    Ross,

    I’ve been trying to figure out how Jangle and the DLF API recommendations co-exist in peaceful harmony for a while now. Are you thinking that Jangle will be the connector between the ILS, providing data that will be re-worked to meet the specifications in the DLF recommendations?

    I will say that we (or I) by no means have an implicit assumption that vendors will be implementing the recommendations. My implicit assumption is that the real work will begin with libraries themselves writing ILS connectors (which they’re already doing) that conform more fully to these recommendations (which they’re not doing) and make them as free and open for sharing as possible (which they’re only just beginning to do).

    Is there any reason that Jangle can’t be an implementation of the recommendation (laying aside the fact that you may want to use a more resource-oriented architecture, etc.)? As long as you can map Jangle functionality to the recommended functionality, Jangle is an implementation. I mean, Jangle is really just the intent to provide uniform access across all ILSes, right? Basically, that’s just taking a single step beyond the existing DLF recommendations and saying ‘you shall all implement this functionality in the same way’.

    Maybe I’m missing something. I’d really hate to see 2 projects with similar goals diverge because we couldn’t figure out how we related to each other in the early stages….

  3. Ross said:

    Emily, the goal is to enable the DLF recommendation, not to compete with it (which is an understandable interpretation at this stage, and something I want to emphatically assure you isn’t our intention).

    I just responded to Peter Murray, who was asking the same question. (Dan, in linking to that I see your point and am fixing the Google Group to be publicly viewable.)

    Here’s the transcript of that:

    PM: I had interpreted the proposal as creating a 1-to-1 binding with the proposed functions in the proposed DLF API. So I think I’m still not clear on the proposal…pieces of the DLF API could be implemented using NCIP, SRU, and other standards. Where would Jangle fit into that mix?

    ME: Right. The problem with the DLF proposal as it currently stands is that none of those standards exist in most ILSes (with the possible exception of NCIP, which, I think we can all agree, is spotty, inconsistent and probably non-interoperable). So this means the vendors would need to not only get on board with NCIP (which they’ve been dragging their feet on for seven years), but also OAI-PMH and SRU (when most of them don’t even have a decent Z39.50 server). The ‘item status availability’ service isn’t even mapped to an existing standard, so who knows what would happen with that.

    Where I see Jangle fitting is *behind* these services. It would be a common API to then build an OAI-PMH service on top of. We’d only have to build one, and all ILSes (or reserves systems or whatever) would benefit from it.

    The real winner that I see here would be NCIP (eh, and I’m speaking in relative terms), since I think the economy of scale would enable us to focus on a Jangle implementation rather than a bunch of inconsistent backend interfaces.

    PM: Ah! Now all becomes clearer. Encouraging the vendors to support a Jangle suite relieves them of having to provide separate NCIP, OAI-PMH, and SRU services, yes?

    ME: Yes. Or circumventing the vendors and relying on the geeks who are saddled with their products would be easier if we’re working towards a more consistent framework (I note the only vendors currently on this list are Talis, Equinox, LibLime and Bibliocommons — not exactly tremendous market share). “If I make a Jangle connector, I get all this other stuff, and I can start using GData clients to access it? But I really *wanted* to make an OpenURL interface specific to my III ILS!”

    So what I would prefer to see is Jangle as an enabler of the DLF API, since it would (hopefully) eliminate the variation and foot-draggery of the vendors.

    I don’t expect two libraries to conduct transactions via Jangle any more than a patron would search Solr, if that clears any of this up.
