Community building

After several months of trying, things are finally starting to take off. I set up a Drupal instance yesterday on our new web host.

When I was still at Georgia Tech, one of the things I was trying to work on was a framework to consistently and easily expose the library’s data from its various silos into external services. In that case, my initial focus was the Sakai implementation that we were rolling into production, but the intention was to make it as generic as possible (i.e. the opposite of a “Blackboard Building Block”) so it could be consumed and reconstituted into as many applications as we wanted.

Coincidentally (and, for me, conveniently), Talis was also thinking about such a framework that would supply a generic SOA layer to libraries (and potentially beyond) and contacted me about possibly collaborating with them on it as an open source project. Obviously that relationship changed a bit when they hired me and they put me and my colleague Elliot Smith (reports of his demise have been greatly exaggerated) in charge of trying to get this project off the ground. Thankfully, Elliot is the other Talis malcontent who prefers Ruby, so our early prototypes all focused on Rails (the Java that originally seeded the project, like all Java, made my eyes glaze over).

We had a hard time getting anywhere at first. Not even taking into consideration the fact that he and I were an ocean apart, we really had no idea what it was that we should be building or why it would be useful to Talis (after all, they are paying the bills), since they already have an SOA product, Keystone. Also, we didn’t want to recreate Apache Synapse or Kuali Rice. In essence, we were trying to find a solution to a problem we hadn’t really defined yet.

In December and early January, I drove across town for a couple of meetings with Mike Rylander, Bill Erickson and Jason Etheridge from Equinox to try to generate interest in Jangle and, at the same time, solicit ideas from them as to what this project should look like and do. Thankfully, they gave me both.

Jangle still foundered a bit through February. We were waiting for the DLF’s ILS and Discovery Systems API recommendation to come out (since we had targeted that as a goal), and Elliot produced a prototype in JRuby (we had long abandoned Rails for this) that effectively consumed the Java classes used for Keystone and rewrote them for Jangle. The problem we were still facing, though, was that we were effectively just creating another niche library interface from scratch, and there were too many possible avenues to take to accomplish that. Our freedom was paralyzing us.

I gave a lightning talk on Jangle at Code4lib 2008 that was big on rah-rah rhetoric (free your data!) and short on details (since we hadn’t really come up with any yet), but it generated some interest and a few more subscriptions to our Google Group. A week later, the DLF met with the vendors to talk about their recommendation. I attended by phone. While in many ways I feel the meeting was a wash, it did help define for me what Jangle needed to do.

At the end of my first meeting with Equinox, Mike Rylander asked me if we had considered supporting the Atom Publishing Protocol in Jangle. At the time, I hadn’t. In fact, I didn’t until I sat on the phone for 8 hours listening to the vendors hem and haw over the DLF’s recommendation. The more I sat there (with my ear getting sore), the more I realized that AtomPub might be a good constraint to get things moving (as well as useful for appealing to non-library developers).

We are just now starting to work out how this spec might function. Basically, there are two parts. First, the Jangle “core,” which is an AtomPub interface to external clients. It’s at this level that we need to model how library resources map to Atom (and other common web data structures, like vCard) and where we need to extend Atom to include data like MARC (when necessary). The Jangle core also proxies these requests to the service “connectors” and translates their responses back to the AtomPub client. The connectors are service-specific applications that take the specific schema and values in, say, a particular ILS’s RDBMS and put them into a more generic syntax to send back to the Jangle core. Right now, the proposal is that all communication between the core and connectors would be JSON over HTTP (again, to help forward momentum).
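To make the core/connector split a little more concrete, here’s a minimal sketch in Ruby (our language of choice on this project). To be clear, the field names, URN scheme, and payload shape below are all invented for illustration; nothing here is part of an actual Jangle spec, which is exactly what we’re trying to hash out now.

```ruby
require 'json'

# Hypothetical connector payload: a service-specific connector has pulled
# a record out of an ILS's RDBMS and recast it as generic JSON.
# (All field names and identifiers here are made up for illustration.)
connector_response = {
  "id"      => "urn:x-example:ils:record:1234",
  "type"    => "Resource",
  "updated" => "2008-03-01T12:00:00Z",
  "title"   => "The Wealth of Networks",
  "content" => { "mime" => "application/marc", "data" => "...raw MARC..." }
}.to_json

# The Jangle core would parse the connector's JSON over HTTP and translate
# it into an Atom entry for the AtomPub client (shown here as a plain hash
# rather than serialized XML, for brevity).
record = JSON.parse(connector_response)
atom_entry = {
  "entry" => {
    "id"      => record["id"],
    "title"   => record["title"],
    "updated" => record["updated"]
  }
}

puts atom_entry["entry"]["title"]
```

The point of the sketch is the shape of the flow, not the fields: the connector worries about one service’s schema, the core worries about Atom, and JSON over HTTP is the only contract between them.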

So at this point you may be asking why AtomPub rather than implementing the recommendations of the DLF directly. The recommendation assumes the vendors will be complicit, uniform, and timely in implementing their API, and I cynically feel that is unrealistic. I also think a common, consistent interface helps build the kind of interoperability the DLF group is advocating, since then you’d only have to write one, say, NCIP adapter and it would work for all services that have a Jangle connector. Also, by leveraging non-library technologies, it opens up our data to groups outside our walls.

So, if you’re interested in freeing your data (rah-rah!), come help us build this spec. We’re trying to conform to the Rogue ’05 specification that Dan Chudnov came up with for development, so, while it will still be a painful process, it won’t be painful and long. 🙂 In other words, this ain’t NISO.