
Monthly Archives: July 2009

For a long time, I was massively confused about what the Platform was or did.  Months after I started at Talis I was still fairly unclear about what it actually did.  I’ve now got my head around it, use it, and have a pretty good understanding of why and how it’s useful, but I fully realize that a lot of people (and by people I’m really referring to library people) don’t understand it and don’t really care to learn.

What they want is Solr.  Actually, no, what they want is a magical turnkey system that takes their crappy OPAC (or whatever) data and transmogrifies it into a modern, of-the-web type discovery system.  What is powering that discovery system is mostly irrelevant if it behaves halfway decently and is pretty easy to get up and running for a proof-of-concept.  These two points, of course, are why Solr is so damned popular; to say that it meets those criteria is a massive understatement.  The front-end of that Solr index is another story entirely, but Solr itself is a piece of cake.

Almost from the time I started at Talis I have thought that a Solr-clone API for the Platform would make sense.  Although the Platform doesn’t have all of Solr’s functionality, it has several of the sexy bits (Lucene query syntax and faceting, for example), and if it could respond to an out-of-the-box Solr client, it seemed to me that it would be a lot easier to turn an off-the-shelf Solr-powered application (à la VuFind or Blacklight) into a Platform-powered, RDF/linked data application with minimal customization.  The Platform is not Solr, and in many ways is quite different from Solr, but if it can exploit its similarities with Solr enough to leverage Solr’s pretty awesome client base, it’ll make it easier to open the door for the things the Platform is good at.  Alternately, if the search capabilities of the Platform become too limited compared to Solr, the data is open: just index it in Solr.  Theoretically, if the API is a Solr clone, you should be able to point your application at either.

The proof-of-concept project I’m working on right now is basically a reënvisioned Communicat:  a combination discovery interface; personal and group resource collection aggregator; resource-list content management system (for course reserves, say, or subject/course guides); and cache and recommendation service for “discovered” resources (articles, books, etc.).  None of these would be terribly sophisticated at a first pass; I’m just trying to get (and show) a clearer understanding of how a Communicat might work.  As such, I’m trying to do as little development from the ground up as I can get away with.

I’ll go into more detail later as it starts to get fleshed out some, but for the discovery and presentation piece, I plan on using Blacklight.  Of the OSS discovery interfaces, it’s the most versatile for the wide variety of resources I would hope to see in a Communicat-like system.  It’s also Ruby, so I feel the most comfortable hacking away at it.  Choosing Blacklight also meant I needed the aforementioned Solr-like API for the Platform, so I hastily cobbled together something using Pho and Sinatra.  I’m calling it pret-a-porter, and the sources are available on GitHub.

You can see it in action here.  The first part of the path corresponds with whatever Platform store you want to search.  The only “Response Writers” available are Ruby and JSON (I’ll add an XML response as soon as I can — I just needed Ruby for Blacklight and JSON came basically for free along with it).  It’s incredibly naive and rough at this point, but it’s a start.  Most importantly, I have Blacklight working against it.  Here’s Blacklight running off of a Prism 3 store.  It took a little bit of customization of Blacklight to make this work, but it would still be interchangeable with a Solr index (assuming you were still planning on using the Platform for your data storage).  When I say a “little bit”, I mean very little.  Both pieces (pret-a-porter and the Blacklight implementation) took less than three days total to get running.
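For the curious, the core of it is not much more than a Sinatra route that accepts Solr-style parameters, hands the query off to the Platform, and reshapes the results into a Solr-flavored response.  A stripped-down sketch of that shape (the Platform URL, parameter names, and result parsing below are placeholders rather than the actual pret-a-porter code, which goes through Pho and does considerably more):

    # Sketch: accept Solr-ish parameters, search a Platform store, return a
    # Solr-shaped response.  URLs, parameter names, and parsing are placeholders.
    require 'rubygems'
    require 'sinatra'
    require 'net/http'
    require 'cgi'
    require 'json'

    helpers do
      # Placeholder: the real code parses the Platform's search results into
      # an array of { 'field' => value } hashes.
      def parse_platform_results(body)
        []
      end
    end

    get '/:store/select' do
      query = params['q']      || '*:*'
      rows  = (params['rows']  || 10).to_i
      start = (params['start'] || 0).to_i

      uri  = URI.parse("http://api.talis.com/stores/#{params['store']}/items" +
                       "?query=#{CGI.escape(query)}&max=#{rows}&offset=#{start}")
      docs = parse_platform_results(Net::HTTP.get_response(uri).body)

      solr_response = {
        'responseHeader' => { 'status' => 0, 'params' => params },
        # numFound would really come from the Platform's total hit count.
        'response' => { 'numFound' => docs.length, 'start' => start, 'docs' => docs }
      }

      # The two "response writers" pret-a-porter currently has.
      params['wt'] == 'ruby' ? solr_response.inspect : solr_response.to_json
    end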

If only the rest of the Communicat could come together that quickly!

There were three main reasons that I took the old lcsh.info data that I had lying around and made http://lcsubjects.org:

  1. There were projects (including internal Talis ones) that really wanted to use that data and impatience was growing as to when the Library of Congress would launch id.loc.gov.
  2. Leigh Dodds had just released Pho and needed testers.  I had also, to date, done virtually nothing interesting with the Platform and wanted a somewhat turnkey operation to get started with it.
  3. While it’s great that the Library of Congress has made this data available, what is really interesting is seeing how this stuff relates to other data sets.  It’s unlikely that LoC will be very open to experimentation in this regard (these are, after all, authorities), so LCSubjects.org seemed a good place to provide both that experimentation and community-driven editing.  The editing will, hopefully, be coming soon: per an idea proposed by Chris Clarke, I would like to store user-added changes in their own named graphs, but that support needs to be added to the Platform.  The hope is to make the data more dynamic and interesting while still deferring “authority” to the Library of Congress.

In pursuit of number three, I had a handful of what I hoped were fairly “low-hanging fruit” projects to help kickstart this process and actually make LCSubjects linked data instead of just linkable data (which would be fairly redundant with id.loc.gov/authorities/, anyway).  I have rolled out the first of these, which was an attempt to add some sense of geocoding to the geographic headings.

There are just over 58,000 geographic subject headings in the current dump that LoC makes available.  Of those, 11,362 have a ° symbol in them (always in a non-machine-readable editorial note).  I decided to take this subset, see how many I could identify as a single geographic “point” (i.e. a single, valid latitude/longitude coordinate pair), convert those from degree/minute/second format to decimal format, and then see how many of those had a direct match to points in Geonames.
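The degree/minute/second-to-decimal conversion is the easy part; it boils down to something like this (a sketch, not the actual script I ran):

    # Convert a degree/minute/second reading plus a hemisphere letter into
    # signed decimal degrees.
    def dms_to_decimal(degrees, minutes, seconds, hemisphere)
      decimal = degrees.to_f + (minutes.to_f / 60) + (seconds.to_f / 3600)
      %w(S W).include?(hemisphere.to_s.upcase) ? -decimal : decimal
    end

    dms_to_decimal(86, 35, 0, 'W')  # => roughly -86.5833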

Given that these are entered as prose notes, the matching was fairly spotty.  I was able to identify 9,127 distinct “points”.  Another 837 concepts had either too many coordinates (concepts like this one or this one, for example) or only one.  It’s messy stuff.  This also means there are about another 1,000 that missed my regex completely (/[0-9]*°[^NSEW]*[NSEW]\b/), but I haven’t had time to investigate what these might look like.  Given that these are just text notes, though, I was pretty surprised at the number of actual positive matches I got.  These are now available in the triples using the Basic Geo (WGS84 lat/long) vocabulary.
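The scan over the notes amounted to pulling out anything coordinate-shaped and keeping the concepts that yielded exactly one latitude and one longitude.  Roughly like this (the note below is made up and much tidier than the real ones, and this pattern is a more capture-friendly variation on the regex above):

    # Pull coordinate-looking strings out of an editorial note and keep the
    # concepts that yield exactly one latitude and one longitude.
    note = "Mountain located at 35°35' N. and 83°30' W., in the Great Smoky Mountains."

    COORD = /(\d{1,3})°\s*(\d{1,2})?['′]?\s*(\d{1,2})?["″]?\s*([NSEW])/

    readings = note.scan(COORD).map do |deg, min, sec, hemi|
      decimal = deg.to_f + min.to_f / 60 + sec.to_f / 3600
      decimal = -decimal if %w(S W).include?(hemi)
      [hemi, decimal]
    end

    latitudes  = readings.select { |hemi, _| %w(N S).include?(hemi) }
    longitudes = readings.select { |hemi, _| %w(E W).include?(hemi) }

    if latitudes.length == 1 && longitudes.length == 1
      point = [latitudes.first.last, longitudes.first.last]
      puts point.inspect  # => roughly [35.5833, -83.5]
    end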

Making the links to Geonames wasn’t nearly as successful: only 197 points matched.  Some of those that did could be considered questionable (click on the Geonames link to see what I mean).  Others are pretty perfect.
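The matching itself was nothing clever, just looking for direct coordinate matches; conceptually it is along these lines (the column positions assume the standard tab-delimited Geonames dump layout, the rounding precision is arbitrary, and the concept URI is a made-up placeholder):

    # Build a lookup of "lat,long" keys from the Geonames dump and check the
    # extracted LCSH points against it.  Assumes allCountries.txt's layout:
    # geonameid, name, asciiname, alternate names, latitude, longitude, ...
    geonames = {}
    File.foreach('allCountries.txt') do |line|
      cols = line.chomp.split("\t")
      key  = [cols[4].to_f.round(4), cols[5].to_f.round(4)].join(',')
      geonames[key] ||= cols[0]  # geonameid
    end

    # Placeholder concept URI and point; the real hash comes from the scan above.
    lcsh_points = {
      'http://lcsubjects.org/subjects/shXXXXXXXX#concept' => [35.5833, -83.5]
    }

    matches = lcsh_points.select do |uri, (lat, long)|
      geonames.key?([lat.round(4), long.round(4)].join(','))
    end
    puts "#{matches.length} matched"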

All in all, a pretty successful experiment.  I’d like to take another pass at it and see how many prefLabels or altLabels match to the Geonames names and add those, as well.  Also, just after I added the triples, there was an announcement for LinkedGeoData.org, which will probably provide much better wgs84:location coverage (I can do searches like http://linkedgeodata.org/triplify/near/%latitude%,%longitude%/1 which would find points of interest within 1 meter of my coordinate pair).  So stay tuned for those links.

Lastly, one of the cooler by-products of adding these coordinates is functionality like this, which gives you all of the LCSH with coordinates falling roughly inside the geographic boundaries of Tennessee (Tennessee is more of a parallelogram than a box, so this box-style query isn’t perfect).
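In essence the box query is just a filter on the Basic Geo latitude and longitude values; in SPARQL terms it would look something like this sketch (the endpoint URL is a placeholder and the bounding-box corners are only approximate):

    # A rough "LCSH inside Tennessee" query: filter wgs84 lat/long values
    # against a bounding box.  Endpoint URL is a placeholder; box corners
    # are approximate.
    require 'net/http'
    require 'cgi'
    require 'uri'

    sparql = <<-SPARQL
      PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
      PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
      PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>
      SELECT ?concept ?label ?lat ?long WHERE {
        ?concept skos:prefLabel ?label ;
                 geo:lat ?lat ;
                 geo:long ?long .
        FILTER ( xsd:decimal(?lat)  > 34.98  && xsd:decimal(?lat)  < 36.68 &&
                 xsd:decimal(?long) > -90.31 && xsd:decimal(?long) < -81.65 )
      }
    SPARQL

    endpoint = 'http://api.talis.com/stores/YOUR-STORE/services/sparql'  # placeholder
    puts Net::HTTP.get(URI.parse("#{endpoint}?query=#{CGI.escape(sparql)}"))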

For Ian Davis’ birthday, Danny Ayers sent out an email asking people to make some previously unavailable datasets accessible as linked data, as a present for Ian.  It was a pretty neat idea, and one that I wish I had thought of.

Given that Ian is my boss (prior to about a month ago, Ian was just nebulously “above me” somewhere in the Talis hierarchy, but I now report to him directly), one could cynically make the claim that, by providing Ian a ‘linked data gift’, I was just currying favor by being a kiss-ass.  You could make that claim, sure, but evidently you are not aware of how I hurt the company.

Anyway, as my contribution, I decided to take the data dumps from LibraryThing that Tim Spalding pretty graciously makes available [whoa, in the time between when I first started this post and now, the data has gone AWOL; I suppose I did this just in time].  The data isn’t always very current and not all of the files are terribly useful (the tags one, for example, doesn’t offer much since the tags aren’t associated with anything; it’s just words and their counts), but it’s data, and between ThingISBN and the WikipediaCitations I thought it would be worth it.

I wanted to take a very pragmatic approach to this: no triple store, no search, no RDF libraries, minimal interface.  Mostly this was inspired by Ed Summers’ work with the Library of Congress Authorities, but also because, if Tim (or whoever at LibraryThing) saw that making LibraryThing linked data was as easy as a few template tweaks (as opposed to a major change in their development stack), the exercise would be much more likely to actually make its way into LibraryThing.

What I ended up with (the first pass released before the end of Ian’s birthday, I might add) was LODThing: a very simple application written with Ruby’s Sinatra framework, DataMapper, and SQLite.  The entire application is less than 230 lines of Ruby (including the web app and data loader) plus two HAML templates and two builder templates for the HTML/RDFa and RDF/XML, respectively.  The SQLite database has three tables, including the join table.  This is really simple stuff.  The only real reason it took a couple of days to create was trying to get the data loaded into SQLite from these huge XML files.  Nokogiri is fast (well, Ruby fast), but a 200 MB XML file is pretty big.  It was nice to get acquainted with Nokogiri’s pull parser, though.
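For anybody who hasn’t used it, the pull parser pattern is pretty painless.  The general shape of the loader is below (the element and attribute names are invented for illustration, not necessarily what the LibraryThing dumps actually use):

    # Stream a large XML file with Nokogiri's pull parser instead of loading
    # the whole document into memory.  Element and attribute names are made
    # up for illustration.
    require 'rubygems'
    require 'nokogiri'

    reader = Nokogiri::XML::Reader(File.open('thingISBN.xml'))

    reader.each do |node|
      next unless node.node_type == Nokogiri::XML::Reader::TYPE_ELEMENT
      next unless node.name == 'work'

      work_id = node.attribute('workcode')
      # Parse just this element's subtree to get at the ISBNs inside it.
      isbns = Nokogiri::XML(node.outer_xml).xpath('//isbn').map(&:text)

      # ...insert work_id and its ISBNs into SQLite here...
    end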

There are a few things to take away from this exercise.

  1. When data is freely available, it’s really quite simple to reconstitute it into linked data without any need to depart from your traditional technology stack.  There is nothing even remotely semantic-webby about LODThing except its output.
  2. We now have an interesting set of URIs and relationships to start to express and model FRBR relationships.
  3. The Wikipedia citations data is extremely useful and could certainly be fleshed out more.  One could imagine querying DBpedia or Freebase on these concepts, identifying whether the Wikipedia article is actually referring to the work itself, and using that.  Right now LODThing makes no claims about the relationships except that it’s a reference from Wikipedia.

LODThing isn’t really intended for human consumption, so there’s no real “default way in”.  The easiest way to use it is to make a URI from an ISBN:

If you know the LibraryThing ‘work ID’, you can get in that way, too:

Also, you can get all of these resources as RDF/XML by replacing the .html with .rdf.
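That switch is nothing exotic: it’s the same resource rendered through a different template.  In Sinatra it comes down to something like this sketch (route, helper, and template names here are illustrative rather than LODThing’s actual code; the real lookup goes through DataMapper and SQLite):

    # Serve one resource as either HTML+RDFa or RDF/XML depending on the
    # requested extension.  Names are illustrative; the lookup is stubbed.
    require 'rubygems'
    require 'sinatra'
    require 'haml'
    require 'builder'

    helpers do
      # Placeholder lookup so the sketch stands alone.
      def find_work(id)
        { :id => id, :title => 'An example work' }
      end
    end

    get '/work/:id.:format' do
      @work = find_work(params[:id])
      halt 404 unless @work

      case params[:format]
      when 'rdf'
        content_type 'application/rdf+xml'
        builder :work_rdf   # builder template producing RDF/XML
      else
        haml :work_html     # HAML template producing HTML with RDFa
      end
    end

    __END__

    @@ work_html
    %h1= @work[:title]

    @@ work_rdf
    xml.instruct!
    xml.tag!('rdf:RDF', 'xmlns:rdf' => 'http://www.w3.org/1999/02/22-rdf-syntax-ns#') do
      xml.tag!('rdf:Description', 'rdf:about' => "/work/#{@work[:id]}")
    end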

So, Tim, you wrote on the LT API page that you would love to see what people are doing with your data; well, here you go.  It would be even more awesome if it made its way back into LT; after all, it would alleviate some of the need for you to have a special API for this stuff.

Also, special thanks to Toby Inkster for providing a ton of help in getting this to resemble something that a linked-data-aware agent would actually want, and for finally turning the httpRange-14 light bulb on over my head.  He also immediately linked to it from his Amazon API LODifiier, which is sort of cool, too.

I’ll be happy to throw the sources into a GitHub repository if anybody’s interested in them.