For a long time I was massively confused about the Platform; months after I started at Talis, I was still fairly unclear on what it actually did. I’ve now got my head around it, use it, and have a pretty good understanding of why and how it’s useful, but I fully realize that a lot of people (and by people I really mean library people) don’t, and don’t really care to learn.
What they want is Solr. Actually, no, what they want is a magical turnkey system that takes their crappy OPAC (or whatever) data and transmogrifies it into a modern, of-the-web discovery system. What powers that discovery system is mostly irrelevant, as long as it behaves halfway decently and is pretty easy to get up and running for a proof of concept. These two points, of course, are why Solr is so damned popular; to say that it meets those criteria is a massive understatement. The front end of that Solr index is another story entirely, but Solr itself is a piece of cake.
Almost from the time I started at Talis, I have thought that a Solr-clone API for the Platform would make sense. The Platform doesn’t have all of Solr’s functionality, but it has several of the sexy bits (Lucene syntax and faceting, for example), and if it could respond to an out-of-the-box Solr client, it seemed to me it would be a lot easier to turn an off-the-shelf Solr-powered application (à la VuFind or Blacklight) into a Platform-powered, RDF/linked data application with minimal customization. It’s not Solr, and in many ways it’s quite different from Solr — but if it can exploit its similarities with Solr enough to leverage Solr’s pretty awesome client base, it’ll open the door for the things the Platform is good at. Alternately, if the Platform’s search capabilities become too limited compared to Solr, the data is open — just index it in Solr. Theoretically, if the API is a Solr clone, you should be able to point your application at either.
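To make the idea concrete, the core of such a shim is just translating Solr-style request parameters into a Platform search URL. Here’s a rough sketch of that translation; the store URL pattern and the Platform parameter names below are my assumptions for illustration, not necessarily the real API:

```ruby
require 'uri'

# Hypothetical mapping of a Solr-style request onto a Platform
# search URL. The path and the 'max'/'offset' parameter names are
# assumptions for illustration; the real Platform API may differ.
def solr_to_platform(store, params)
  query = { 'query' => params['q'] }
  # Solr's rows/start map naturally onto paging parameters.
  query['max']    = params['rows']  if params['rows']
  query['offset'] = params['start'] if params['start']
  "http://api.talis.com/stores/#{store}/items?" +
    URI.encode_www_form(query)
end

solr_to_platform('my-store', 'q' => 'title:linked data', 'rows' => '10')
```

The Lucene query syntax passes through essentially untouched, which is exactly why the clone approach is attractive: most of the work is in the plumbing, not the query language.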
The proof-of-concept project I’m working on right now is basically a reënvisioned Communicat: a combination discovery interface; personal and group resource collection aggregator; resource-list content management system (for course reserves, say, or subject/course guides); and “discovered” resources (articles, books, etc.) cache and recommendation service. None of these would be terribly sophisticated at a first pass; I’m just trying to get (and show) a clearer understanding of how a Communicat might work. As such, I’m trying to do as little development from the ground up as I can get away with.
I’ll go into more detail later as it starts to get fleshed out, but for the discovery and presentation piece I plan on using Blacklight. Of the OSS discovery interfaces, it’s the most versatile for the wide variety of resources I would hope to see in a Communicat-like system. It’s also Ruby, so I feel the most comfortable hacking away at it. That meant I needed the aforementioned Solr-like API for the Platform, so I hastily cobbled together something using Pho and Sinatra. I’m calling it pret-a-porter, and the sources are available on GitHub.
You can see it in action here. The first part of the path corresponds to whatever Platform store you want to search. The only “response writers” available are Ruby and JSON (I’ll add an XML response as soon as I can — I just needed Ruby for Blacklight, and JSON came basically for free along with it). It’s incredibly naive and rough at this point, but it’s a start. Most importantly, I have Blacklight working against it. Here’s Blacklight running off of a Prism 3 store. It took a little bit of customization of Blacklight to make this work, but it would still be interchangeable with a Solr index (assuming you were still planning on using the Platform for your data storage). When I say a “little bit”, I mean very little. Both pieces (pret-a-porter and the Blacklight implementation) took less than three days total to get running.
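The reason JSON came along for free is that Solr’s Ruby response writer is essentially a Ruby hash literal, so once the results are assembled into a Solr-shaped hash, either serialization falls out of the same structure. A minimal sketch of that response side, with fabricated documents standing in for what a real shim would build from the Platform’s RDF results:

```ruby
require 'json'

# Build a Solr-shaped response from a set of result documents and
# render it per the requested writer. The docs here are made-up
# examples, not real Platform output.
def solr_response(docs, wt)
  response = {
    'responseHeader' => { 'status' => 0 },
    'response' => { 'numFound' => docs.size, 'start' => 0,
                    'docs' => docs }
  }
  case wt
  when 'ruby' then response.inspect  # hash-literal text, like Solr's wt=ruby
  when 'json' then JSON.generate(response)
  end
end

solr_response([{ 'id' => '1', 'title_t' => 'Example record' }], 'json')
```

A Solr client like Blacklight only sees the wire format, which is what makes the two back ends interchangeable in principle.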
If only the rest of the Communicat could come together that quickly!