
Monthly Archives: January 2006

After arguing with Dan for hours on how to implement unAPI, I decided to take him up on his request and implement an unAPI service for our OPAC.  This way I would have some real-world experience to back up my bitching.

So, here you go.  It builds on our SRU-to-OpenSearch services for the OPAC and the Universal Catalog; I’ve modified the OpenSearch CGI to include the unAPI identifier.

The unAPI service is written in PHP and calls the SRU service and presents the results accordingly.
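
If you’re curious what that looks like, here’s the rough shape of it. This is a minimal sketch, not the actual script: the SRU base URL, the index name, and the format list are placeholders, and I’m hand-waving the error handling.

```php
<?php
// Rough sketch of the unAPI-over-SRU gateway (not the production script).
// The SRU base URL and index name below are made up for illustration.
$sru_base = 'http://catalog.example.edu/sru';
$id       = isset($_GET['id']) ? $_GET['id'] : null;

if ($id === null) {
    // No identifier given: advertise the formats we can return.
    // (The draft spec wants this list as JSON -- the "JSON requirement"
    // I grumble about below.)
    header('Content-type: application/json');
    echo '[{"name": "marcxml", "type": "application/xml"}]';
    exit;
}

// Otherwise, look the record up via SRU and hand the MARCXML straight back.
$cql = 'rec.id=' . $id;   // hypothetical index
$url = $sru_base . '?version=1.1&operation=searchRetrieve'
     . '&recordSchema=marcxml&maximumRecords=1'
     . '&query=' . urlencode($cql);

header('Content-type: application/xml');
echo file_get_contents($url);
?>
```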

So, my reactions:

  1. The JSON requirement doesn’t really bother me.  Whatever is chosen is rather arbitrary, so I’m ok with JSON.  I’d be just as ok with XML or text delimited.  I see no real difference.
  2. I am not entirely convinced that just exposing the metadata (with no wrapper) is scalable.  I’m not saying I can’t be convinced, I’m just saying I’m not currently.
  3. I see no point in sending the status along with the response.
  4. This is, indeed, much simpler than making an OAI-PMH server in PHP over Voyager.
  5. I would really prefer parameters to paths.  I think there are too many assumptions about the implementor (and the future) to use paths.

What I am still not entirely sure about, however, is why we fundamentally need another sharing protocol.  Dan claims it’s because OAI-PMH and ATOM are still too complicated for stupid people and the less technically inclined to pick up, and he wants something simpler.

What I don’t understand is why we don’t just make a “simpler” API to one of those protocols.  Choose a particular syndication protocol (after all, that’s really what OAI-PMH is) and then just make an API to “Gather/Create/Share” with it.

Personally, I am much more interested in making our OAI-PMH archives and our SRW/U servers available via ATOM, much like we made SRU available to OpenSearch.  That way we pick up on the wide variety of tools already out there.

This interests me on other levels, since I’m really starting to picture the Communicat as a sort of ATOM store.

I see a lot of potential with unAPI (a whole hell of a lot of potential), but I would rather utilize an existing protocol (especially one that promises to have a lot of users, read: ATOM) than build another library system that winds up largely ignored outside our community.

On a related note, I want to point out what I see as two wastes of library developers’ time:

  • Walt is right:  DON’T MAKE LIBRARY TOOLBARS (it’s in there, trust me).  It’s asking too much to expect people to install and use something that has such a limited scope.
  • The OPAC jabberbot follows the same line of thinking.  In #code4lib, our resident bot (Panizzi) employs an OpenSearch client.  Honestly, this makes tons more sense since it’s easy (honest!) to make your OPAC speak OpenSearch and there are going to be a lot more useful things available via OpenSearch than a handful of library catalogs.

No, I think we’re better served making our data work contextually in a user’s information sphere.  Push our data out to common aggregators rather than replicate the services to handle our arcane practices.

All in all, I’ve been pretty unhelpful in the coordination of Code4Lib 2006. Those that know me know I’m not terribly organized and horribly forgetful, so I knew it wasn’t in anybody’s best interest for me to commit to much. With that in mind, I felt it best to limit my involvement to the proposal submission/vetting process.

Early in the organizing process, the “organizing group” (committee is far too strong a term) set up a Backpack page to keep up with all of our tasks and documents. It worked pretty well. I liked the idea of a centrally located “document” that was editable by several users (the “proposal submission group” was made up of myself, Jeremy Frumkin and Dan Chudnov (on a non-committal basis)), so I used another Backpack page to compile the submissions. I am glad that I did this; I would never have been able to keep up with them, otherwise.

As we got closer to the submission deadline, it dawned on me that I needed to set up the voting mechanism quickly (we had decided to let any potential attendee vote on the program they wanted to see). Roy Tennant had toyed with the voting module in Drupal (since that’s what code4lib.org runs), but it wasn’t quite what I had pictured.

What I wanted was:

  • A page where the user could see all the proposals at once
  • A simple up or down vote
  • A way to tally how many “yea” votes the user had cast (since we were allowing them to cast 11 votes)
  • A way for the user to revise their votes
  • An auditing system
  • Something to tally the votes for me

So I made my own. One nice side effect of having compiled all of the submissions in Backpack was that it was very easy to screenscrape. All of the “notes” are contained in divs that have the class “note”. The id attribute contains a unique id for that note and the titles are contained in an H3 within that div. Perfect for scraping.
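
The scrape itself is only a dozen or so lines of PHP. Something along these lines (the Backpack URL is obviously made up, and the regexes assume markup roughly like what I just described):

```php
<?php
// Sketch of the Backpack scrape: pull each "note" div's id and H3 title.
// The page URL is a placeholder and the markup assumptions may differ a bit.
$html = file_get_contents('http://code4lib.backpackit.com/pub/12345');

preg_match_all('/<div([^>]*class="note"[^>]*)>(.*?)<\/div>/si', $html, $notes, PREG_SET_ORDER);

foreach ($notes as $note) {
    preg_match('/id="([^"]+)"/i', $note[1], $id_match);
    preg_match('/<h3[^>]*>(.*?)<\/h3>/si', $note[2], $title_match);

    $id    = $id_match[1];
    $title = trim(strip_tags($title_match[1]));

    // From here, insert (or update) the proposal in the database.
    echo "$id => $title\n";
}
?>
```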

The first thing I did was set up a simple MySQL database that contained three tables: voters, proposals and votes. Voters probably wasn’t necessary, but I had this original plan to “register to vote” within code4lib.org and then have the voting mechanism “review your registration”. It wound up being much less sophisticated. So, basically, “voters” was two fields: Drupal username and id.

“Proposals” was also just two fields: id and title, and the intention was that this would get scraped from the Backpack page (the id being the unique “note” id). “Votes” was just a joining table between voters and proposals.
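
In other words, something like this (the connection details and column sizes are placeholders; only the fields I just described are “real”):

```php
<?php
// Sketch of the three-table setup described above.
$db = mysql_connect('localhost', 'vote_user', 'secret');   // hypothetical credentials
mysql_select_db('code4lib_vote', $db);

mysql_query("CREATE TABLE voters (
    id       INT NOT NULL,           -- Drupal user id
    username VARCHAR(60) NOT NULL,   -- Drupal username
    PRIMARY KEY (id)
)", $db);

mysql_query("CREATE TABLE proposals (
    id    VARCHAR(32)  NOT NULL,     -- the unique Backpack note id
    title VARCHAR(255) NOT NULL,     -- scraped from the H3
    PRIMARY KEY (id)
)", $db);

mysql_query("CREATE TABLE votes (
    voter_id    INT NOT NULL,         -- joins to voters.id
    proposal_id VARCHAR(32) NOT NULL, -- joins to proposals.id
    PRIMARY KEY (voter_id, proposal_id)
)", $db);
?>
```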

Next, I made a bookmarklet that rewrote the Backpack page to have a bunch of forms with submit buttons on them within the “note” divs.  There’s a lot of wasted space in Backpack, so I inserted an iframe under the Backpack logo in the right column to tally the user’s votes.  The bookmarklet was pretty simple and can be found here.

Next, I wrote a PHP script that sat in the iframe and tallied the votes. When the user posted a vote, it would check whether the proposal had already been added to the proposals table (if not, it would add it) and then add the user’s vote to the “votes” table. It would list everything they had already voted for and let them know how many votes they had left. It would also create a link next to each submitted vote so the user could remove a cast vote.
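
Stripped down, the iframe script works something like this. The parameter names are guesses, and the vote-removal handling and the “have they used all 11?” check are left out, but this is the general flow:

```php
<?php
// Sketch of the vote-tally script that sits in the iframe. Assumes the
// database connection from the setup above; parameter names are made up.
$MAX_VOTES = 11;

$voter    = mysql_real_escape_string($_REQUEST['voter']);
$proposal = mysql_real_escape_string($_REQUEST['proposal']);
$title    = mysql_real_escape_string($_REQUEST['title']);

if ($proposal) {
    // Make sure the proposal is on file, then record the vote (duplicates ignored).
    mysql_query("INSERT IGNORE INTO proposals (id, title) VALUES ('$proposal', '$title')");
    mysql_query("INSERT IGNORE INTO votes (voter_id, proposal_id)
                 SELECT id, '$proposal' FROM voters WHERE username = '$voter'");
}

// List everything this voter has cast so far, with a remove link for each.
$result = mysql_query("SELECT p.id, p.title
                       FROM votes v
                       JOIN voters u    ON u.id = v.voter_id
                       JOIN proposals p ON p.id = v.proposal_id
                       WHERE u.username = '$voter'");

echo '<p>Votes left: ' . ($MAX_VOTES - mysql_num_rows($result)) . '</p><ul>';
while ($row = mysql_fetch_assoc($result)) {
    echo "<li>{$row['title']} <a href=\"?voter=$voter&remove={$row['id']}\">remove</a></li>";
}
echo '</ul>';
?>
```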

The last step was making a node in Drupal that created a bookmarklet link that included the user’s username (the user needed to be logged in to vote). This step, sadly, took me the longest because I had no idea how to get the username in Drupal.  It turns out you can “global” the $user variable and have access to all its attributes. Pretty simple.
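
For the record, the node (using Drupal’s PHP input format) boils down to something like this; the bookmarklet body itself is stubbed out since it’s just a pile of JavaScript:

```php
<?php
// Inside a Drupal node with the PHP input format: grab the logged-in user
// and bake their username into the bookmarklet link.
global $user;

if ($user->uid) {
    // The javascript: body is a placeholder here.
    $js = "javascript:(function(){/* rewrite the Backpack page for " . $user->name . " */})()";
    print '<a href="' . $js . '">Drag this to your bookmarks bar to vote</a>';
}
else {
    print 'You need to log in before you can grab your voting bookmarklet.';
}
?>
```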

Lastly, I created a handful of webservices to keep a running tally of which proposals had enough votes to “win”, a list of registered voters and how many votes they had cast, and (when reshuffling the schedule allowed three more presentations to be accepted) a list of all proposals and their vote counts.
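
The tally service is really just a GROUP BY dressed up as a webservice; something like this (the output format and any “enough votes to win” threshold handling are simplified here):

```php
<?php
// Sketch of the running-tally webservice: vote counts per proposal,
// highest first. The XML output shape is a guess.
$result = mysql_query("SELECT p.title, COUNT(v.voter_id) AS tally
                       FROM proposals p
                       LEFT JOIN votes v ON v.proposal_id = p.id
                       GROUP BY p.id, p.title
                       ORDER BY tally DESC");

header('Content-type: text/xml');
echo "<proposals>\n";
while ($row = mysql_fetch_assoc($result)) {
    echo '  <proposal votes="' . $row['tally'] . '">'
       . htmlspecialchars($row['title']) . "</proposal>\n";
}
echo "</proposals>\n";
?>
```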

For throwing something together at the last minute, it worked OK. There is one user who doesn’t have a username (no clue who that is or why it happened), and Ed Corrado doesn’t appear in the registered voters output because his “voting cookie” was set before I cleared out all of the “test votes” (I think).

All in all, a fun little hack.