In response to some conversations at the IIIF conference in Paris earlier this summer, we decided to take a look at producing an enhanced version of Turning the Pages (TTP) that would have four key features:
- HTML5-based
- use an API to call in any repository's items rather than the existing proprietary TTP database
- allow magnification to the native resolution of the source scans (whatever that might be)
- accommodate PNGs, TIFFs, JPGs or JP2s as source files
This project would combine our long experience of producing compelling page-turn experiences with the techniques we developed for the iNQUIRE framework for surfacing deep zoom images, especially those created from a variety of source file formats.
Going through these one by one:
Previous versions of TTP have provided a great user experience but have been tied to a proprietary MS SQL Server database populated by the TTP CMS. This made it really easy to use, but hard to scale. The objective for TTP_DZ is for digital libraries to be able to simply supply a URI (for example, for a folder of images) and have TTP create the books dynamically, on the fly, much as the Open Archive Book Reader does. This approach would allow us to scale to millions of books in an automated fashion.
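To make the idea concrete, here is a minimal sketch of what "create the books on the fly" could look like. The manifest shape and field names below are illustrative assumptions, not the actual TTP_DZ API:

```javascript
// Hypothetical sketch: convert a simple JSON image listing (as might be
// fetched from a repository URI) into the ordered page list a page-turn
// viewer could consume. The manifest shape here is an assumption.
function pagesFromManifest(manifest) {
  return manifest.images.map((img, i) => ({
    index: i,                          // page order within the book
    label: img.label || `Page ${i + 1}`, // fall back to a generated label
    url: img.url,                      // location of the page image
    width: img.width,                  // native pixel dimensions, if known
    height: img.height,
  }));
}
```

A viewer would fetch the listing from the supplied URI, pass the parsed JSON to `pagesFromManifest`, and build the book from the result, with no per-book database configuration needed.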
One of the issues with rare books and manuscripts is the need to see as much detail as possible. Traditionally we've used as high-resolution a JPG or PNG as we could get away with, which has been good enough for the general public, but scholars want more. So we want to provide the great TTP user experience plus the ability to zoom in to the native resolution of the scans, all in a seamless way: no clunky image swapping or downloading of new page images.
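The seamlessness comes from a tiled image pyramid: past a zoom threshold the viewer requests tiles from a higher pyramid level instead of swapping whole images. A sketch of the standard Deep Zoom (DZI) level arithmetic, under the usual convention that level 0 is a 1×1 pixel image (the exact threshold logic in TTP_DZ may differ):

```javascript
// Deep Zoom (DZI) pyramids number levels from 0 (1x1 px) up to
// ceil(log2(longest side)), at which the image is native resolution.
function dziMaxLevel(width, height) {
  return Math.ceil(Math.log2(Math.max(width, height)));
}

// Pick the pyramid level for a given on-screen zoom factor, where
// zoom = 1 means the page is displayed at native scan resolution.
// Halving the zoom drops one level; the result is clamped to the pyramid.
function dziLevelForZoom(width, height, zoom) {
  const max = dziMaxLevel(width, height);
  const level = max + Math.ceil(Math.log2(zoom));
  return Math.min(Math.max(level, 0), max);
}
```

As the user zooms, the viewer recomputes the level and fetches only the tiles covering the viewport, so full-resolution detail appears without ever downloading a full-size replacement image.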
Many of our clients are transitioning to JP2 as their repository and delivery file format. We wanted to accommodate this and serve up deep zoom pages derived from JP2s.
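One common way to serve deep-zoom tiles from JP2 masters is via an image server speaking the IIIF Image API, which decodes regions of the JP2 on demand. A hedged sketch of building such a tile request; the base URL and identifier below are placeholders, not the project's actual endpoints:

```javascript
// Build a IIIF Image API request URL of the form:
//   {base}/{id}/{region}/{size}/{rotation}/{quality}.{format}
// Here rotation, quality and format are fixed to 0/default.jpg for brevity.
// The image server (not the browser) handles decoding the JP2 region.
function iiifTileUrl(base, id, region, size) {
  // Identifiers may contain slashes, so they must be URL-encoded.
  return `${base}/${encodeURIComponent(id)}/${region}/${size}/0/default.jpg`;
}
```

For example, requesting the 512-pixel-wide rendering of a 512×512 region from the top-left corner of a page would combine a region of `0,0,512,512` with a size of `512,`.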
This project is a work in progress and the first build is now live here: http://armadillo.onlineculture.co.uk/deepzoom/ttp.html
This version hits a couple of our goals: it's HTML5, and it switches to Deep Zoom versions of pages as required (i.e. beyond a certain zoom threshold). At the moment it's hard-wired to a specific set of page images, so the next thing we'll be working on is abstracting away from that model. It also uses the standard TTP 3.0 interface for now; we may add more scholarly features as we progress.
As this is a prototype, we welcome all feedback, so tell us what you'd like to see and we'll add it to the list...