The work plan

Since we have working versions of both VuFind and Blacklight installed, the plan is to investigate the following. As we go on, I will add further details on each of these topics in separate blog posts.

Holdings and multi-library search

The aim is to find out how we should implement holdings information, e.g. classmark (LCCN), on-loan status, number of copies, etc. One of the primary objectives of our OPAC installation is to enable multi-library search across the University of Sussex and the University of Brighton. A cursory glance at VuFind’s ILS drivers suggests this is possible, and some universities (e.g. Swansea) have already done it. The installation should also integrate with our existing Talis system.

Read about our experience implementing multi-ILS holdings information queries.

To-do: a multi-library search facility, which requires identifiers in the MARC records that distinguish which library each record comes from. Such an identifier could be stored in field 969, as explained in the multi-ILS holdings information queries.
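The tagging idea above can be sketched in a few lines of Python. This is only an illustration of the approach, not our actual import code: the field name "969" comes from the post, but the library codes and record structure are assumptions.

```python
# Sketch: tag MARC-derived records with a source-library identifier
# (a 969-style field) before merging them into one shared index.
# Library codes "SUSX" and "BTON" are illustrative, not our real codes.

def tag_with_source(records, library_code):
    """Add a 969-style source identifier to each record."""
    for record in records:
        record["969"] = library_code
    return records

def merge_catalogues(*tagged_batches):
    """Combine per-library batches into one list ready for indexing."""
    merged = []
    for batch in tagged_batches:
        merged.extend(batch)
    return merged

sussex = tag_with_source([{"title": "Record A"}], "SUSX")
brighton = tag_with_source([{"title": "Record B"}], "BTON")
combined = merge_catalogues(sussex, brighton)
print([r["969"] for r in combined])  # → ['SUSX', 'BTON']
```

With the identifier in place, a search front end can facet or filter on it to scope results to one library or search across both.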

System customisation

Having installed both systems, we will now want to look into how easy it is to:

  1. Customise the branding and layout.
  2. Change how bibliographic records and search results are displayed.
  3. Customise search refinements and facets.
  4. Provide bibliographic record enhancements, e.g. displaying book jacket covers, Amazon comments, etc.
  5. Provide extended export formats, e.g. BibTeX.
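For item 3, VuFind's facet configuration lives in a plain ini file, so a first look at how much effort customisation takes can start there. The excerpt below follows the field names in VuFind's sample facets.ini; which facets we would actually expose is still to be decided.

```ini
; Excerpt in the style of VuFind's facets.ini. Keys are Solr index
; fields; values are the labels shown in the sidebar.
[Results]
institution  = Institution
building     = Library
format       = Format
author_facet = Author
language     = Language
```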

Apache Solr and data storage

Since both VuFind and Blacklight use Apache Solr, it is worth exploring whether both can use the same Solr index. If so, it would be easier to run both systems on trial and then phase in whichever one users prefer. Initial investigation suggests that pointing both at the same Solr index may not be possible, but this remains under investigation.
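Whatever the outcome, both front ends ultimately talk to Solr over the same HTTP select interface, which is what makes a shared index attractive. The sketch below builds such a select URL; the core name "biblio" is VuFind's default, while the host and the `institution` field are assumptions for illustration.

```python
from urllib.parse import urlencode

# Assumed local Solr instance with VuFind's default "biblio" core.
SOLR_BASE = "http://localhost:8983/solr/biblio/select"

def build_query(terms, library=None, rows=20):
    """Build a Solr select URL, optionally filtered to one library."""
    params = [("q", terms), ("rows", str(rows)), ("wt", "json")]
    if library:
        # fq restricts results without affecting relevance scoring
        params.append(("fq", "institution:%s" % library))
    return SOLR_BASE + "?" + urlencode(params)

url = build_query("dickens", library="SUSX")
print(url)
```

If both systems could issue selects like this against one core, only the display layer would differ between the two trials.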

User authentication

We would like to find out how both VuFind and Blacklight work with some form of user authentication, e.g. LDAP or another single sign-on (SSO) mechanism, ideally across multiple libraries.
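On the VuFind side, LDAP authentication is switched on in config.ini. The fragment below follows the section and key names in VuFind's sample configuration; all the values are placeholders, not our directory details.

```ini
; Excerpt in the style of VuFind's config.ini. Values are placeholders.
[Authentication]
method = LDAP

[LDAP]
host     = ldap.example.ac.uk
port     = 389
basedn   = "ou=people,dc=example,dc=ac,dc=uk"
username = uid
```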

Record imports and maintenance

It would be useful to set up automated scripts, run as cron jobs, to apply daily or hourly updates to records imported as MARC data. It is also worth looking into how to delete records, preferably in batches. On a related note, it is important to check whether the Talis-exported MARC records are actually usable by either of the OPAC systems: the fact that some MARC records were imported with errors could point to compatibility issues.
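A cron-based update along those lines might look like the sketch below. The import-marc.sh script and util/deletes.php utility ship with VuFind; the paths, schedule, and Talis export filenames are assumptions for illustration.

```shell
# Sketch crontab: nightly import of the day's Talis MARC export,
# followed by a batch delete of withdrawn records.
VUFIND_HOME=/usr/local/vufind

# 2am: index the latest MARC export
0 2 * * *  $VUFIND_HOME/import-marc.sh /data/exports/talis-daily.mrc

# 3am: remove records listed in the nightly deletions file
0 3 * * *  php $VUFIND_HOME/util/deletes.php /data/exports/talis-daily.del
```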

