I participated in a Digital Humanities event in the College of Arts that Luca had organised on Monday, at which I discussed the Books and Borrowing project. It was a good event and I hope there will be more like it in future. Luca also discussed a couple of his projects and mentioned that the new Curious Travellers project is using Transkribus (https://readcoop.eu/transkribus/), an OCR / text recognition tool for both printed text and handwriting that I’ve been interested in for a while but haven’t yet needed to use for a project. I will be very interested to hear how Curious Travellers gets on with the tool. Luca also mentioned a tool called Voyant (https://voyant-tools.org/) that I’d never heard of before; it allows you to upload a text and then access many analysis and visualisation tools. It looks like it has a lot of potential and I’ll need to investigate it more thoroughly in future.
Also this week I had to prepare for and participate in a candidate shortlisting session for a new systems developer post in the College of Arts, and Luca and I had a further meeting with Liz Broe of College of Arts admin about security issues relating to the servers and websites we host. We need to improve the chain of communication from Central IT Services to people like me and Luca so that identified security issues can be addressed speedily. As yet we’ve heard nothing further from IT Services, so I have no idea what these security issues are, whether they actually relate to any websites I’m in charge of, or whether they concern the code or the underlying server infrastructure. Hopefully we’ll hear more soon.
The above took a fair bit of time out of my week and I spent most of the remainder working on the Books and Borrowing project. One of the project RAs had spotted an issue with a library register page appearing out of sequence, so I spent a little time rectifying that. Other than that I continued to develop the front-end, working on the quick search that I had begun last week; by the end of the week I was still very much in the middle of working through the quick search and the presentation of the search results.
I have an initial version of the search working now and I created an index page on the test site I’m working on that features a quick search box. This is just a temporary page for test purposes – eventually the quick search box will appear in the header of every page. The quick search does now work for both dates using the pattern matching I discussed last week and for all other fields that the quick search needs to cover. For example, you can now view all of the borrowing records with a borrowed date between February 1790 and September 1792 (1790/02-1792/09) which returns 3426 borrowing records. Results are paginated with 100 records per page and options to navigate between pages appear at the top and bottom of each results page.
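The date pattern matching itself isn’t spelled out in this post, so the following is a minimal, hypothetical sketch of how a query such as ‘1790/02-1792/09’ might be parsed into a pair of bounds for filtering borrowed dates. The function names, and the assumption that dates can be compared as zero-padded strings, are mine rather than the project’s:

```python
import re

# Hypothetical sketch of parsing a quick-search date pattern such as
# "1790", "1790/02" or "1790/02-1792/09" into (start, end) bounds.
# The bounds are zero-padded strings meant for lexicographic
# comparison, so an "impossible" day like 1790-02-31 is harmless
# as an upper bound.
DATE = r"(\d{4})(?:/(\d{2}))?(?:/(\d{2}))?"

def parse_date_query(q):
    """Return (start, end) as 'YYYY-MM-DD' strings, or None if the
    query isn't a recognisable date pattern."""
    m = re.fullmatch(rf"{DATE}(?:-{DATE})?", q)
    if not m:
        return None
    g = m.groups()
    start = bound(g[0], g[1], g[2], low=True)
    # With no explicit end date, the start pattern bounds both ends.
    end_g = g[3:] if g[3] else g[:3]
    end = bound(end_g[0], end_g[1], end_g[2], low=False)
    return start, end

def bound(year, month, day, low):
    """Fill in a missing month/day with its lowest or highest value."""
    month = month or ("01" if low else "12")
    day = day or ("01" if low else "31")
    return f"{year}-{month}-{day}"
```

A bare year like ‘1790’ then covers the whole of that year, and anything that isn’t a date pattern falls through to the non-date search.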
The search results currently display the complete borrowing record for each result, using the same layout as borrowing records on a page. The only difference is that additional information about the library, register and page on which the borrowing record appears is displayed at the top of the record. These appear as links, and clicking on the page link opens the page centred on the selected borrowing record. For date searches the borrowing date for each record is highlighted in yellow, as you can see in the screenshot below:
The non-date search also works, but is currently a bit too slow. For example, a search for all borrowing records that mention ‘Xenophon’ takes a few seconds to load, which is too long. Currently non-date quick searches do a very simple find and replace to highlight the matched text in all relevant fields. This makes the matched text upper case, but I don’t intend to leave it like this. You can also search for things like ESTC numbers.
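The uppercasing problem comes from writing the (normalised) search term back over the matched text. One hedged sketch of a fix, assuming the highlighting happens server-side over plain strings, is to substitute the match itself rather than the term:

```python
import re

def highlight(text, term, tag="mark"):
    """Wrap case-insensitive matches of `term` in an HTML tag while
    preserving the original casing of the matched text (a naive
    find-and-replace overwrites it with the normalised search term)."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return pattern.sub(lambda m: f"<{tag}>{m.group(0)}</{tag}>", text)
```

Using the match object rather than the search term is what keeps ‘Xenophon’ from coming back as ‘XENOPHON’; the `mark` tag is just one option for the wrapper.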
However, there are several things I’m not especially happy about:
- The speed issue: the current approach is just too slow
- Ordering the results: currently there are no ordering options, because the non-date quick search performs five different queries that return borrowing IDs, and these are then just bundled together. To work out the ordering (such as by date borrowed or by borrower name) many more fields would need to be returned in addition to the borrowing ID, potentially for thousands of records, and this is going to be too slow with the current data structure
- The search results themselves are a bit overwhelming for users, as you can see from the above screenshot. There is so much data that it’s hard to figure out what you’re interested in, and I will need input from the project team as to what we should do about this. Should we have a more compact view of results? If so, what data should be displayed? The difficulty is that if we omit the only field that contains the user’s search term, the results are potentially going to be very confusing
- This wasn’t mentioned in the requirements document I wrote for the front-end, but perhaps we should provide more options for filtering the search results. I’m thinking of faceted searching like you get in online stores: you see the search results, and then there are checkboxes that allow you to narrow them down. For example, we could have checkboxes listing all occupations in the results, allowing the user to select one or more, or checkboxes for ‘place of publication’, allowing the user to select ‘London’, or everywhere except ‘London’
- Also not mentioned, but perhaps we should add some visualisations to the search results too. For example, a bar graph showing the distribution of all borrowing records in the search results over time, or another showing the occupations or genders of the borrowers in the results. I feel that we need some sort of summary information, as the results themselves are too detailed to give an overall picture at a glance
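As a rough illustration of the faceted filtering and summary counts floated in the last two points, here is a minimal sketch over plain Python lists; the field names (‘occupation’, ‘place_of_publication’) and sample records are illustrative, and a real implementation would push this work into the database or a search platform:

```python
from collections import Counter

def facet_counts(records, field):
    """Count how many results fall under each value of `field`
    (e.g. 'occupation'), ready to render as checkboxes with counts."""
    return Counter(r[field] for r in records if r.get(field))

def apply_facets(records, selected):
    """Keep records matching every selected facet value; a field with
    no selected values is left unfiltered."""
    return [r for r in records
            if all(r.get(field) in values
                   for field, values in selected.items() if values)]

# Illustrative sample records, not real project data.
results = [
    {"occupation": "Student", "place_of_publication": "London"},
    {"occupation": "Minister", "place_of_publication": "Edinburgh"},
    {"occupation": "Student", "place_of_publication": "London"},
]
```

The same per-value counts could also feed the summary bar graphs, since a facet count and a bar in a chart are essentially the same aggregation.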
I came across the Universal Short Title Catalogue website this week (e.g. https://www.ustc.ac.uk/explore?q=xenophon). It does a lot of the things I’d like to implement (graphs, faceted search results), and it does it all very speedily with a pleasing interface. I think we could learn a lot from it.
Whilst thinking about the speed issues I began experimenting with Apache Solr (https://solr.apache.org/), a free search platform that is much faster than a traditional relational database and provides options for faceted searching. We use Solr for the advanced search on the DSL website, so I’ve had a bit of experience with it. Next week I’m going to continue to investigate whether we might be better off using it, or whether creating cached tables in our database might be simpler and work just as well for our data. If we are potentially going to use Solr then we would need to install it on a server at Stirling. Stirling’s IT people might be OK with this (they did allow us to set up an IIIF server for our images, after all) but we’d need to check. I should have a better idea as to whether Solr is what we need by the end of next week, all being well.
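For a sense of what a faceted Solr query looks like, here is a small sketch that builds a select URL with faceting switched on; the core name (‘borrowing’) and field names are purely illustrative, not the project’s actual schema:

```python
from urllib.parse import urlencode

def solr_search_url(base, query, facet_fields, rows=100, start=0):
    """Build a Solr select URL with faceting enabled. Solr returns
    per-value counts for each facet.field alongside the results,
    in a single fast request."""
    params = [
        ("q", query),
        ("rows", rows),
        ("start", start),
        ("facet", "true"),
        ("facet.mincount", 1),
    ]
    params += [("facet.field", f) for f in facet_fields]
    return f"{base}/select?{urlencode(params)}"

# Hypothetical core and field names for illustration only.
url = solr_search_url(
    "http://localhost:8983/solr/borrowing",
    "Xenophon",
    ["occupation", "place_of_publication"],
)
```

Getting results, ordering and facet counts back from one index query is exactly the part that the current five-queries-then-bundle approach struggles with.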
Also this week I spent some time working on the Speech Star project. I updated the database to highlight key segments in the ‘target’ field, which had been highlighted in the original spreadsheet version of the data; these are now marked by surrounding each segment with bar characters. I’d suggested this approach because all Excel formatting, such as bold text, is lost when exporting data from Excel to a CSV file. Unfortunately I hadn’t realised that there might be more than one highlighted segment in the ‘target’ field, which made figuring out how to split the field and apply a CSS style to the necessary characters a little trickier, but I got there in the end. After adding in the new extraction code I reprocessed the data, and currently the key segment appears in bold red text, as you can see in the following screenshot:
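As a sketch of the bar-character approach, the following shows one way to turn any number of |…|-delimited segments in a field into styled spans; the function and class names are illustrative, and the real extraction code may differ:

```python
import re

def mark_key_segments(target, css_class="key-segment"):
    """Convert bar-delimited key segments in a 'target' value into
    HTML spans, handling any number of |...| pairs in one field.
    CSS (e.g. bold red text) is then applied to the class."""
    return re.sub(r"\|([^|]+)\|",
                  rf'<span class="{css_class}">\1</span>',
                  target)
```

Because the regex matches each delimited pair independently, a field with two or more highlighted segments needs no special handling.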
I also spent some time adding text to several of the ancillary pages of the site, such as the homepage and the ‘about’ page and restructured the menus, grouping the four database pages together under one menu item.
Also this week I tweaked the help text that appears alongside the advanced search on the DSL website and fixed an error in the data of the Thesaurus of Old English website that Jane Roberts had accidentally introduced.