I was on holiday for all of last week and Monday and Tuesday this week. My son and I were supposed to be visiting my parents for Easter, but we were unable to do so due to the lockdown and instead had to find things to amuse ourselves with around the house. I answered a few work emails during this time, including alerting Arts IT Support to some issues with the WordPress server and responding to a query from Ann Fergusson at the DSL. I returned to work (from home, of course) on Wednesday and spent the three days working on various projects.
For the Books and Borrowers project I spent some time downloading and looking through the digitised and transcribed borrowing registers of St. Andrews. They have made three registers from the second half of the 18th century available via a Wiki interface (see https://arts.st-andrews.ac.uk/transcribe/index.php?title=Main_Page) and we were given access to all of these materials that had been extracted and processed by Patrick McCann, who I used to work very closely with back when we were both based at HATII and worked for the Digital Curation Centre. Having looked through the materials it’s clear that we will be able to use the transcriptions, which will be a big help. The dates will probably need to be manually normalised, though, and we will need access to higher resolution images than the ones we have been given in order to make a zoom and pan interface using them.
I also updated the introductory text for Gerry McKeever’s interactive map of the novel Paul Jones, and I think this feature is now ready to go live, whenever Gerry wants to launch it. I also fixed an issue with the Editing Robert Burns website that was preventing the site editors (namely Craig Lamont) from editing blog posts. I also created a further new version of the Burns Supper map for Paul Malgrati. This version incorporates updated data, which has greatly increased the number of Suppers that appear on the map, and I also changed the way videos work. Previously, if an entry had a link to a video then a button was added to the entry that linked through to the externally hosted video site (which could be YouTube, Facebook, Twitter or some other site). Instead, the code now identifies the origin of the video and I’ve managed to embed players from YouTube, Facebook and Twitter. These now open the videos in the same drop-down overlay as the images. The YouTube and Facebook players are centre-aligned, but unfortunately Twitter’s player displays to the left and can’t be altered. Also, the YouTube and Facebook players expect the width and height of the player to be specified. I’ve taken these from the available videos, but ideally the desired height and width should be stored as separate columns in the spreadsheet so they can be applied to each video as required. Currently all YouTube and all Facebook videos have the same width and height, which can mean landscape-oriented Facebook videos appear rather small, for example. Also, some videos can’t be embedded due to their settings (e.g. the Singapore Facebook video). However, I’ve added a ‘watch video’ button underneath the player so people can always click through to the original posting.
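The origin-detection step described above amounts to inspecting the video URL’s hostname and choosing embed markup (or a fallback link) accordingly. Here’s a rough Python sketch of that logic — the actual map code is client-side JavaScript, and the function names, default player sizes and markup here are all illustrative, not the production implementation:

```python
# Sketch of the video-origin detection and embed logic described above.
# All names and sizes are illustrative; the real code is client-side JS.
from urllib.parse import urlparse

# Assumed default player dimensions. As noted above, ideally these would
# come from per-video width/height columns in the spreadsheet instead.
DEFAULT_SIZES = {
    "youtube": (560, 315),
    "facebook": (500, 280),
}

def video_origin(url):
    """Identify the hosting service from a video URL's hostname."""
    host = urlparse(url).netloc.lower()
    if "youtube.com" in host or "youtu.be" in host:
        return "youtube"
    if "facebook.com" in host:
        return "facebook"
    if "twitter.com" in host:
        return "twitter"
    return "other"

def embed_html(url):
    """Return embed markup for known services, or None so the caller
    falls back to a plain 'watch video' button."""
    origin = video_origin(url)
    if origin in ("youtube", "facebook"):
        # Both services expect an explicit width and height.
        w, h = DEFAULT_SIZES[origin]
        return f'<iframe width="{w}" height="{h}" src="{url}"></iframe>'
    if origin == "twitter":
        # Twitter's widget sizes and aligns itself; we can't control it.
        return f'<blockquote class="twitter-video"><a href="{url}"></a></blockquote>'
    return None
```

The ‘watch video’ fallback button is always rendered underneath, so even videos whose settings block embedding remain reachable.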
I also responded to a query from Rhona Alcorn about how DSL data exported from their new editing system will be incorporated into the live DSL site, responded to a query from Thomas Clancy about making updates to the Place-names of Kirkcudbrightshire website and responded to a query from Kirsteen McCue about an AHRC proposal she’s putting together.
I returned to looking at the ‘guess the category’ quiz that I’d created for the Historical Thesaurus before the Easter holidays and updated the way it works. I reworked the way the database is queried to make things more efficient, to ensure the same category isn’t picked as more than one of the four options, and to ensure that the selected word isn’t also found in one of the three ‘wrong’ category choices. I also decided to update the category table to include two new columns, one holding a count of the number of lexemes in each category that have a ‘wordoed’ and the other holding a count of those that have a ‘wordoe’. I then ran a script that generated these figures for all 250,000 or so categories. This is really just caching information that can be gleaned from a query anyway, but it makes querying a lot faster and makes it easier to pinpoint categories of a particular size, and I think these columns will be useful for tasks beyond the quiz (e.g. show me the 10 largest Aj categories). I then created a new script that queries the database using these columns and returns data for the quiz.
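The count-caching idea can be sketched in miniature like this — a Python/SQLite illustration with invented table and column names (the real HT database schema and the script that populated it will differ):

```python
# Minimal sketch of caching per-category lexeme counts in new columns,
# using SQLite and invented stand-in names for the real HT schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE category (
    id INTEGER PRIMARY KEY, name TEXT,
    wordoed_count INTEGER DEFAULT 0, wordoe_count INTEGER DEFAULT 0)""")
cur.execute("""CREATE TABLE lexeme (
    id INTEGER PRIMARY KEY, catid INTEGER, wordoed TEXT, wordoe TEXT)""")

# A couple of toy rows so the update below has something to count.
cur.execute("INSERT INTO category (id, name) VALUES (1, 'The body'), (2, 'Movement')")
cur.executemany("INSERT INTO lexeme (catid, wordoed, wordoe) VALUES (?, ?, ?)", [
    (1, "hand", "hond"), (1, "arm", None), (2, "walk", None)])

# One pass over the category table to populate the cached counts.
cur.execute("""
    UPDATE category SET
      wordoed_count = (SELECT COUNT(*) FROM lexeme
                       WHERE lexeme.catid = category.id
                         AND lexeme.wordoed IS NOT NULL),
      wordoe_count  = (SELECT COUNT(*) FROM lexeme
                       WHERE lexeme.catid = category.id
                         AND lexeme.wordoe IS NOT NULL)""")

# Size-based queries now just read the cached columns, e.g. the largest
# categories by OED word count:
rows = cur.execute("""SELECT name, wordoed_count FROM category
                      ORDER BY wordoed_count DESC LIMIT 10""").fetchall()
```

The trade-off is the usual one with denormalised counts: they must be regenerated whenever the lexeme data changes, but size-based lookups become a simple indexed read instead of a join and aggregate.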
This script is much more streamlined and considerably less prone to getting stuck in loops of finding nothing but unsuitable categories. Currently the script is set to only bring back categories that have at least two OED words in them, but this could easily be changed to target larger categories only (which would presumably make the quiz more of a challenge). I could also add in a check to exclude any words that are also found in the category name to increase the challenge further. The actual quiz page itself was pretty much unaltered during these updates, but I did add in a ‘loading’ spinner, which helps the transition between questions.
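The selection constraints described above — a category with enough words, a word drawn from it, and three distractor categories that don’t also contain that word — can be sketched like this (a Python illustration over an in-memory structure; the actual script queries the database directly and its names differ):

```python
# Illustrative sketch of the quiz-question selection logic. The data
# structure and function names are invented, not the production code.
import random

def pick_question(categories, min_words=2, rng=random):
    """Pick an answer category and three distinct distractors such that
    the chosen word does not appear in any distractor category.

    categories: dict mapping category name -> set of words.
    min_words: raise this to target larger categories and make the
    quiz more of a challenge.
    """
    eligible = [c for c, words in categories.items() if len(words) >= min_words]
    answer = rng.choice(eligible)
    word = rng.choice(sorted(categories[answer]))
    # Distractors: any other category that doesn't also contain the word,
    # so the 'wrong' options can never accidentally be right.
    pool = [c for c in categories if c != answer and word not in categories[c]]
    distractors = rng.sample(pool, 3)
    options = distractors + [answer]
    rng.shuffle(options)
    return word, answer, options
```

Filtering the distractor pool up front is what keeps the script from looping repeatedly over unsuitable categories; a further check excluding words that appear in the category name itself could be added to the `pool` filter in the same way.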
I’ve also created an Old English version of the quiz which works in the same way except the date of the word isn’t displayed and the ‘wordoe’ column is used. Getting 5/5 on this one is definitely more of a challenge! Here’s an example question:
I spent the rest of the week upgrading all of the WordPress sites I manage to the latest WordPress release. This took quite a bit of time as I had to track down the credentials for each site, many of which I didn’t already have a note of at home. There were also some issues with some of the sites that I needed to get Arts IT Support to sort out (e.g. broken SSL certificates, sites with the login page blocked even when using the VPN). By the end of the week all of the sites were sorted.