Week Beginning 20th February 2023

This was a week of server woes.  Two servers that host many of our most important sites went offline last Friday and our IT Support people weren’t able to get them back online again until the following Tuesday.  Then, just as things were getting back to normal on Tuesday, an issue was spotted with another of our servers that meant it needed to be taken offline, resulting in at least 30-40 websites going down, including this one.  As I’m typing this the following Monday, the server is still offline and I’ve not heard anything about a timescale for getting it back up again.

Thankfully I’m currently working for the Books and Borrowing project, which is based on servers hosted at Stirling University, so my ability to work was not affected by the outages.  It’s really bad news for active research projects that rely on the affected online resources, though, and it reflects badly on both the College of Arts and the University of Glasgow as a whole.

For the Books and Borrowing project I dealt with some data correction issues for Haddington library that one of the researchers had spotted, including swapping page images around and moving borrowing records to different pages.  I also corrected the occupation errors that we’d spotted with some borrowers from Innerpeffray library using a spreadsheet that Katie sent me.  She had noticed that there were also several duplicate borrowers in the data and had noted which records needed to be amalgamated so I dealt with these as well.

My main task for the week was to update the Solr data we use for the quick search to incorporate all of the data that we will also need to use for the advanced search.  On Monday and Tuesday I spent some time reworking the Solr instance running on my laptop so as to get it ready to handle the advanced search.  This involved adding new fields for all of the types of data the advanced search needs to query.

I also figured out how to get around the stemming issue for fields like occupation.  For text fields Solr creates stemmed versions of all recognisable words, so for example ‘searching’, ‘searched’ and ‘searches’ all have the stemmed form ‘search’.  This allows free-text searches to find data that’s of relevance, which can be really useful.  Unfortunately, when displaying search filters it’s the stemmed forms that get returned and displayed beside the checkboxes, and these can be a bit confusing.  I figured out that you can create copy fields for these text fields in Solr where the content is stored as strings rather than analysed text, and strings do not get stemmed.  The search can then use the text field while the search filters use the string field.  Pressing on a search filter then searches the string field, which is case sensitive, but this isn’t an issue as what’s being searched is the full text of the checkbox label (e.g. ‘Religion and Clergy’), which will always match the string form Solr stores.  This means that the search filters now say something like ‘Education’ rather than ‘educat’ and full author names now get displayed, which is great.
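
As a minimal sketch of the copy-field approach via Solr’s Schema API (the core name ‘bnb’ and field names here are my own placeholders, not the project’s actual schema):

```typescript
// Hypothetical core and field names; the real project schema will differ.
const SCHEMA_URL = "http://localhost:8983/solr/bnb/schema";

async function addStringCopy(field: string): Promise<void> {
  // Declare a sibling field of type "string": string fields are not
  // analysed, so values are stored verbatim and never stemmed.  Then
  // copy the raw input of the text field into it at index time.
  const commands = {
    "add-field": {
      name: `${field}_str`,
      type: "string",
      stored: true,
      multiValued: true,
    },
    "add-copy-field": { source: field, dest: `${field}_str` },
  };
  await fetch(SCHEMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(commands),
  });
}

// Searches query the stemmed text field; filter display and filtering
// use the unstemmed "_str" copy instead.
addStringCopy("occupation").catch(console.error);
```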

I also added in borrower title and ESTC as search filters as I thought these might be useful.  I’ve also fixed the issue of fields that hold multiple values not being sortable.  For example, a borrowing record may have multiple occupations associated with it, as there may be multiple borrowers and each borrower may have several occupations.  Because of this it was not possible to sort the search results by borrower occupation.  The fix was to generate a further field for each that stores all of the multiple values as a single string.  For borrower occupation the sort string places the occupation at the bottom of the hierarchy first, so if a borrowing record features a borrower with occupation ‘Law -> Advocate’ the record will be sorted under ‘Advocate’ then ‘Law’.  For borrower names and author names the ordering is surname then forename.
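
As a rough sketch of how those single-value sort fields might be generated (the record shape here is invented for illustration, not the project’s actual data model):

```typescript
// Invented types for illustration; the real data comes from the project database.
interface Borrower {
  forename: string;
  surname: string;
  occupations: string[][]; // each entry is a hierarchy path, e.g. ["Law", "Advocate"]
}

// Most specific occupation first: ["Law", "Advocate"] sorts as "Advocate Law".
function occupationSortKey(occupations: string[][]): string {
  return occupations
    .map((path) => [...path].reverse().join(" "))
    .sort()
    .join("; ");
}

// Surname-then-forename ordering for borrower and author names.
function nameSortKey(b: Borrower): string {
  return `${b.surname}, ${b.forename}`.toLowerCase();
}

console.log(occupationSortKey([["Law", "Advocate"]])); // "Advocate Law"
```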

With all of these changes in place I took a copy of the live database (also taking the opportunity to deactivate all of the test libraries in the system), regenerated the JSON files that Solr indexes and then ingested them into my updated Solr instance on my laptop.  I then ran some tests to check all was working fine before sending the data to the IT people at Stirling, who need to import it into the Solr instance on the server.  On Wednesday morning they imported it all and thankfully everything went smoothly.

With the new data in place I updated the API and the search results page to add in the new filters (borrower title and ESTC) and to switch the filters over to using the string versions for display, so we now have full occupation and author names displayed.  I also updated the ‘Order by’ facility to allow all sorting options to work.  Unfortunately, whilst doing so I spotted that I’d forgotten to add in the code to populate the single-value book edition title field, so I’m afraid sorting by this field doesn’t work yet, but other options such as borrower occupation and author and borrower name are now working.  I’ve updated my Solr data generation script to add in the book edition title, so next time I regenerate the data this will work.

I then started to work on implementing the advanced search.  I decided to change the way the API is referenced for the search.  Previously there was going to be one endpoint for the quick search, which would accept one search parameter, and another for the advanced search, which would accept multiple parameters.  I decided instead to amalgamate the two into one single search endpoint as in reality both search facilities will need to do the same things:  format the search options for Solr, work out the pagination, deal with ordering options and work out which filters need to be applied.

In order to amalgamate the endpoints I needed to rework the quick search facility that I had already created, and this meant breaking the quick search for a while.  Thankfully I managed to put it all back together again with the quick search working once more, but with slightly different URLs and a differently structured API call.  With this in place I began to add the advanced search data types to the API so as to construct the query that will be passed to Solr to return the advanced search results.  This allows specific fields in the Solr data (e.g. author names, library names, dates) to be queried rather than querying all fields, which is what the quick search does.  When I left off on Friday I was in the middle of adding in the option of searching author birth and death years, but I’d run into a little difficulty when processing negative years (i.e. BC years) that I’m going to have to investigate further next week.
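
As a sketch of what the amalgamated endpoint does (the parameter and field names are placeholders, not the project’s actual API):

```typescript
// Placeholder parameter and field names for illustration; escaping of user
// input is omitted for brevity.
interface SearchParams {
  q?: string;       // quick search: matches across all indexed fields
  author?: string;  // advanced search: targets specific fields only
  library?: string;
  yearFrom?: number;
  yearTo?: number;
}

function buildSolrQuery(p: SearchParams): string {
  const clauses: string[] = [];
  if (p.q) clauses.push(p.q);
  if (p.author) clauses.push(`author_name:(${p.author})`);
  if (p.library) clauses.push(`library:"${p.library}"`);
  if (p.yearFrom !== undefined || p.yearTo !== undefined) {
    clauses.push(`borrowed_year:[${p.yearFrom ?? "*"} TO ${p.yearTo ?? "*"}]`);
  }
  return clauses.length ? clauses.join(" AND ") : "*:*";
}

// Quick search:    buildSolrQuery({ q: "rome" })
// Advanced search: buildSolrQuery({ author: "smith", yearFrom: 1780, yearTo: 1789 })
```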

Also this week I made some changes to an old interactive map I’d made back in 2015 showing important places relating to Edinburgh’s enlightenment.  This is hosted on the University’s T4 system and the T4 people were keen for alt attributes to be added to the image map tiles.  Thankfully I found an answer for this on Stack Overflow (https://stackoverflow.com/a/27606381) whereby attributes can be set each time a tile is loaded.  The alt text is an empty string so I’m uncertain whether this will actually help anyone, but it pleases the validators, anyway.  There were some other issues with the site that had been caused by the University website changing its styles since the map was published, and I fixed these too.  As yet the changes have not been approved, even after several days, so I’m not sure what’s going on there.
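
The approach from that Stack Overflow answer boils down to setting the attribute in the tile layer’s load event; a hedged sketch in Leaflet (the container ID, view and tile URL are placeholders) looks something like this:

```typescript
import * as L from "leaflet";

// Placeholder container ID, view and tile URL for illustration.
const map = L.map("map").setView([55.9533, -3.1883], 15);

const tiles = L.tileLayer("https://example.org/tiles/{z}/{x}/{y}.png", {
  attribution: "Map tiles",
});

// Each time a tile image loads, give it an empty alt attribute to mark it
// as decorative, which satisfies accessibility validators.
tiles.on("tileload", (e: L.TileEvent) => {
  e.tile.setAttribute("alt", "");
});

tiles.addTo(map);
```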

My other task this week was to create an initial interface for the VARICS project website, using the logo, fonts and colour scheme that the designer had created for the project.  I spent a bit of time customising the theme to incorporate these and have emailed the PI to let her know that things are ready to add content to.  It’s possible I’ll need to make further changes to the interface, but it’s a good starting point at least.

Week Beginning 30th January 2023

This was a four-day week as the latest round of UCU strike action began on Wednesday.  Strike action is going to continue for the next two months, which is going to have a major impact on what I can achieve each week.

I spent almost all of this week working on the Books and Borrowing project.  The first two days were mainly spent dealing with data-related issues.  This included writing a script to merge duplicate editions based on a spreadsheet of editions that I’d previously sent Matt, to which he had added a column denoting which duplicate should be merged with which.  It took quite some time to write the script due to having to deal with associated book works and authors.  Some of the duplicates that were to be deleted had book work associations whilst the edition to keep didn’t.  These cases had to be checked for and the book work association transferred over.

Authors were a little more complicated as both the duplicate to be deleted and the one to keep may have multiple associated authors.  If the edition to keep had no authors but the one to be deleted did, then each of these had to be associated with the edition to keep.  But if both the edition to delete and the one to keep had authors, only those authors from the ‘to delete’ edition that were not already represented in the ‘to keep’ edition’s author list had to be associated.  In such cases where an author did need to be associated with the ‘to keep’ edition I also added in a further check to ensure the author being associated didn’t have the same name (but a different ID) as one already associated, as there are duplicate authors in the system.
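
A simplified sketch of that author rule (the real script works directly against the project database; these types are invented for illustration):

```typescript
// Invented types; the real script operates on database rows.
interface Author {
  id: number;
  name: string;
}

function mergeAuthors(keep: Author[], toDelete: Author[]): Author[] {
  const merged = [...keep];
  for (const a of toDelete) {
    const sameId = merged.some((k) => k.id === a.id);
    // Guard against duplicate authors that share a name but have
    // different IDs, which exist in the system.
    const sameName = merged.some((k) => k.name === a.name);
    if (!sameId && !sameName) merged.push(a);
  }
  return merged;
}
```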

With all of this done the script then had to reassign the holding records from the ‘to delete’ edition to the ‘to keep’ one and finally delete the relevant edition.  As the script makes significant changes to the data I first ran it on a version of the data I had running on my laptop to check that it worked as intended, which thankfully it did.  Once the test was complete I ran the script on the live data (after taking another backup of the database in case of problems).  The process resulted in 541 duplicate editions being deleted from the system and as far as I can tell all is well.  We now have 13,086 editions in the system and 13,014 of these do not have an associated book work.  We only have 75 book works in the system.

The next step is to assign book works to editions and add in book genres.  In order to do this I created a further spreadsheet containing the editions, with columns for book work and authors, plus three columns which can be used to record up to three genres.  I also sent Matt and Katie a further spreadsheet containing the details of the 75 existing book works in our system.  Filling in the spreadsheet is going to be rather complicated as there’s a lot going on, and it took me quite a while to figure out a workflow for doing so.  Hopefully with that in place the task should be straightforward, if time-consuming.

I also ran some queries, did some checks and generated some spreadsheets for the Wigtown data for Gerry McKeever.  With these data-related issues out of the way I then returned to developing the front-end.  Whilst working on an issue relating to ordering the results by date I noticed that we have quite a lot of borrowing records in the system that have no dates.  There are almost 12,000 that don’t have a ‘borrowed year’.  There’s possibly a good reason for this, but 2,376 of these have a borrowed day and a borrowed month but no year, which seems stranger.  I emailed Katie and Matt about this and they’re going to investigate.

I managed to finish work on the ‘Year borrowed’ bar chart this week.  Without providing a year filter the bar chart shows the distribution of borrowing records divided into decades, for example this search for ‘rome’, ordered by date borrowed:

You can then click on one of the decade bars to limit the results to just those in the chosen decade, for example clicking on the ‘1780’ bar:

This then displays a bar chart showing a breakdown of borrowing records per year within the selected decade.  You are given the option of clearing the year filter to return to the full view and you can also click on an individual year bar to limit the results to just that year, for example limiting to the year 1788:

When you reach this level no bar chart is displayed, as year is the unit being filtered and only one year is selected, but options are given to return to the decade view or clear the year filter.  You can of course combine the year filter with any of the other filter options.  I guess at year level we could display a similar bar chart for borrowings per month, but this might be too fine-grained and confusing (plus it would be a lot more work, as everything is currently set up to work with year only).  It’s something to consider, though.
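
The drill-down is driven by a click handler on the chart’s columns; a hedged sketch of the wiring in HighCharts (the container ID, data and filter function are placeholders, not the project’s actual code):

```typescript
import Highcharts from "highcharts";

// Placeholder: in the real site this re-runs the search with the chosen
// decade or year added as a filter.
function applyYearFilter(label: string): void {
  console.log("filter results to", label);
}

// Placeholder decade counts for illustration.
const data: Array<[string, number]> = [
  ["1760", 120],
  ["1770", 310],
  ["1780", 455],
];

Highcharts.chart("year-borrowed-chart", {
  chart: { type: "column" },
  title: { text: "Borrowings per decade" },
  xAxis: { type: "category" },
  series: [{ type: "column", name: "Borrowings", data }],
  plotOptions: {
    series: {
      cursor: "pointer",
      point: {
        events: {
          // Clicking a bar filters the results to that decade (or year).
          click: function () {
            applyYearFilter(this.name);
          },
        },
      },
    },
  },
});
```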

I did spot a problem with the bar chart:  I realised that when you searched for an individual year or a range within an individual year the results were still showing the options to view the decade and clear the year filter, both of which then gave errors.  This has now been sorted – no year filter options should be shown when the main search is only for a year.

For the remainder of the week I began working on the advanced search.  As specified in the requirements document, the advanced search page currently features two tabs, one for a ‘simple’ advanced search and one for an ‘advanced’ advanced search.  So far I’ve just been working on the forms, which in turn has necessitated making some changes to the API (to bring back a simple list of all libraries and to enable an entire list of registers to be returned).  The forms allow you to select / deselect libraries and select / deselect all.  In the ‘Simple’ tab there are also textboxes for entering date of borrowing, author forename and surname, year of birth / death and book title, plus a placeholder for genre.  The requirements document stated that date of borrowing would have boxes for entering years and days and a drop-down list for selecting month, with two sets to be used for date ranges.  I’ve decided that since the quick search already allows dates to be entered directly as text, it would make sense to follow the same method for the advanced search.

For the ‘advanced’ advanced search, lists of selectable registers will appear depending on the libraries that are selected; this is what I’m still in the middle of working on.  Author dates as currently specified are going to be a bit messy for BC dates, where people need to enter a negative value.  This is messy because a dash is used for date ranges, so we may end up with something like ‘-1000--200’ (that’s two dashes in the middle).  I’m not sure what we can do about this, though.  I guess having different boxes for ‘from’ and ‘to’ for ranged dates would avoid the issue.
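
To make the dash ambiguity concrete, here’s a hedged sketch of parsing such input (a placeholder approach, not a decision for the project):

```typescript
// A placeholder parser: allow an optional leading minus on each side of the
// range separator, accepting either a hyphen or an en dash between them.
function parseYearRange(input: string): { from: number; to: number } | null {
  const m = input.trim().match(/^(-?\d+)\s*[-–]\s*(-?\d+)$/);
  return m ? { from: Number(m[1]), to: Number(m[2]) } : null;
}

console.log(parseYearRange("-1000--200")); // { from: -1000, to: -200 }
console.log(parseYearRange("1780-1789"));  // { from: 1780, to: 1789 }
```

Separate ‘from’ and ‘to’ boxes would sidestep this parsing entirely.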

If I have the time I would like to create a new theme for the website that will look pretty similar but will use the Bootstrap front-end toolkit (https://getbootstrap.com/).  The current WordPress theme doesn’t use this, which means creating complex layouts is more difficult and messy.  I created a Bootstrap-based WordPress theme for the Anglo-Norman Dictionary (e.g. this search form: https://anglo-norman.net/textbase-search/) but I’ll just have to see how much time I have, as I think it’s better to get the essentials in place first.  What this means is that in the meantime things like the search form layout will possibly not be finalised (but will be functional).

In addition to the above I fixed an issue with the Thesaurus of Old English data for Jane Roberts and I completed setting up an initial WordPress site for the VARICS project.  I also did a little work for the Dictionaries of the Scots Language, fixing a broken link from entries to DOST abbreviations, replying to an email from Rhona about a cookie policy for the website and investigating an issue with italicised text in quotations not being found when a ‘quotations only’ advanced search is performed.

It turns out that the code I’d written to generate the data for the quotations was only set to pick up the direct contents of <q> and to ignore the contents of any child elements such as <i>.  This is not the case with the full text and ‘exclude quotations’ data.  I identified the issue and updated the code, running a test entry through it to confirm that italicised text in quotes is now getting indexed properly.  It may well be that there was a reason why the code was set up in this way, though, as Ann mentioned that there are other tags within quotes whose content should be ignored.  I’ll need further input from the team before I do anything more about this.
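
The distinction at the heart of the bug can be sketched with DOM APIs (the real generation script may be structured quite differently):

```typescript
// Collect only the direct text children of a <q> element: text inside child
// elements such as <i> is skipped, which is what caused italicised text in
// quotations to be missed from the index.
function directTextOnly(q: Element): string {
  return Array.from(q.childNodes)
    .filter((n) => n.nodeType === Node.TEXT_NODE)
    .map((n) => n.textContent ?? "")
    .join("");
}

// By contrast, q.textContent returns the full text including children.
// Any tags within quotes whose content genuinely should be excluded would
// need explicit handling either way.
```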

Week Beginning 23rd January 2023

I spent much of the week working on the Books and Borrowing project, working with the new Solr instance that the Stirling IT people set up for me last week.  I spent some time creating a new version of the API that connects to Solr and then setting up the Solr queries necessary to search all fields of the Solr index for a regular quick search and the ‘borrowed’ fields for date searches.  This included returning the data necessary to provide the faceted search options on the search results page (i.e. filters for the search results).  I also set up a new development version of the front-end, leaving my existing pre-Solr version in place in case things go wrong and I need to revert to it.

As with the previous version of the site, you can perform a quick search, which searches the numerous fields that are specified in the requirements document.  Dates can either be a single date or a range.  Text searches can use the wildcard ‘*’ to match any number of characters (e.g. tes* will match all words beginning ‘tes’) and ‘?’ to match a single character (e.g. h?ll matching ‘hill’ and ‘hell’).

Currently the results still display the full records with 100 records per page.  I did consider changing this to a more compact view with a link to open the full record, but I haven’t implemented this as of yet.  I might add an option to switch between a ‘compact’ and ‘full’ view of the records instead, as I think having to click through to the full record each time you’re interested would get a bit annoying.

There have been a lot of changes to the back end, even if the front-end doesn’t look that different.  Behind the scenes the API now connects to the Solr instance and queries are formatted for and passed to Solr, which then returns the data.  Solr is very fast, but loading 100 full records does still take some time.  Queries to Solr currently return the 100 relevant borrowing IDs and then another API call retrieves all of the data for these 100 IDs from the regular database.  A compact view could potentially rely solely on data stored in Solr, which would be a lot quicker to load, if we want to pursue that option.
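
A sketch of that two-step retrieval, with invented endpoint names (the real API paths differ):

```typescript
// Invented endpoints for illustration.
const API = "https://example.org/api";

async function fetchResultsPage(query: string, page: number): Promise<unknown[]> {
  // Step 1: the Solr-backed endpoint returns just the matching borrowing IDs
  // for this page of results (fast, small payload).
  const search = await fetch(
    `${API}/search?q=${encodeURIComponent(query)}&page=${page}`
  ).then((r) => r.json());
  const ids: number[] = search.borrowingIds;

  // Step 2: pull the full records for those 100 IDs from the relational
  // database (slower, but bounded at one page of results).
  return fetch(`${API}/borrowings?ids=${ids.join(",")}`).then((r) => r.json());
}
```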

In addition to returning the IDs for the 100 borrowing records that are to be displayed on any one results page, Solr also returns the total number of matching borrowing records plus the faceted search information.  For the moment the following information is included in the faceted data: borrowing year, library name, borrower gender and occupation, author name, and book language, place of publication and format.  These appear as ‘Filter’ options down the left-hand side of the results page, currently as a series of checkboxes, each with the name of the item in question and the number of matching results the item is found in.  Clicking a checkbox applies the filter and the results page reloads immediately, narrowing both the results and the filters to display only those that continue to match.  You can then click on other checkboxes to narrow things further, or deselect a checkbox to return to the non-filtered view.
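
The facet counts come back from Solr alongside the result IDs; a sketch of the request (the core and field names are placeholders, not the project’s actual schema):

```typescript
// Placeholder core and field names for illustration.
async function facetedSearch(q: string): Promise<unknown> {
  const params = new URLSearchParams({
    q,
    rows: "100",
    facet: "true",
    "facet.mincount": "1", // only return facet values that actually occur
  });
  for (const field of [
    "borrowed_year",
    "library",
    "borrower_gender",
    "occupation_str",
    "author_str",
  ]) {
    params.append("facet.field", field);
  }
  const res = await fetch(`http://localhost:8983/solr/bnb/select?${params}`);
  return res.json(); // includes facet_counts alongside the matching documents
}
```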

I think the filters are going to be hugely useful, but they’re not perfect yet.  There are issues with the data for occupations and authors.  This is because the data has been stemmed by Solr for search purposes, meaning the field is broken down into individual word stems (e.g. ‘educ’ for ‘education’).  I will fix this but it will require me to regenerate the data and get the Stirling IT people to replace the existing data.  I’ve also noticed that data from test libraries is in Solr too and I’ll need to ensure this gets removed.

With all of this in place I then moved on to providing different sorting options for the search results, for example ordering the results by borrowed date, library or author name.  This required some tweaking of the Solr queries and the API and then some updates to the front-end to ensure the selected sorting option is dealt with and remembered.  However, I did come across a limitation in Solr, in that it is not possible for Solr to order data by fields that contain multiple values.  This means that for now sorting by things like author name and borrower occupation won’t work as each of these can contain multiple values per record.  I’ll therefore have to make concatenated versions of these fields for sort purposes and will do this when I regenerate the data.

This initial version of the faceted search results page displayed years in the same way as other search filters: as a series of checkboxes, year labels and counts of results in each year.  What I really wanted to do was to display this as a bar chart instead, using the HighCharts library that I use for other visualisations in the front-end.  I wanted to group years into decades where the range of years is greater than a decade and enable the user to press on a decade bar to view the results for individual years within that decade, with the bar chart then displaying the individual years.  I managed to get the ‘by decade’ bar chart working this week.  You can hover over a bar to view the exact total for the decade and you can also click on a decade to filter the search to that decade.  This is the bit I’m still working on: currently no bar chart is displayed after clicking and you need to use your browser’s ‘back’ button to return, but the filter does actually work.  Eventually a bar chart with borrowings for each year in the decade will be displayed, together with a button for returning back.  In this view you will be able to click on a year bar to filter the results to the selected year.  I’ll continue with this next week, and the ‘Year borrowed’ checkboxes will be removed once the bar chart is fully working.  Below is a screenshot of the new front-end with faceted searching and the ‘year borrowed’ bar chart.  It took quite a while to get the bar chart working as there was a lot of logic to work out in order for borrowings to be grouped into decades and to accommodate gaps in the data (e.g. if there is no data for a decade we still need that decade to be displayed, otherwise the graph looks odd).
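
A rough sketch of that decade grouping, including the gap-filling (the data shape here is invented; the real code works on the counts Solr returns):

```typescript
// Group a list of borrowing years into decade buckets, seeding every decade
// in the range with zero so that decades with no data still get a bar and
// the x-axis stays continuous.
function decadeCounts(years: number[]): Map<number, number> {
  const counts = new Map<number, number>();
  if (years.length === 0) return counts;
  const first = Math.floor(Math.min(...years) / 10) * 10;
  const last = Math.floor(Math.max(...years) / 10) * 10;
  for (let d = first; d <= last; d += 10) counts.set(d, 0);
  for (const y of years) {
    const d = Math.floor(y / 10) * 10;
    counts.set(d, (counts.get(d) ?? 0) + 1);
  }
  return counts;
}

console.log(decadeCounts([1765, 1768, 1781, 1789, 1789]));
// Map { 1760 => 2, 1770 => 0, 1780 => 3 }
```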

Also for the Books and Borrowing project this week I had a Zoom call with Katie and Matt to discuss genre classification (which has now been decided upon) and batch editing the book edition records to fix duplicates and auto-generate book work records for any editions that need them.  I also sent on some data exports containing the distinct book formats and places of publication that are in the system, as they will need some editorial work as well.  I also responded to a few questions from one of the project RAs, who wanted some queries run on the data for a library he has worked on.

Also this week I created an initial WordPress site for the VARICS project after the domain was set up by Russell McInnes, an IT guy from the College of Engineering who is helping out with Arts IT Support due to their staffing issues.  Russell has been hugely helpful and it’s such a relief to have someone to work with again.  I also spoke to Marc Alexander about some financial issues relating to a number of projects I’m involved with, about equipment that I might need in the coming year and about conferences I’d like to attend.  I also made a tweak to the ‘email entry’ feature I’d changed on the DSL website last week.  Next week I’ll be continuing to work on the B&B front-end.

Week Beginning 3rd October 2022

The Speak For Yersel project launched this week and is now available to use here: https://speakforyersel.ac.uk/.  It’s been a pretty intense project to work on and has required much more work than I’d expected, but I’m very happy with the end result.  We didn’t get as much media attention as we were hoping for, but social media worked out very well for the project and in the space of a week we’d had more than 5,000 registered users completing thousands of survey questions.  I spent some time this week tweaking things after the launch.  For example, I hadn’t added the metadata tags required by Twitter and Facebook / WhatsApp to nicely format links to the website (for example the information detailed here https://developers.facebook.com/docs/sharing/webmasters/) and it took a bit of time to add these in with the correct content.

I also gave some advice to Anja Kuschmann at Strathclyde about applying for a domain for the new VARICS project I’m involved with and investigated a replacement batch of videos that Eleanor had created for the Seeing Speech website.  I’ll need to wait until she gets back to me with files that match the filenames used on the existing site before I can take this further, though.  I also fixed an issue with the Berwickshire place-names website, which had lost its additional CSS, and investigated a problem with the domain for the Uist Saints website, which unfortunately has still not been resolved.

Other than these tasks I spent the rest of the week continuing to develop the front-end for the Books and Borrowing project.  I completed an initial version of the ‘page’ view, including all three views (image, text and image and text).  I added in a ‘jump to page’ feature, allowing you (as you might expect) to jump directly to any page in the register when viewing a page.  I also completed the ‘text’ view of the page, which now features all of the publicly accessible data relating to the records – borrowing records, borrowers, book holding and item records and any associated book editions and book works, plus associated authors.  There’s an awful lot of data and it took quite a lot of time to think about how best to lay it all out (especially taking into consideration screens of different sizes), but I’m pretty happy with how this first version looks.

Currently the first thing you see for a record is the transcribed text, which is big and green.  Then all fields relating to the borrowing appear under this.  The record number as it appears on the page plus the record’s unique ID are displayed in the top right for reference (and citation).  Then follows a section about the borrower, with the borrower’s name in green (I’ve used this green to make all of the most important bits of text stand out from the rest of the record but the colour may be changed in future).  Then follows the information about the book holding and any specific volumes that were borrowed.  If there is an associated site-wide book edition record (or records) these appear in a dark grey box, together with any associated book work record (although there aren’t many of these associations yet).  If there is a link to a library record this appears as a button on the right of the record.  Similarly, if there’s an ESTC and / or other authority link for the edition these appear to the right of the edition section.

Authors now cascade down through the data as we initially planned.  If there’s an author associated with a work it is automatically associated with and displayed alongside the edition and holding.  If there’s an author associated with an edition but not a work it is then associated with the holding.  If a book at a specific level has an author specified then this replaces any cascading author from this point downwards in the sequence.  One thing that isn’t in place yet is the links from information to search results, as I haven’t developed the search yet.  Eventually things like borrower name, author and book title will be links allowing you to search directly for those items.
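
The cascade rule itself is simple enough to sketch (types invented for illustration; the real data model is richer):

```typescript
// Invented shapes for illustration.
interface BookLevel {
  authors?: string[];
}

// The most specific level that defines its own authors wins; otherwise
// authors cascade down from the edition, and failing that from the work.
function effectiveAuthors(
  work?: BookLevel,
  edition?: BookLevel,
  holding?: BookLevel
): string[] {
  return holding?.authors ?? edition?.authors ?? work?.authors ?? [];
}
```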

One other thing I’ve added in is the option to highlight a record.  Press anywhere in a record and it is highlighted in yellow.  Press again to reset it.  This can be quite useful when you’re scrolling through a page with lots of records on it and there are certain records you’re interested in.  You can highlight as many records as you want.  It’s possible that we may add other functionality to this, e.g. the option to download the data for selected records.  Here’s a screenshot of the text view of the page:

I also completed the ‘image and text’ view.  This works best on a large screen (i.e. not a mobile phone, although it is just about possible to use it on one, as I did test this out).  The image takes up about 60% of the screen width and the text takes up the remaining 40%.  The height of the records section is fixed to the height of the image area and is scrollable, so you can scroll down the records whilst still viewing the image (rather than the whole page scrolling and the image disappearing off the screen).  I think this view works really well: the records are still perfectly usable in the more confined area and it’s great to be able to compare the image and the text side by side.  Here’s a screenshot of the same page when viewing both text and image:

I tested the new interface out with registers from all of our available libraries and everything is looking good to me.  Some registers don’t have images yet, so I added in a check to ensure that the image views and page thumbnails don’t appear for such registers.

After that I moved on to developing the interface for browsing book holdings when viewing a library.  I created an API endpoint for returning all of the data associated with the holding records for a specified library.  This includes all of the book holding data, information about each of the book items associated with the holding record (including the number of borrowing records for each), the total number of borrowing records for the holding, any associated book edition and book work records (and there may be multiple editions associated with each holding) plus any authors associated with the book.  Authors cascade down through the record as they do when viewing borrowing records in the page.  This is a gigantic amount of information, especially as libraries may have many thousands of book holding records.  The API call loads pretty rapidly for smaller libraries (e.g. Chambers Library with 961 book holding records) but for larger ones (e.g. St Andrews with over 8,500 book holding records) the call takes too long to return the data (in the latter case it takes about a minute and returns a JSON file that’s over 6Mb in size).  The problem is that the data needs to be returned in full in order to do things like order it by largest number of borrowings.  Clearly dynamically generating the data each time is going to be too slow, so instead I am going to investigate caching the data.  For example, that 6Mb JSON file can just sit there as an actual file rather than being generated each time.  I will write a script to regenerate the cached files whenever data gets updated (or maybe once a week whilst the project is still active).  I’ll continue to work on this next week.
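
The caching plan might look something like this (the file paths are placeholders, and the generation function here is a stub standing in for the slow full query):

```typescript
import { readFile, writeFile } from "node:fs/promises";

// Stub standing in for the slow API query that assembles the full
// holdings JSON for a library.
async function buildHoldingsJson(libraryId: number): Promise<unknown> {
  return {}; // placeholder
}

// Regenerate the cached file; run whenever the data is edited, or on a
// weekly schedule while the project is still active.
async function regenerateHoldingsCache(libraryId: number): Promise<void> {
  const data = await buildHoldingsJson(libraryId);
  await writeFile(`cache/holdings-${libraryId}.json`, JSON.stringify(data));
}

// Serve requests straight from the cached file instead of rebuilding the
// JSON on every call.
async function getHoldings(libraryId: number): Promise<unknown> {
  const raw = await readFile(`cache/holdings-${libraryId}.json`, "utf8");
  return JSON.parse(raw);
}
```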

Week Beginning 12th September 2022

I spent a bit of time this week going through my notes from the Digital Humanities Congress last week and writing last week’s lengthy post.  I also had my PDR session on Friday and I needed to spend some time preparing for this, writing all of the necessary text and then attending the session.  It was all very positive and it was a good opportunity to talk to my line manager about my role.  I’ve been in this job for ten years this month and have been writing these blog posts every working week for those ten years, which I think is quite an achievement.

In terms of actual work on projects, it was rather a bitty week, with my time spread across lots of different projects.  On Monday I had a Zoom call for the VARICS project, a phonetics project in collaboration with Strathclyde that I’m involved with.  The project is just starting up and this was the first time the team had all met.  We mainly discussed setting up a web presence for the project and I gave some advice on how we could set up the website, the URL and such things.  In the coming weeks I’ll probably get something set up for the project.

I then moved on to another Burns-related mini-project that I worked on with Kirsteen McCue many months ago – a digital edition of Koželuch’s settings of Robert Burns’s Songs for George Thomson.  We’re almost ready to launch this now, and this week I created a page for an introductory essay and migrated a Word document into WordPress to fill it, adding in links and tweaking the layout to ensure things like quotes displayed properly.  There are still some further tweaks that I’ll need to implement next week, but we’re nearly there.

I also spent some time tweaking the Speak For Yersel website, which is now publicly accessible (https://speakforyersel.ac.uk/) but still not quite finished.  I created a page for a video tour of the resource and made a few tweaks to the layout, such as checking the consistency of font sizes used throughout the site.  I also made some updates to the site text and added some lengthy static content in the form of a teachers’ FAQ and a ‘more information’ page.  I also changed the order of some of the buttons shown after a survey is completed, to hopefully make it clearer that other surveys are available.

I also did a bit of work for the Speech Star project.  There had been some issues with the Central Scottish Phonetic Features MP4s playing audio only on some operating systems, and the replacements that Eleanor had generated worked for her but not for me.  I therefore tried uploading them to and re-downloading them from YouTube, which thankfully seemed to fix the issue for everyone.  I then made some tweaks to the interfaces of the two project websites.  For the public site I made some updates to ensure the interface looks better on narrow screens, changing the appearance of the ‘menu’ button and making the logo and site header font smaller so they take up less space.  I also added an introductory video to the homepage.

For the Books and Borrowing project I processed the images for another library register.  This didn’t go entirely smoothly.  I had been sent 73 images, all of which were upside down and needed rotating.  It then transpired that I should have been sent 273 images, so I needed to chase up the missing ones.  Once I’d been sent the full set I was able to generate the page images for the register, upload them and associate them with the records.

I then moved on to setting up the front-end for the Ayr Place-names website.  In the process of doing so I became aware that one of the NLS map layers that all of our place-name projects use had stopped working.  It turned out that the NLS had migrated this map layer to a third-party map tile service (https://www.maptiler.com/nls/) and the old URLs these sites were still using no longer worked.  I had a very helpful chat with Chris Fleet at NLS Maps about this and he explained the situation.  I was able to set up a free account with the MapTiler service and update the URLs in the four place-names websites that referenced the layer (https://berwickshire-placenames.glasgow.ac.uk/, https://kcb-placenames.glasgow.ac.uk/, https://ayr-placenames.glasgow.ac.uk and https://comparative-kingship.glasgow.ac.uk/scotland/).  I’ll need to ensure this is also done for the two further place-names projects that are still in development (https://mull-ulva-placenames.glasgow.ac.uk and https://iona-placenames.glasgow.ac.uk/).

I managed to complete the work on the front-end for the Ayr project, which was mostly straightforward as it was just adapting what I’d previously developed for other projects.  The thing that took the longest was getting the parish data and the locations where the parish three-letter acronyms should appear, but I was able to get this working thanks to the notes I’d made the last time I needed to deal with parish boundaries (as documented here: https://digital-humanities.glasgow.ac.uk/2021-07-05/).  After discussions with Thomas Clancy about the front-end I decided that it would be a good idea to redevelop the map-based interface to display all of the data on the map by default and to incorporate all of the search and browse options within the map itself.  This would be a big change, and it’s one I had been thinking of implementing anyway for the Iona project, but I’ll try and find some time to work on this for all of the place-name sites over the coming months.

Finally, I had a chat with Kirsteen McCue and Luca Guariento about the BOSLIT project.  This project is taking the existing data for the Bibliography of Scottish Literature in Translation (available on the NLS website here: https://data.nls.uk/data/metadata-collections/boslit/) and creating a new resource from it, including visualisations.  I offered to help out with this and will be meeting with Luca to discuss things further, probably next week.