Week Beginning 11th January 2021

This was my first full week back of the year, although it was also the first week of a return to homeschooling, which made working a little trickier than usual. I also had a dentist’s appointment on Tuesday and lost some time to that, as my dentist is near the University rather than where I live. However, despite these challenges I was able to achieve quite a lot this week. I had two Zoom calls: the first, on Monday, was to discuss a new ESRC grant that Jane Stuart-Smith is putting together with colleagues at Strathclyde, while the second, on Wednesday, was with a partner in Joanna Kopaczyk’s new RSE-funded project about Scots Language Policy to discuss the project’s website and the survey they’re going to put out. I also made a few tweaks to the DSL website, replied to Kirsteen McCue about the AHRC proposal she’s currently putting together, replied to a query regarding the technologies behind the Scots Syntax Atlas, made a few further updates to the Burns Supper map and replied to a query from Rachel Fletcher in English Language about lemmatising Old English.

Other than these various tasks I split my time between the Anglo-Norman Dictionary and the Books and Borrowing projects. For the former I completed adding explanatory notes to all of the ‘Introducing the AND’ pages. This was a very time-consuming task, as there were probably about 150 explanatory notes in total to add, each appearing in a Bootstrap dialog box, and each requiring me to copy the note from the old website, add in any required HTML formatting, and find and check all of the links to AND entries on the old site and add these in as required. It was pretty tedious to do, but it feels great to have it done, as the notes were previously just giving 404 errors on the new live site, and I don’t like having such things on a site I’m responsible for. I also migrated the academic articles from the old site to the new one (https://anglo-norman.net/articles/), which also required some manual formatting of the content. There are five other articles that I haven’t managed to migrate yet as they are full of character encoding errors on the old site. Geert is looking for copies of these articles that actually work and I’ll add them in once he’s able to get them to me. I also began migrating the blog posts to the new site. Currently the blog is hosted on Blogspot and there are 55 entries, but we’d like these to be an internal part of the new site. Migrating these is going to take some time as it means copying the text (which thankfully retains its formatting) and then manually saving and embedding any images in the posts. I’m just going to do a few of these a week until they’re all done, and so far I’ve migrated seven. I also needed to look into how the blog page works in the WordPress theme I created for the AND, as to start with the page was just listing the full text of every post rather than giving summaries and links through to the full text of each. After some investigation I figured out that in my theme there is a script called ‘home.php’ that is responsible for displaying all of the blog posts on the ‘blog’ page. It in turn calls another template called ‘content-blog.php’, which was previously set to display the full content of each post. Instead I set it to display the title as a link through to the full post, the date, and then an excerpt of the post, which can be accessed through a handy WordPress function called ‘the_excerpt()’.
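As a rough illustration, here is a minimal sketch of what an updated ‘content-blog.php’ along these lines might look like, assuming it is called from ‘home.php’ inside the standard WordPress loop (i.e. after the_post() has run for the current post); the markup in the actual theme will differ.

<article id="post-<?php the_ID(); ?>">
    <h2>
        <?php // The title now links through to the full post ?>
        <a href="<?php the_permalink(); ?>"><?php the_title(); ?></a>
    </h2>
    <p class="post-date"><?php echo get_the_date(); ?></p>
    <?php the_excerpt(); // prints a short excerpt instead of the_content() ?>
</article>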

For the Books and Borrowing project I made some improvements and fixes to the Content Management System. I’d been meaning to enhance the CMS for some time, but due to commitments to other projects I hadn’t had the time to delve into it. It felt good to find the time to return to the project this week.

I updated the ‘Books’ and ‘Borrowers’ tabs when viewing a library in the CMS. I added in pagination to speed up the loading of the pages. Pages are now split into blocks of 500 records and you can navigate between them using the links above and below the tables. For some reason the loading of the page is still a bit slow on the Stirling server, whereas it was fine on the Glasgow server I was using for test purposes. I’m not entirely sure why, as I’d copied the database over too – presumably the Stirling server is simply slower. However, it is still a massive improvement on the previous speed of the page.
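For illustration, the pagination boils down to a standard LIMIT/OFFSET query along the following lines; the table and column names here are invented rather than taken from the actual CMS, and $pdo and $libraryId are assumed to be an existing PDO connection and the ID of the library being viewed.

<?php
// Hypothetical sketch: fetch one 500-record block of book holdings for a library
$perPage = 500;
$page    = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$offset  = ($page - 1) * $perPage;

$stmt = $pdo->prepare('SELECT * FROM book_holdings WHERE library_id = :lid ORDER BY title LIMIT :limit OFFSET :offset');
$stmt->bindValue(':lid', $libraryId, PDO::PARAM_INT);
$stmt->bindValue(':limit', $perPage, PDO::PARAM_INT);
$stmt->bindValue(':offset', $offset, PDO::PARAM_INT);
$stmt->execute();
$holdings = $stmt->fetchAll(PDO::FETCH_ASSOC);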

I also changed the way tables scroll horizontally.  Previously if a table was wider than the page a scrollbar appeared above and below the table, but this was rather awkward to use if you were looking at the middle of the table (you had to scroll up or down to the beginning or end of the table, then use the horizontal scrollbar to move the table along a bit, then navigate back to the section of the page you were interested in).  Now the scrollbar just appears at the bottom of the browser window and can always be accessed no matter where in the table you are.

I also removed the editorial notes from tables by default to reduce clutter, and added in a button for showing / hiding the editors’ notes near the top of each page. I also added an option to the ‘Books’ and ‘Borrowers’ pages within a library to limit the displayed records to only those found in a specific ledger, along with a further option to display those records that are not currently associated with any ledger.

I then deleted the ‘original borrowed date’ and ‘original returned date’ fields from the St Andrews data as these were no longer required, removing both the additional fields themselves and all of the data they contained.

It had been noted that the book part numbers were not being listed numerically. As part numbers can contain text as well as numbers (e.g. ‘Vol. II’), this field in the database needed to be set as text rather than an integer. Unfortunately the database orders the contents of a text field as strings rather than numbers – all the ones come first (1, 10, 11), then all the twos (2, 20, 22) and so on. However, I managed to find a way to ensure that the numbers are ordered correctly.
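One common way of doing this in MySQL, which may or may not be exactly what the CMS now uses, is to order on a numeric cast of the field first and on the raw text second, so that ‘2’ sorts before ‘10’ while purely textual part numbers keep a stable order; the table and column names below are invented for illustration.

<?php
// Hypothetical sketch of numeric-style ordering on a text column
$sql = 'SELECT part_no, title
        FROM book_items
        WHERE holding_id = :hid
        ORDER BY CAST(part_no AS UNSIGNED), part_no';
$stmt = $pdo->prepare($sql);
$stmt->execute([':hid' => $holdingId]);
$items = $stmt->fetchAll(PDO::FETCH_ASSOC);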

I also fixed the ‘Add another Edition/Work to this holding’ button that was not working. This was caused by the Stirling server running an older version of PHP that doesn’t support the syntax I’d used to give a function a variable number of arguments. The autocomplete was also not working at edition level, so I investigated this. The issue was being caused by tab characters appearing in edition titles, and I updated my script to ensure these characters are stripped out before the data is formatted as JSON.
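The tab fix is essentially a one-line clean-up before the JSON is produced; a sketch along these lines (with illustrative variable names, and assuming the output is built with json_encode) shows the idea.

<?php
// Strip tabs and other stray whitespace characters from edition titles
// before they are encoded for the autocomplete
foreach ($editions as &$edition) {
    $edition['title'] = str_replace(["\t", "\r", "\n"], ' ', $edition['title']);
}
unset($edition);

header('Content-Type: application/json');
echo json_encode($editions);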

There may be further tweaks to be made – I’ll need to hear back from the rest of the team before I know more, but for now I’m up to date with the project. Next week I intend to get back into some of the larger and trickier outstanding AND tasks (of which there are, alas, many) and to begin working towards adding the DSL bibliography data into the new version of the API.

Week Beginning 4th January 2021

This was my first week back after the Christmas holidays, and I only worked the Thursday and the Friday.  We’re back in full lockdown and homeschooling again now, so it’s not the best of starts to the new year.  I spent my two days this week catching up with emails and finishing off some outstanding tasks from last year.  I spoke to Joanna Kopaczyk about her new RSE funded project that I need to set up a website for, and I had a chat with the DSL people about the outstanding tasks that still need to be tackled for the Dictionary of the Scots Language.  I also added a few more Burns Suppers to the Supper Map that I created over the past year for Paul Malgrati in Scottish Literature, which was a little time consuming as the data is contained in a spreadsheet featuring more than 70 columns.

I spent the remainder of the week continuing to work on the new Anglo-Norman Dictionary site, which we launched just before Christmas.  The editors, Geert and Heather, had spotted some issues with the site whilst using it so I had a few more things to add to my ‘to do’ list, some of which I ticked off.  One such thing was that entries with headwords that consisted of multiple words weren’t loading.  This required an update to the way the API handles variables passed in URL strings, and after I implemented that such entries then loaded successfully.

A bigger issue was the fact that some citations were not appearing in the entries. This took some time to investigate but I eventually tracked down the problem. I’d needed to write a script that reordered all of the citations in every sense in every entry by date, as previously the citations were not in date order. However, when looking at the entries that had missing citations it appeared that where a sense has more than one citation in the same year only one of these citations was appearing. This is because within each sense I was placing the citations in an array with the year as the key, e.g.:

$citation[“1134”] = citation 1

$citation[“1362”] = citation 2

$citation[“1247”] = citation 3

I was then reordering the array based on the key to get things in year order.  But where there were multiple citations in a single year for a sense this approach wasn’t working as the array key needs to be unique.  So if there were two ‘1134’ citations only one was being retained.  To fix this I updated the reordering script to add a further incrementing number to the key, so if there are two ‘1134’ citations the key for the first is ‘1134-1’ and the second is ‘1134-2’.  This ensures all citations for a year are retained and the sorting by key still works.  After implementing the fix and rerunning the citation ordering script I updated the XML in the online database and the missing citations are now thankfully appearing online.
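A minimal sketch of this approach is shown below; it zero-pads the counter so that a plain string sort on the keys keeps citations in year order even when a year has many citations. The data structures are simplified, so this is an illustration of the technique rather than the script itself.

<?php
$ordered = [];
$counts  = [];
foreach ($citations as $citation) {
    $year = $citation['year'];                          // e.g. '1134'
    $counts[$year] = isset($counts[$year]) ? $counts[$year] + 1 : 1;
    $key = sprintf('%s-%03d', $year, $counts[$year]);   // '1134-001', '1134-002', ...
    $ordered[$key] = $citation;                         // nothing gets overwritten now
}
ksort($ordered, SORT_STRING);                           // sorts by year, then by counter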

I ended the week by continuing to work through the ancillary pages of the dictionary, focusing on the ‘Introducing the AND’ pages (https://anglo-norman.net/introducing-the-and/).  I’d managed to get the main content of the pages in place before Christmas, but explanatory notes and links were not working.  There are about 50 explanatory notes in the ‘Magna Carta’ page and I needed to copy all of these from the old site and add them to a Bootstrap dialog pop-up, which was rather time-consuming.  I also had to update the links through to the dictionary entries as although I’d added redirects to ensure the old links worked, some of the links in these pages didn’t feature an entry number where one was required.  For example on the page about food there was a link to ‘pere’ but the dictionary contains three ‘pere’ entries and the correct one is actually the third (the fruit pear).  I still need to fix links and explanatory notes in the two remaining pages of the introduction, which I will try to get sorted next week.

Week Beginning 14th December 2020

This was my last week before the Christmas holidays, and it was a four-day week as I’d taken Friday off to use up some unspent holidays.  Despite only being four days long it was a very hectic week, as I had lots of loose ends to tie up before the launch of the new Anglo-Norman Dictionary website on Wednesday.  This included tweaking the appearance of ‘Edgloss’ tags to ensure they always have brackets (even if they don’t in the XML), updating the forms to add line breaks between parts of speech and updating the source texts pop-ups and source texts page to move the information about the DEAF website.

I also added in a lot of the ancillary page data, including the help text, various essays, the ‘history’ page, copyright and privacy pages, the memorial lectures and the multi-section ‘introduction to the AND’. I didn’t quite manage to get all of the links working in the latter and I’ll need to return to this next year. I also overhauled the homepage and footer, adding in the project’s Twitter feed and a new introduction, and adding links to Twitter and Facebook to the footer.

I also identified and fixed an error with the label translations, which were sometimes displaying the wrong translation.  My script that extracted the labels was failing to grab the sense ID for subsenses.  This ID is only used to pull out the appropriate translation, but because of the failure the ID of the last main sense was being used instead.  I therefore had to update my script and regenerate the translation data.  I also updated the label search to add in citations as well as translations.  This means the search results page can get very long as both labels and translations are applied at sense level, so we end up with every citation in a matching sense listed, but apparently this is what’s wanted.

I also fixed the display of ‘YBB’ sources, which for some unknown reason are handled differently from all other sources in the system, and fixed the issue with deviant forms and their references and parts of speech.

On Wednesday we made the site live, replacing the old site with the new one, which you can now access here: https://anglo-norman.net/. It wasn’t entirely straightforward to get the DNS update working, but we got there in the end, and after making some tweaks to paths and adding in Google Analytics the site was ready to use, which is quite a relief. There is still a lot of work to do on the site, but I’m very happy with the progress I’ve made since I began the redevelopment in October.

Also this week I set up a new website for phase two of the ‘Editing Burns for the 21st Century’ project and upgraded all of the WordPress sites I manage to the most recent version.  I also arranged a meeting with Jane Stuart-Smith to discuss a new project in the New Year, replied to Kirsteen McCue about a proposal she’s finishing off, replied to Simon Taylor about a new place-name project he wants me to be involved with and replied to Carolyn Jess-Cooke about a project of hers that will be starting next year.

That’s all for 2020.  Here’s hoping 2021 is not going to be quite so crazy!

Week Beginning 9th November 2020

I took Friday off this week as I had a dentist’s appointment across town in the West End and I decided to take the opportunity to do some Christmas shopping whilst all the shops in Glasgow are still open (there’s some talk of us having greater Covid restrictions imposed in the next week or so). I spent a couple of days this week working on the Dictionary of the Scots Language, a project I’ve been meaning to return to for many months but have been too busy with other work to really focus on. Thankfully, with the launch of the second edition of the Historical Thesaurus out of the way, I have a bit of time in November to get back into the outstanding DSL issues.

Rhona Alcorn had sent a list of outstanding tasks a while back and I spent some time going through this and commenting on each item. I then began to work through each item, starting with fixing cross references in our ‘V3’ test site (which features data that the editors have been working on in recent years). Cross references appear differently in the XML for this version so I needed to update the XSLT in order to make them work correctly. I then updated the full-text extraction script that prepares data for inclusion in the Solr search engine. Previously this was stripping out all of the XML tags in order to leave the plain text, but unfortunately there were occasions where an entry contained words separated by tags but no spaces, meaning that when the tags were removed the words ended up joined together. I fixed this by adding a space character before every XML tag before the tags were stripped out. This resulted in plain text that often contained multiple spaces between words, but thankfully Solr ignores these when it indexes the text. I asked Raymond of Arts IT Support to upload the new text to the server, tested things out, and all worked perfectly.
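The fix itself is very small; in PHP terms it amounts to something like the following sketch (variable names are illustrative).

<?php
// Insert a space before every tag so that words separated only by markup
// don't run together once the tags are stripped; Solr collapses the
// resulting double spaces when it indexes the text
$withSpaces = str_replace('<', ' <', $entryXml);
$plainText  = strip_tags($withSpaces);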

After this I moved on to creating a new ordering for the ‘browse’ feature.  This new ordering takes into consideration parts of speech and ensures that supplemental entries appear below main entries.  It also correctly positions entries beginning with a yogh.  I’d created a script to generate the new browse order many months ago, so I could just tweak this and then use it to update the database.  After that I needed to make some updates to the V2 and V3 front-ends to use the new ordering fields, which took a little time, but it seems to have worked successfully.  I may need to tweak the ordering further, but will await feedback before I make any changes.

I then moved on to investigating searches for accented characters, which were apparently not working correctly. I noticed that the htaccess script was not set up to accept accented characters so I updated this. However, the advanced headword search itself was finding forms with accented characters in them if the non-accented version was passed. The ‘privace’ example was redirecting to the entry page as only one result was matched, but if you perform a search for ‘*vace’ it finds and displays the accented headword in both V2 and V3 but not the live site. Therefore I think this issue is now sorted. However, we should perhaps strip out accents from any submitted search terms, as allowing accented characters to be submitted (e.g. for *vacé) gives the impression that accented characters can be searched for distinctly from their unaccented versions, and results that include both accented and unaccented forms might confuse people.
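If we do go down that route, one simple way of folding accents before the term reaches the query would be a character map along the following lines; the map here is deliberately partial and purely illustrative.

<?php
// Hypothetical sketch: fold accented characters in the submitted search term
$map = [
    'á' => 'a', 'à' => 'a', 'â' => 'a',
    'é' => 'e', 'è' => 'e', 'ê' => 'e', 'ë' => 'e',
    'í' => 'i', 'î' => 'i', 'ó' => 'o', 'ô' => 'o',
    'ú' => 'u', 'û' => 'u', 'ç' => 'c',
];
$term = strtr($searchTerm, $map);   // '*vacé' becomes '*vace'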

The last DSL issue I looked at involved hiding superscript characters in certain circumstances (after ‘geo’ tags in ‘cref’ tags).  There are 3093 SND entries that include the text ‘</geo><su>’ or ‘</geo> <su>’ and I updated the XSLT file that transforms the XML into HTML to deal with these.  Previously it transformed the <su> tag into the HTML superscript tag <sup>.  I’ve updated it so that it now checks to see what the tag’s preceding sibling is.  If it’s a <geo> tag it now adds the class ‘noSup’ to the generated <sup>.  Currently I’ve set <sup> elements with this class to have a pink background so the editors can check to see how the match is performing, and once they’re happy with it I can update the CSS to hide ‘noSup’ elements.

Other than DSL work I also spent some time continuing to work on the redevelopment of the Anglo-Norman Dictionary and completed an initial version of the label search that I began working on last week. The search form as discussed last week hasn’t changed, but it’s now possible to submit the search, navigate through the search results, return to the search form to make changes to your selection and view entries. I needed to overhaul how the search page works to accommodate the label search, which required some pretty major changes behind the scenes, but hopefully none of the other searches will have been affected by this. You can select a single label and search for that, e.g. ‘archit.’, and if you then refine your search you will see that the label is ‘remembered’ in the form so you can add to it or remove it, for example if you’re interested in all of the entries that are labelled ‘archit.’ and ‘mil’. As mentioned last week, adding or changing a citation year resets the boxes, as different labels are displayed depending on the years chosen. The chosen year is remembered by the form if you choose to refine your search, and the selected labels and Booleans are pulled in alongside the remembered year. So for example if you want to find entries that feature a sense labelled ‘agricultural’ or ‘bot.’ that have a citation between 1400 and 1410 you can do this. On the entry page both semantic and usage labels are now links that lead through to the search results for the label in question. I’ve currently given both label types a somewhat garish pink colour, but this can be changed, or we could use two different colours for the two types.

Other than these projects, I fixed an issue with the 18th century Glasgow borrowers site (https://18c-borrowing.glasgow.ac.uk/) and made some tweaks to the place-names of Iona site, fixing the banner and creating Gaelic versions of the pages and menu items.  The site is not live yet, but I’m pretty happy with how it’s looking.  Here’s an image of the banner I created:

Also this week I spoke to Kirsteen McCue about the project she’s currently preparing a proposal for and I created a new version of the Burns Suppers map for Paul Malgrati.  This was rather tricky as his data is contained in a spreadsheet that has more than 2,500 rows and more than 90 columns, and it took some time to process this in a way that worked, especially as some fields contained carriage returns which resulted in lines being split where they shouldn’t be when the data was exported.  However, I got there in the end, and next week I hope to develop the filters for the data.

Week Beginning 7th September 2020

This was a pretty busy week, involving lots of different projects. I set up the systems for a new place-name project focusing on Ayrshire this week, based on the system that I initially developed for the Berwickshire project and that has subsequently been used for Kirkcudbrightshire and Mull. It didn’t take too long to port the system over, but the PI also wanted the system to be populated with data from the GB1900 crowdsourcing project. This project has transcribed every place-name on the GB1900 Ordnance Survey maps across the whole of the UK and is an amazing collection of data totalling some 2.5 million names. I had previously extracted a subset of names for the Mull and Ulva project so thankfully had all of the scripts needed to get the information for Ayrshire. Unfortunately what I didn’t have was the data in a database, as I’d previously extracted it to my PC at work. This meant that I had to run the extraction script again on my home PC, which took about three days to work through all of the rows in the monstrous CSV file. Once this was complete I could then extract the names found in the Ayrshire parishes that the project will be dealing with, resulting in almost 4,000 place-names. However, this wasn’t the end of the process, as while the extracted place-names had latitude and longitude they didn’t have grid references or altitude. My place-names system is set up to automatically generate these values and I could customise the scripts to automatically apply the generated data to each of the 4,000 places. Generating the grid reference was pretty straightforward but grabbing the altitude was less so, as it involved submitting a query to Google Maps and then inserting the returned value into my system using an AJAX call. I ran into difficulties with my script exceeding the allowed number of Google Maps queries and also the maximum number of page requests on our server, resulting in my PC getting blocked by the server and a ‘Forbidden’ error being displayed instead, but with some tweaking I managed to get everything working within the allowed limits.
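The altitude lookups themselves were made from the browser via AJAX, but the shape of the request and the pacing needed to stay within the limits can be sketched server-side as follows; the Elevation API endpoint is real, while the key handling and the half-second pause are illustrative assumptions rather than the project’s actual settings.

<?php
// Hypothetical sketch: throttled altitude lookup for one place-name
$url = sprintf(
    'https://maps.googleapis.com/maps/api/elevation/json?locations=%f,%f&key=%s',
    $lat, $lng, $apiKey
);
$response = json_decode(file_get_contents($url), true);
$altitude = isset($response['results'][0]['elevation']) ? $response['results'][0]['elevation'] : null;

usleep(500000); // pause half a second between requests to stay within the limits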

I also continued to work on the Second Edition of the Historical Thesaurus.  I set up a new version of the website that we will work on for the Second Edition, and created new versions of the database tables that this new site connects to.  I also spent some time thinking about how we will implement some kind of changelog or ‘history’ feature to track changes to the lexemes, their dates and corresponding categories.  I had a Zoom call with Marc and Fraser on Wednesday to discuss the developments and we realised that the date matching spreadsheets I’d generated last week could do with some additional columns from the OED data, namely links through to the entries on the OED website and also a note to say whether the definition contains ‘(a)’ or ‘(also’ as these would suggest the entry has multiple senses that may need a closer analysis of the dates.

I then started to update the new front-end to use the new date structure that we will use for the Second Edition (with dates stored in a separate date table rather than split across almost 20 different date fields in the lexeme table). I updated the timeline visualisations (mini and full) to use this new date table, and although this took quite some time to get my head around, the resulting code is MUCH less complicated than the horrible code I had to write to deal with the old 20-odd date columns. For example, the code to generate the data for the mini timelines is about 70 lines long now as opposed to over 400 previously.

The timelines now use the new date tables in the category browse and the search results. I also spotted some dates that weren’t working properly with the old system but are working properly now. I then updated the ‘label’ autocomplete in the advanced search to use the labels in the new date table. What I still need to do is update the search to actually search for the new labels and also to search the new date tables for both ‘simple’ and ‘complex’ year searches. This might be a little tricky, and I will continue with this next week.

Also this week I gave Gerry McKeever some advice about preserving the data of his Regional Romanticism project, spoke to the DSL people about the wording of the search results page, gave feedback on and wrote some sections for Matthew Creasy’s Chancellor’s Fund proposal, gave feedback to Craig Lamont regarding the structure of a spreadsheet for holding data about the correspondence of Robert Burns and gave some advice to Rob Maslen about the stats for his ‘City of Lost Books’ blog.  I also made a couple of tweaks to the content management system for the Books and Borrowers project based on feedback from the team.

I spent the remainder of the week working on the redevelopment of the Anglo-Norman dictionary. I updated the search results page to style the parts of speech to make it clearer where one ends and the next begins. I also reworked the ‘forms’ section to add in a cut-off point for entries that have a huge number of forms. In such cases the long list is cut off and an ellipsis is added in, together with an ‘expand’ button. Pressing on this reveals the full list of forms and the button is replaced with a ‘collapse’ button. I also updated the search so that it no longer includes cross references (these are to be used for the ‘Browse’ list only) and the quick search now defaults to an exact match search whether you select an item from the auto-complete or not. Previously it performed an exact match if you selected an item but defaulted to a partial match if you didn’t. Now if you search for ‘mes’ (for example) and press enter or the search button your results are for “mes” (exactly). I suspect most people will select ‘mes’ from the list of options, which already did this, though. It is also still possible to use the question mark wildcard with an ‘exact’ search, e.g. “m?s” will find 14 entries that have three-letter forms beginning with ‘m’ and ending in ‘s’.

I also updated the display of the parts of speech so that they are in order of appearance in the XML rather than alphabetically and I’ve updated the ‘v.a.’ and ‘v.n.’ labels as the editor requested.  I also updated the ‘entry’ page to make the ‘results’ tab load by default when reaching an entry from the search results page or when choosing a different entry in the search results tab.  In addition, the search result navigation buttons no longer appear in the search tab if all the results fit on the page and the ‘clear search’ button now works properly.  Also, on the search results page the pagination options now only appear if there is more than one page of results.

On Friday I began to process the entry XML for display on the entry page, which was pretty slow going, wading through the XSLT file that is used to transform the XML to HTML for display.  Unfortunately I can’t just use the existing XSLT file from the old site because we’re using the editor’s version of the XML and not the system version, and the two are structurally very different in places.

So far I’ve been dealing with forms and have managed to get the forms listed, with grammatical labels displayed where available, commas separating forms and semi-colons separating groups of forms. Deviant forms are surrounded by brackets. Where there are lots of forms the area is cut off as with the search results. I still need to add in references where these appear, which is what I’ll tackle next week. Hopefully now that I’ve started to get my head around the XML a bit, progress with the rest of the page will be a little speedier, but there will undoubtedly be many more complexities that will need to be dealt with.

Week Beginning 31st August 2020

I worked on many different projects this week, and the largest amount of my time went into the redevelopment of the Anglo-Norman Dictionary. I processed a lot of the data this week and have created database tables and written extraction scripts to export labels, parts of speech, forms and cross references from the XML. The extracted data will be used for search purposes, for display on the website in places such as the search results, and for navigating between entries. The scripts will also be used when updating data in the new content management system for the dictionary when I write it. I have extracted 85,397 parts of speech, 31,213 cross references, 150,077 forms and their types (lemma / variant / deviant) and 86,269 label associations, which correspond to one of 157 unique labels (usage or semantic) that I also extracted.

I have also finished work on the quick search feature, which is now fully operational. This involved creating a new endpoint in the API for processing the search. This includes the query for the predictive search (i.e. the drop-down list of possible options that appears as you type), which returns any forms that match what you’re typing in, and the query for the full quick search, which allows you to use ‘?’ and ‘*’ wildcards (and also “” for an exact match) and returns all of the data about each entry that is needed for the search results page. For example, if you type ‘from’ in the ‘Quick Search’ box a drop-down list containing all matching forms will appear. Note that these are forms rather than just headwords, so they include not only lemmas but also variants and deviants. If you select a form that is associated with one single entry then the entry’s page will load. If you select a form that is associated with more than one entry then the search results page will load. You can also choose not to select an item from the drop-down list and search for whatever you’re interested in. For example, enter ‘*ment’ and press enter or the search button to view all of the forms ending in ‘ment’, as the following screenshot demonstrates (note that this is not the final user interface but one purely for test purposes):

With this example you’ll see that the results are paginated, with 100 results per page. You can browse through the pages using the next and previous buttons or select one of the pages to jump directly to it. You can bookmark specific results pages too. Currently the search results display the lemma and homonym number (if applicable) and show whether the entry is an xref or not. Associated parts of speech appear after the lemma. Each one currently has a tooltip and we can add in descriptions of what each POS abbreviation means, although these might not be needed. All of the variant / deviant forms are also displayed, as otherwise it can be quite confusing for users if the lemma does not match the term the user entered but a form does. All associated semantic / usage labels are also displayed. I’m also intending to add in the earliest citation date and possibly translations to the results as well, but I haven’t extracted them yet.
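As a rough illustration of the wildcard handling described above, the quick search terms can be translated into SQL LIKE patterns along the following lines; the function name and the surrounding query are invented for the purposes of the example rather than taken from the real API.

<?php
// Hypothetical sketch: turn a quick-search term into a LIKE pattern,
// where '*' matches any run of characters, '?' a single character,
// and surrounding double quotes force an exact match on the term
function formPattern($input) {
    $input = trim($input);
    if (preg_match('/^"(.*)"$/', $input, $m)) {
        $input = $m[1];                                        // "mes" -> mes (exact)
    }
    $pattern = str_replace(['%', '_'], ['\%', '\_'], $input);  // escape LIKE characters
    return str_replace(['*', '?'], ['%', '_'], $pattern);      // map the wildcards
}

echo formPattern('*ment');   // prints '%ment', used as e.g. WHERE form LIKE :pattern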

When you click on an entry from the search results this loads the corresponding entry page.  I have updated this to add in tabs to the left-hand column.  In addition to the ‘Browse’ tab there is a ‘Results’ tab and a ‘Log’ tab.  The latter doesn’t contain anything yet, but the former contains the search results.  This allows you to browse up and down the search results in the same way as the regular ‘browse’ feature, selecting another entry.  You can also return to the full results page.  I still need to do some tweaking to this feature, such as ensuring the ‘Results’ tab loads by default if coming from a search result.  The ‘clear’ option also doesn’t currently work properly.  I’ll continue with this next week.

For the Books and Borrowing project I spent a bit of time getting the page images for the Westerkirk library uploaded to the server and the page records created for each corresponding page image.  I also made some final tweaks to the Glasgow Students pilot website that Matthew Sangster and I worked on and this is now live and available here: https://18c-borrowing.glasgow.ac.uk/.

There are three new place-name related projects starting up at the moment and I spent some time creating initial websites for all of these.  I still need to add in the place-name content management systems for two of them, and I’m hoping to find some time to work on this next week.  I also spoke to Joanna Kopaczyk about a website for an RSE proposal she’s currently putting together and gave some advice to some people in Special Collections about a project that they are planning.

On Tuesday I had a Zoom call with the ‘Editing Robert Burns’ people to discuss developing the website for phase two of the Editing Robert Burns project.  We discussed how the website would integrate with the existing website (https://burnsc21.glasgow.ac.uk/) and discussed some of the features that would be present on the new site, such as an interactive map of Burns’ correspondence and a database of forged items.

I also had a meeting with the Historical Thesaurus people on Tuesday and spent some time this week continuing to work on the extraction of dates from the OED data, which will feed into a new second edition of the HT. I fixed all of the ‘dot’ dates in the HT data. This is where there isn’t a specific date but a dotted form is used instead (e.g. 14..); sometimes a specific year is given in the year attribute (e.g. 1432), while at other times a more general year is given (e.g. 1400). We worked out a set of rules for dealing with these and I created a script to process them. I then reworked my script that extracts dates for all lexemes that match a specific date pattern (YYYY-YYYY, where the first year might be Old English and the last year might be ‘Current’) and sent this to Fraser so that the team can decide which of these dates should be used in the new version of the HT. Next week I’ll begin work on a new version of the HT website that uses an updated dataset so we can compare the original dates with the newly updated ones.

Week Beginning 10th August 2020

I was back at work this week after spending two weeks on holiday, during which time we went to Skye, Crinan and Peebles. It was really great to see some different places after being cooped up at home for 19 weeks and I feel much better for having been away. Unfortunately during this week I developed a pretty severe toothache and had to make an emergency appointment with the dentist on Thursday morning. It turns out I need root canal surgery and am now booked in to have this next Tuesday, but until then I need to try and just cope with the pain, which has been almost unbearable at times, despite regular doses of both ibuprofen and paracetamol. This did affect my ability to work a bit on Thursday afternoon and Friday, but I managed to struggle through.

On my return to work from my holiday on Monday I spent some time catching up with emails that had accumulated whilst I was away, including replying to Roslyn Potter in Scottish Literature about a project website, replying to Jennifer Smith about giving a group of students access to the SCOSYA data and making changes to the Berwickshire Place-names website to make it more attractive to the REF reviewers based on feedback passed on by Jeremy Smith. I also created a series of high-resolution screenshots of the resource for Carole Hough for a publication, and had an email chat with Luca Guariento about linked open data.

I also fixed some issues with the Galloway Glens project that Thomas Clancy had spotted, including an issue with the place-name element page, which was not ordering accented characters properly – all accented characters were being listed at the end rather than with their non-accented versions. It turned out that while the underlying database orders accented characters correctly, for the elements list I need to get a list of elements used in place-names and a list of elements used in historical forms and then combine these lists and reorder the resulting single list. This part of the process was not dealing with all accented characters, only a limited set that I’d created for Berwickshire that also dealt with ashes and thorns. Instead I added in a function taken from WordPress that converts all accented characters to their unaccented equivalent for the purposes of ordering, and this ensured the order of the elements list was correct.
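In outline the ordering fix looks something like the sketch below, assuming a remove_accents()-style function (WordPress provides one) is available to fold each element name to plain ASCII before comparison.

<?php
// Sort the combined elements list on an accent-folded copy of each name,
// so 'é' sorts alongside 'e' rather than after 'z'
usort($elements, function ($a, $b) {
    return strcasecmp(remove_accents($a['name']), remove_accents($b['name']));
});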

The rest of my week was divided between three projects, the first of which was the Books and Borrowing project. For this I spent some time working with some of the digitised images of the register pages. We now have access to the images from Westerkirk library, and in these the records appear in a table that spreads across both recto and verso pages, but we have images of the individual pages. The project RA who is transcribing the records is treating both recto and verso as a single ‘page’ in the system, which makes sense. We therefore need to stitch the recto and verso images together into one single image to be associated with this ‘page’. I downloaded all of the images and have found a way to automatically join two page images together. However, there is rather a lot of overlap in the images, meaning the book appears to have two joins and some columns are repeated. I could possibly try to automatically crop the images before joining them, but there is quite a bit of variation in the size of the overlap so this is never going to be perfect and may result in some information getting lost. The other alternative would be to manually crop and join the images, which I did some experimentation with. It’s still not perfect due to the angle of the page changing between shots, but it’s a lot better. The downside with this approach is that someone would have to do the task. There are about 230 images, so about 115 joins, each one taking 2-3 minutes to create, so maybe about 5 or so hours of effort. I’ve left it with the PI and Co-I to decide what to do about this. I also downloaded the images for Volume 1 of the register for Innerpeffray library and created tilesets for these that will allow the images to be zoomed and panned. I also fixed a bug relating to adding new book items to a record and responded to some feedback about the CMS.
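The automatic joining can be done with PHP’s GD extension along the lines of the sketch below, which simply places the verso image to the right of the recto; it deliberately ignores the overlap problem discussed above, and the file names are illustrative.

<?php
$recto = imagecreatefromjpeg('page-001r.jpg');
$verso = imagecreatefromjpeg('page-001v.jpg');

$width  = imagesx($recto) + imagesx($verso);
$height = max(imagesy($recto), imagesy($verso));
$joined = imagecreatetruecolor($width, $height);

// Copy the recto onto the left half and the verso onto the right half
imagecopy($joined, $recto, 0, 0, 0, 0, imagesx($recto), imagesy($recto));
imagecopy($joined, $verso, imagesx($recto), 0, 0, 0, imagesx($verso), imagesy($verso));

imagejpeg($joined, 'page-001.jpg', 90);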

My second major project of the week was the Anglo-Norman Dictionary. This week I began writing a high-level requirements document for the new AND website that I will be developing. This meant going through the existing site in detail and considering which features will be retained, how things might be handled better, and how I might develop the site. I made good progress with the document, and by the end of the week I’d covered the main site. Next week I need to consider the new API for accessing the data and the staff pages for uploading and publishing new or newly edited entries. I also responded to a few questions from Heather Pagan of the AND about the searches and read through and gave feedback on a completed draft of the AHRC proposal that the team are hoping to submit next month.

My final major project of the week was the Historical Thesaurus, for which I updated and re-executed my OED date extraction script based on feedback from Fraser and Marc. It was a long and complicated process to update the script, as there are literally millions of dates and some issues only appear a handful of times, so tracking them down and testing things is tricky. However, I made the following changes: I added a ‘sortdate_new’ column to the main OED lexeme table that holds the sortdate value from the new XML files, which may differ from the original value. I’ve done some testing and rather strangely there are many occasions where the new sortdate differs from the old, but the ‘revised’ flag is not set to ‘true’. I also updated the new OED date table to include a new column where the full date text is contained, as I thought this would be useful for tracing back issues. E.g. if the OED date is ‘?c1225’ this is stored here. The actual numeric year in my table now comes from the ‘year’ attribute in the XML instead. This always contains the numeric value in the OED date, e.g. <q year="1330"><date>c1330</date></q>. New lexemes in the data are now getting added into the OED lexeme table and are also having their dates processed. I’ve added a new column called ‘newaugust2020’ to track these new lexemes. We’ll possibly have to try and match them up with existing HT lexemes at some point, unless we can consider them all to be ‘new’, meaning they’ll have no matches. The script also now stores all of the various OE dates, rather than one single OE date of 650 being added for all. I set the script running on Thursday and by Sunday it had finished executing, resulting in 3,912,109 dates being added and 4,061 new words.
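For the citation dates themselves, the extraction boils down to reading the ‘year’ attribute and the full date text from each quotation element, roughly as in the cut-down sketch below; the real OED XML is of course far larger and more complex than this fragment.

<?php
$xml = simplexml_load_string(
    '<entry><q year="1330"><date>c1330</date></q><q year="1225"><date>?c1225</date></q></entry>'
);
$dates = [];
foreach ($xml->q as $q) {
    $dates[] = [
        'year'     => (int) $q['year'],    // numeric year from the attribute, e.g. 1330
        'fulldate' => (string) $q->date,   // full date text as printed, e.g. 'c1330' or '?c1225'
    ];
}
print_r($dates);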

Week Beginning 20th July 2020

Week 19 of Lockdown, and it was a short week for me as the Monday was the Glasgow Fair holiday. I spent a couple of days this week continuing to add features to the content management system for the Books and Borrowing project. I have now implemented the ‘normalised occupations’ part of the CMS. Originally occupations were just going to be a set of keywords, allowing one or more keywords to be associated with a borrower. However, we have been liaising with another project that has already produced a list of occupations and we have agreed to share their list. This is slightly different as it is hierarchical, with a top-level ‘parent’ containing multiple main occupations, e.g. ‘Religion and Clergy’ features ‘Bishop’. However, for our project we needed a third hierarchical level to differentiate types of minister/priest, so I’ve had to add this in too. I’ve achieved this by means of a parent occupation ID in the database, which is ‘null’ for top-level occupations and contains the ID of the parent category for all other occupations.
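With that structure in place, building the nested list for the browse page is just a matter of walking the flat table recursively, something like the sketch below (field names are illustrative).

<?php
// Turn flat occupation rows (id, name, parent_id) into a nested tree;
// a null parent_id marks a top-level grouping
function buildTree(array $occupations, $parentId = null) {
    $branch = [];
    foreach ($occupations as $occ) {
        if ($occ['parent_id'] === $parentId) {
            $occ['children'] = buildTree($occupations, $occ['id']);
            $branch[] = $occ;
        }
    }
    return $branch;
}

$tree = buildTree($occupations);   // $occupations = all rows from the occupations table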

I completed work on the page to browse occupations, arranging the hierarchical occupations in a nested structure that features a count of the number of borrowers associated with the occupation to the right of the occupation name.  These are all currently zero, but once some associations are made the numbers will go up and you’ll be able to click on the count to bring up a list of all associated borrowers, with links through to each borrower.  If an occupation has any child occupations a ‘+’ icon appears beside it.  Press on this to view the child occupations, which also have counts.  The counts for ‘parent’ occupations tally up all of the totals for the child occupations, and clicking on one of these counts will display all borrowers assigned to all child occupations.  If an occupation is empty there is a ‘delete’ button beside it.  As the list of occupations is going to be fairly fixed I didn’t add in an ‘edit’ facility – if an occupation needs editing I can do it directly through the database, or it can be deleted and a new version created.  Here’s a screenshot showing some of the occupations in the ‘browse’ page:

I also created facilities to add new occupations.  You can enter an occupation name and optionally specify a parent occupation from a drop-down list.  Doing so will add the new occupation as a child of the selected category, either at the second level if a top level parent is selected (e.g. ‘Agriculture’) or at the third level if a second level parent is selected (e.g. ‘Farmer’).  If you don’t include a parent the occupation will become a new top-level grouping.  I used this feature to upload all of the occupations, and it worked very well.

I then updated the ‘Borrowers’ tab in the ‘Browse Libraries’ page to add ‘Normalised Occupation’ to the list of columns in the table. The ‘Add’ and ‘Edit’ borrower facilities also now feature ‘Normalised Occupation’, which replicates the nested structure from the ‘browse occupations’ page, only with checkboxes beside each occupation. You can select any number of occupations for a borrower and when you press the ‘Upload’ or ‘Edit’ button your choice will be saved. Deselecting all ticked checkboxes will clear all occupations for the borrower. If you edit a borrower who has one or more occupations selected, in addition to the relevant checkboxes being ticked, the occupations with their full hierarchies also appear above the list of occupations, so you can easily see what is already selected. I also updated the ‘Add’ and ‘Edit’ borrowing record pages so that whenever a borrower appears in the forms the normalised occupations feature also appears.

I also added in the option to view page images.  Currently the only ledgers that have page images are the three Glasgow ones, but more will be added in due course.  When viewing a page in a ledger that includes a page image you will see the ‘Page Image’ button above the table of records.  Press on this and a new browser tab will open.  It includes a link through to the full-size image of the page if you want to open this in your browser or download it to open in a graphics package.  It also features the ‘zoom and pan’ interface that allows you to look at the image in the same manner as you’d look at a Google Map.  You can also view this full screen by pressing on the button in the top right of the image.

Also this week I made further tweaks to the script I’d written to update lexeme start and end dates in the Historical Thesaurus based on citation dates in the OED.  I’d sent a sample output of 10,000 rows to Fraser last week and he got back to me with some suggestions and observations.  I’m going to have to rerun the script I wrote to extract the more than 3 million citation dates from the OED as some of the data needs to be processed differently, but as this script will take several days to run and I’m on holiday next week this isn’t something I can do right now.  However, I managed to change the way the date matching script runs to fix some bugs and make the various processes easier to track.  I also generated a list of all of the distinct labels in the OED data, with counts of the number of times these appear.  Labels are associated with specific citation dates, thankfully.  Only a handful are actually used lots of times, and many of the others appear to be used as a ‘notes’ field rather than as a more general label.

In addition to the above I also had a further conversation with Heather Pagan about the data management plan for the AND’s new proposal, responded to a query from Kathryn Cooper about the website I set up for her at the end of last year, responded to a couple of separate requests from post-grad students in Scottish Literature, spoke to Thomas Clancy about the start date for his Place-Names of Iona project, which got funded recently, helped with some issues with Matthew Creasy’s Scottish Cosmopolitanism website and spoke to Carole Hough about making a few tweaks to the Berwickshire Place-names website for REF.

I’m going to be on holiday for the next two weeks, so there will be no further updates from me for a while.

Week Beginning 13th July 2020

This was week 18 of Lockdown, which is now definitely easing here.  I’m still working from home, though, and will be for the foreseeable future.  I took Friday off this week, so it was a four-day week for me.  I spent about half of this time on the Books and Borrowing project, during which time I returned to adding features to the content management system, after spending recent weeks importing datasets.  I added a number of indexes to the underlying database which should speed up the loading of certain pages considerably.  E.g. the browse books, borrowers and author pages.  I then updated the ‘Books’ tab when viewing a library (i.e. the page that lists all of the book holdings in the library) so that it now lists the number of book holdings in the library above the table.  The table itself now has separate columns for all additional fields that have been created for book holdings in the library and it is now possible to order the table by any of the headings (pressing on a heading a second time reverses the ordering).  The count of ‘Borrowing records’ for each book in the table is now a button and pressing on it brings up a popup listing all of the borrowing records that are associated with the book holding record, and from this pop-up you can then follow a link to view the borrowing record you’re interested in.  I then made similar changes to the ‘Borrowers’ tab when viewing a library (i.e. the page that lists all of the borrowers the library has). It also now displays the total number of borrowers at the top.  This table already allowed the reordering by any column, so that’s not new, but as above, the ‘Borrowing records’ count is now a link that when clicked on opens a list of all of the borrowing records the borrower is associated with.

The big new feature I implemented this week was borrower cross references. These can be added via the ‘Borrowers’ tab within a library when adding or editing a borrower. When adding or editing a borrower there is now a section of the form labelled ‘Cross-references to other borrowers’. If there are any existing cross references these will appear here, with a checkbox beside each that you can tick if you want to delete the cross reference (tick the box then press ‘Edit’ to edit the borrower and the reference will be deleted). Any number of new cross references can be added by pressing on the ‘Add a cross-reference’ button (multiple times, if required). Doing so adds two fields to the form, one for a ‘description’, which is the text that shows how the current borrower links to the referenced borrower, and one for ‘referenced borrower’, which is an auto-complete. Type in a name or part of a name and any borrower that matches in any library will be listed. The library appears in brackets after the borrower’s name to help differentiate records. Select a borrower and then when the ‘Add’ or ‘Edit’ button is pressed for the borrower the cross reference will be made.

Cross-references work in both directions – if you add a cross reference from Borrower A to Borrower B you don’t then need to load up the record for Borrower B to add a reference back to Borrower A.  The description text will sit between the borrower whose form you make the cross reference on and the referenced borrower you select, so if you’re on the edit form for Borrower A and link to Borrower B and the description is ‘is the son of’ then the cross reference will appear as ‘Borrower A is the son of Borrower B’.  If you then view Borrower B the cross reference will still be written in this order.  I also updated the table of borrowers to add in a new ‘X-Refs’ column that lists all cross-references for a borrower.
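Because a single stored row serves both directions, fetching the cross-references for a borrower just means checking both columns, roughly as in the sketch below; the table and column names are invented for illustration.

<?php
// Hypothetical sketch: fetch all cross-references involving one borrower
$sql = 'SELECT borrower_a_id, description, borrower_b_id
        FROM borrower_xrefs
        WHERE borrower_a_id = :bid1 OR borrower_b_id = :bid2';
$stmt = $pdo->prepare($sql);
$stmt->execute([':bid1' => $borrowerId, ':bid2' => $borrowerId]);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $xref) {
    // The stored direction is preserved however the row was reached,
    // e.g. Borrower A 'is the son of' Borrower B
    echo $xref['borrower_a_id'] . ' ' . $xref['description'] . ' ' . $xref['borrower_b_id'] . "\n";
}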

I spent the remainder of my working week completing smaller tasks for a variety of projects, such as updating the spreadsheet output of duplicate child entries for the DSL people, getting an output of the latest version of the Thesaurus of Old English data for Fraser, advising Eleanor Lawson on ‘.ac.uk’ domain names and having a chat with Simon Taylor about the pilot Place-names of Fife project that I worked on with him several years ago.  I also wrote a Data Management Plan for a new AHRC proposal the Anglo-Norman Dictionary people are putting together, which involved a lengthy email correspondence with Heather Pagan at Aberystwyth.

Finally, I returned to the ongoing task of merging data from the Oxford English Dictionary with the Historical Thesaurus.  We are currently attempting to extract citation dates from OED entries in order to update the dates of usage that we have in the HT.  This process uses the new table I recently generated from the OED XML dataset which contains every citation date for every word in the OED (more than 3 million dates).  Fraser had prepared a document listing how he and Marc would like the HT dates to be updated (e.g. if the first OED citation date is earlier than the HT start date by 140 years or more then use the OED citation date as the suggested change).  Each rule was to be given its own type, so that we could check through each type individually to make sure the rules were working ok.

It took about a day to write an initial version of the script, which I ran on the first 10,000 HT lexemes as a test.  I didn’t split the output into different tables depending on the type, but instead exported everything to a spreadsheet so Marc and Fraser could look through it.

In the spreadsheet if there is no ‘type’ for a row it means it didn’t match any of the criteria, but I included these rows anyway so we can check whether there are any other criteria the rows should match. I also included all the OED citation dates (rather than just the first and last) for reference. I noted that Fraser’s document doesn’t seem to take labels into consideration. There are some labels in the data, and sometimes there’s a new label for an OED start or end date when nothing else is different, e.g. htid 1479 ‘Shore-going’: this row has no ‘type’ but does have new data from the OED.

Another issue I spotted is that the same ‘type’ variable is set both when a start date matches the criteria and when an end date does, so the ‘type’ set for the start date is then replaced with the ‘type’ for the end date. I think, therefore, that we might have to split the start and end processes up, or append the end process type to the start process type rather than replacing it (so e.g. type 2-13 rather than type 2 being replaced by type 13). I also noticed that there are some lexemes where the HT has ‘current’ but the OED has a much earlier last citation date (e.g. htid 73 ‘temporal’ has 9999 in the HT but 1832 in the OED). Such cases are not currently considered.

Finally, according to the document, Antes and Circas are only considered for update if the OED and HT date is the same, but there are many cases where the start / end OED date is picked to replace the HT date (because it’s different) and it has an ‘a’ or ‘c’, and this would then be lost. Currently I’m including the ‘a’ or ‘c’ in such cases, but I can remove this if needs be (e.g. HT 37 ‘orb’ has HT start date 1601 (no ‘a’ or ‘c’) but this is to be replaced with OED 1550, which has an ‘a’). Clearly the script will need to be tweaked based on feedback from Marc and Fraser, but I feel like we’re finally making some decent progress with this after all of the preparatory work that was required to get to this point.

Next Monday is the Glasgow Fair holiday, so I won’t be back to work until the Tuesday.

Week Beginning 6th July 2020

Week 16 of Lockdown and still working from home. I continued working on the data import for the Books and Borrowers project this week. I wrote a script to import data from Haddington, which took some time due to the large number of additional fields in the data (15 across Borrowers, Holdings and Borrowings), but executing it resulted in a further 5,163 borrowing records across 2 ledgers and 494 pages being added, including 1,399 book holding records and 717 borrowers.

I then moved onto the datasets from Leighton and Wigtown. Leighton was a much smaller dataset, with just 193 borrowing records over 18 pages in one ledger, involving 18 borrowers and 71 books. As before, I have just created book holding records for these (rather than project-wide edition records), although in this case there are authors for books too, which I have also created. Wigtown was another smaller dataset. The spreadsheet has three sheets, the first a list of borrowers, the second a list of borrowings and the third a list of books. However, no unique identifiers are used to connect the borrowers and books to the information in the borrowings sheet and there’s no other field that matches across the sheets to allow the data to be automatically connected up. For example, in the Books sheet there is the book ‘History of Edinburgh’ by author ‘Arnot, Hugo’, but in the borrowings tab author surname and forename are split into different columns (so ‘Arnot’ and ‘Hugo’) and book titles don’t match (in this case the book appears as simply ‘Edinburgh’ in the borrowings). Therefore I’ve not been able to automatically pull in the information from the books sheet. However, as there are only 59 books in the books sheet it shouldn’t take too much time to manually add the necessary data when creating Edition records. It’s a similar issue with Borrowers in the first sheet – they appear with the name in one column (e.g. ‘Douglas, Andrew’) but in the Borrowings sheet the names are split into separate forename and surname columns. There are also instances of people with the same name (e.g. ‘Stewart, John’) but without unique identifiers there’s no way to differentiate these. There are only 110 people listed in the Borrowers sheet, and only 43 in the actual borrowing data, so again, it’s probably better if any details that are required are added in manually.

I imported a total of 898 borrowing records for Wigtown.  As there is no page or ledger information in the data I just added these all to one page in a made-up ledger.  It does however mean that the page can take quite a while to load in the CMS.  There are 43 associated borrowers and 53 associated books, which again have been created as Holding records only and have associated authors.  However, there are multiple Book Items created for many of these 53 books – there are actually 224 book items.  This is because the spreadsheet contains a separate ‘Volume’ column and a book may be listed with the same title but a different volume.  In such cases a Holding record is made for the book (e.g. ‘Decline and Fall of Rome’) and an Item is made for each Volume that appears (in this case 12 items for the listed volumes 1-12 across the dataset).  With these datasets imported I have now processed all of the existing data I have access to, other than the Glasgow Professors borrowing records, but these are still being worked on.

I did some other tasks for the project this week as well, including reviewing the digitisation policy document for the project, which lists guidelines for the team to follow when they have to take photos of ledger pages themselves in libraries where no professional digitisation service is available. I also discussed with Katie how borrower occupations will be handled in the system.

In addition to the Books and Borrowers project I found time to work on a number of other projects this week too.  I wrote a Data Management Plan for an AHRC Networking proposal that Carolyn Jess-Cooke in English Literature is putting together and I had an email conversation with Heather Pagan of the Anglo-Norman Dictionary about the Data Management Plan she wants me to write for a new AHRC proposal that Glasgow will be involved with.  I responded to a query about a place-names project from Thomas Clancy, a query about App certification from Brian McKenna in IT Services and a query about domain name registration from Eleanor Lawson at QMU.  Also (outside of work time) I’ve been helping my brother-in-law set up Beacon Genealogy, through which he offers genealogy and family history research services.

Also this week I worked with Jennifer Smith to make a number of changes to the content of the SCOSYA website (https://scotssyntaxatlas.ac.uk/) to provide more information about the project for REF purposes and I added a new dataset to the interactive map of Burns Suppers that I’m creating for Paul Malgrati in Scottish Literature.  I also went through all of the WordPress sites I manage and upgraded them to the most recent version of WordPress.

Finally, I spent some time writing scripts for the DSL people to help identify child entries in the DOST and SND datasets that haven’t been properly merged with main entries when exported from their editing software. In such cases the child entries have been added to the main entries, but they haven’t then been removed as separate entries in the output data, meaning the child entries appear twice. When attempting to process the SND data I discovered there were some errors in the XML file (mismatched tags) that prevented my script from processing the file, so I had to spend some time tracking these down and fixing them. But once this had been done my script could go through the entire dataset, look for an ID that appears as a URL in one entry and as the ID of another entry, and in such cases pull out the IDs and the full XML of each entry and export them into an HTML table. There were about 180 duplicate child entries in DOST but a lot more in SND (the DOST file is about 1.5MB, the SND one about 50MB). Hopefully once the DSL people have analysed the data we can then strip out the unnecessary child entries and have a better dataset to import into the new editing system the DSL is going to be using.
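In outline the duplicate detection works along the lines of the sketch below: collect every entry ID, then flag any ID that also turns up as a URL reference inside a different entry. The element and attribute names here are invented, as the real DOST/SND markup is considerably more involved.

<?php
$xml = simplexml_load_file('snd.xml');

// Index every entry by its ID
$entries = [];
foreach ($xml->entry as $entry) {
    $entries[(string) $entry['id']] = $entry;
}

// Any ID that is referenced as a URL inside another entry is a duplicate child
$duplicates = [];
foreach ($xml->entry as $entry) {
    foreach ($entry->xpath('.//url') as $url) {
        $ref = (string) $url;
        if (isset($entries[$ref]) && $ref !== (string) $entry['id']) {
            $duplicates[$ref] = true;
        }
    }
}
print_r(array_keys($duplicates));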