Week Beginning 26th July 2021

I returned to Glasgow for a more regular week of working from home, after spending a delightful time at my parents’ house in Yorkshire for the past two weeks.  I continued to work on the Comparative Kingship front-ends this week.  I fixed a couple of issues with the content management systems, such as ensuring that the option to limit the list of place-names by parish worked for historical parishes and fixing an issue whereby searching by sources was returning zero results.  Last week I’d contacted Chris Fleet at NLS Maps to ask whether we might be able to incorporate a couple of maps of Ireland that they host into our resource, and Chris got back to me with a very helpful reply, giving us permission to use the maps and also pointing out some changes to the existing map layers that I could make.

I updated the attribution links on the site, and pointed the OS six-inch map links to the NLS’s hosting on AWS.  I made the same updates to the other place-name resources I’ve created too.  We had previously been using a modern OS map layer hosted by the NLS, and Chris pointed out that a more up to date version could now be accessed directly from the OS website (see Chris’s blog post here: https://www.ordnancesurvey.co.uk/newsroom/blog/comparing-past-present-new-os-maps-api-layers).  I followed the instructions and signed up for an OS API key, and it was then a fairly easy process to replace the OS layer with the new one.  I did the same with the other place-name resources too, and it looks pretty good.  See for example how it looks on a map showing place-names beginning with ‘B’ on the Berwickshire site: https://berwickshire-placenames.glasgow.ac.uk/place-names/?p=results&source=browse&reels_name=B*#13/55.7939/-2.2884/resultsTabs-0/code/tileOS//

With these changes and the Irish historical maps in place I continued to work on the Irish front-end.  I added in the parish boundaries for all of the currently required parishes and also added in three-letter acronyms that the researcher Nick Evans had created for each parish.  These are needed to identify the parishes on the map, as full parish names would clutter things up too much.  I then needed to manually position each of the acronyms on the map, and to do so I updated the Irish map to print the latitude and longitude of a point to the console whenever a mouse click is made.  This made it very easy to grab the coordinates of an ideal location for each acronym.
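The debugging hook itself is just a click handler that writes the coordinates to the browser console, something along the lines of the following minimal sketch (assuming a Leaflet map object called ‘map’; the variable name and the precision are my own assumptions):

// Sketch: log the coordinates of any point clicked on the map so they can
// be copied from the browser console. 'map' is assumed to be the existing
// Leaflet map object used by the front-end.
map.on('click', function (e) {
    var lat = e.latlng.lat.toFixed(5); // five decimal places is ample for labels
    var lng = e.latlng.lng.toFixed(5);
    console.log(lat + ', ' + lng);
});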

There were a few issues with the parish boundaries, and Nick wondered whether the boundary shapefiles he was using might work better.  I managed to open the parish boundary shapefile in QGIS, converted the boundary data to WGS84 (latitude / longitude) and then extracted the boundaries as a GeoJSON file that I can use with my system.  I then replaced the previous parish boundaries with the ones from this dataset, but unfortunately something was not right with the positioning.  The northern Irish ones appear to be too far north and east, with the boundary for BNT extending into the sea rather than following the coast and ARM not even including the town of Armoy, as the following screenshot demonstrates:

In QGIS I needed to change the coordinate reference system from TM65 / Irish Grid to WGS84 to give me latitude and longitude values, and I wondered whether this process had caused the error.  I therefore loaded the parish data into QGIS again and added an OpenStreetMap base map to it too, and the issue with the positioning was still apparent in the original data, as you can see from the following QGIS screenshot:

I can’t quite tell if the same problem exists with the southern parishes.  I’d positioned the acronyms in the middle of the parishes and they mostly still seem to be in the middle, which suggests these boundaries may be ok, although I’m not sure how some could be wrong while others are correct as everything is joined together.  After consultation with Nick I reverted to the original boundaries, but kept a copy of the other ones in case we want to reinstate them in future.

Also this week I investigated a strange issue with the Anglo-Norman Dictionary, whereby a quick search for ‘resoler’ brings back an ‘autocomplete’ match, but then finds zero results if you click on it.  ‘Resoler’ is a cross-reference entry and works in the ‘browse’ option too.  It seemed very strange that the redirect from the ‘browse’ would work, and also that a quick search for ‘resolut’, which is another variant of ‘resoudre’, was also working.  It turns out that it’s an issue with the XML for the entry for ‘resoudre’.  It lists ‘resolut’ as a variant, but does not include ‘resoler’, as you can see:

<variant gram="imp.5">resolvez</variant>
<deviant gram="imp.5">resoylez</deviant>
<varref><reference><source siglum="Alchimie"><loc>380.5</loc></source></reference></varref>
<variant gram="p.p.">resolé</variant>
<variant>resolu</variant>
<variant>resolut</variant>
<newvargroup/>
<variant gram="p.p.pl.">resolous</variant>
<deviant gram="p.p.pl.">resouz</deviant>
<varref><reference><source siglum="Secr1"><loc>1524</loc></source></reference></varref>
<deviant>resus</deviant>
<varref><reference><source siglum="Alchimie"><loc>379.1</loc></source></reference></varref>

The search uses the variants / deviants from the XML to figure out which main entry to load from a cross reference.  As ‘resoler’ is not present the system doesn’t know what entry ‘resoler’ refers to and therefore displays no results.  I pointed this out to the editor, who changed the XML to add in the missing variant, which fixed the issue.

Also this week I responded to some feedback on the Data Management Plan for Kirsteen’s project, which took a little time to compile, and spoke to Jennifer Smith about her upcoming follow-on project for SCOSYA, which begins in September and I’ll be heavily involved with.  I also had a chat with Rhona about the ancient DSL server that we should now be able to decommission.

Finally, Gerry Carruthers sent me some further files relating to the International Journal of Scottish Theatre, which he is hoping we will be able to host an archive of at Glasgow.  These consisted of a database dump, which I imported into a local database and looked through.  It mostly consists of tables used to manage some sort of editorial system and doesn’t seem to contain the full text of the articles.  Some of the information contained in it may be useful, though – e.g. it stores information about article titles, authors, the issues articles appear in, the original PDF filenames for each article etc.

In addition, the full text of the articles is available as both PDF and HTML in the folder ‘1>articles’.  Each article has a numerically named folder (e.g. 109) that contains two folders: ‘public’ and ‘submission’.  ‘public’ contains the PDF version of the article.  ‘submission’ contains two further folders: ‘copyedit’ and ‘layout’.  ‘copyedit’ contains an HTML version of the article while ‘layout’ contains a further PDF version.  It would be possible to use each HTML version as a basis for a WordPress version of the article.  However, some things need to be considered:

Does the HTML version always represent the final published version of the article?  The fact that it exists in folders labelled ‘submission’ and ‘copyedit’ and not ‘public’ suggests that the HTML version is likely to be a work in progress version and editorial changes may have been made to the PDF in the ‘public’ folder that are not present in the HTML version.  Also, there are sometimes multiple HTML versions of the article.  E.g. in the folder ‘1>articles>154>submission>copyedit’ there are two HTML files: ‘164-523-1-CE.htm’ and ‘164-523-2-CE.htm’.  These both contain the full text of the article but have different formatting (and may have differences in the content, but I haven’t checked this).

After looking at the source of the HTML versions I realised these have been auto-generated from MS Word.  Word generates really messy, verbose HTML with lots of unnecessary tags and I therefore wanted to see what would happen if I copied and pasted it into WordPress.  My initial experiment was mostly a success, but WordPress treats line breaks in the pasted file as actual line breaks, meaning the text didn’t display as it should.  What I needed to do in my text editor was find and replace all line break characters (\r and \n) with spaces.  I also had to make sure I only copied the contents within the HTML <body> tag rather than the whole text of the file.  After that the process worked quite well.
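If we do end up scripting this, the clean-up amounts to something like the following sketch (a Node version of what I did manually in the text editor; the input filename is one of the files from the dataset, the output filename is just an example, and the encoding is an assumption):

// Sketch: extract only the contents of the <body> tag from a Word-generated
// HTML file and replace line break characters with spaces so that WordPress
// doesn't treat them as real breaks. Filenames and encoding are placeholders.
const fs = require('fs');

const html = fs.readFileSync('164-523-1-CE.htm', 'utf8');

// Grab everything between the opening and closing body tags
const bodyMatch = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
const body = bodyMatch ? bodyMatch[1] : html;

// Replace \r and \n with spaces and collapse runs of whitespace
const cleaned = body.replace(/[\r\n]+/g, ' ').replace(/\s{2,}/g, ' ');

fs.writeFileSync('article-for-wordpress.html', cleaned);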

However, there are other issues with the dataset.  For example, article 138 only has Word files rather than HTML or PDF files and article 142 has images in it, and these are broken in the HTML version of the article.  Any images in articles will probably have to be manually added in during proofreading.  We’ll need to consider whether we’ll have to get someone to manually migrate the data, or whether I can write a script that will handle the bulk of the process.

I had my second vaccination jab on Wednesday this week, which thankfully didn’t hit me as hard as the first one did.  I still felt rather groggy for a couple of days, though.  Next week I’m on holiday again, this time heading to the Kintyre peninsula to a cottage with no internet or mobile signal, so I’ll be unreachable until the week after next.

Week Beginning 19th July 2021

This was my second and final week staying at my parents’ house in Yorkshire, where I’m working a total of four days over the two weeks.  This week I had an email conversation with Eleanor Lawson about her STAR project, which will be starting very shortly.  We discussed the online presence for the project, which will be split between a new section on the Seeing Speech website and an entirely new website, the project’s data and workflows and my role over the 24 months of the project.  I also created a script to batch process some of the Edinburgh registers for the Books and Borrowing project.  The page images are double spreads and had been given a number for both the recto and the verso (e.g. 1-2, 3-4), but the student registers only ever use the verso page.  I was therefore asked to write a script to renumber all of these (e.g. 1-2 becomes 1, 3-4 becomes 2), which I created and executed on a test version of the site before applying to the live data.
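The renumbering itself is just a matter of halving the spread numbers, along these lines (a sketch of the logic only; the actual script applied this to the page records in the database):

// Sketch of the renumbering logic: a double-spread number like '3-4'
// becomes the single verso-based number 2. The input format is assumed
// to always be 'odd-even'.
function renumberSpread(spread) {
    var first = parseInt(spread.split('-')[0], 10); // e.g. '3-4' -> 3
    return (first + 1) / 2;                         // 1-2 -> 1, 3-4 -> 2
}

console.log(renumberSpread('1-2')); // 1
console.log(renumberSpread('3-4')); // 2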

I also continued to make tweaks to the front-ends for the Comparative Kingship project.  I fixed a bug with the Elements glossary of the Irish site, which was loading the Scottish version instead.  I also contacted Chris Fleet at NLS Maps to enquire about using a couple of their historical Irish maps with the site.  I also fixed the ‘to top’ button in the CMSes, which wasn’t working; the buttons now actually scroll the page to the top as they should.  I also fixed some issues relating to parish names no longer being unique in the system (e.g. the parish of Gartly is in the system twice due to it changing county at some point).  This was causing issues with the browse option as data was being grouped by parish name.  Changing the grouping to the parish ID thankfully fixed the issue.

I also had a chat with Ann Fergusson at the DSL about multi-item bibliographical entries in the existing DSL data.  These are being split into individual items, and a new ‘sldid’ attribute in the new data will be used to specify which item in the old entry the new entry corresponds to.  We agreed that I would figure out a way to ensure that these IDs can be used in the new website once I receive the updated data.

My final task of the week was to investigate a problem with Rob Maslen’s City of Lost Books blog (https://thecityoflostbooks.glasgow.ac.uk/), which went offline this week and only displayed a ‘database error’.  Usually when this happens it’s a problem with the MySQL database and it takes down all of the sites on the server, but this time it was only Rob’s site that was being affected.  I tried accessing the WP admin pages and this gave a different error about the database being corrupted.  I needed to update the WordPress config file to add the line define('WP_ALLOW_REPAIR', true); and upon reloading the page WordPress attempted to fix the database.  After doing so it stated that “The wp_options table is not okay. It is reporting the following error: Table is marked as crashed and last repair failed. WordPress will attempt to repair this table… Failed to repair the wp_options table. Error: Wrong block with wrong total length starting at 10356”.  WordPress appeared to regenerate the table, as after this the table existed and was populated with data and the blog went online again and could be logged into.  I’ll have to remember this if it happens again in future.

Next week I’ll be back in Glasgow.

Week Beginning 12th July 2021

I’m down visiting my parents in Yorkshire for the first time in 18 months this week and next, working a total of four days over the two-week period.  This week I mainly focussed on the Irish front-end for the Comparative Kingship place-names project, but I also added in some updates to the Scotland system that I recently set up, such as making the Gaelic forms of the classification codes visible, adding options to browse Gaelic forms of place-names and historical forms to the ‘Browse’ facility and ensuring the other place-name and historical form browses only bring back English forms.

The Irish system is mostly identical to the Scottish system, but I did need to make some changes that took a bit of time to implement.  As the place-names covered appear to be much more geographically spread out, I’ve allowed the map to be zoomed out further.  I’ve also had to remove the modern OS and historical OS map layers as they don’t cover Ireland, so currently there are only three map layers available (the default view, satellite view and satellite view with labels).  The Ordnance Survey of Ireland provides access to some historical map layers here: https://geohive.maps.arcgis.com/apps/webappviewer/index.html?id=9def898f708b47f19a8d8b7088a100c4 but their terms and conditions make it clear that you can’t use the maps on another online resource.  However, there are a couple of Irish maps on the NLS website, the Bartholomew Quarter-Inch 1940 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=13&b=1) and the GSGS One-Inch 1941-3 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=14&b=1), and we could investigate integrating these, as the NLS maps people have always been very helpful.

I also updated the map pop-ups to include the new Irish data fields, such as baronies, townlands and the different map types.  Both English and Gaelic forms of things like parishes, baronies and classification codes are displayed throughout the site and on the Record page the ITM figures also appear.  I updated the ‘Browse’ page so that it features baronies and the element glossary should work, but I haven’t tested it out as there is no data yet.  The Advanced search features a selectable list of baronies and currently a simple textbox for townlands.  I may change this to an autocomplete (whereby you start typing and townlands that include the letters appear in a selectable list), or I may leave it as it is, meaning multiple townlands can be searched for and wildcard characters can be used.
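If I do switch to an autocomplete, one option would be something along the lines of the following sketch, using jQuery UI’s autocomplete widget; the endpoint URL, field ID and response format are hypothetical, just to illustrate the approach:

// Hypothetical sketch of a townland autocomplete using jQuery UI.
// The '/api/townlands' endpoint is an assumption; it would return a JSON
// array of townland names containing the typed letters.
$('#townland').autocomplete({
    minLength: 2,
    source: function (request, response) {
        $.getJSON('/api/townlands', { term: request.term }, function (data) {
            response(data);
        });
    }
});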

I managed to locate downloadable files containing parish boundaries for Ireland here: https://www.townlands.ie/page/download/ and have added these to the data for the two parishes that currently contain data.  I haven’t added in any other parish boundaries yet as there are over 200 parishes in our database and I don’t want to have to manually add in the boundaries for all of these if it won’t be necessary.  Also, on the Scotland maps the three-letter acronym appears in the middle of each parish in order to identify it, but the Irish parishes don’t have TLAs so currently don’t have any labels.  The full text of the parish will clutter up the map too much if I use it, so I’m not sure what we could do to label the parishes.

Also this week I responded to some feedback about the Data Management Plan for Kirsteen McCue’s proposal and created a slightly revised version.  I also had an email conversation with Eleanor Lawson about her new speech project and how the web presence for the project may function.  Finally, I made some tweaks to the Dictionary of the Scots Language, updating the layout of the ‘Contact’ page and updating the bibliography page on the new website so that URLs that use the old style IDs will continue to work.  I also had a chat with Rhona Alcorn about some new search options that we are going to add in to the new site before it goes live, although probably not until the autumn.

Week Beginning 5th July 2021

After a lovely week’s holiday in the East Neuk of Fife last week I returned to a full week of work.  I spent Monday catching up with emails and making some updates to two project websites.  Firstly, for the Anglo-Norman Dictionary I updated the Textbase to add in the additional PDF texts.  As these are not part of the main Textbase I created a separate page that listed and linked to them, and added a reference to the page to the introductory paragraph of the main Textbase page.  Secondly, I made some further updates to the content management system for the Books and Borrowing project.  There was a bug in the ‘clear borrower’ feature that resulted in the normalised occupation fields not getting cleared.  This meant that unless a researcher noticed and manually removed the selected occupations it would be very easy to end up with occupations assigned to the wrong borrower.  I implemented a fix for this bug, so all is well now.  I had also been alerted to an issue with the library’s ‘books’ tab.  When limiting the listed books to only those mentioned in a specific register the list of associated borrowing records that appears in a popup was not limiting the records to those in the specified register.  I fixed this, and also made a comparable fix to the ‘borrowers’ tab.

During the week I also had an email conversation with Kirsteen McCue about her ‘Singing the Nation’ AHRC proposal, and made a new version of the Data Management Plan for her.  I also investigated some anomalies with the stats for the Dictionary of the Scots Language website for Rhona Alcorn.  Usage figures were down compared to last year, but it looks like last year may have been a blip caused by Covid, as figures for this year match up pretty well with the figures for years before the dreaded 2020.

On Wednesday I was alerted to an issue with the Historical Thesaurus website, which appeared to be completely inaccessible.  Further investigation revealed that other sites on the server were also all down.  Rather strangely the Arts IT Support team could all access the sites without issue, and I realised that if I turned wifi off on my phone and accessed the site via mobile data I could access the site too.  I had thought it was an issue with my ISP, but Marc Alexander reported that he used a different ISP and could also not access the sites.  Marc pointed me in the direction of two very handy websites that are useful for checking whether websites are online or not.  https://downforeveryoneorjustme.com checks the site and lets you know whether it’s working while https://www.uptrends.com/tools/uptime is a little more in-depth and checks whether the site is available from various locations across the globe.  I’ll need to remember these in future.

The sites were still inaccessible on Thursday morning, and after some Googling I found an answer to someone with a similar issue (https://webmasters.stackexchange.com/questions/104092/why-is-my-site-showing-as-being-down-for-some-places-and-not-others).  I asked Arts IT Support to check with central IT Services to see whether any DNS settings had been changed recently or if they knew what might be causing the issue, as it turned out to be a more widespread issue than I had thought, and was affecting sites on different servers too.  A quick check of the sites linked to from this site showed that around 20 websites were inaccessible.

Thankfully by Thursday lunchtime the sites had begun to be accessible again, although not for everyone.  I could access them, but Marc Alexander still couldn’t.  By Friday morning all of the sites were fully accessible again from locations around the globe, and Arts IT Support got back to me with a cause for the issue.  Apparently a server in the Boyd Orr that controls DNS records for the University had gone wrong and sent out garbled instructions to other DNS servers around the world, which knocked out access to our sites, even though the sites themselves were all working perfectly.

I spent the rest of the week working on the front-end for the Scotland data for the Comparative Kingship project, a task that I’d begun before I went away on my holiday.  I managed to complete an initial version of the Scotland front-end, which involved taking the front-end from one of the existing place-names websites (e.g. https://kcb-placenames.glasgow.ac.uk/) and adapting it.  I had to make a number of adaptations, such as ensuring that two parallel interfaces and APIs could function on one site (one for Scotland, one for Ireland), updating a lot of the site text, creating a new, improved menu system and updating the maps so that they defaulted to the new area of research.  I also needed to add in facilities to search, return data for and display new Gaelic fields, e.g. Gaelic versions of place-names and historical forms.  This meant updating the advanced search to add in a new ‘language’ choice option, to enable a user to limit their search to just English or Gaelic place-name forms or historical forms.  This in turn meant updating the API to add in this additional option.

An additional complication came when I attempted to grab the parish boundary data, which for previous projects I’d successfully exported from the Scottish Government’s Spatial Data website (https://www.spatialdata.gov.scot/geonetwork/srv/eng/catalog.search#/metadata/c1d34a5d-28a7-4944-9892-196ca6b3be0c) via a handy API (https://maps.gov.scot/server/rest/services/ScotGov/AgricultureEnvironment/MapServer/1/query).  However, the parish boundary data was not getting returned with latitude / longitude pairs marking the parish shape, but used esriMeters instead.  I found someone else who wanted to convert esriMeters into lat/lng (https://gis.stackexchange.com/questions/54534/how-can-i-convert-esrimeters-to-lat-lng) and one of the responses was that with an ArcGIS service (which the above API appears to be) you should be able to set the ‘output spatial reference’, with the code 4326 being used for WGS84, which would give lat/lng values.  The API form does indeed have an ‘Output Spatial Reference’ field, but unfortunately it doesn’t seem to do anything.  I did lots of further Googling and tried countless different ways of entering the code, but nothing changed the output.

Eventually I gave up and tried an alternative approach.  The site also provides the parish data as an ESRI Shapefile (https://maps.gov.scot/ATOM/shapefiles/SG_AgriculturalParishes_2016.zip) and I wondered whether I could plug this into a desktop GIS package and use it to migrate the coordinates to lat/lng.  I installed the free GIS package QGIS (https://www.qgis.org/en/site/forusers/download.html) and after opening it I went to the ‘Layer’ menu, selected ‘Add Layer’, then ‘Add Vector Layer’, then selected the zip file and pressed ‘add’, at which point all of the parish data loaded in, allowing me to select a parish and view the details for it.  What I then needed to do was to find a means of changing the spatial reference and saving a geoJSON file.  After much trial and error I discovered that in the ‘Layer’ menu there is a ‘Save as’ option.  This allowed me to specify the output format (geoJSON) and change the ‘CRS’, which is the ‘Coordinate Reference System’.  In the drop-down list I located EPSG:4326 / WGS84 and selected it.  I then specified a filename (the folder defaults to a Windows system folder and needs to be updated too) and pressed ‘OK’, and after a long wait the geoJSON file was generated, with latitude and longitude values for all parishes.  Phew!  It was quite a relief to get this working.

With access to a geoJSON file containing parishes with lat/lng pairings I could then find and import the parishes that we needed for the current project, of which there were 28.  It took a bit of time to grab all of these, and I then needed to figure out where I wanted the three-letter acronym for each parish to be displayed, which I worked out using the National Library of Scotland’s parish boundaries map (https://maps.nls.uk/geo/boundaries/), which helpfully displays lat/lng coordinates for your cursor position in the bottom right.  With all of the parish boundary data in place the infrastructure for the Scotland front-end is now more or less complete and I await feedback from the project team.  I will begin work on the Ireland section next, which will take quite some work as the data fields are quite different.  I’m only going to be working a total of four days over the next two weeks (probably as half-days) so my reports for the next couple of weeks are likely to be a little shorter!

Week Beginning 21st June 2021

This was a four-day week for me as I’m off on Friday and will be off all of next week too.  A big thing I ticked off my ‘to do’ list this week was completing work on the ‘Browse’ facility for the Anglo-Norman Textbase, featuring each text on its own continuous page rather than split into sometimes hundreds of individual pages.  I finished updating the way footnotes work, and they are now renumbered starting at [1] on each page of each text no matter what format they originally had.  All of the issues I’d noted about footnote numbers in my previous couple of blog posts have now been addressed (e.g. numbering out of sequence, numbers getting erroneously repeated).

With the footnotes in place I then went through each of the 77 texts to check their layout, which took quite some time but also raised a few issues that needed to be fixed.  The biggest thing was that I needed to regenerate the page number data (used in the ‘jump to page’ feature) as I realised the previously generated data was including all <pb> tags, but some of the texts such as ‘indentures’ use <pb> to mean something else.  For example, ‘<pb ed="MS" n="Dorse"/>’ is not an actual page break and there are numerous of these occurrences throughout the text, resulting in lots of ‘Dorse’ options in the ‘jump to page’ list.  Instead I limited the page breaks to just those that have ‘ed="base"’ in them, e.g. ‘<pb n="49" ed="base"/>’, and this seems to have done the trick.
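The filtering itself just checks the ‘ed’ attribute of each <pb>, something along these lines (a browser-side sketch for illustration only; the actual regeneration happens in my server-side scripts, and ‘textXml’ stands in for a text’s XML):

// Sketch: keep only the <pb> tags that represent real page breaks in the
// base edition, ignoring ones like <pb ed="MS" n="Dorse"/>. 'textXml' is
// assumed to hold the text's XML as a string.
var doc = new DOMParser().parseFromString(textXml, 'application/xml');
var pageBreaks = Array.from(doc.querySelectorAll('pb[ed="base"]'))
    .map(function (pb) { return pb.getAttribute('n'); });
// pageBreaks is now e.g. ['49', '50', ...] for the 'jump to page' list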

I also noticed some issues with paragraph and table tags in footnotes causing the notes to display in the wrong place or display only partially, and the ‘dorse’ issue was also resulting in footnotes getting added to the wrong page sometimes.  Thankfully I managed to fix these issues and so as far as I can tell that’s the ‘browse’ facility of the Textbase complete.  The editors don’t want to launch the Textbase until the search facilities have also been developed, so it’s going to be a while until they’re actually available, what with summer holidays and commitments to other projects.

Also this week I continued to work on the Books and Borrowing project, having an email conversation with the digitisers at the NLS about file formats and methods of transferring files, and making further updates to the CMS to add features and make things run quicker.  I managed to reduce the number of database calls on the ‘Books’ tab in the library view again, which should mean the page loads faster.  Previously all book holding records were returned and then a separate query was executed for each to count the number of borrowings, whereas I’ve now nested the count query in the initial query.  So for St Andrews with its 7471 books this has cut out 7471 individual queries.

I’d realised that the ‘borrowing records’ count column in this ‘Books’ table isn’t actually a count of borrowing records at all, but a count of the number of book items that have been borrowed for the book holding.  I’ve figured out a way to return a count of borrowing records instead, and I replaced the old way with the new way, so the ‘Borrowing Records’ column now does what it should do.  This means the numbers listed have changed, e.g. ‘Universal History’ now has 177 borrowing records rather than 269 and is no longer the most borrowed book holding at St Andrews.  I also changed the popup so that each borrowing record only appears once (e.g. David Gregory on 1748-6-7 now only has one borrowing record listed).  I added a further ‘Total borrowed items’ column in as well, to hold the information that was previously in the ‘Borrowing Records’ column, and it’s possible to order the table by this column too.  I also noticed that I’d accidentally removed columns displaying additional fields from the table, so I have reinstated these.  For St Andrews this means the ‘Classmark’ column is now back in the table.  I also realised that my new nested count queries were not limiting their counts when a specific register was selected, so I updated them to take this into consideration too.

Also this week I updated all of the WordPress sites I manage to the latest version and ensured all plugins were updated too.  I then began working on the public interfaces for the Comparative Kingship place-names project, which will have separate interfaces for its Scotland and Ireland data.  So far I’ve modified the existing place-names API so that it works with a database table prefix and got the API working for the Scotland data.  I then began working on the front-end that connects to this API and have managed to get the ‘browse’ option sort of working, although there are still some issues with layout and JavaScript due to the site using a different theme to the other place-names sites.  I’ll continue looking into this once I’m back from my holidays on the 5th of July.

Week Beginning 17th May 2021

I spent a lot of this week continuing with the Anglo-Norman Dictionary, including making some changes to the proofreader feature I created recently.  I tweaked the output of this so that there is now a space between siglum and ‘MS’, ‘edgloss’ now has brackets and there is now a blank paragraph before the ‘summary’ section and also before the ‘cognate refs’ section to split things up a bit.  I also added some characters (~~) before and after the ‘summary’ section to help split things up and added extra spaces before and after sense numbers, and square brackets around them (because background styles, which give the round, black circle are not carried over into Word when the content is copied).  I also added more spaces round the labels, added an extra line break before locutions and made the locution phrase appear in bold.

I also spent some time investigating some issues with the data, for example a meaning was not getting displayed in the summary section of https://anglo-norman.net/entry/chaucer_3 because the part of speech labels didn’t quite match up (one was ‘subst.’, the other was ‘sbst.’) and updated the entry display so that the ‘form section’ at the top of an entry gets displayed even if there is no ‘cognate refs’ section.  My code repositions the ‘formSection’ so it appears before ‘cognateRefs’ and as it was not finding this section it wasn’t repositioning the forms anywhere – instead they just disappeared.  I therefore updated the code to ensure that the forms will only be repositioned if the ‘cognateRefs’ section is present, and this has fixed the matter.

I also responded to a request for data from a researcher at Humboldt-Universität zu Berlin who wanted information on entries that featured specific grammatical labels.  As of yet the advanced search does not include a part of speech search, but I could generate the necessary data from the underlying database.  I also ran a few queries to update further batches of bad dates in the system.

With all of this out of the way I then moved onto a more substantial task – creating a new ‘date builder’ feature for the Dictionary Management System.  The old DMS featured such a tool, which allowed the editor to fill in some text boxes and have an XML form of the date (either text, manuscript or both) generated, ready to be copied and pasted into their XML editor.  The old feature used a mixture of Perl scripts and JavaScript to generate the XML, over several thousand lines of code, but I wanted to handle it all in JavaScript in a (hopefully) more succinct way.

My initial version allowed an editor to add Text and MS dates using the input boxes and then by pressing the ‘Generate XML’ button the ‘XML’ box is populated and the date as it would be displayed on the site is also displayed.  I amalgamated the ‘proof’ and ‘Build XML’ options from the old DMS as it seemed more useful to just do both at the same time.  There is also a ‘clear’ button that does what you’d expect it to do and a ‘log’ that displays feedback about the date.  E.g. if the date doesn’t conform to the expected pattern (yyyy / yyyy-yyyy / yyyy-yy / yyyy-y) or one of the characters isn’t a number or the date after the dash is earlier than the date before the dash then a warning will be displayed here.  The XML area is editable so if needs be the content can be manually tweaked.  There is also a ‘Copy XML’ button to copy the contents of the XML area to the clipboard.
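The validation behind the log messages is essentially a pattern check.  A minimal sketch of that part might look like this (the function name and message wording are my own, and the real date builder also handles the prefixes and generates the XML):

// Minimal sketch of the numeric date validation: accepts yyyy, yyyy-yyyy,
// yyyy-yy and yyyy-y, and warns if the date after the dash is earlier than
// the date before it. Returns an array of warning messages for the log area.
function checkDate(date) {
    var warnings = [];
    var match = date.match(/^(\d{4})(?:-(\d{4}|\d{1,2}))?$/);
    if (!match) {
        warnings.push('Date does not match yyyy / yyyy-yyyy / yyyy-yy / yyyy-y');
        return warnings;
    }
    if (match[2]) {
        var start = match[1];
        // A short second part (e.g. 1247-53) inherits the leading digits
        var end = start.slice(0, 4 - match[2].length) + match[2];
        if (parseInt(end, 10) < parseInt(start, 10)) {
            warnings.push('The date after the dash is earlier than the date before it');
        }
    }
    return warnings;
}

console.log(checkDate('1247-53')); // []
console.log(checkDate('1250-47')); // ['The date after the dash is earlier...']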

What I didn’t realise was that non-numerical dates also need to be processed using the date builder, so for example ‘s.xiii’, ‘s.xivex’, ‘sxii/xiii’.  I needed to update the date builder to handle seven different centuries which could be joined in a range either by a dash or a slash, and 16 different suffixes, each of which would change how the numerical date should be generated from the century, and all this in addition to the three prefixes ‘a’,’b’ and ‘c’ that also change the generated date.  Getting this to work was all very complicated, but by the end of the week I had a working version, all of which took up less than 500 lines of JavaScript.  Below is a screenshot of the date builder in action:

Also this week I set up some new user accounts for the Books and Borrowing project, I gave Luca Guariento some feedback about an AHRC proposal, I had to deal with the server and database going down a few times and I added a new publication to the SCOSYA website.

I also updated the DSL test site so that cross references in entries don’t use IDs (as found in the XML) but use ‘slugs’ (as we use on the site).  This required me to write a new API endpoint to return slugs from IDs and to update the JavaScript to find and replace cross reference IDs when an entry is loaded.  I also spoke to Rhona about the launch of the new DSL website, which is possibly going to be pushed back a bit now.
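In outline the cross-reference handling now works along these lines when an entry loads (a rough sketch only; the endpoint path, link markup and attribute name are assumptions for illustration rather than the actual site code):

// Sketch: when an entry is loaded, look up the slug for each cross-referenced
// entry ID and rewrite the link. The '/api/v2/slug/' endpoint path and the
// 'data-refid' attribute are hypothetical.
document.querySelectorAll('a.xref[data-refid]').forEach(function (link) {
    var refId = link.getAttribute('data-refid');
    fetch('/api/v2/slug/' + encodeURIComponent(refId))
        .then(function (r) { return r.json(); })
        .then(function (data) {
            // Replace the ID-based href with the slug-based one
            link.href = '/entry/' + data.slug;
        });
});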

Finally, I made some further tweaks to the Comparative Kingship content management systems for Scottish and Irish placenames.  When I set up the two systems I’d forgotten to add the x-refs section into the form.  The code was all there to handle them, but the section wasn’t appearing.  I therefore updated both Scotland and Ireland so x-refs now appear.  I’d also noticed that some of the autogenerated lists that appear when you type into boxes in the Ireland site (e.g. xrefs) were pointing to the Scotland database and therefore bringing back the wrong data, and I fixed this too.

I also added all of the sources from the Kirkcudbrightshire system to the Scotland CMS and replaced the Scotland elements database with the one from KCB as well, which required me to check the elements already associated with names to ensure they point to the same data.  Thankfully all did except the existing name ‘Rhynie’, which was newly added and its ID ended up referencing an entirely different element from the KCB database, but I fixed this.  I also fixed a bug with the name and element deletion code that was preventing things from getting deleted.

Week Beginning 26th April 2021

I continued with the import of new data for the Dictionary of the Scots Language this week.  Raymond at Arts IT Support had set up a new collection and imported the full-text search data into the Solr server, and I tested this out via the new front-end I’d configured to work with the new data source.  I then began working on the import of the bibliographical data, but noticed that the file exported from the DSL’s new editing system didn’t feature an attribute denoting what source dictionary each record is from.  We need this as the bibliography search allows users to limit their search to DOST or SND.  The new IDs all start with ‘bib’ no matter what the source is.  I had thought I could use the ‘oldid’ to extract the source (db = DOST, sb = SND) but I realised there are also composite records where the ‘oldid’ is something like ‘a200’.  In such cases I don’t think I have any data that I can use to distinguish between DOST and SND records.  The person in charge of exporting the data from the new editing system very helpfully agreed to add in a ‘source dictionary’ attribute to all bibliographical records and sent me an updated version of the XML file.  Whilst working with the data I realised that all of the composite records are DOST records anyway, so I didn’t need the ‘sourceDict’ attribute, but I think it’s better to have this explicitly as an attribute as differentiating between dictionaries is important.

I imported all of the bibliographical records into the online system, including the composite ones as these are linked to from dictionary entries and are therefore needed, even though their individual parts are also found separately in the data.  However, I decided to exclude the composite records from the search facilities, otherwise we’d end up with duplicates in the search results.  I updated the API to use the new bibliography tables and I updated the new front-end so that bibliographical searches use the new data.  One thing that needs some further work is the display of individual bibliographies.  These are now generated from the bibliography XML via an XSLT whereas previously they were generated from a variety of different fields in the database.   The display doesn’t completely match up with the display on the live and Sienna versions of the bibliography pages and I’m not sure exactly how the editors would like entries to be displayed.  I’ll need further input from them on this matter, but the import of data from the new editing system has now been completed successfully.  I’d been documenting the process as I worked through it and I sent the documentation and all scripts I wrote to handle the workflow to the editors to be stored for future use.

I also worked on the Books and Borrowing project this week.  I received the last of the digitised images of borrowing registers from Edinburgh (other than one register which needs conservation work), and I uploaded these to the project’s content management system, creating all of the necessary page records.  We have a total of 9,992 page images as JPEG files from Edinburgh, totalling 105GB.  Thank goodness we managed to set up an IIIF server for the image files rather than having to generate and store image tilesets for each of these page images.  Also this week I uploaded the images for 14 borrowing registers from St Andrews and generated page records for each of these.

I had a further conversation with GIS expert Piet Gerrits for the Iona project and made a couple of tweaks to the Comparative Kingship content management systems, but other than that I spent the remainder of the week returning to the Anglo-Norman Dictionary, which I hadn’t worked on since before Easter.  To start with I went back through old emails and documents and wrote a new ‘to do’ list containing all of the outstanding tasks for the project, some 20 items of varying degrees of size and intricacy.  After some communication with the editors I began tackling some of the issues, beginning with the apparent disappearance of <note> tags from certain entries.

In the original editor’s XML (the XML as structured before being uploaded into the old DMS) there were ‘edGloss’ notes tagged as ‘<note type="edgloss" place="inline">’ that were migrated to <edGloss> elements during whatever processing happened with the old DMS.  However, there were also occasionally notes tagged as ‘<note place="inline">’ that didn’t get transformed and remained tagged as this.

I’m not entirely sure how or where, but at some point during my processing of the data these ‘<note place="inline">’ notes have been lost.  It’s very strange as the new DMS import script is based entirely on the scripts I wrote to process the old DMS XML entries, but I tested the DMS import by uploading the old DMS XML version of ‘poer_1’ to the new DMS and the ‘<note place="inline">’ notes have been retained, yet in the live entry for ‘poer_1’ the <note> text is missing.

I searched the database for all entries where the DMS XML as exported from the old DMS system contains the text ‘<note place="inline">’ and there are 323 entries, which I added to a spreadsheet and sent to the editors.  It’s likely that the new XML for these entries will need to be manually corrected to reinstate the missing <note> elements.  Some entries (as with ‘poer_1’) have several of these.  I still have the old DMS XML for these so it is at least possible to recover the missing tags.  I wish I could identify exactly when and how the tags were removed, but that would quite likely require many hours of investigation, as I already spent a couple of hours trying to get to the bottom of the issue without success.

Moving on to a different issue, I changed the upload scripts so that the ‘n’ numbers are always fully regenerated automatically when a file is uploaded, as previously there were issues when a mixture of senses with and without ‘n’ numbers were included in an entry.  This means that any existing ‘n’ values are replaced, so it’s no longer possible to manually set the ‘n’ value.  Instead ‘n’ values for senses within a POS will always increment from 1 depending on the order they appear in the file, with ‘n’ being reset to 1 whenever a new POS is encountered.
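In outline the renumbering behaves like this (a simplified sketch with senses represented as plain objects rather than the entry XML itself):

// Sketch of the 'n' regeneration: senses are numbered from 1 within each
// part of speech, in the order they appear in the file, and the counter
// resets whenever a new POS is encountered. Any existing 'n' values are ignored.
function assignSenseNumbers(senses) {
    var currentPos = null;
    var n = 0;
    senses.forEach(function (sense) {
        if (sense.pos !== currentPos) {
            currentPos = sense.pos;
            n = 0;
        }
        n += 1;
        sense.n = n;
    });
    return senses;
}

console.log(assignSenseNumbers([
    { pos: 'v.a.' }, { pos: 'v.a.' }, { pos: 'sbst. inf.' }
]));
// -> 'n' values 1, 2, 1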

Main senses in locutions were not being assigned an ‘n’ on upload, and I changed this so that they are assigned an ‘n’ in exactly the same way as regular main senses.  I tested this with the ‘descendre’ entry and it worked, although I encountered an issue.  The final locution main sense (to descend to (by way of inheritance)) had a POS of ‘sbst._inf.’ in its <senseInfo> whereas it should have been (based on the POS of the previous two senses) ‘sbst. Inf.’.  The script was therefore considering this to be a new POS and gave the sense an ‘n’ of 1.  In my test file I updated the POS and re-uploaded the file and the sense was assigned the correct value of 3 to its ‘n’, but we’ll need to investigate why a different form of POS was recorded for this sense.

I also updated the front-end so that locution main senses with an ‘n’ now have the ‘n’ displayed (e.g. https://anglo-norman.net/entry/descendre) and wrote a script that will automatically add missing ‘n’ attributes to all locution main senses in the system.  I haven’t run this on the live database yet as I need further feedback from the editors before I do.  As the week drew to a close I worked on a method to hide sense numbers in the front-end in cases where there is only one sense in a part of speech, but I didn’t manage to get this completed and will continue with it next week.

Week Beginning 19th April 2021

It was a return to a full five-day week this week, after taking some days off to cover the Easter school holidays for the previous two weeks.  The biggest task I tackled this week was to import the data from the Dictionary of the Scots Language’s new editing system into my online system.  I’d received a sample of the data from the company responsible for the new editing system a couple of weeks ago, and we had agreed on a slightly updated structure after that.  Last week I was sent the full dataset and I spent some time working with it this week.  I set up a local version of the online system on my PC and tweaked the existing scripts I’d previously written to import the XML dataset generated by the old editing system.  Thankfully the new XML was not massively different in structure to the old set, and differed mostly in the addition of a few new attributes, such as ‘oldid’, which references the old ID of each entry, and ‘typeA’ and ‘typeB’, which contain numerical codes that denote which text should be displayed to note when the entry was published.  With changes made to the database to store these attributes and updates to the import script to process them I was ready to go, and all 80,432 DOST and SND entries were successfully imported, including extracting all forms and URLs for use in the system.

I had a conversation with the DSL team about whether my ‘browse order’ would still be required, as the entries now appear to be ordered nicely by their new IDs.  Previously I ran a script to generate the dictionary order based on the alphanumeric characters in the headword and the ‘posnum’ that I generated based on the classification of parts of speech taken from a document written by Thomas Widmann when he worked for the DSL (e.g. all POS beginning ‘n.’ have a ‘posnum’ of 1, all POS beginning ‘ppl. adj.’ have a ‘posnum’ of 8).  Although the new data is now nicely ordered by the new ID field I wanted to check whether I should still be generating and using my browse order columns or whether I should just order things by ID.  I suggested that going forward it will not be possible to use the ID field as browse order, as whenever the editors add a new entry its ID will position it in the wrong place (unless the ID field is not static and is regenerated whenever a new entry is added).  My assumption was correct and we agreed to continue using my generated browse order.

In a related matter my script extracts the headword of each entry from the XML and this is used in my system and also to generate the browse order.  The headword is always taken to be the first <f> of type “form” within <meta> in the <entry>.  However, I noticed that there are five entries that have no <f> of type “form” and are therefore missing a headword, and are appearing first in the ‘browseorder’ because of this.  This is something that still needs to be addressed.

In our conversations, Ann Ferguson mentioned that my browse system wasn’t always getting the correct order where there were multiple identical headwords all within the same general part of speech.  For example there are multiple noun ‘point’ entries in DOST – n. 1, n. 2 and n. 3.  These were appearing in the ‘browse’ feature with n. 3 first.  This is because (as per Thomas’s document) all entries with a POS starting with ‘n.’ are given a ‘posorder’ of 1.  In cases such as ‘point’ where the headword is the same and there are several entries with a POS beginning ‘n.’ the order is then set to depend on the ID, and ‘Point n.3’ has the lowest ID, so appears first.  I therefore updated the script that generates the browse order so that in such cases entries are ordered alphabetically by POS instead.
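The updated rule can be summarised in a comparator along these lines (a simplified sketch of the ordering logic rather than the actual generation script, with made-up field names):

// Sketch of the browse ordering rule: sort by the headword's alphanumeric
// form, then by the posnum derived from the part of speech grouping, and
// only where both are identical fall back to ordering alphabetically by
// the full POS string (rather than by entry ID as before).
function compareEntries(a, b) {
    if (a.headword !== b.headword) {
        return a.headword.localeCompare(b.headword);
    }
    if (a.posnum !== b.posnum) {
        return a.posnum - b.posnum;
    }
    return a.pos.localeCompare(b.pos);
}

var entries = [
    { headword: 'point', posnum: 1, pos: 'n. 3' },
    { headword: 'point', posnum: 1, pos: 'n. 1' },
    { headword: 'point', posnum: 1, pos: 'n. 2' }
];
console.log(entries.sort(compareEntries).map(function (e) { return e.pos; }));
// -> ['n. 1', 'n. 2', 'n. 3']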

I also regenerated the data for the Solr full-text search, but I’ll need Arts IT Support to update this, and they haven’t got back to me yet.  I then migrated all of the new data to the online server and also created a table for the ‘about’ text that will get displayed based on the ‘typeA’ and ‘typeB’ number in the entry.  I then created a new version of the API that uses the new data and pulls in the necessary ‘about’ data.  When I did this I noticed that some slugs (the identifier that will be used to reference an entry in a URL) are still coming out as old IDs because this is what is found in the <url> elements.  So for example the entry ‘snd00087693’ had the slug ‘snds165’.  After discussion we agreed that in such cases the slug should be the new ID, and I tweaked the import script and regenerated the data to make this the case.  I then updated one of our test front-ends to use the new API, updating the XSLT to ensure that the <meta> tag that now appears in the XML is not displayed and updating bibliographical references and cross references to use the new ‘refid’ attribute.  I also set up the entry page to display the ‘about’ text, although the actual placement and formatting of this text still needs to be decided upon.  I then moved on to the bibliographical data, but this is going to take a bit longer to sort out, as the previous bib info was imported from a CSV.

Also this week I read through and gave feedback on a data management plan for a proposal Marc Alexander is involved with and created a new version of the DMP for the new metaphor proposal that Wendy Anderson is involved with.  I also gave some advice to Gerry Carruthers about hosting some journal issues at Glasgow.

For the Books and Borrowing project I made some updates to the data of the 18th Century Borrowers pilot project, including fixing some issues with special characters, updating information relating to a few books and merging a couple of book records.  I also continued to upload the page images of the Edinburgh registers, finishing the upload of 16 registers and then generating the page records for all of the pages in the content management system.  I then started on the St Andrews registers.

I also participated in a Zoom call about GIS for the place-names of Iona project, where we discussed the sort of data and maps that would appear in the QGIS system and how this would relate to the online CMS, and also tweaked the Call for Papers page of the website.

Finally, I continued to make updates to the content management systems for the Comparative Kingship project, adding in Irish versions of the classifications and some of the labels, changing some parishes, adding in the languages that are needed for the Irish system and removing the unnecessary place-names that were imported from the GB1900 dataset.  These are things like ‘F.P.’ for ‘footpath’.  A total of 2,276 names, with their parish references, historical forms and links to the OS source were deleted by a little script I wrote for the purpose.  I think I’m up to date with this project for the moment, so next week I intend to continue with the DSL bibliographical data import and to return to working on the Anglo-Norman Dictionary.

Week Beginning 12 April 2021

I’d taken Monday and Thursday off this week to cover some of the school Easter holidays, and I also lost some of Friday as I’d arranged to travel through to the University to pick up some equipment that had been ordered for me.  So I probably only had about two and a half days of actual work this week, which I mostly spent continuing to develop the content management systems for the new Comparative Kingship place-names project.  I created user accounts to enable members of the project team to access the Scottish CMS that I completed last week, and completed work on the 10,000 or so place-names I’d imported from the GB1900 data, setting up a ‘source’ for the map used by this project (OS 6 inch 2nd edition), generating a historical form for each of the names and associating each historical form with the source.  This will mean that the team will be able to make changes to the head names and still have a record of the form that appeared in the GB1900 data.

I then began work on the Irish CMS, which required a number of changes to be made.  This included importing more than 200 parishes across several counties from a spreadsheet, updating the fields previously marked as Scottish Gaelic to Irish and generating new fields for recording ‘Townland’ in English and Irish.  ‘Townland’ also had to be added to the classification codes and a further multi-select option similar to parish needed to be added for ‘Barony’.  OS map names ‘Landranger’ and ‘Explorer’ needed to be changed too, in both the main place-name record and in the sources.

The biggest change, however, was to the location system as Ireland has a different grid reference system to the UK.  A feature of my CMS is that latitude, longitude and altitude are generated automatically from a supplied grid reference, and in order to retain this functionality for the Irish CMS I needed to figure out a method of working with Irish grid references.  In addition, the project team also wanted to store another location coordinate system, the Irish Transverse Mercator (ITM) system, and wanted not only this to be automatically generated from the grid reference, but to be able to supply the ITM field and have all other location fields (including the grid reference) populate automatically.  This required some research to see if there was a tool or online service that I could incorporate into my system.

I discovered that Ordnance Survey Ireland has a tool to convert coordinates here https://gnss.osi.ie/new-converter/ but it doesn’t include grid references (e.g. in the form F 83253 33765) and although there is a downloadable tool that can be used at the command line I really wanted an existing PHP or JavaScript script rather than having to run an executable on the server.  I also found this site: http://batlab.ucd.ie/gridref/ that can generate latitude and longitude from an Irish grid reference, and it also has an API that my scripts could connect to (e.g. http://batlab.ucd.ie/gridref/?reftype=NATGRID&refs=F8325333765) but it doesn’t include ITM coordinates, unfortunately.  Also, I don’t like to rely on third party sites as they can disappear without warning.  This site: https://irish.gridreferencefinder.com/bing.php allows you to enter a grid reference, latitude / longitude or ITM coordinates and view various types of coordinates on a map interface, but it’s not a service a script can easily connect to in order to generate data.

I then found this site: https://www.howtocreate.co.uk/php/gridref.php which offers a downloadable library in PHP or JavaScript that allows latitude, longitude and ITMs to be generated from Irish grid references (and back again, if required).  This is the solution I decided to add into the CMS, and after a certain amount of trial and error I managed to incorporate the JavaScript version of the library and update my CMS so that upon entering an Irish grid reference the latitude, longitude, altitude (via Google Maps) and ITM coordinates were automatically generated.  I also managed to set up the system so that the other fields were generated automatically if ITM coordinates were manually inputted.  I think all is now working as required with the two systems, and I’ll need to wait until the team accesses and uses the systems to see if further tweaks are required.
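In outline the form logic now behaves like the sketch below; the conversion function names are hypothetical stand-ins for the library’s actual functions (which I won’t reproduce here) and the field IDs are made up for illustration:

// Sketch of the automatic population: when an Irish grid reference is
// entered, derive latitude/longitude and ITM coordinates and fill in the
// other fields. 'irishGridToLatLng' and 'latLngToItm' are hypothetical
// stand-ins for the conversion library's functions; field IDs are made up.
document.getElementById('gridref').addEventListener('change', function () {
    var coords = irishGridToLatLng(this.value);   // e.g. 'F 83253 33765'
    var itm = latLngToItm(coords.lat, coords.lng);
    document.getElementById('latitude').value = coords.lat;
    document.getElementById('longitude').value = coords.lng;
    document.getElementById('itm_easting').value = itm.easting;
    document.getElementById('itm_northing').value = itm.northing;
    // Altitude is then looked up separately via the Google Maps API
});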

I also continued to work on the Books and Borrowing project this week.  I’d been in discussion with the Stirling University IT people about setting up a IIIF server for the project, and I heard this week that they have agreed to this, which is really great news.  Previously in order to allow page images to be zoomed and panned like a Google Map we had to generate and store tilesets of each page image at each zoom level.  It was taking hours to generate the tilesets for each book and days to upload the images to the server, and was requiring a phenomenal amount of storage space on the server.  For example, the tilesets for one of the Edinburgh volumes consisted of around 600,000 files and took up around 14GB of space.  This was in addition to the actual full-size images of the pages (about 250 at around 12MB each).

An IIIF server means we only need to store the full-size images of each page and the server dynamically chops up and serves sections of the image at the desired zoom level whenever anyone uses the zoom and pan image viewer.  It’s a much more efficient system.  However, it does mean I needed to update the ‘Page image’ page of the CMS to use the IIIF server, and it took a little time to get this working.  I’d decided to use the OpenLayers library to access the images, as this is what I’d previously been using for the image tilesets, and it has the ability to work with a IIIF server (see https://openlayers.org/en/latest/examples/iiif.html).  However, it did take some time to get this working, as the example and all of the documentation are fully dependent on the node.js environment, even though the library itself really doesn’t need to be.  I didn’t want to convert my CMS to using node.js and have yet another library to maintain when all I needed was a simple image viewer, so I had to rework the code example linked to above to strip out all of the node dependencies, module syntax and import statements.  For example ‘var options = new IIIFInfo(imageInfo).getTileSourceOptions()’ needed to be changed to ‘var options = new ol.format.IIIFInfo(imageInfo).getTileSourceOptions()’.  As none of this is documented anywhere on the OpenLayers website it took some time to get right, but I got there in the end and the CMS now has an OpenLayers based IIIF image viewer working successfully.
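Stripped of the module syntax, the viewer code ends up looking roughly like this (a sketch based on the OpenLayers IIIF example using the full ‘ol’ build; the info.json URL and target element ID are placeholders rather than the project’s actual values):

// Sketch of the IIIF viewer using the full OpenLayers build (no modules).
// The info.json URL and the 'pageImage' target div are placeholders.
var layer = new ol.layer.Tile();
var map = new ol.Map({ layers: [layer], target: 'pageImage' });

fetch('https://iiif.example.ac.uk/iiif/2/page0001.jpg/info.json')
    .then(function (response) { return response.json(); })
    .then(function (imageInfo) {
        // The 'ol.format.' and 'ol.source.' prefixes replace the import statements
        var options = new ol.format.IIIFInfo(imageInfo).getTileSourceOptions();
        options.zDirection = -1;
        var source = new ol.source.IIIF(options);
        layer.setSource(source);
        map.setView(new ol.View({
            resolutions: source.getTileGrid().getResolutions(),
            extent: source.getTileGrid().getExtent(),
            constrainOnlyCenter: true
        }));
        map.getView().fit(source.getTileGrid().getExtent());
    });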

Week Beginning 5th April 2021

This week began with Easter Monday, which was a holiday.  I’d also taken Tuesday and Thursday off to cover some of the Easter school holidays so it was a two-day working week for me.  I spent some of this time continuing to download and process images of library register books for the Books and Borrowing project, including 14 from St Andrews and several further books from Edinburgh.  I was also in communication with one of the people responsible for the Dictionary of the Scots Language’s new editor interface regarding the export of new data from this interface and importing it into the DSL’s website.  I was sent a ZIP file containing a sample of the data for SND and DOST, plus a sample of the bibliographical data, with some information on the structure of the files and some points for discussion.

I looked through all of the files and considered how I might be able to incorporate the data into the systems that I created for the DSL’s website.  I should be able to run the new dictionary XML files through my upload script with only a few minor modifications required.  It’s also really great that the bibliographies and cross references are getting sorted via the new Editor interface.  One point of discussion is that the new editor interface has generated new IDs for the entries, and the old IDs are not included.  I reckoned that it would be good if the old IDs were included in the XML as well, just in case we ever need to match up the current data with older datasets.  I did notice that the old IDs already appeared to be included in the <url> fields, but after discussion we decided that it would be safer to include them as an attribute of the <entry> tag, e.g. <entry oldid="snd848"> or something like that, which is what will happen when I receive the full dataset.

There are also new labels for entries, stating when and how the entry was prepared.  The actual labels are stored in a spreadsheet and a numerical ID appears in the XML to reference a row in the spreadsheet.  This method of dealing with labels seems fine with me – I can update my system to use the labels from the spreadsheet and display the relevant labels depending on the numerical codes in the entry XML.  I reckon it’s probably better to not store the actual labels in the XML as this saves space and makes it easier to change the label text, if required, as it’s only then stored in a single place.

The bibliographies are looking good in the sample data, but I pointed out that it might be handy to have a reference of the old bibliographical IDs in the XML, if that’s possible.  There were also spurious xmlns="" attributes in the new XML, but these shouldn’t pose any problems and I said that it’s ok to leave them in.  Once I receive the full dataset with some tweaks (e.g. the inclusion of old IDs) then I will do some further work on this.

I spent most of the rest of my available time working on the new Comparative Kingship place-names systems.  I completed work on the Scotland CMS, including adding in the required parishes and former parishes.  This means my place-name system has now been fully modernised and uses the Bootstrap framework throughout, which looks a lot better and works more effectively on all screen dimensions.

I also imported the data from GB1900 for the relevant parishes.  There are more than 10,000 names, although a lot of these could be trimmed out – lots of ‘F.P.’ for footpath etc.  It’s likely that the parishes listed are rather broader than the study will be.  All the names in and around St Andrews are in there, for example.  In order to generate altitude for each of the names imported from GB1900 I had to run a script I’d written that passes the latitude and longitude for each name in turn to Google Maps, which then returns elevation data.  I had to limit the frequency of submissions to one every few seconds otherwise Google blocks access, so it took rather a long time for the altitudes of more than 10,000 names to be gathered, but the process completed successfully.
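The lookup essentially does the equivalent of the following sketch, assuming the Google Maps Elevation web service (the API key, the list of names, the delay length and the data structure are placeholders; the real script ran server-side against the place-names database):

// Sketch of the elevation lookup: query the Google Maps Elevation API for
// each name in turn, pausing between requests so Google doesn't block us.
// The API key and the list of names are placeholders.
const names = [{ id: 1, lat: 56.3398, lng: -2.7967 } /* ... */];
const key = 'YOUR_API_KEY';

function sleep(ms) { return new Promise(function (r) { setTimeout(r, ms); }); }

(async function () {
    for (const name of names) {
        const url = 'https://maps.googleapis.com/maps/api/elevation/json'
            + '?locations=' + name.lat + ',' + name.lng + '&key=' + key;
        const data = await (await fetch(url)).json();
        if (data.results && data.results.length) {
            console.log(name.id, Math.round(data.results[0].elevation) + 'm');
        }
        await sleep(3000); // a few seconds between requests
    }
})();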

Also this week I dealt with an issue with the SCOTS corpus, which had broken (the database had gone offline) and helped Raymond at Arts IT Support to investigate why the Anglo-Norman Dictionary server had been blocking uploads to the dictionary management system when thousands of files were added to the upload form.  It turns out that while the Glasgow IP address range was added into the whitelist the VPN’s IP address range wasn’t, which is why uploads were being blocked.

Next week I’m also taking a couple of days off to cover the Easter School holidays, and will no doubt continue with the DSL and Comparative Kingship projects then.