Week Beginning 26th July 2021

I returned to Glasgow for a more regular week of working from home, after spending a delightful time at my parents’ house in Yorkshire for the past two weeks.  I continued to work on the Comparative Kingship front-ends this week.  I fixed a couple of issues with the content management systems, such as ensuring that the option to limit the list of place-names by parish worked for historical parishes and fixing an issue whereby searching by sources was returning zero results.  Last week I’d contacted Chris Fleet at NLS Maps to ask whether we might be able to incorporate a couple of maps of Ireland that they host into our resource, and Chris got back to me with a very helpful reply, giving us permission to use the maps and also pointing out some changes to the existing map layers that I could make.

I updated the attribution links on the site and pointed the OS six-inch map links to the NLS’s hosting on AWS, and I made the same updates on the other place-name resources I’ve created.  We had previously been using a modern OS map layer hosted by the NLS, and Chris pointed out that a more up-to-date version can now be accessed directly from the OS website (see Chris’s blog post here: https://www.ordnancesurvey.co.uk/newsroom/blog/comparing-past-present-new-os-maps-api-layers).  I followed the instructions and signed up for an OS API key, and it was then a fairly easy process to replace the OS layer with the new one.  I did the same with the other place-name resources, and it looks pretty good.  See, for example, how it looks on a map showing place-names beginning with ‘B’ on the Berwickshire site: https://berwickshire-placenames.glasgow.ac.uk/place-names/?p=results&source=browse&reels_name=B*#13/55.7939/-2.2884/resultsTabs-0/code/tileOS//
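In Leaflet the swap essentially just means pointing a tile layer at the OS Maps API’s ZXY endpoint.  The following is a rough sketch rather than the exact code used on the sites – the API key, the ‘Light_3857’ style and the attribution text are placeholders, and the centre and zoom are simply taken from the Berwickshire example link above:

// Sketch of replacing the old OS layer with the OS Maps API raster tiles.
// The key and the 'Light_3857' style are placeholders, not necessarily
// what the live place-name sites use.
var osModern = L.tileLayer(
	'https://api.os.uk/maps/raster/v1/zxy/Light_3857/{z}/{x}/{y}.png?key=YOUR_OS_API_KEY',
	{
		maxZoom: 20,
		attribution: 'Contains OS data © Crown copyright and database rights 2021'
	}
);
var map = L.map('map', { center: [55.7939, -2.2884], zoom: 13, layers: [osModern] });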

With these changes and the Irish historical maps in place I continued to work on the Irish front-end.  I added in the parish boundaries for all of the currently required parishes, along with three-letter acronyms that the researcher Nick Evans had created for each parish.  These are needed to identify the parishes on the map, as full parish names would clutter things up too much.  I then needed to manually position each of the acronyms on the map, and to do so I updated the Irish map to print the latitude and longitude of a point to the console whenever a mouse click is made.  This made it very easy to grab the coordinates of an ideal location for each acronym.
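The Leaflet code for this is only a couple of lines – roughly the following, assuming the map object is called ‘map’:

// Print the coordinates of any clicked point to the browser console so the
// ideal position for each parish acronym can simply be copied from there.
map.on('click', function (e) {
	console.log(e.latlng.lat + ', ' + e.latlng.lng);
});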

There were a few issues with the parish boundaries, and Nick wondered whether the boundary shapefiles he was using might work better.  I managed to open the parish boundary shapefile in QGIS, converted the boundary data to WGS84 (latitude / longitude) and then extracted the boundaries as a GeoJSON file that I can use with my system.  I then replaced the previous parish boundaries with the ones from this dataset, but unfortunately something was not right with the positioning.  The northern Irish ones appear to be too far north and east, with the boundary for BNT extending into the sea rather than following the coast and ARM not even including the town of Armoy, as the following screenshot demonstrates:

In QGIS I needed to change the coordinate reference system from TM65 / Irish Grid to WGS84 to give me latitude and longitude values, and I wondered whether this process had caused the error.  I therefore loaded the parish data into QGIS again and added an OpenStreetMap base map to it, and the issue with the positioning was still apparent in the original data, as you can see from the following QGIS screenshot:

I can’t quite tell if the same problem exists with the southern parishes.  I’d positioned the acronyms in the middle of the parishes and they mostly still seem to be in the middle, which suggests these boundaries may be ok, although I’m not sure how some could be wrong while others are correct as everything is joined together.  After consultation with Nick I reverted to the original boundaries, but kept a copy of the other ones in case we want to reinstate them in future.
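For reference, pulling a GeoJSON file exported in this way into the Leaflet map only takes a few lines.  The sketch below is illustrative rather than the actual site code – the file name and styling are placeholders:

// Load the exported parish boundaries and draw them on the existing map.
// 'parish-boundaries.geojson' is a placeholder file name.
fetch('parish-boundaries.geojson')
	.then(function (response) { return response.json(); })
	.then(function (geojson) {
		L.geoJSON(geojson, { style: { color: '#3388ff', weight: 2, fill: false } }).addTo(map);
	});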

Also this week I investigated a strange issue with the Anglo-Norman Dictionary, whereby a quick search for ‘resoler’ brings back an ‘autocomplete’ match, but then finds zero results if you click on it.  ‘Resoler’ is a cross-reference entry and works via the ‘browse’ option too.  It seemed very strange that the redirect from the ‘browse’ would work, and also that a quick search for ‘resolut’, another variant of ‘resoudre’, was working.  It turned out to be an issue with the XML for the entry for ‘resoudre’, which lists ‘resolut’ as a variant but does not include ‘resoler’, as you can see:

<variant gram="imp.5">resolvez</variant>
<deviant gram="imp.5">resoylez</deviant>
<varref><reference><source siglum="Alchimie"><loc>380.5</loc></source></reference></varref>
<variant gram="p.p.">resolé</variant>
<variant>resolu</variant>
<variant>resolut</variant>
<newvargroup/>
<variant gram="p.p.pl.">resolous</variant>
<deviant gram="p.p.pl.">resouz</deviant>
<varref><reference><source siglum="Secr1"><loc>1524</loc></source></reference></varref>
<deviant>resus</deviant>
<varref><reference><source siglum="Alchimie"><loc>379.1</loc></source></reference></varref>

The search uses the variants / deviants from the XML to figure out which main entry to load from a cross-reference.  As ‘resoler’ is not present, the system doesn’t know which entry ‘resoler’ refers to and therefore displays no results.  I pointed this out to the editor, who updated the XML to add in the missing variant, which fixed the issue.
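As a rough illustration of the principle (this is not the actual dictionary code), the variant and deviant forms effectively act as keys in a lookup table that points back at the main entry, so a form that is missing from the XML can never be resolved:

// Build a lookup from variant / deviant forms to the main entry they belong to.
// The XML snippet and the 'resoudre' headword are taken from the example above.
var entryXml = '<variant>resolu</variant><variant>resolut</variant><variant gram="p.p.">resolé</variant>';

function extractForms(xml) {
	// Pull the text content out of every <variant> or <deviant> element.
	var forms = [];
	var pattern = /<(variant|deviant)[^>]*>([^<]+)<\/\1>/g;
	var match;
	while ((match = pattern.exec(xml)) !== null) {
		forms.push(match[2]);
	}
	return forms;
}

var lookup = {};
extractForms(entryXml).forEach(function (form) {
	lookup[form] = 'resoudre'; // each variant form points back at the main entry
});

console.log(lookup['resolut']); // 'resoudre' – the cross-reference resolves
console.log(lookup['resoler']); // undefined – missing variant, so zero results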

Also this week I responded to some feedback on the Data Management Plan for Kirsteen’s project, which took a little time to compile, and spoke to Jennifer Smith about her upcoming follow-on project for SCOSYA, which begins in September and which I’ll be heavily involved in.  I also had a chat with Rhona about the ancient DSL server that we should now be able to decommission.

Finally, Gerry Carruthers sent me some further files relating to the International Journal of Scottish Theatre, which he is hoping we will be able to host an archive of at Glasgow.  These consisted of a database dump, which I imported into a local database and had a look through.  It mostly consists of tables used to manage some sort of editorial system and doesn’t seem to contain the full text of the articles.  Some of the information contained in it may be useful, though – e.g. it stores article titles, authors, the issues articles appear in and the original PDF filename for each article.

In addition, the full text of the articles is available as both PDF and HTML in the folder ‘1>articles’.  Each article has a numbered folder (e.g. 109) that contains two folders: ‘public’ and ‘submission’.  ‘public’ contains the PDF version of the article, while ‘submission’ contains two further folders: ‘copyedit’ and ‘layout’.  ‘copyedit’ contains an HTML version of the article and ‘layout’ contains a further PDF version.  It would be possible to use each HTML version as a basis for a WordPress version of the article.  However, some things need to be considered:

Does the HTML version always represent the final published version of the article?  The fact that it exists in folders labelled ‘submission’ and ‘copyedit’ rather than ‘public’ suggests that the HTML version is likely to be a work-in-progress version, and editorial changes may have been made to the PDF in the ‘public’ folder that are not present in the HTML version.  Also, there are sometimes multiple HTML versions of an article.  For example, in the folder ‘1>articles>154>submission>copyedit’ there are two HTML files: ‘164-523-1-CE.htm’ and ‘164-523-2-CE.htm’.  These both contain the full text of the article but have different formatting (and may have differences in content, but I haven’t checked this).

After looking at the source of the HTML versions I realised these have been auto-generated from MS Word.  Word generates really messy, verbose HTML with lots of unnecessary tags, so I wanted to see what would happen if I copied and pasted it into WordPress.  My initial experiment was mostly a success, but WordPress treats line breaks in the pasted file as actual line breaks, meaning the text didn’t display as it should.  What I needed to do in my text editor was find and replace all line break characters (\r and \n) with spaces.  I also had to make sure I only copied the contents within the HTML <body> tag rather than the whole text of the file.  After that the process worked quite well.
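The same clean-up could be scripted rather than done by hand in a text editor.  The following is a rough sketch of that idea in Node.js – the input file name is just one of the copyedit files mentioned above, and a real version would need to handle the Word files’ character encoding properly:

var fs = require('fs');

// Read one of the Word-generated HTML files (the file name is an example).
var source = fs.readFileSync('164-523-1-CE.htm', 'utf8');

// Keep only the contents of the <body> element rather than the whole file.
var bodyMatch = source.match(/<body[^>]*>([\s\S]*)<\/body>/i);
var body = bodyMatch ? bodyMatch[1] : source;

// Replace carriage returns and line feeds with spaces so that WordPress
// doesn't treat them as hard line breaks when the markup is pasted in.
var cleaned = body.replace(/[\r\n]+/g, ' ');

fs.writeFileSync('164-523-1-CE-cleaned.htm', cleaned);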

However, there are other issues with the dataset.  For example, article 138 only has Word files rather than HTML or PDF files, and article 142 has images in it that are broken in the HTML version of the article.  Any images in articles will probably have to be added in manually during proofreading.  We’ll need to consider whether we’ll have to get someone to manually migrate the data, or whether I can write a script that will handle the bulk of the process.

I had my second vaccination jab on Wednesday this week, which thankfully didn’t hit me as hard as the first one did.  I still felt rather groggy for a couple of days, though.  Next week I’m on holiday again, this time heading to the Kintyre peninsula to a cottage with no internet or mobile signal, so I’ll be unreachable until the week after next.