I returned to Glasgow for a more regular week of working from home, after spending a delightful time at my parents’ house in Yorkshire for the past two weeks. I continued to work on the Comparative Kingship front-ends this week. I fixed a couple of issues with the content management systems, such as ensuring that the option to limit the list of place-names by parish worked for historical parishes and fixing an issue whereby searching by sources was returning zero results. Last week I’d contacted Chris Fleet at NLS Maps to ask whether we might be able to incorporate a couple of maps of Ireland that they host into our resource, and Chris got back to me with a very helpful reply, giving us permission to use the maps and also pointing out some changes to the existing map layers that I could make.
I updated the attribution links on the site, and pointed the OS six-inch map links to the NLS’s hosting on AWS. I also updated these things on the other place-name resources I’ve created. We had previously been using a modern OS map layer hosted by the NLS, and Chris pointed out that a more up-to-date version could now be accessed directly from the OS website (see Chris’s blog post here: https://www.ordnancesurvey.co.uk/newsroom/blog/comparing-past-present-new-os-maps-api-layers). I followed the instructions and signed up for an OS API key, and it was then a fairly easy process to replace the OS layer with the new one. I did the same with the other place-name resources too, and it looks pretty good. See for example how it looks on a map showing place-names beginning with ‘B’ on the Berwickshire site: https://berwickshire-placenames.glasgow.ac.uk/place-names/?p=results&source=browse&reels_name=B*#13/55.7939/-2.2884/resultsTabs-0/code/tileOS//
With these changes and the Irish historical maps in place I continued to work on the Irish front-end. I added in the parish boundaries for all of the currently required parishes and also added in the three-letter acronyms that the researcher Nick Evans had created for each parish. These are needed to identify the parishes on the map, as full parish names would clutter things up too much. I then needed to manually position each of the acronyms on the map, and to do so I updated the Irish map to print the latitude and longitude of a point to the console whenever a mouse click is made. This made it very easy to grab the coordinates of an ideal location for each acronym.
There were a few issues with the parish boundaries, and Nick wondered whether the boundary shapefiles he was using might work better. I managed to open the parish boundary shapefile in QGIS, converted the boundary data to WGS84 (latitude / longitude) and then extracted the boundaries as a GeoJSON file that I can use with my system. I then replaced the previous parish boundaries with the ones from this dataset, but unfortunately something was not right with the positioning. The northern Irish ones appear to be too far north and east, with the boundary for BNT extending into the sea rather than following the coast and ARM not even including the town of Armoy, as the following screenshot demonstrates:
In QGIS I needed to change the coordinate reference system from TM65 / Irish Grid to WGS84 to give me latitude and longitude values, and I wondered whether this process had caused the error. I therefore loaded the parish data into QGIS again and added an OpenStreetMap base map to it, and the issue with the positioning is still apparent in the original data, as you can see from the following QGIS screenshot:
I can’t quite tell if the same problem exists with the southern parishes. I’d positioned the acronyms in the middle of the parishes and they mostly still seem to be in the middle, which suggests these boundaries may be ok, although I’m not sure how some could be wrong while others are correct as everything is joined together. After consultation with Nick I reverted to the original boundaries, but kept a copy of the other ones in case we want to reinstate them in future.
Also this week I investigated a strange issue with the Anglo-Norman Dictionary, whereby a quick search for ‘resoler’ brings back an ‘autocomplete’ match, but then finds zero results if you click on it. ‘Resoler’ is a cross-reference entry and works in the ‘browse’ option too. It seemed very strange that the redirect from the ‘browse’ would work, and also that a quick search for ‘resolut’, which is another variant of ‘resoudre’, was working. It turns out that it’s an issue with the XML for the entry for ‘resoudre’. It lists ‘resolut’ as a variant, but does not include ‘resoler’, as you can see:
The search uses the variants / deviants from the XML to figure out which main entry to load from a cross reference. As ‘resoler’ is not present the system doesn’t know what entry ‘resoler’ refers to and therefore displays no results. I pointed this out to the editor, who changed the XML to add in the missing variant, which fixed the issue.
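In outline, the lookup works like this (a simplified sketch with invented names, not the AND’s actual code: the variant forms harvested from each entry’s XML are the only route from a cross-reference form back to its main entry):

```python
# Variant forms parsed from each entry's XML, keyed by main entry slug.
# 'resoler' was missing from the 'resoudre' list, which is the bug.
variants = {
    "resoudre": ["resoudre", "resolut"],
}

# Invert into a form -> main-entry lookup table
lookup = {form: slug for slug, forms in variants.items() for form in forms}

def resolve(form):
    """Return the main entry for a searched form, or None (zero results)."""
    return lookup.get(form)
```

With the XML corrected to include ‘resoler’ as a variant, the same lookup resolves it to ‘resoudre’ and the search works.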
Also this week I responded to some feedback on the Data Management Plan for Kirsteen’s project, which took a little time to compile, and spoke to Jennifer Smith about her upcoming follow-on project for SCOSYA, which begins in September and I’ll be heavily involved with. I also had a chat with Rhona about the ancient DSL server that we should now be able to decommission.
Finally, Gerry Carruthers sent me some further files relating to the International Journal of Scottish Theatre, which he is hoping we will be able to host an archive of at Glasgow. The files consisted of a database dump, which I imported into a local database and examined. It mostly consists of tables used to manage some sort of editorial system and doesn’t seem to contain the full text of the articles. Some of the information contained in it may be useful, though – e.g. it stores information about article titles, authors, the issues articles appear in, the original PDF filenames for each article etc.
In addition, the full text of the articles is available as both PDF and HTML in the folder ‘1>articles’. Each article has a numbered folder (e.g. 109) that contains two folders: ‘public’ and ‘submission’. ‘public’ contains the PDF version of the article. ‘submission’ contains two further folders: ‘copyedit’ and ‘layout’. ‘copyedit’ contains an HTML version of the article while ‘layout’ contains a further PDF version. It would be possible to use each HTML version as a basis for a WordPress version of the article. However, some things need to be considered:
Does the HTML version always represent the final published version of the article? The fact that it exists in folders labelled ‘submission’ and ‘copyedit’ and not ‘public’ suggests that the HTML version is likely to be a work in progress version and editorial changes may have been made to the PDF in the ‘public’ folder that are not present in the HTML version. Also, there are sometimes multiple HTML versions of the article. E.g. in the folder ‘1>articles>154>submission>copyedit’ there are two HTML files: ‘164-523-1-CE.htm’ and ‘164-523-2-CE.htm’. These both contain the full text of the article but have different formatting (and may have differences in the content, but I haven’t checked this).
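To get a sense of the scale of the migration, the folder layout described above can be walked with a short script. This is a hedged sketch: the root folder name and file extensions are taken from the listing above, and the real structure may vary from article to article (as article 138, with only Word files, already shows).

```python
from pathlib import Path

def article_files(root):
    """Map each numbered article folder to its public PDF(s) and any
    copyedited HTML versions, following the layout described above."""
    out = {}
    for art in sorted(Path(root).iterdir()):
        if not art.is_dir():
            continue
        out[art.name] = {
            "pdf": sorted(p.name for p in art.glob("public/*.pdf")),
            "html": sorted(p.name
                           for p in art.glob("submission/copyedit/*.htm*")),
        }
    return out
```

A report like this would also flag up articles with multiple HTML versions (such as 154) or none at all, which is exactly what needs checking before deciding between manual migration and a script.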
After looking at the source of the HTML versions I realised these have been auto-generated from MS Word. Word generates really messy, verbose HTML with lots of unnecessary tags, so I wanted to see what would happen if I copied and pasted it into WordPress. My initial experiment was mostly a success, but WordPress treats line breaks in the pasted file as actual line breaks, meaning the text didn’t display as it should. What I needed to do in my text editor was find and replace all line break characters (\r and \n) with spaces. I also had to make sure I only copied the contents within the HTML <body> tag rather than the whole text of the file. After that the process worked quite well.
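Those two clean-up steps (keeping only the <body> contents and collapsing line breaks to spaces) can be scripted rather than done by hand in a text editor. A rough sketch:

```python
import re

def clean_word_html(raw):
    """Keep only the <body> contents of a Word-generated HTML file and
    collapse line breaks to spaces so WordPress does not render them as
    literal breaks. A regex is fine for Word's predictable output here,
    though it is not a general-purpose HTML parser."""
    m = re.search(r"<body[^>]*>(.*?)</body>", raw, re.S | re.I)
    body = m.group(1) if m else raw
    return re.sub(r"[\r\n]+", " ", body).strip()
```

Run over a whole folder of files, this would do the bulk of the preparation automatically, leaving only images and formatting oddities for proofreading.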
However, there are other issues with the dataset. For example, article 138 only has Word files rather than HTML or PDF files and article 142 has images in it, and these are broken in the HTML version of the article. Any images in articles will probably have to be manually added in during proofreading. We’ll need to consider whether we’ll have to get someone to manually migrate the data, or whether I can write a script that will handle the bulk of the process.
I had my second vaccination jab on Wednesday this week, which thankfully didn’t hit me as hard as the first one did. I still felt rather groggy for a couple of days, though. Next week I’m on holiday again, this time heading to the Kintyre peninsula to a cottage with no internet or mobile signal, so I’ll be unreachable until the week after next.
This was my second and final week staying at my parents’ house in Yorkshire, where I’m working a total of four days over the two weeks. This week I had an email conversation with Eleanor Lawson about her STAR project, which will be starting very shortly. We discussed the online presence for the project, which will be split between a new section on the Seeing Speech website and an entirely new website, the project’s data and workflows and my role over the 24 months of the project. I also created a script to batch process some of the Edinburgh registers for the Books and Borrowing project. The page images are double spreads and had been given a number for both the recto and the verso (e.g. 1-2, 3-4), but the student registers only ever use the verso page. I was therefore asked to write a script to renumber all of these (e.g. 1-2 becomes 1, 3-4 becomes 2), which I created and executed on a test version of the site before applying to the live data.
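The renumbering itself boils down to a simple mapping from the spread number to a verso-only sequence number. A minimal sketch (the real script also updated the database records and image filenames):

```python
def renumber(spread):
    """Convert a double-spread page number to its verso-only sequence
    number: '1-2' -> 1, '3-4' -> 2, '5-6' -> 3, and so on."""
    first = int(spread.split("-")[0])
    return (first + 1) // 2
```

Running it on a test version of the site first, as described above, is a sensible precaution since a renumbering like this is hard to reverse once applied to live data.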
I also continued to make tweaks to the front-ends for the Comparative Kingship project. I fixed a bug with the Elements glossary of the Irish site, which was loading the Scottish version instead. I also contacted Chris Fleet at NLS Maps to enquire about using a couple of their historical Irish maps with the site. I fixed the ‘to top’ button in the CMSes, which wasn’t working; the buttons now actually scroll the page to the top as they should. I also fixed some issues relating to parish names no longer being unique in the system (e.g. the parish of Gartly is in the system twice due to it changing county at some point). This was causing issues with the browse option, as data was being grouped by parish name. Changing the grouping to the parish ID thankfully fixed the issue.
I also had a chat with Ann Fergusson at the DSL about multi-item bibliographical entries in the existing DSL data. These are being split into individual items, and a new ‘sldid’ attribute in the new data will be used to specify which item in the old entry the new entry corresponds to. We agreed that I would figure out a way to ensure that these IDs can be used in the new website once I receive the updated data.
My final task of the week was to investigate a problem with Rob Maslen’s City of Lost Books blog (https://thecityoflostbooks.glasgow.ac.uk/), which went offline this week and only displayed a ‘database error’. Usually when this happens it’s a problem with the MySQL database and it takes down all of the sites on the server, but this time it was only Rob’s site that was affected. I tried accessing the WP admin pages and this gave a different error about the database being corrupted. I needed to update the WordPress config file to add the line define('WP_ALLOW_REPAIR', true); and upon reloading the page WordPress attempted to fix the database. After doing so it stated that “The wp_options table is not okay. It is reporting the following error: Table is marked as crashed and last repair failed. WordPress will attempt to repair this table… Failed to repair the wp_options table. Error: Wrong block with wrong total length starting at 10356”. WordPress appeared to regenerate the table, as after this the table existed and was populated with data, and the blog went online again and could be logged into. I’ll have to remember this if it happens again in future.
Next week I’ll be back in Glasgow.
I’m down visiting my parents in Yorkshire for the first time in 18 months this week and next, working a total of four days over the two-week period. This week I mainly focussed on the Irish front-end for the Comparative Kingship place-names project, but I also added some updates to the Scotland system that I recently set up, such as making the Gaelic forms of the classification codes visible, adding options to browse Gaelic forms of place-names and historical forms to the ‘Browse’ facility and ensuring the other place-name and historical form browses only bring back English forms.
The Irish system is mostly identical to the Scottish system, but I did need to make some changes that took a bit of time to implement. As the place-names covered appear to be much more geographically spread out, I’ve allowed the map to be zoomed out further. I’ve also had to remove the modern OS and historical OS map layers as they don’t cover Ireland, so currently there are only three map layers available (the default view, satellite view and satellite view with labels). The Ordnance Survey of Ireland provides access to some historical map layers here: https://geohive.maps.arcgis.com/apps/webappviewer/index.html?id=9def898f708b47f19a8d8b7088a100c4 but their terms and conditions make it clear that you can’t use the maps on another online resource. However, there are a couple of Irish maps on the NLS website, the Bartholomew Quarter-Inch 1940 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=13&b=1) and the GSGS One-Inch 1941-3 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=14&b=1) and we could investigate integrating these as the NLS maps people have always been very helpful.
I also updated the map pop-ups to include the new Irish data fields, such as baronies, townlands and the different map types. Both English and Gaelic forms of things like parishes, baronies and classification codes are displayed throughout the site and on the Record page the ITM figures also appear. I updated the ‘Browse’ page so that it features baronies and the element glossary should work, but I haven’t tested it out as there is no data yet. The Advanced search features a selectable list of baronies and currently a simple textbox for townlands. I may change this to an autocomplete (whereby you start typing and townlands that include the letters appear in a selectable list), or I may leave it as it is, meaning multiple townlands can be searched for and wildcard characters can be used.
I managed to locate downloadable files containing parish boundaries for Ireland here: https://www.townlands.ie/page/download/ and have added these to the data for the two parishes that currently contain data. I haven’t added in any other parish boundaries yet, as there are over 200 parishes in our database and I don’t want to have to manually add in the boundaries for all of these if it won’t be necessary. Also, on the Scotland maps the three-letter acronym appears in the middle of each parish in order to identify it, but the Irish parishes don’t have TLAs so currently don’t have any labels. The full parish names would clutter up the map too much if I used them, so I’m not sure what we could do to label the parishes.
Also this week I responded to some feedback about the Data Management Plan for Kirsteen McCue’s proposal and created a slightly revised version. I also had an email conversation with Eleanor Lawson about her new speech project and how the web presence for the project may function. Finally, I made some tweaks to the Dictionary of the Scots Language, updating the layout of the ‘Contact’ page and updating the bibliography page on the new website so that URLs that use the old style IDs will continue to work. I also had a chat with Rhona Alcorn about some new search options that we are going to add in to the new site before it goes live, although probably not until the autumn.
After a lovely week’s holiday in the East Neuk of Fife last week I returned to a full week of work. I spent Monday catching up with emails and making some updates to two project websites. Firstly, for the Anglo-Norman Dictionary I updated the Textbase to add in the additional PDF texts. As these are not part of the main Textbase I created a separate page that listed and linked to them, and added a reference to the page to the introductory paragraph of the main Textbase page. Secondly, I made some further updates to the content management system for the Books and Borrowing project. There was a bug in the ‘clear borrower’ feature that resulted in the normalised occupation fields not getting cleared. This meant that unless a researcher noticed and manually removed the selected occupations it would be very easy to end up with occupations assigned to the wrong borrower. I implemented a fix for this bug, so all is well now. I had also been alerted to an issue with the library’s ‘books’ tab. When limiting the listed books to only those mentioned in a specific register, the list of associated borrowing records that appears in a popup was not limiting the records to those in the specified register. I fixed this, and made a comparable fix to the ‘borrowers’ tab as well.
During the week I also had an email conversation with Kirsteen McCue about her ‘Singing the Nation’ AHRC proposal, and made a new version of the Data Management Plan for her. I also investigated some anomalies with the stats for the Dictionary of the Scots Language website for Rhona Alcorn. Usage figures were down compared to last year, but it looks like last year may have been a blip caused by Covid, as figures for this year match up pretty well with the figures for the years before the dreaded 2020.
On Wednesday I was alerted to an issue with the Historical Thesaurus website, which appeared to be completely inaccessible. Further investigation revealed that other sites on the server were also all down. Rather strangely the Arts IT Support team could all access the sites without issue, and I realised that if I turned wifi off on my phone and accessed the site via mobile data I could access the site too. I had thought it was an issue with my ISP, but Marc Alexander reported that he used a different ISP and could also not access the sites. Marc pointed me in the direction of two very handy websites that are useful for checking whether websites are online or not. https://downforeveryoneorjustme.com checks the site and lets you know whether it’s working while https://www.uptrends.com/tools/uptime is a little more in-depth and checks whether the site is available from various locations across the globe. I’ll need to remember these in future.
The sites were still inaccessible on Thursday morning and after some Googling I found an answer from someone with a similar issue here: https://webmasters.stackexchange.com/questions/104092/why-is-my-site-showing-as-being-down-for-some-places-and-not-others. I asked Arts IT Support to check with central IT Services to see whether any DNS settings had been changed recently or if they knew what might be causing the issue, as it turned out to be a more widespread problem than I had thought, and was affecting sites on different servers too. A quick check of the sites linked to from this site showed that around 20 websites were inaccessible.
Thankfully by Thursday lunchtime the sites had begun to be accessible again, although not for everyone. I could access them, but Marc Alexander still couldn’t. By Friday morning all of the sites were fully accessible again from locations around the globe, and Arts IT Support got back to me with a cause for the issue. Apparently a server in the Boyd Orr that controls DNS records for the University had gone wrong and sent out garbled instructions to other DNS servers around the world, which knocked out access to our sites, even though the sites themselves were all working perfectly.
I spent the rest of the week working on the front-end for the Scotland data for the Comparative Kingship project, a task that I’d begun before I went away on my holiday. I managed to complete an initial version of the Scotland front-end, which involved taking the front-end from one of the existing place-names websites (e.g. https://kcb-placenames.glasgow.ac.uk/) and adapting it. I had to make a number of adaptations, such as ensuring that two parallel interfaces and APIs could function on one site (one for Scotland, one for Ireland), updating a lot of the site text, creating a new, improved menu system and updating the maps so that they defaulted to the new area of research. I also needed to add in facilities to search, return data for and display new Gaelic fields, e.g. Gaelic versions of place-names and historical forms. This meant updating the advanced search to add in a new ‘language’ choice option, to enable a user to limit their search to just English or Gaelic place-name forms or historical forms. This in turn meant updating the API to add in this additional option.
An additional complication came when I attempted to grab the parish boundary data, which for previous projects I’d successfully exported from the Scottish Government’s Spatial Data website (https://www.spatialdata.gov.scot/geonetwork/srv/eng/catalog.search#/metadata/c1d34a5d-28a7-4944-9892-196ca6b3be0c) via a handy API (https://maps.gov.scot/server/rest/services/ScotGov/AgricultureEnvironment/MapServer/1/query). However, the parish boundary data was not getting returned with latitude / longitude pairs marking the parish shape, but instead used esriMeters. I found someone else who wanted to convert esriMeters into lat/lng (https://gis.stackexchange.com/questions/54534/how-can-i-convert-esrimeters-to-lat-lng) and one of the responses was that with an ArcGIS service (which the above API appears to be) you should be able to set the ‘output spatial reference’, with the code 4326 being used for WGS84, which would give lat/lng values. The API form does indeed have an ‘Output Spatial Reference’ field, but unfortunately it doesn’t seem to do anything. I did lots of further Googling and tried countless different ways of entering the code, but nothing changed the output.
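For reference, `outSR` is the documented parameter name for the output spatial reference on an ArcGIS REST query endpoint, and newer servers also accept `f=geojson` for GeoJSON output. A sketch of building such a request (whether this particular service honours these parameters is another matter, as it didn’t seem to for me; worth checking the raw request URL rather than the form):

```python
from urllib.parse import urlencode

BASE = ("https://maps.gov.scot/server/rest/services/ScotGov/"
        "AgricultureEnvironment/MapServer/1/query")

params = {
    "where": "1=1",      # no filter: return all parishes
    "outFields": "*",
    "outSR": 4326,       # ask for WGS84 lat/lng geometry
    "f": "geojson",      # GeoJSON output, where the server supports it
}
url = BASE + "?" + urlencode(params)
```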
Eventually I gave up and tried an alternative approach. The site also provides the parish data as an ESRI Shapefile (https://maps.gov.scot/ATOM/shapefiles/SG_AgriculturalParishes_2016.zip) and I wondered whether I could plug this into a desktop GIS package and use it to migrate the coordinates to lat/lng. I installed the free GIS package QGIS (https://www.qgis.org/en/site/forusers/download.html) and after opening it went to the ‘Layer’ menu, selected ‘Add Layer’, then ‘Add Vector Layer’, then selected the zip file and pressed ‘add’, at which point all of the parish data loaded in, allowing me to select a parish and view the details for it. What I then needed to do was to find a means of changing the spatial reference and saving a GeoJSON file. After much trial and error I discovered that in the ‘Layer’ menu there is a ‘Save as’ option. This allowed me to specify the output format (GeoJSON) and change the ‘CRS’, which is the ‘Coordinate Reference System’. In the drop-down list I located EPSG:4326 / WGS84 and selected it. I then specified a filename (the folder defaults to a Windows system folder and needs to be updated too) and pressed ‘OK’ and after a long wait the GeoJSON file was generated, with latitude and longitude values for all parishes. Phew! It was quite a relief to get this working.
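Once a GeoJSON file like this exists, pulling out only the parishes a project needs is straightforward. A sketch, with the caveat that the property key holding the parish name is an assumption here and depends on the shapefile’s attribute table:

```python
import json

def parish_boundaries(geojson, wanted):
    """Return only the parish features whose name is in `wanted`.
    The 'name' property key is illustrative; check the attribute
    table of the source shapefile for the real field name."""
    return [f for f in geojson["features"]
            if f["properties"].get("name") in wanted]

# Typical usage:
# with open("parishes.geojson") as f:
#     data = json.load(f)
# needed = parish_boundaries(data, {"Gartly", "Rhynie"})
```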
With access to a geoJSON file containing parishes with lat/lng pairings I could then find and import the parishes that we needed for the current project, of which there were 28. It took a bit of time to grab all of these, and I then needed to figure out where I wanted the three-letter acronyms for each parish to be displayed, as well, which I worked out using the National Library of Scotland’s parish boundaries map (https://maps.nls.uk/geo/boundaries/), which helpfully displays lat/lng coordinates for your cursor position in the bottom right. With all of the parish boundary data in place the infrastructure for the Scotland front-end is now more or less complete and I await feedback from the project team. I will begin work on the Ireland section next, which will take quite some work as the data fields are quite different. I’m only going to be working a total of four days over the next two weeks (probably as half-days) so my reports for the next couple of weeks are likely to be a little shorter!
This was a four-day week for me as I’m off on Friday and will be off all of next week too. A big thing I ticked off my ‘to do’ list this week was completing work on the ‘Browse’ facility for the Anglo-Norman Textbase, featuring each text on its own continuous page rather than split into sometimes hundreds of individual pages. I finished updating the way footnotes work, and they are now renumbered starting at 1 on each page of each text no matter what format they originally had. All of the issues I’d noted about footnote numbers in my previous couple of blog posts have now been addressed (e.g. numbering out of sequence, numbers getting erroneously repeated).
With the footnotes in place I then went through each of the 77 texts to check their layout, which took quite some time but also raised a few issues that needed to be fixed. The biggest thing was that I needed to regenerate the page number data (used in the ‘jump to page’ feature) as I realised the previously generated data was including all <pb> tags, but some of the texts such as ‘indentures’ use <pb> to mean something else. For example, <pb ed="MS" n="Dorse"/> is not an actual page break and there are numerous such occurrences throughout the text, resulting in lots of ‘Dorse’ options in the ‘jump to page’ list. Instead I limited the page breaks to just those that have ed="base" in them, e.g. <pb n="49" ed="base"/>, and this seems to have done the trick.
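The filtering rule can be sketched in a few lines (a hedged illustration of the logic, not the project’s actual code, which works over the full TEI XML):

```python
import re

def base_page_breaks(xml):
    """Collect the n= values of only those <pb> tags that mark real
    page breaks (ed="base"), skipping e.g. <pb ed="MS" n="Dorse"/>."""
    pages = []
    for tag in re.finditer(r"<pb\b[^>]*>", xml):
        t = tag.group(0)
        if 'ed="base"' in t:
            n = re.search(r'n="([^"]+)"', t)
            if n:
                pages.append(n.group(1))
    return pages
```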
I also noticed some issues with paragraph and table tags in footnotes causing the notes to display in the wrong place or display only partially, and the ‘dorse’ issue was also resulting in footnotes getting added to the wrong page sometimes. Thankfully I managed to fix these issues and so as far as I can tell that’s the ‘browse’ facility of the Textbase complete. The editors don’t want to launch the Textbase until the search facilities have also been developed, so it’s going to be a while until they’re actually available, what with summer holidays and commitments to other projects.
Also this week I continued to work on the Books and Borrowing project, having an email conversation with the digitisers at the NLS about file formats and methods of transferring files, and making further updates to the CMS to add features and make things run quicker. I managed to reduce the number of database calls on the ‘Books’ tab in the library view again, which should mean the page loads faster. Previously all book holding records were returned and then a separate query was executed for each to count the number of borrowings, whereas I’ve now nested the count query in the initial query. So for St Andrews with its 7471 books this has cut out 7471 individual queries.
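The change can be illustrated with a self-contained SQLite sketch (table and column names are invented for the example, not the project’s actual schema): instead of one COUNT query per book, a correlated subquery returns all the counts in a single query.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE holdings   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE borrowings (id INTEGER PRIMARY KEY, holding_id INTEGER);
    INSERT INTO holdings VALUES (1, 'Universal History'), (2, 'Sermons');
    INSERT INTO borrowings (holding_id) VALUES (1), (1), (2);
""")

# One correlated subquery replaces a separate COUNT query per book
rows = con.execute("""
    SELECT h.id, h.title,
           (SELECT COUNT(*) FROM borrowings b
             WHERE b.holding_id = h.id) AS borrowed
      FROM holdings h
     ORDER BY h.id
""").fetchall()
```

A register filter can be applied inside the subquery’s WHERE clause too, which (as noted below) the nested counts also need to respect.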
I’d realised that the ‘borrowing records’ count column in this ‘Books’ table isn’t actually a count of borrowing records at all, but a count of the number of book items that have been borrowed for the book holding. I figured out a way to return a count of borrowing records instead, and I replaced the old way with the new one, so the ‘Borrowing Records’ column now does what it should do. This means the numbers listed have changed, e.g. ‘Universal History’ now has 177 borrowing records rather than 269 and is no longer the most borrowed book holding at St Andrews. I also changed the popup so that each borrowing record only appears once (e.g. David Gregory on 1748-6-7 now only has one borrowing record listed). I added a further ‘Total borrowed items’ column to hold the information that was previously in the ‘Borrowing Records’ column, and it’s possible to order the table by this column too. I also noticed that I’d accidentally removed columns displaying additional fields from the table, so I have reinstated these. For St Andrews this means the ‘Classmark’ column is now back in the table. I also realised that my new nested count queries were not limiting their counts when a specific register was selected, so I updated them to take this into consideration too.
I spent a lot of this week continuing with the Anglo-Norman Dictionary, including making some changes to the proofreader feature I created recently. I tweaked the output of this so that there is now a space between the siglum and ‘MS’, ‘edgloss’ now has brackets, and there is now a blank paragraph before the ‘summary’ section and also before the ‘cognate refs’ section to split things up a bit. I also added some characters (~~) before and after the ‘summary’ section to help split things up, and added extra spaces before and after sense numbers, and square brackets around them (because background styles, which give the round, black circles, are not carried over into Word when the content is copied). I also added more spaces around the labels, added an extra line break before locutions and made the locution phrase appear in bold.
I also spent some time investigating some issues with the data, for example a meaning was not getting displayed in the summary section of https://anglo-norman.net/entry/chaucer_3 because the part of speech labels didn’t quite match up (one was ‘subst.’, the other was ‘sbst.’) and updated the entry display so that the ‘form section’ at the top of an entry gets displayed even if there is no ‘cognate refs’ section. My code repositions the ‘formSection’ so it appears before ‘cognateRefs’ and as it was not finding this section it wasn’t repositioning the forms anywhere – instead they just disappeared. I therefore updated the code to ensure that the forms will only be repositioned if the ‘cognateRefs’ section is present, and this has fixed the matter.
I also responded to a request for data from a researcher at Humboldt-Universität zu Berlin who wanted information on entries that featured specific grammatical labels. As of yet the advanced search does not include a part of speech search, but I could generate the necessary data from the underlying database. I also ran a few queries to update further batches of bad dates in the system.
My initial version allowed an editor to add Text and MS dates using the input boxes and then by pressing the ‘Generate XML’ button the ‘XML’ box is populated and the date as it would be displayed on the site is also displayed. I amalgamated the ‘proof’ and ‘Build XML’ options from the old DMS as it seemed more useful to just do both at the same time. There is also a ‘clear’ button that does what you’d expect it to do and a ‘log’ that displays feedback about the date. E.g. if the date doesn’t conform to the expected pattern (yyyy / yyyy-yyyy / yyyy-yy / yyyy-y) or one of the characters isn’t a number or the date after the dash is earlier than the date before the dash then a warning will be displayed here. The XML area is editable so if needs be the content can be manually tweaked. There is also a ‘Copy XML’ button to copy the contents of the XML area to the clipboard.
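The actual tool runs as JavaScript in the DMS, but the validation rule can be sketched like this (the expansion of short end dates is my reading of the yyyy-yy / yyyy-y forms described above):

```python
import re

# Accepts yyyy, yyyy-yyyy, yyyy-yy and yyyy-y
DATE_RE = re.compile(r"^(\d{4})(?:-(\d{1,2}|\d{4}))?$")

def check_date(value):
    """Return a warning string, or None if the date looks valid.
    Short end dates are expanded from the start date (1250-67 -> 1267,
    1250-3 -> 1253) before the order of the two dates is compared."""
    m = DATE_RE.match(value)
    if not m:
        return "date does not match the expected pattern"
    start, end = m.group(1), m.group(2)
    if end is not None:
        full_end = int(start[:4 - len(end)] + end)
        if full_end < int(start):
            return "the date after the dash is earlier than the date before it"
    return None
```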
Also this week I set up some new user accounts for the Books and Borrowing project, I gave Luca Guariento some feedback about an AHRC proposal, I had to deal with the server and database going down a few times and I added a new publication to the SCOSYA website.
Finally, I made some further tweaks to the Comparative Kingship content management systems for Scottish and Irish place-names. When I set up the two systems I’d forgotten to add the x-refs section into the form. The code was all there to handle them, but the section wasn’t appearing. I therefore updated both Scotland and Ireland so x-refs now appear. I’d also noticed that some of the autogenerated lists that appear when you type into boxes in the Ireland site (e.g. xrefs) were pointing to the Scotland database and therefore bringing back the wrong data, and I fixed this too.
I also added all of the sources from the Kirkcudbrightshire system to the Scotland CMS and replaced the Scotland elements database with the one from KCB as well, which required me to check the elements already associated with names to ensure they pointed to the same data. Thankfully all did except the newly added name ‘Rhynie’, whose ID ended up referencing an entirely different element in the KCB database, but I fixed this. I also fixed a bug in the name and element deletion code that was preventing things from being deleted.
I headed into the University for the first time this year on Wednesday this week to collect a new iPad that I’d ordered and to get some files from my office. It was great to see the old place again, but it did take quite a chunk out of my day to travel there and back, especially as I’m still home-schooling either a morning or an afternoon each day at the moment too.
As with last week, I mainly divided my time this week between the Dictionary of the Scots Language, the Anglo-Norman Dictionary and the Books and Borrowing project, with a few other bits and bobs added in as well. For the DSL I retrieved the source code for my original Scots School Dictionary app from my office so we can host this somewhere on the DSL website. This is because the DSL have commissioned someone else to make a new School Dictionary app, which launched this week, but doesn’t include an ‘English to Scots’ feature as the old app does, so we’re going to make the old app available as a website for those people who miss the feature. I also made a few minor tweaks to the main DSL site, and then focussed on adding bibliography search facilities to the new version of the API, a task that I’d begun last week.
I created a new table for the bibliographical data that includes the various fields used for DOST (note, author, editor, date, longtitle etc) and a field for the XML data used for SND. I then created two further tables for searching, one that contains every author and editor name for each item (for DOST there may be different names in the author, editor, longauthor and longeditor fields while for SND there may be any number of <author> tags) and the other containing every title for each item (DOST may have different text in title and longtitle while SND items can have any number of <title> tags). These tables allow you to search for any variant author, editor or title and find the item.
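The table structure described above might be sketched like this. The field and table names here are illustrative rather than the actual DSL schema, and SQLite stands in for the real database purely for brevity: the point is that one lookup row per variant name (or title) lets any variant find the item.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bibliography (
    id INTEGER PRIMARY KEY,
    dictionary TEXT,          -- 'dost' or 'snd'
    display_author TEXT,
    display_title TEXT
);
-- One row per author/editor name variant and per title variant.
CREATE TABLE bib_names  (bib_id INTEGER, name  TEXT);
CREATE TABLE bib_titles (bib_id INTEGER, title TEXT);
""")

conn.execute("INSERT INTO bibliography VALUES (1, 'dost', 'Hamilton, J.', 'Catechism')")
conn.executemany("INSERT INTO bib_names VALUES (?, ?)",
                 [(1, "Hamilton, J."), (1, "Hamilton, John")])

# Searching for either variant of the name returns the same item.
row = conn.execute("""
    SELECT b.display_title FROM bibliography b
    JOIN bib_names n ON n.bib_id = b.id
    WHERE n.name LIKE ?""", ("%Hamilton, John%",)).fetchone()
print(row[0])  # Catechism
```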
I also created two additional fields in the bibliography table that contain the ‘display author’ and ‘display title’. These are the forms that get displayed in the search results before you click on an item to open the full bibliographical entry. I then updated the V4 API to add in facilities to search and retrieve the bibliographies. I didn’t have time to connect to this API and implement the search on the Sienna test site, which is something I hope to do next week, but the logic behind the search and display of bibliographies is all there. There is a predictive search that will be used to generate the autocomplete list, similar to how the live site currently works: you will be able to select whether your search is for authors, titles or both, and when you start typing in some text a list of matching items will appear. For example, typing in ‘ham’ for authors in both dictionaries will display all items containing ‘ham’, and when you select an item this will then perform a search for that specific text. You will then be able to click on an item to view the full bibliography. This is a bit different to how the live site currently works, as there, if you enter ‘ham’ and select (for example) ‘Hamilton, J.’ from the autocomplete list, you are taken directly to a page that lists all of the items for the author. However, we can’t do that any more as we no longer have unique identifiers that group bibliographical items by author. I may be able to do something similar with the page that comes up when you select an author, but this would have to rely on the name to group items together, and a name may not be unique.
For the AND I made some tweaks to the website, such as adding a link to the search page if you type some text into the ‘jump to entry’ option and no matching entries are found. I then spent the rest of my time continuing to develop the new content management system, specifically the pages for managing source texts. I finished work on this, adding in facilities to add, edit, browse and delete source texts from the database. I then migrated the DTD to the new site; this is referenced by the editors’ XML editor when they work on the entry XML files. The DTD on the old server referenced several lists of things that are used to populate drop-down lists of options in the XML editor. I migrated these too, making them dynamically generated from the underlying database rather than static lists, meaning that when (for example) new source texts are added to the CMS these will automatically become available when using the XML editor.
For the Books and Borrowing project I participated in the project’s Zoom call on Monday to discuss the project’s CMS and how to amalgamate the various duplicate author records that resulted from data uploads from different libraries. After the call I made some required changes to the CMS, such as making the editor’s notes fields visible by default again, and worked on the duplicate authors matching script to add in further outputs when comparing the author names with Levenshtein ratings of 1 and 2. I also reviewed some content that was sent to us from another library.
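The kind of comparison involved in the duplicate-author script can be sketched as follows. This is a minimal illustration rather than the project’s actual code: an edit distance is computed for each pair of names, and pairs within a ‘rating’ of 1 or 2 edits are flagged as candidate duplicates for an editor to review.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def near_matches(names, max_distance=2):
    """Flag pairs of names within max_distance edits as possible duplicates."""
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = levenshtein(a, b)
            if 0 < d <= max_distance:
                pairs.append((a, b, d))
    return pairs

# Hypothetical author names for illustration.
print(near_matches(["Defoe, Daniel", "Defoe, Danel", "Hume, David"]))
```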
Also this week I responded to an email from James Caudle in Scottish Literature about a potential project he’s setting up, made a couple of changes to the Scots Language Policy website, made some tweaks to the menu structure for the Scots Syntax Atlas project and gave some advice to a post-grad student who had contacted me about setting up a corpus.
I worked on many different projects this week, with most of my time being split between the Dictionary of the Scots Language, the Anglo-Norman Dictionary, the Books and Borrowing project and the Scots Language Policy project. For the DSL I began investigating adding the bibliographical data to the new API and developing bibliographical search facilities. Ann Ferguson had sent me spreadsheets containing the current bibliographical data for DOST and SND and I migrated this data into a database and began to think about how the data needs to be processed in order to be used on the website. At the moment links to bibliographies from SND entries are not appearing in the new version of the API, while DOST bibliographical links do appear but don’t lead anywhere. Fixing the latter should be fairly straightforward but the former looks to be a bit trickier.
For SND on the live site (using the original V1 API), it looks like the bibliographical links are stored in a database table and injected into the XML entries whenever an entry is displayed. A column in the table records the order in which each citation appears in the entry, and this is how the system knows which bibliographical ID to assign to which link. This raises some questions about what happens when an entry is edited: if the order of the citations in the XML is changed, or a new citation is added, then all of the links to the bibliographies will be out of sync. Plus, unless the database table is edited, no new bibliographical links will ever display. It is possible that the data in the bibliographical links table is already out of date, and we are going to need to find a way to add these bibliographical links into the actual XML entries rather than retaining the old system of storing them separately and injecting them each time the entry is requested. I emailed Ann for further discussion about these points. Also this week I made a few updates to the live DSL website, changing the logos that are used and making ‘Dictionary’ in the title plural.
For the AND this week I added in the missing academic articles that Geert had managed to track down and then began focusing on updating the source texts and working with the commentaries for the R data. The commentaries were sent to me in two Word files, and although we had hoped to work out a mechanism for automatically extracting these and adding them to their corresponding entries, it looks like this will be very difficult to achieve with any accuracy. I concluded that I could split the entries up in Geert’s document based on the ‘**’ characters between commentaries and possibly split Heather’s up based on blank lines. I could perhaps retain the formatting (bold, italic, superscript text etc.) and convert this to HTML, although even this would be tricky, time-consuming and error-prone. The commentaries include links to other entries in bold, and I might be able to add these links in automatically based on entries appearing in bold, but again this would be highly error-prone, as bold text is used for things other than entries, and sometimes the entry number follows a hash while at other times it’s superscript. It would also be difficult to automatically ascertain which entry a commentary belongs to, as there is some inconsistency here too – e.g. the commentary for ‘remuement’ is listed as ‘[remuement]??’ and there are other occasions where the entry doesn’t appear on its own on a line – e.g. ‘Retaillement xref with recelement’ and ‘Reverdure—Geert says to omit’. Then there are commentaries that are all crossed out, e.g. ‘resteot’. We decided that attempting to automatically process the commentaries would not be feasible and that the editors would instead add them to the entry XML files manually, adding the tags for bold, italic, superscript and other formatting as required. Geert added commentaries to two entries to see how this would work, and it worked very well.
For the source texts, we had originally discussed the editors editing these via a spreadsheet that I’d generated from the online data last year, but I decided it would be better if I just start work on the new online Dictionary Management System (DMS) and create the means of adding, listing and editing the source texts as the first thing that can be managed via the new DMS. This seemed preferable to establishing a new, temporary workflow that may take some time to set up and may end up not being used for very long. I therefore created the login and initial pages for the DMS (by repurposing earlier content management systems I’d created). I then set up database tables for holding the new source text data, which includes multiple potential items for each source and a range of new fields that the original source text data does not contain. With this in place I created the DMS pages for browsing the source texts and deleting them, and I’m midway through writing the scripts for editing existing and adding new source texts. I aim to have this finished next week.
For the Books and Borrowing project I continued to make refinements to the CMS. I reduced the number of books and borrowers displayed from 500 to 200 to speed up page loads, added in the day of the week that books were borrowed and returned (derived from the date information already in the system), removed tab characters from edition titles as these were causing some issues for the system, and replaced the editor’s notes rich text box with a plain text area to save space on the edit page. I also added a new field to the borrowing record that allows the editor to note when certain items are for display only and should otherwise be overlooked, for example when generating stats; this is to be used for duplicate lines and lines that are crossed out. I also had a look through the new sample data from Craigston that was sent to us this week.
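Deriving the day of the week from an already-stored date is a one-liner in most languages; a minimal Python sketch (with a made-up borrowing date for illustration):

```python
from datetime import date

def day_of_week(year, month, day):
    """Return the weekday name for a stored borrowing/return date."""
    return date(year, month, day).strftime("%A")

print(day_of_week(1790, 1, 25))  # Monday
```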
For the Scots Language Policy project I set up the project’s website, including the user interface, adding in fonts, plugins, initial page structure, site graphics, logos etc. Also this week I fixed an issue with song downloads on the Burns website (the plugin that controls the song downloads is very old and had broken; I needed to install a newer version and upgrade the song data for the downloads to work again). I also continued my email conversation with Rachel Fletcher about a project she’s putting together and created a user account to allow Simon Taylor to access the Ayr Placenames CMS.
This was my last week before the Christmas holidays, and it was a four-day week as I’d taken Friday off to use up some unspent holidays. Despite only being four days long it was a very hectic week, as I had lots of loose ends to tie up before the launch of the new Anglo-Norman Dictionary website on Wednesday. This included tweaking the appearance of ‘Edgloss’ tags to ensure they always have brackets (even if they don’t in the XML), updating the forms to add line breaks between parts of speech and updating the source texts pop-ups and source texts page to move the information about the DEAF website.
I also added in a lot of the ancillary page data, including the help text, various essays, the ‘history’ page, copyright and privacy pages, the memorial lectures and the multi-section ‘introduction to the AND’. I didn’t quite manage to get all of the links working in the latter and I’ll need to return to this next year. I also overhauled the homepage and footer, adding in the project’s Twitter feed, a new introduction and adding links to Twitter and Facebook to the footer.
I also identified and fixed an error with the label translations, which were sometimes displaying the wrong translation. My script that extracted the labels was failing to grab the sense ID for subsenses. This ID is only used to pull out the appropriate translation, but because of the failure the ID of the last main sense was being used instead. I therefore had to update my script and regenerate the translation data. I also updated the label search to add in citations as well as translations. This means the search results page can get very long as both labels and translations are applied at sense level, so we end up with every citation in a matching sense listed, but apparently this is what’s wanted.
I also fixed the display of ‘YBB’ sources, which for some unknown reason are handled differently to all other sources in the system and fixed the issue with deviant forms and their references and parts of speech.
On Wednesday we made the site live, replacing the old site with the new one, which you can now access here: https://anglo-norman.net/. It wasn’t entirely straightforward to get the DNS update working, but we got there in the end, and after making some tweaks to paths and adding in Google Analytics the site was ready to use, which is quite a relief. There is still a lot of work to do on the site, but I’m very happy with the progress I’ve made with the site since I began the redevelopment in October.
Also this week I set up a new website for phase two of the ‘Editing Burns for the 21st Century’ project and upgraded all of the WordPress sites I manage to the most recent version. I also arranged a meeting with Jane Stuart-Smith to discuss a new project in the New Year, replied to Kirsteen McCue about a proposal she’s finishing off, replied to Simon Taylor about a new place-name project he wants me to be involved with and replied to Carolyn Jess-Cooke about a project of hers that will be starting next year.
That’s all for 2020. Here’s hoping 2021 is not going to be quite so crazy!
I took Friday off again this week as I needed to go and collect a new pair of glasses from my opticians in the West End, which is quite a trek from my house. Although I’d taken the day off I ended up working for about three hours, as on Thursday Fraser Dallachy emailed me to ask about the location of the semantically tagged EEBO dataset that we’d worked on a couple of years ago. I didn’t have this at home but I was fairly certain I had it on a computer in my office, so I decided to take the opportunity to pop in and locate the data. I managed to find a 10GB tar.gz file containing the data on my desktop PC, along with the unzipped contents (more than 25,000 files) in another folder. I’d taken an empty external hard drive with me and began the process of copying the data, which took hours. I’d also remembered that I’d developed a website where the tagged data could be searched and that this was on the old Historical Thesaurus server, but unfortunately it no longer seemed to be accessible. I also couldn’t find the code or data for it on my desktop PC, but I remembered that I’d previously set up one of the four old desktop PCs I have sitting in my office as a server and that the system was running on this. It took me a while to get the old PC connected and working, but I managed to get it to boot up. It didn’t have a GUI installed so everything needed to be done at the command line, but I located the code and the database. I had planned to copy this to a USB stick, but the server wasn’t recognising USB drives (in either NTFS or FAT format) so I couldn’t actually get the data off the machine. I therefore decided to install Ubuntu Linux on a bootable USB stick and to get the old machine to boot into this rather than run the operating system on the hard drive. Thankfully this worked and I could then access the PC’s hard drive from the GUI that ran from the USB stick.
I was able to locate the code and the data and copy them onto the external hard drive, which I then left somewhere that Fraser would be able to access it. Not a bad bit of work for a supposed holiday.
As with previous weeks, I split my time mostly between the Anglo-Norman Dictionary and the Dictionary of the Scots Language. For the AND I finally updated the user interface. I added in the AND logo and updated the colour schemes to reflect the colours used in the logo. I’m afraid the colours used in the logo seem to be straight out of a late 1990s website, so unfortunately the new interface has that sort of feel about it too. The header area now has a white background, as the logo needs a white background to work. The ‘quick search’ option is now better positioned and there is a new bar for the navigation buttons. The active navigation button and other site buttons are now the ‘A’ red, panels are generally the ‘N’ blue and the footer is the ‘D’ green. The main body is now slightly grey so that the entry area stands out from it. I replaced the header font (Cinzel) with Cormorant Garamond as this more closely resembles the font used in the logo.
The left-hand panel has been reworked so that entries are smaller and their dates are right-aligned. I also added stripes to make it easier to keep your eye on an entry and its date. The fixed header that appears when you scroll down a longer entry now features the AND logo. The ‘Top’ button that appears when you scroll down a long entry now appears to the right so it doesn’t interfere with the left-hand panel. The footer now only features the logos for Aberystwyth and the AHRC, and these appear on the right, with links to some pages on the left.
I have also updated the appearance of the ‘Try an Advanced Search’ button so it only appears on the ‘quick search’ results page (which is what should have happened originally). I also removed the semantic tags from the display; these were added to the XML at some point and still need to be edited out of it. I have also ticked a few more things off my ‘to do’ list, including replacing underscores with spaces in parts of speech and language tags and replacing ‘v.a.’ and ‘v.n.’ as requested. I also updated the autocomplete styles (when you type into the quick search box) so they fit in with the site a bit better.
I then began looking into reordering the citations in the entries so they appear in date order within their senses, but I remembered that Geert wanted some dates to be batch processed and realised that this should be attempted first. I had a conversation with Geert about this, but the information he sent wasn’t well structured enough to be used and it looks like the batch updating of dates will need to wait until after the launch. Instead I moved on to updating the source text pop-ups in the entry. These feature links to the DEAF website and a link to search the AND entries for all others that feature the source.
On the old site the DEAF links linked through to another page on the old site that included the DEAF text and then linked through to the DEAF website. I figured it would be better to cut out this middle stage and link directly through to DEAF. This meant figuring out which DEAF page should be linked to and formatting the link so their page jumps to the right place. I also added in a note about the link under it.
This was pretty straightforward, but the ‘AND Citations’ link was not. On the old site clicking this link ran a search that displayed the citations, and we had nothing comparable developed for the new site, so I needed to update the citation search to allow the user to search based on the sigla (source text). This in turn meant updating my citations table to add a field holding the citation siglum, regenerating the citations and citation search words, and then updating the API to allow a citation search to be limited by a siglum ID. I then updated the ‘Citations’ tab of the ‘Advanced Search’ page to add a new box for ‘citation siglum’. This is an autocomplete box: you type some text and a list of matching sigla is displayed, from which you can select one. This in turn meant updating the API to allow the sigla to be queried for this autocomplete. For example, type ‘a-n’ into the box and a list of all sigla containing this text is displayed. Select ‘A-N Falconry’ and you can then find all entries where this siglum appears. You can also combine this with citation text and date (although the latter won’t be much use).
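The siglum autocomplete behaviour described above can be sketched in a few lines. The list of sigla here is hypothetical (apart from ‘A-N Falconry’, which is mentioned above), and the real lookup happens as a query against the API rather than an in-memory list; the key point is that the typed text is matched anywhere in the siglum, not just at the start.

```python
SIGLA = ["A-N Falconry", "A-N Med", "Rot Parl", "YBB Ed I"]  # illustrative list

def suggest_sigla(query, sigla=SIGLA, limit=10):
    """Case-insensitive substring match, as in the autocomplete box."""
    q = query.lower()
    return [s for s in sigla if q in s.lower()][:limit]

print(suggest_sigla("a-n"))  # ['A-N Falconry', 'A-N Med']
```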
I’ve also tweaked the search results tab on the entry page so that the up and down buttons don’t appear if you’re at the top or bottom of the results, and I’ve ensured that if you’re looking at an entry towards the end of the results, a sufficient number of results before it are displayed. I’ve also ensured that the entry lemma and hom appear in the <title> of the web page (in the browser tab) so you can easily tell which tab contains which entry.
For the DSL I spent some time answering emails about a variety of issues. I also completed my work on the issue of accents in the search, updating the search forms so that any accented characters a user enters are converted to their non-accented versions before the search runs, ensuring that someone searching for ‘Privacé’ will find all instances of ‘privace’ in the full text. I also tweaked the wording of the search results to remove the ‘supplementary’ text, as all supplementary items have now either been amalgamated or turned into main entries. I also put in redirects from all of the URLs for the deleted child entries to their corresponding main entries. This was rather time-consuming, as I needed to go through each deleted child entry, get each of its URLs, get the URL of the corresponding main entry, and add these to a new database table. I then added a new endpoint to the V4 API that accepts a child URL, checks the database for a corresponding main URL and returns it, and updated the entry page so that the URL is passed to this new redirect-checking endpoint; if it matches a deleted item the page redirects to the proper URL.
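The accent-stripping step can be sketched in Python (a minimal illustration; the real conversion happens in the site’s search forms): decompose each character to Unicode NFD form and drop the combining marks, so ‘Privacé’ and ‘privace’ fold to the same search term.

```python
import unicodedata

def fold_accents(text):
    """Strip accents by NFD-decomposing and discarding combining marks."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(fold_accents("Privacé").lower())  # privace
```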
Also this week I had a conversation with Wendy Anderson about updates to the Mapping Metaphor website. I had thought these would just be some simple tweaks to the text of existing pages, but instead the site structure needs to be updated, which might prove to be tricky. I’m hoping to be able to find the time to do this next week.
Finally, I continued to work on the Burns Supper map, adding in the remaining filters. I also fixed a few dates, added in the introductory text and a ‘favicon’. I still need to work on the layout a bit, which I’ll hopefully do next week, but the bulk of the work for the map is now complete.