Week Beginning 3rd May 2021

This was a four-day week due to May Day on Monday, and it was a week I spent almost exclusively on work for the Anglo-Norman Dictionary.  I started off by tackling the issue of sense numbers displaying when there was only one main sense in a part of speech for an entry, which meant that a ‘1’ appeared next to the sense unnecessarily.  Fixing this required some additional processing of the entry in JavaScript before display, which took a bit of time to implement, as I needed to iterate through the ‘entrySense’ elements using a jQuery ‘each’ loop and then look at the following ‘entrySense’ element to get its ‘n’ value.  I hadn’t used an ‘each’ loop in this way before and didn’t know quite how to reference the following element, but I finally figured out that you can use the iterator with an array in combination with the element selector: in this case var nxt = $('.dictionaryPanel > .entrySense')[i+1]; grabs the next ‘entrySense’ element.  With the code in place entries now display numbers in their nice round black circles in all places except for parts of speech that only have one sense, as you can see with the ‘v.refl.’ and ‘p.pr. as a.’ sections of this page: https://anglo-norman.net/entry/descendre.
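
In outline, the check works something like this (a minimal sketch rather than the production code: the ‘.dictionaryPanel > .entrySense’ selector comes from the snippet above, but the ‘data-n’ attribute and the ‘.senseNum’ badge class are assumed names for illustration):

```javascript
// Hide the number badge for any part of speech that has only one main sense.
// Assumes each .entrySense element carries its sense number in a data-n attribute
// and that the black-circle number badge inside it has the class .senseNum
// (both hypothetical names).
$('.dictionaryPanel > .entrySense').each(function(i, el) {
    var thisN = $(el).data('n');
    // Grab the following entrySense element (undefined if this is the last one)
    var nxt = $('.dictionaryPanel > .entrySense')[i + 1];
    var nxtN = (nxt !== undefined) ? $(nxt).data('n') : null;
    // A sense numbered 1 whose following sense is absent or restarts at 1 is the
    // only sense in its part of speech, so its number is unnecessary.
    if (thisN === 1 && (nxtN === null || nxtN === 1)) {
        $(el).find('.senseNum').first().hide();
    }
});
```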

I then spent some time investigating why the part of speech in the <senseInfo> element of senses sometimes used underscores and other times used spaces.  This discrepancy was messing up the numbering of senses, as this depends on the POS, with the number resetting to 1 when a new POS is encountered.  If the POS is sometimes recorded as ‘p.p._as_a.’ (for example) and other times as ‘p.p. as a.’ then the code thinks these are different parts of speech and resets the counter to 1.  I looked at the DTD, which sets the rules for creating or editing the XML files, and it uses the underscore form of POS.  However, this rule only applies to the ‘type’ attribute of the <pos> element and not to the ‘pos’ attribute of the <senseInfo> element.  After investigating, it turned out that the ‘pos’ attributes that the numbering system relies on are not manually added in by the editors, but are added by my scripts at the point of upload.  The reason I set up my script to add these is that the old systems also added them automatically during the conversion of the editors’ XML into the XML published via the old Dictionary Management System.  However, this old system refactored the POS, replacing underscores with spaces and thus storing two different formats of POS within the XML.  My upload scripts didn’t do this but instead kept things consistent, which meant that when an entry was edited to add a new sense the new sense was added with the underscore form of POS, while the existing senses still had the space form of POS.

There were two possible ways I could fix this: I could either write a script that regenerates the <senseInfo> pos for every sense and subsense in every entry, replacing all existing ‘pos’ values with the value of the preceding <pos type=""> (i.e. removing all old space forms of POS and ensuring all POS references were consistent); or I could adapt my upload script so that the assignment of the <senseInfo> pos treats the ‘underscore’ and ‘space’ versions as the same.  I decided on the former approach and wrote a script to first identify and then update all of the dictionary entries.

The script goes through each entry and finds all that have a <senseInfo> pos with a space in it.  There are 2,538 such entries.  I then adapted the script so that for each <senseInfo> in an entry all spaces are changed to underscores and the result is then compared with the preceding <pos> type.  I set the script to output content if there was a mismatch between the <senseInfo> pos and the <pos> type, because when I set the script to update, it will use the value from the <pos> type so as to ensure consistency.  The script identified 41 entries where there was a mismatch between the <senseInfo> pos and the preceding <pos> type.  These were often due to a question mark being added to the <senseInfo> pos, e.g. ‘a. as s. ?’ vs ‘a._as_s._’, but there were also some where the POS was completely different, e.g. ‘sbst. inf.’ and ‘v.n.’.  I spoke to the editor Geert about this and it turned out that these were due to a locution being moved in the XML without having its pos value updated.  Geert fixed these and I ran the update to bring all of the POS references into alignment.
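
In outline, the comparison at the heart of the script looks something like this (a JavaScript sketch for illustration only – the actual script runs server-side over the database, but the normalisation and mismatch check are the same):

```javascript
// Compare each <senseInfo> pos with the type of the preceding <pos> element,
// treating spaces and underscores as equivalent. entryXml is the entry's XML as a string.
function findPosMismatches(entryXml) {
    var doc = new DOMParser().parseFromString(entryXml, 'text/xml');
    var mismatches = [];
    var currentPosType = null;
    // Walk all elements in document order, remembering the most recent <pos> type
    doc.querySelectorAll('*').forEach(function(el) {
        if (el.tagName === 'pos') {
            currentPosType = el.getAttribute('type');
        } else if (el.tagName === 'senseInfo' && el.hasAttribute('pos')) {
            var normalised = el.getAttribute('pos').replace(/ /g, '_');
            if (currentPosType !== null && normalised !== currentPosType) {
                // The update step would overwrite the senseInfo pos with currentPosType
                mismatches.push({senseInfoPos: el.getAttribute('pos'), posType: currentPosType});
            }
        }
    });
    return mismatches;
}
```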

My final AND task was to look into some issues regarding the variant and deviant section of the entry (where alternative forms of the headword are listed).  Legiturs in this section were not getting displayed, plus there were several formatting issues that needed to be addressed, such as brackets not appearing in the right place and line breaks not working as they should.  This was a very difficult task to tackle as there is so much variety to the structure of this section, and the XML is not laid out in the most logical of manners; for example, references are not added as part of a <variant> or <deviant> tag but are added after the corresponding tag as a sibling <varref> element.  This really complicates navigating through the variants and deviants as there may be any number of varrefs at the same level.  However, I managed to address the issues with this section, ensuring the legiturs appeared, repositioning semi-colons outside of the <deviant> brackets, ensuring line breaks always occur when a new POS is encountered and don’t occur anywhere else, ensuring multiple occurrences of the same POS label don’t get displayed and fixing the issue with double closing brackets sometimes appearing.  It’s likely that there will be other issues with this section, as the content and formatting is so varied, but for now all the known issues are sorted.

The only other project I worked on this week was the Iona place-names project, for which I helped the RA Sofia with the formatting of this month’s ‘name of the month’ feature (https://iona-placenames.glasgow.ac.uk/names-of-the-month/).  Next week I’ll continue with the outstanding AND tasks, of which there are still several.

Week Beginning 26th April 2021

I continued with the import of new data for the Dictionary of the Scots Language this week.  Raymond at Arts IT Support had set up a new collection and imported the full-text search data into the Solr server, and I tested this out via the new front-end I’d configured to work with the new data source.  I then began working on the import of the bibliographical data, but noticed that the file exported from the DSL’s new editing system didn’t feature an attribute denoting which source dictionary each record is from.  We need this as the bibliography search allows users to limit their search to DOST or SND.  The new IDs all start with ‘bib’ no matter what the source is.  I had thought I could use the ‘oldid’ to extract the source (db = DOST, sb = SND) but I realised there are also composite records where the ‘oldid’ is something like ‘a200’.  In such cases I don’t think I have any data that I can use to distinguish between DOST and SND records.  The person in charge of exporting the data from the new editing system very helpfully agreed to add in a ‘source dictionary’ attribute to all bibliographical records and sent me an updated version of the XML file.  Whilst working with the data I realised that all of the composite records are DOST records anyway, so I didn’t need the ‘sourceDict’ attribute, but I think it’s better to have this explicitly as an attribute, as differentiating between dictionaries is important.

I imported all of the bibliographical records into the online system, including the composite ones as these are linked to from dictionary entries and are therefore needed, even though their individual parts are also found separately in the data.  However, I decided to exclude the composite records from the search facilities, otherwise we’d end up with duplicates in the search results.  I updated the API to use the new bibliography tables and I updated the new front-end so that bibliographical searches use the new data.  One thing that needs some further work is the display of individual bibliographies.  These are now generated from the bibliography XML via an XSLT whereas previously they were generated from a variety of different fields in the database.   The display doesn’t completely match up with the display on the live and Sienna versions of the bibliography pages and I’m not sure exactly how the editors would like entries to be displayed.  I’ll need further input from them on this matter, but the import of data from the new editing system has now been completed successfully.  I’d been documenting the process as I worked through it and I sent the documentation and all scripts I wrote to handle the workflow to the editors to be stored for future use.

I also worked on the Books and Borrowing project this week.  I received the last of the digitised images of borrowing registers from Edinburgh (other than one register which needs conservation work), and I uploaded these to the project’s content management system, creating all of the necessary page records.  We have a total of 9,992 page images as JPEG files from Edinburgh, totalling 105GB.  Thank goodness we managed to set up an IIIF server for the image files rather than having to generate and store image tilesets for each of these page images.  Also this week I uploaded the images for 14 borrowing registers from St Andrews and generated page records for each of these.

I had a further conversation with GIS expert Piet Gerrits for the Iona project and made a couple of tweaks to the Comparative Kingship content management systems, but other than that I spent the remainder of the week returning to the Anglo-Norman Dictionary, which I hadn’t worked on since before Easter.  To start with I went back through old emails and documents and wrote a new ‘to do’ list containing all of the outstanding tasks for the project, some 20 items of varying degrees of size and intricacy.  After some communication with the editors I began tackling some of the issues, beginning with the apparent disappearance of <note> tags from certain entries.

In the original editor’s XML (the XML as structured before being uploaded into the old DMS) there were ‘edGloss’ notes tagged as ‘<note type="edgloss" place="inline">’ that were migrated to <edGloss> elements during whatever processing happened within the old DMS.  However, there were also occasionally notes tagged simply as ‘<note place="inline">’ that didn’t get transformed and remained tagged as such.

I’m not entirely sure how or where, but at some point during my processing of the data these ‘<note place="inline">’ notes were lost.  It’s very strange, as the new DMS import script is based entirely on the scripts I wrote to process the old DMS XML entries, but I tested the DMS import by uploading the old DMS XML version of ‘poer_1’ to the new DMS and the ‘<note place="inline">’ notes were retained, yet in the live entry for ‘poer_1’ the <note> text is missing.

I searched the database for all entries where the DMS XML as exported from the old DMS system contains the text ‘<note place="inline">’ and there are 323 entries, which I added to a spreadsheet and sent to the editors.  It’s likely that the new XML for these entries will need to be manually corrected to reinstate the missing <note> elements.  Some entries (as with ‘poer_1’) have several of these.  I still have the old DMS XML for these, so it is at least possible to recover the missing tags.  I wish I could identify exactly when and how the tags were removed, but that would quite likely require many hours of investigation; I already spent a couple of hours trying to get to the bottom of the issue without success.

Moving on to a different issue, I changed the upload scripts so that the ‘n’ numbers are always fully regenerated automatically when a file is uploaded, as previously there were issues when a mixture of senses with and without ‘n’ numbers were included in an entry.  This means that any existing ‘n’ values are replaced, so it’s no longer possible to manually set the ‘n’ value.  Instead ‘n’ values for senses within a POS will always increment from 1 depending on the order they appear in the file, with ‘n’ being reset to 1 whenever a new POS is encountered.
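
The numbering rule itself is simple enough to sketch (illustrative only – the real work happens in the upload script against the parsed XML):

```javascript
// Given senses in file order, each with its part of speech, assign 'n' values that
// increment within a POS and reset to 1 whenever a new POS is encountered.
// Any existing 'n' value is simply overwritten.
function assignSenseNumbers(senses) {
    var currentPos = null;
    var n = 0;
    senses.forEach(function(sense) {
        if (sense.pos !== currentPos) {
            currentPos = sense.pos;
            n = 1;
        } else {
            n++;
        }
        sense.n = n;
    });
    return senses;
}

// Example: POS sequence v.a., v.a., v.n. produces n values 1, 2, 1
assignSenseNumbers([{pos: 'v.a.'}, {pos: 'v.a.'}, {pos: 'v.n.'}]);
```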

Main senses in locutions were not being assigned an ‘n’ on upload, and I changed this so that they are assigned an ‘n’ in exactly the same way as regular main senses.  I tested this with the ‘descendre’ entry and it worked, although I encountered an issue.  The final locution main sense (‘to descend to (by way of inheritance)’) had a POS of ‘sbst._inf.’ in its <senseInfo> whereas it should have been (based on the POS of the previous two senses) ‘sbst. inf.’.  The script was therefore considering this to be a new POS and gave the sense an ‘n’ of 1.  In my test file I updated the POS and re-uploaded the file, and the sense was assigned the correct ‘n’ value of 3, but we’ll need to investigate why a different form of POS was recorded for this sense.

I also updated the front-end so that locution main senses with an ‘n’ now have the ‘n’ displayed (e.g. https://anglo-norman.net/entry/descendre) and wrote a script that will automatically add missing ‘n’ attributes to all locution main senses in the system.  I haven’t run this on the live database yet as I need further feedback from the editors before I do.  As the week drew to a close I worked on a method to hide sense numbers in the front-end in cases where there is only one sense in a part of speech, but I didn’t manage to get this completed and will continue with it next week.

Week Beginning 19th April 2021

It was a return to a full five-day week this week, after taking some days off to cover the Easter school holidays for the previous two weeks.  The biggest task I tackled this week was to import the data from the Dictionary of the Scots Language’s new editing system into my online system.  I’d received a sample of the data from the company responsible for the new editing system a couple of weeks ago, and we had agreed on a slightly updated structure after that.  Last week I was sent the full dataset and I spent some time working with it this week.  I set up a local version of the online system on my PC and tweaked the existing scripts I’d previously written to import the XML dataset generated by the old editing system.  Thankfully the new XML was not massively different in structure from the old set, differing mostly in the addition of a few new attributes, such as ‘oldid’, which references the old ID of each entry, and ‘typeA’ and ‘typeB’, which contain numerical codes that denote which text should be displayed to indicate when the entry was published.  With changes made to the database to store these attributes and updates to the import script to process them I was ready to go, and all 80,432 DOST and SND entries were successfully imported, including extracting all forms and URLs for use in the system.

I had a conversation with the DSL team about whether my ‘browse order’ would still be required, as the entries now appear to be ordered nicely by their new IDs.  Previously I ran a script to generate the dictionary order based on the alphanumeric characters in the headword and the ‘posnum’ that I generated based on the classification of parts of speech taken from a document written by Thomas Widmann when he worked for the DSL (e.g. all POS beginning ‘n.’ have a ‘posnum’ of 1, all POS beginning ‘ppl. adj.’ have a ‘posnum’ of 8).  Although the new data is now nicely ordered by the new ID field I wanted to check whether I should still be generating and using my browse order columns or whether I should just order things by ID.  I suggested that going forward it will not be possible to use the ID field as browse order, as whenever the editors add a new entry its ID will position it in the wrong place (unless the ID field is not static and is regenerated whenever a new entry is added).  My assumption was correct and we agreed to continue using my generated browse order.

In a related matter my script extracts the headword of each entry from the XML and this is used in my system and also to generate the browse order.  The headword is always taken to be the first <f> of type “form” within <meta> in the <entry>.  However, I noticed that there are five entries that have no <f> of type “form” and are therefore missing a headword, and are appearing first in the ‘browseorder’ because of this.  This is something that still needs to be addressed.

In our conversations, Ann Ferguson mentioned that my browse system wasn’t always getting the correct order where there were multiple identical headwords all within the same general part of speech.  For example there are multiple noun ‘point’ entries in DOST – n. 1, n. 2 and n. 3.  These were appearing in the ‘browse’ feature with n. 3 first.  This is because (as per Thomas’s document) all entries with a POS starting with ‘n.’ are given a ‘posorder’ of 1.  In cases such as ‘point’, where the headword is the same and there are several entries with a POS beginning ‘n.’, the order is then set to depend on the ID, and ‘Point n.3’ has the lowest ID, so appears first.  I therefore updated the script that generates the browse order so that in such cases entries are ordered alphabetically by POS instead.
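
The revised ordering amounts to one extra tie-breaker, roughly as in this sketch (a comparison function with illustrative field names; the real browse order is generated server-side and stored in the database):

```javascript
// Order entries by sortable headword, then posnum (from the POS classification),
// then alphabetically by the full POS string (the new tie-breaker), then by ID.
function browseCompare(a, b) {
    if (a.sortHeadword !== b.sortHeadword) {
        return a.sortHeadword < b.sortHeadword ? -1 : 1;
    }
    if (a.posnum !== b.posnum) {
        return a.posnum - b.posnum;
    }
    if (a.pos !== b.pos) {
        return a.pos < b.pos ? -1 : 1; // 'n. 1' now sorts before 'n. 3'
    }
    return a.id - b.id;
}

var entries = [
    {sortHeadword: 'point', pos: 'n. 3', posnum: 1, id: 100},
    {sortHeadword: 'point', pos: 'n. 1', posnum: 1, id: 250},
    {sortHeadword: 'point', pos: 'n. 2', posnum: 1, id: 251}
];
entries.sort(browseCompare); // now ordered n. 1, n. 2, n. 3
```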

I also regenerated the data for the Solr full-text search, but I’ll need Arts IT Support to update this, and they haven’t got back to me yet.  I then migrated all of the new data to the online server and also created a table for the ‘about’ text that will get displayed based on the ‘typeA’ and ‘typeB’ numbers in the entry.  I then created a new version of the API that uses the new data and pulls in the necessary ‘about’ data.  When I did this I noticed that some slugs (the identifier that will be used to reference an entry in a URL) are still coming out as old IDs because this is what is found in the <url> elements.  So for example the entry ‘snd00087693’ had the slug ‘snds165’.  After discussion we agreed that in such cases the slug should be the new ID, and I tweaked the import script and regenerated the data to make this the case.  I then updated one of our test front-ends to use the new API, updating the XSLT to ensure that the <meta> tag that now appears in the XML is not displayed and updating bibliographical references and cross references to use the new ‘refid’ attribute.  I also set up the entry page to display the ‘about’ text, although the actual placement and formatting of this text still needs to be decided upon.  I then moved on to the bibliographical data, but this is going to take a bit longer to sort out, as the previous bibliographical information was imported from a CSV.

Also this week I read through and gave feedback on a data management plan for a proposal Marc Alexander is involved with and created a new version of the DMP for the new metaphor proposal that Wendy Anderson is involved with.  I also gave some advice to Gerry Carruthers about hosting some journal issues at Glasgow.

For the Books and Borrowing project I made some updates to the data of the 18th Century Borrowers pilot project, including fixing some issues with special characters, updating information relating to a few books and merging a couple of book records.  I also continued to upload the page images of the Edinburgh registers, finishing the upload of 16 registers and then generating the page records for all of the pages in the content management system.  I then started on the St Andrews registers.

I also participated in a Zoom call about GIS for the place-names of Iona project, where we discussed the sort of data and maps that would appear in the QGIS system and how this would relate to the online CMS, and also tweaked the Call for Papers page of the website.

Finally, I continued to make updates to the content management systems for the Comparative Kingship project, adding in Irish versions of the classifications and some of the labels, changing some parishes, adding in the languages that are needed for the Irish system and removing the unnecessary place-names that were imported from the GB1900 dataset.  These are things like ‘F.P.’ for ‘footpath’.  A total of 2,276 names, with their parish references, historical forms and links to the OS source were deleted by a little script I wrote for the purpose.  I think I’m up to date with this project for the moment, so next week I intend to continue with the DSL bibliographical data import and to return to working on the Anglo-Norman Dictionary.

Week Beginning 12th April 2021

I’d taken Monday and Thursday off this week to cover some of the school Easter holidays, and I also lost some of Friday as I’d arranged to travel through to the University to pick up some equipment that had been ordered for me.  So I probably only had about two and a half days of actual work this week, which I mostly spent continuing to develop the content management systems for the new Comparative Kingship place-names project.  I created user accounts to enable members of the project team to access the Scottish CMS that I completed last week, and completed work on the 10,000 or so place-names I’d imported from the GB1900 data, setting up a ‘source’ for the map used by this project (OS 6 inch 2nd edition), generating a historical form for each of the names and associating each historical form with the source.  This will mean that the team will be able to make changes to the head names and still have a record of the form that appeared in the GB1900 data.

I then began work on the Irish CMS, which required a number of changes to be made.  This included importing more than 200 parishes across several counties from a spreadsheet, updating the fields previously marked as Scottish Gaelic to Irish and generating new fields for recording ‘Townland’ in English and Irish.  ‘Townland’ also had to be added to the classification codes and a further multi-select option similar to parish needed to be added for ‘Barony’.  OS map names ‘Landranger’ and ‘Explorer’ needed to be changed too, in both the main place-name record and in the sources.

The biggest change, however, was to the location system as Ireland has a different grid reference system to the UK.  A feature of my CMS is that latitude, longitude and altitude are generated automatically from a supplied grid reference, and in order to retain this functionality for the Irish CMS I needed to figure out a method of working with Irish grid references.  In addition, the project team also wanted to store another location coordinate system, the Irish Transverse Mercator (ITM) system, and wanted not only this to be automatically generated from the grid reference, but to be able to supply the ITM field and have all other location fields (including the grid reference) populate automatically.  This required some research to see if there was a tool or online service that I could incorporate into my system.

I discovered that Ordnance Survey Ireland has a tool to convert coordinates here https://gnss.osi.ie/new-converter/ but it doesn’t include grid references (e.g. in the form F 83253 33765) and although there is a downloadable tool that can be used at the command line I really wanted an existing PHP or JavaScript script rather than having to run an executable on the server.  I also found this site: http://batlab.ucd.ie/gridref/ that can generate latitude and longitude from an Irish grid reference, and it also has an API that my scripts could connect to (e.g. http://batlab.ucd.ie/gridref/?reftype=NATGRID&refs=F8325333765) but it doesn’t include ITM coordinates, unfortunately.  Also, I don’t like to rely on third party sites as they can disappear without warning.  This site: https://irish.gridreferencefinder.com/bing.php allows you to enter a grid reference, latitude / longitude or ITM coordinates and view various types of coordinates on a map interface, but it’s not a service a script can easily connect to in order to generate data.

I then found this site: https://www.howtocreate.co.uk/php/gridref.php which offers a downloadable library in PHP or JavaScript that allows latitude, longitude and ITMs to be generated from Irish grid references (and back again, if required).  This is the solution I decided to add into the CMS, and after a certain amount of trial and error I managed to incorporate the JavaScript version of the library and update my CMS so that upon entering an Irish grid reference the latitude, longitude, altitude (via Google Maps) and ITM coordinates were automatically generated.  I also managed to set up the system so that the other fields were generated automatically if ITM coordinates were manually inputted.  I think all is now working as required with the two systems, and I’ll need to wait until the team accesses and uses the systems to see if further tweaks are required.
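
The form wiring for this looks roughly like the sketch below.  I haven’t reproduced the library’s own API here: gridRefToLatLon(), latLonToItm() and fetchAltitude() are placeholder stubs standing in for the library’s conversion functions and the Google Maps elevation lookup, and the field IDs are illustrative.

```javascript
// Placeholder stubs: in the real CMS these wrap the downloaded conversion library
// and the Google Maps elevation service (names and return shapes are assumptions).
function gridRefToLatLon(gridRef) { return {lat: 0, lon: 0}; }
function latLonToItm(lat, lon) { return {easting: 0, northing: 0}; }
function fetchAltitude(lat, lon, callback) { callback(0); }

// When an Irish grid reference (e.g. 'F 83253 33765') is entered, derive the
// latitude, longitude, ITM coordinates and altitude and populate the other fields.
$('#gridref').on('change', function() {
    var coords = gridRefToLatLon($(this).val());
    $('#latitude').val(coords.lat);
    $('#longitude').val(coords.lon);
    var itm = latLonToItm(coords.lat, coords.lon);
    $('#itm_easting').val(itm.easting);
    $('#itm_northing').val(itm.northing);
    fetchAltitude(coords.lat, coords.lon, function(alt) { $('#altitude').val(alt); });
});

// The reverse direction works the same way: a change to the ITM fields converts
// back to latitude / longitude and then on to the grid reference and altitude.
```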

I also continued to work on the Books and Borrowing project this week.  I’d been in discussion with the Stirling University IT people about setting up an IIIF server for the project, and I heard this week that they have agreed to this, which is really great news.  Previously, in order to allow page images to be zoomed and panned like a Google Map, we had to generate and store tilesets of each page image at each zoom level.  It was taking hours to generate the tilesets for each book and days to upload the images to the server, and was requiring a phenomenal amount of storage space on the server.  For example, the tilesets for one of the Edinburgh volumes consisted of around 600,000 files and took up around 14GB of space.  This was in addition to the actual full-size images of the pages (about 250 at around 12MB each).

An IIIF server means we only need to store the full-size images of each page and the server dynamically chops up and serves sections of the image at the desired zoom level whenever anyone uses the zoom and pan image viewer.  It’s a much more efficient system.  However, it does mean I needed to update the ‘Page image’ page of the CMS to use the IIIF server, and it took a little time to get this working.  I’d decided to use the OpenLayers library to access the images, as this is what I’d previously been using for the image tilesets, and it has the ability to work with an IIIF server (see https://openlayers.org/en/latest/examples/iiif.html).  However, it did take some time to get this working, as the example and all of the documentation are fully dependent on the node.js environment, even though the library itself really doesn’t need to be.  I didn’t want to convert my CMS to using node.js and have yet another library to maintain when all I needed was a simple image viewer, so I had to rework the code example linked to above to strip out all of the node dependencies, module syntax and import statements.  For example ‘var options = new IIIFInfo(imageInfo).getTileSourceOptions()’ needed to be changed to ‘var options = new ol.format.IIIFInfo(imageInfo).getTileSourceOptions()’.  As none of this is documented anywhere on the OpenLayers website it took some time to get right, but I got there in the end and the CMS now has an OpenLayers-based IIIF image viewer working successfully.
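
Stripped of the module syntax, the viewer boils down to something like the following (a sketch based on the OpenLayers IIIF example using the full ‘ol’ build; the info.json URL and the ‘pageImage’ target div are placeholders):

```javascript
// Display a IIIF image in an OpenLayers map without node.js, modules or imports.
var layer = new ol.layer.Tile();
var map = new ol.Map({layers: [layer], target: 'pageImage'});

fetch('https://iiif.example.ac.uk/image-id/info.json')
    .then(function(response) { return response.json(); })
    .then(function(imageInfo) {
        // ol.format.IIIFInfo replaces the imported IIIFInfo class from the example
        var options = new ol.format.IIIFInfo(imageInfo).getTileSourceOptions();
        if (options === undefined || options.version === undefined) {
            console.log('Not a valid IIIF info response');
            return;
        }
        options.zDirection = -1;
        var source = new ol.source.IIIF(options);
        layer.setSource(source);
        map.setView(new ol.View({
            resolutions: source.getTileGrid().getResolutions(),
            extent: source.getTileGrid().getExtent(),
            constrainOnlyCenter: true
        }));
        // Fit the view to the full image extent initially
        map.getView().fit(source.getTileGrid().getExtent());
    });
```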

Week Beginning 5th April 2021

This week began with Easter Monday, which was a holiday.  I’d also taken Tuesday and Thursday off to cover some of the Easter school holidays so it was a two-day working week for me.  I spent some of this time continuing to download and process images of library register books for the Books and Borrowing project, including 14 from St Andrews and several further books from Edinburgh.  I was also in communication with one of the people responsible for the Dictionary of the Scots Language’s new editor interface regarding the export of new data from this interface and importing it into the DSL’s website.  I was sent a ZIP file containing a sample of the data for SND and DOST, plus a sample of the bibliographical data, with some information on the structure of the files and some points for discussion.

I looked through all of the files and considered how I might be able to incorporate the data into the systems that I created for the DSL’s website.  I should be able to run the new dictionary XML files through my upload script with only a few minor modifications required.  It’s also really great that the bibliographies and cross references are getting sorted via the new Editor interface.  One point of discussion is that the new editor interface has generated new IDs for the entries, and the old IDs are not included.  I reckoned that it would be good if the old IDs were included in the XML as well, just in case we ever need to match up the current data with older datasets.  I did notice that the old IDs already appeared to be included in the <url> fields, but after discussion we decided that it would be safer to include them as an attribute of the <entry> tag, e.g. <entry oldid=”snd848”> or something like that, which is what will happen when I receive the full dataset.

There are also new labels for entries, stating when and how the entry was prepared.  The actual labels are stored in a spreadsheet and a numerical ID appears in the XML to reference a row in the spreadsheet.  This method of dealing with labels seems fine to me – I can update my system to use the labels from the spreadsheet and display the relevant labels depending on the numerical codes in the entry XML.  I reckon it’s probably better not to store the actual labels in the XML, as this saves space and makes it easier to change the label text if required, since it’s then only stored in a single place.

The bibliographies are looking good in the sample data, but I pointed out that it might be handy to have a reference of the old bibliographical IDs in the XML, if that’s possible.  There were also spurious xmlns=”” attributes in the new XML, but these shouldn’t pose any problems and I said that it’s ok to leave them in.  Once I receive the full dataset with some tweaks (e.g. the inclusion of old IDs) then I will do some further work on this.

I spent most of the rest of my available time working on the new Comparative Kingship place-names systems.  I completed work on the Scotland CMS, including adding in the required parishes and former parishes.  This means my place-name system has now been fully modernised and uses the Bootstrap framework throughout, which looks a lot better and works more effectively on all screen dimensions.

I also imported the data from GB1900 for the relevant parishes.  There are more than 10,000 names, although a lot of these could be trimmed out – lots of ‘F.P.’ for footpath etc.  It’s likely that the parishes listed are rather broader than the study area will be.  All the names in and around St Andrews are in there, for example.  In order to generate an altitude for each of the names imported from GB1900 I had to run a script I’d written that passes the latitude and longitude for each name in turn to Google Maps, which then returns elevation data.  I had to limit the frequency of submissions to one every few seconds, otherwise Google blocks access, so it took rather a long time for the altitudes of more than 10,000 names to be gathered, but the process completed successfully.
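
The throttling pattern is straightforward – process the queue one name at a time and only move on after a pause.  The sketch below uses the Google Maps JavaScript API’s ElevationService for illustration (the production script works differently, and the example data and three-second delay are assumptions):

```javascript
// Look up an altitude for each name in turn, pausing between requests so that
// Google doesn't block access. Assumes the Google Maps JavaScript API is loaded.
var elevator = new google.maps.ElevationService();

function processNext(names, index, delayMs) {
    if (index >= names.length) { return; }
    var name = names[index];
    elevator.getElevationForLocations({
        locations: [{lat: name.latitude, lng: name.longitude}]
    }, function(results, status) {
        if (status === 'OK' && results[0]) {
            name.altitude = results[0].elevation; // metres
        }
        // Wait before moving on to the next name
        setTimeout(function() { processNext(names, index + 1, delayMs); }, delayMs);
    });
}

// Example data standing in for the imported GB1900 names
var gb1900Names = [{latitude: 56.34, longitude: -2.79}, {latitude: 56.33, longitude: -2.80}];
processNext(gb1900Names, 0, 3000);
```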

Also this week I dealt with an issue with the SCOTS corpus, which had broken (the database had gone offline) and helped Raymond at Arts IT Support to investigate why the Anglo-Norman Dictionary server had been blocking uploads to the dictionary management system when thousands of files were added to the upload form.  It turns out that while the Glasgow IP address range was added into the whitelist the VPN’s IP address range wasn’t, which is why uploads were being blocked.

Next week I’m also taking a couple of days off to cover the Easter School holidays, and will no doubt continue with the DSL and Comparative Kingship projects then.

Week Beginning 29th March 2021

This was a four-day week due to Good Friday.  I spent a couple of these days working on a new place-names project called Comparative Kingship that involves Aberdeen University.  I had several email exchanges with members of the project team about how the website and content management systems for the project should be structured and set up the subdomain where everything will reside.  This is a slightly different project as it will involve place-name surveys in Scotland and Ireland that will be recorded in separate systems.  This is because slightly different data needs to be recorded for each survey, and Ireland has a different grid reference system to Scotland.  For these reasons I’ll need to adapt my existing CMS that I’ve used on several other place-name projects, which will take a little time.  I decided to take the opportunity to modernise the CMS whilst redeveloping it.  I created the original version of the CMS back in 2016, with elements of the interface based on older projects than this, and the interface now looks pretty dated and doesn’t work so well on touchscreens.  I’m migrating the user interface to the Bootstrap user interface framework, which looks more modern and works a lot better on a variety of screen sizes.  It is going to take some time to complete this migration, as I need to update all of the forms used in the CMS, but I made good progress this week and I’m probably about half-way through the process.  After this I’ll still need to update the systems to reflect the differences in the Scottish and Irish data, which will probably take several more days, especially if I need to adapt the system of automatically generating latitude, longitude and altitude from a grid reference to work with Irish grid references.

I also continued with the development of the Dictionary Management System for the Anglo-Norman Dictionary, fixing some issues relating to how sense numbers are generated (but uncovering further issues that still need to be addressed) and fixing a bug whereby older ‘history’ entries were not getting associated with new versions of entries that were uploaded.  I also created a simple XML preview facility, which allows the editor to paste their entry XML into a text area and for this to then be rendered as it would appear in the live site.  I also made a large change to how the ‘upload XML entries’ feature works.  Previously editors could attach any number of individual XML files to the form (even thousands) and these would then get uploaded.  However, I encountered an issue with the server rejecting so many file uploads in such a short period of time and blocking access to the PC that sent the files.  To get around this I investigated allowing a ZIP file containing XML files to be uploaded instead.  Upon upload my script would then extract the ZIP and process all of the XML files contained therein.  It turns out that this approach worked very well – no more issues with the server rejecting files and the processing is much speedier as it all happens in a batch rather than the script being called each time a single file is uploaded.  I tested the ZIP approach by zipping up all 3,179 XML files from the recent R data update and the Zip file was uploaded and processed in a few seconds, with all entries making their way into the holding area.  However, with this approach there is no feedback in the ‘Upload Log’ until the server-side script has finished processing all of the files in the ZIP, at which point all updates appear in the log at the same time, so there may be a wait of maybe 20-30 seconds (if it’s a big ZIP file) before it looks like anything has happened.  Despite this I’d say that with this update the DMS should now be able to handle full letter updates.

Also this week I added a ‘name of the month’ feature to the homepage of the Iona place-names project (https://iona-placenames.glasgow.ac.uk/) and continued to process the register images for the Books and Borrowing project.  I also spoke to Marc Alexander about Data Management Plans for a new project he’s involved with.

Week Beginning 22nd March 2021

I continued to develop the ‘Dictionary Management System’ for the Anglo-Norman Dictionary this week, following on with the work I began last week to allow the editors to drag and drop sets of entry XML files into the system.  I updated the form to add in another option underneath the selection of phase statement called ‘Phase Statements for existing records’.  Here the editor can choose whether to retain existing statements or replace them.  If ‘retain’ is selected then any XML entries attached to the form that either have an existing entry ID in their filename or have a slug that matches an existing entry in the system will retain whatever phase statement the existing entry has, no matter what phase statement is selected in the form.  The phase statement selected in the form will still be applied to any XML entries attached to the form that don’t have an existing entry in the system.  Selecting ‘replace existing statements’ will ignore all phase statements of existing entries and will overwrite them with whatever phase statement is selected in the form.  I also updated the system so that it extracts the earliest date for an entry at the point of upload.  I added two new columns to the holding area (for earliest date and the date that is displayed for this) and have ensured that the display date appears on the ‘review’ page too.  In addition, I added in an option to download the XML of an entry in the holding area, if it needs further work.

I ran a large-scale upload test, comprising around 3,200 XML files from the ‘R’ data, to see how the system would cope with this, but unfortunately I ran into difficulties with the server rejecting too many requests in a short space of time and only about 600 of the files made it through.  I asked Arts IT Support to see whether the server limits could be removed for this script, but haven’t heard anything back yet.  I ran into a similar issue when processing files for the Mull and Ulva place-names project in January last year, and Raymond was able to update the whitelist for the Apache module mod_evasive that was blocking such uploads; I’m hoping he’ll be able to do something similar this time.  Alternatively, I’ll need to try and throttle the speed of uploads in the browser.

In the meantime, I continued with the scripts for publishing entries that had been uploaded to the holding area, using a test version of the site that I set up on my local PC to avoid messing up the live database.  I updated the ‘holding area’ page quite significantly.  At the top of the page is a box for publishing selected items, and beneath this is the table containing the holding items.  Each row now features a checkbox, and there is an option above the table to select / deselect all rows on the page (so currently up to 200 entries can be published in one batch as 200 is the page limit).    The ‘preview’ button has been replaced with an ‘eye’ icon but the preview page works in the same way as before.  I was intending to add the ‘publish’ options to this page but I’ve moved this to the holding area page instead to allow multiple entries to be selected for publication at any one time.

Selecting one or more items for publication and then pressing the ‘publish selected holdings’ button runs some JavaScript that grabs the ID of each holding item and then submits it to a script on the server via AJAX, and the server-side script then processes each selected item for publication in turn.  I limited the processing of this to one item per second to hopefully avoid the server rejecting requests.  Rather a lot happens when an item is published: the holding item is copied to the live entry table and then its XML is analysed to extract and store data for search purposes – citations, attestation dates and word counts of every word in each citation; translations and word counts of every word in each translation; semantic and usage labels (including adding new labels to the system if the XML contains new ones); word forms and their types (lemma, variant, deviant); parts of speech; and cross references in xref entries.

If there is an existing live entry that matches the current entry (either because of the stored ‘Existing ID’ or because it has the same slug as the holding item) then this entry is deactivated in the database, its XML is copied to the ‘history’ table and associated with the new item record and all search data for the live entry as mentioned above is deleted.  At this point the holding item record is deleted and the server-side script finishes executing, returning its output to the JavaScript, which then adds a row to the ‘publication log’ on the holding entries page; decreases the count of the number of holding entries on the page by one and removes the row containing the holding item from the table on the page.

Once all of the selected items are published there is one final task that the page performs, which is to completely regenerate the cross references data.  This is something that unfortunately needs to be done after each batch (even if it’s only one record) because cross references rely on database IDs and when a new version of an existing entry is published it receives a new ID.  This means any existing cross references to that item will no longer work.  The publication log will state that the regeneration is taking place and then after about 30 seconds another statement will say it is complete.  I tested this process on my local PC, publishing single items, a few items and entire pages (200 items) at a time and all seemed to be working fine so I then copied the new scripts to the server.
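
On the client side, the one-item-per-second throttling mentioned above boils down to a queued set of AJAX calls, along the lines of this sketch (the selectors, the endpoint path and the response handling are illustrative rather than the actual DMS code):

```javascript
// Publish the checked holding items at a rate of one per second, appending each
// result to the publication log and removing the published row from the table.
var ids = $('.holding-checkbox:checked').map(function() { return $(this).val(); }).get();

ids.forEach(function(id, i) {
    setTimeout(function() {
        $.ajax({
            url: '/dms/publish-holding', // placeholder endpoint
            method: 'POST',
            data: {id: id},
            success: function(response) {
                $('#publicationLog').append('<li>' + response + '</li>');
                $('#holdingRow-' + id).remove();
                var count = parseInt($('#holdingCount').text(), 10);
                $('#holdingCount').text(count - 1);
            }
        });
    }, i * 1000); // stagger the requests: one per second
});
```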

Also this week I continued with the processing of library registers for the Books and Borrowing project.  These are coming in rather quickly now and I’m getting a bit of a backlog.  This is because I have to download the image files, then process them to generate tilesets, and then upload all of the images and their tilesets to the server.  It’s the tilesets that are the real sticking point, as these consist of thousands of small files.  I’m only getting an upload speed of about 70KB/s and I’m having to upload many gigabytes of data.  I did a test where I zipped up some of the images and uploaded this zip file instead and was getting a speed of around 900KB/s, and as it looks like I can get command-line access to the server I’m going to investigate whether zipping up the files, uploading them and then unzipping them will be a quicker process.  I also had to spend some time sorting out connection issues to the server as the Stirling VPN wasn’t letting me connect.  It turned out that they had switched to multi-factor authentication and I needed to set this up before I could continue.

Also this week I wrote a summary of the work I’ve done so far for the Place-names of Iona project for a newsletter they’re putting together, spoke to people about the new ‘Comparative Kingship’ place-names project I’m going to be involved with, spoke to the Scots Language Policy people about setting up a mailing list for the project (it turns out that the University has software to handle this, available here: https://www.gla.ac.uk/myglasgow/it/emaillists/) and fixed an issue relating to the display of citations that have multiple dates for the DSL.

Week Beginning 15th March 2021

My son returned to school on Monday this week, marking an end to the home-schooling that began after the Christmas holidays.  It’s quite a relief to no longer have to split my day between working and home-schooling after so long.  This week I continued with some Data Management Plan related activities, completing a DMP for the metaphor project involving Duncan of Jordanstone College of Art and Design in Dundee and drafting a third version of the DMP for Kirsteen McCue’s proposal following a Zoom call with her on Wednesday.

I also spent some further time on the Books and Borrowing project, creating tilesets and page records for several new volumes.  In fact, we ran out of space on the server.  The project is digitising around 20,000 pages of library records from 1750-1830 and we’re approaching 5,000 pages so far.  I’d originally suggested that we’d need about 60GB of server space for the images (3MB per image x 20,000).  However, the JPEGS we’ve been receiving from the digitisation units have been generated at maximum quality / minimum compression and are around 9MB each, so my estimates were out.  Dropping the JPEG quality setting down from 12 to 10 would result in 3MB files so I could do this to save space if required.  However, there is another issue.  The tilesets I’m generating for each image so that they can be zoomed and panned like a Google Map are taking up as much as 18MB per image.  So we may need a minimum of 540GB of space (possibly 600GB to be safe): 9×20,000 for the JPEGs plus 18×20,000 for the tilesets.  This is an awful lot of space, and storing image tilesets isn’t actually necessary these days of an IIIF server (https://iiif.io/about/) could be set up.  IIIF is now well established as the best means of hosting images online and it would be hugely useful to use.  Rather than generating and hosting thousands of tilesets at different zoom levels we could store just one image per page on the server and it would serve up the necessary subsection at the required zoom level based on the request from the client.  This issue is that people in charge of servers don’t’ like having to support new software.  I entered into discussions with Stirling’s IT people about the possibility of setting up an IIIF server, and these talks are currently ongoing, so in the meantime I still need to generate the tilesets.

Also this week I discussed a couple of issues with the Thesaurus of Old English with Jane Roberts.  A search was bringing back some word results but when loading the category browser no content was being displayed.  Some investigations uncovered that these words were in subcategories of ’02.03.03.03.01’ but there was no main category with that number in the system.  A subcategory needs a main category in order to display in the tree browser and as none was available nothing was displaying.  Looking at the underlying database I discovered that while there was no ’02.03.03.03.01’ main category there were two ’02.03.03.03.01|01’ subcategories: ‘A native people’ and ‘Natives of a country’.  I bumped the former up from subcategory to main category and the search results then worked.

I spent the rest of the week continuing with the development of the Anglo-Norman Dictionary.  I made the new bibliography pages live this week (https://anglo-norman.net/bibliography/), which also involved updating the ‘cited source’ popup in the entry page so that it displays all of the new information.  For example, go to this page: https://anglo-norman.net/entry/abanduner  and click on the ‘A-N Med’ link to see a record with multiple items in it.  I also updated the advanced search for citations so that the ‘Citation siglum’ drop-down list uses the new data too.

After that I continued to update the Dictionary Management System.  I updated the ‘View / Download Entry’ page so that the ‘Phase’ of the entry can be updated if necessary.  In the ‘Phase’ section of the page all of the phases are now listed as radio buttons, with the entry’s phase checked.  If you need to change the entry’s phase you can select a different radio button and press the ‘Update Phase’ button.  I also added facilities to manage phase statements via the DMS.  In the menu there’s now an ‘Add Phase’ button, through which you can add a new phase, and a ‘Browse Phases’ button which lists all of the active phases, the number of entries assigned to each, and an option to edit the phase statement.  If there’s a phase statement that has no associated entries you’ll find an option to delete it here too.

I’m still working on the facilities to upload and manage XML entry files via the DMS.  I’ve added in a new menu item labelled ‘Upload Entries’ that, when pressed, loads a page through which you can upload entry XML files.  There’s a text box where you can supply the lead editor initials to be added to the batch of files you upload (any files that already have a ‘lead’ attribute will not be affected) and an option to select the phase statement that should be applied to the batch of files.  Below this area is a section where you can either click to open a file browser and select files to upload or drag and drop files from Windows Explorer (or other file browser).  When files are attached they will be processed, with the results shown in the ‘Update log’ section below the upload area.  Uploaded files are kept entirely separate from the live dictionary until they’ve been reviewed and approved (I haven’t written these sections yet).  The upload process will generate all of the missing attributes I mentioned last week – ‘lead’ initials, the various ID fields, POS, sense numbers etc.  If any of these are present the system won’t overwrite them so it should be able to handle various versions of files.  The system does not validate the XML files – the editors will need to ensure that the XML is valid before it is uploaded.  However, the ‘preview’ option (see below) will quickly let you know if your file is invalid as the entry won’t display properly.  Note also that you can change the ‘lead’ and the phase statement between batches – you can drag and drop a set of files with one lead and statement selected, then change these and upload another batch.  You can of course choose to upload a single file too.

When XML files are uploaded, in the ‘update log’ there will be links directly through to a preview of the entry, but you can also find all entries that have been uploaded but not yet published on the website in the ‘Holding Area’, which is linked to in the DMS menu.  There are currently two test files in this.  The holding area lists the information about the XML entries that have been uploaded but not yet published, such as the IDs, the slug, the phase statement etc.  There is also an option to delete the holding entry.  The last two columns in the table are links to any live entry.  The first links to the entry as specified by the numerical ID in the XML filename, which will be present in the filename of all XML files exported via the DMS’s ‘Download Entry’ option.  This is the ‘existing ID’ column in the table.  The second linking column is based on the ‘slug’ of the holding entry (generated from the ‘lemma’ in the XML).  The ‘slug’ is unique in the data, so if a holding entry has a link in this column it means it will overwrite this entry if it’s made live.  For XML files exported via the DMS and then uploaded, both ‘live entry’ links should be the same, unless the editor has changed the lemma.  For new entries both these columns should be blank.

The ‘Review’ button opens up a preview of the uploaded holding entry in the interface of the live site.  This allows the editors to proofread the new entry to ensure that the XML is valid and that everything looks right.  You can return to the holding area from this page by pressing on the button in the left-hand column.  Note that this is just a preview – it’s not ‘live’ and no-one else can see it.

There’s still a lot I need to do.  I’ll be adding in an option to publish an entry in the holding area, at which point all of the data needed for searching will be generated and stored and the existing live entry (if there is one) will be moved to the ‘history’ table.  I also maybe need to extract the earliest date information to display in the preview and in the holding area.  This information is only extracted when the data for searching is generated, but I guess it would be good to see it in the holding area / preview too.  I also need to add in a preview of cross reference entries as these don’t display yet.  I should probably also add in an option to allow the editors to view / download the holding entry XML as they might want to check how the upload process has changed this.  So still lots to tackle over the coming weeks.

Week Beginning 8th March 2021

It was another Data Management Plan-heavy week this week.  I created an initial version of a DMP for Kirsteen McCue’s project at the start of the week and then participated in a Zoom call with Kirsteen and other members of the proposed team on Thursday, where the plan was discussed.  I also continued to think through the technical aspects of the metaphor-related proposal involving Wendy and colleagues at Duncan of Jordanstone College of Art and Design in Dundee and reviewed another DMP that Katherine Forsyth in Celtic had asked me to look at.

Other than these tasks, I arranged for Joanna Kopaczyk’s ‘The Future of Scots’ project website to be moved to its top-level ‘ac.uk’ domain, and it can now be found here: https://scotslanguagepolicy.ac.uk/.  Marc Alexander had also contacted me about a weird bug he’d encountered in the Historical Thesaurus.  One of the category pages was failing to display properly and after investigation I figured out that it was an issue with the timeline data for one of the words on this page, which was causing the JavaScript to break.  I pulled out the JSON embedded in the page and the data for the word seemed to be missing a closing ‘}’, which was causing the error.  It turned out that someone had entered the dates the wrong way round for the word.  It was listed as ‘a1400-c1386’.  My dates system had plucked out the dates and correctly ordered them, but this meant the system was left with ‘1400’ with a joining ‘-’ and then nothing after it, which resulted in the JSON being malformed.  I swapped the dates around (both in the new dates table and in the display date) and everything started working as it should again.  It was a relief to know that it was an issue with the data rather than my code.

Also this week I spent a bit of time working on the Books and Borrowing project, generating more page image tilesets and their corresponding pages for two more of the Edinburgh ledgers and adding an ‘Events’ page to the project website and giving more members of the project team permission to edit the site.  I also had an email chat with Thomas Clancy about the Iona project and created a ‘Call for Papers’ page including submission form on the project website (it’s not live yet, though).

I spent the rest of my week continuing to work on the Anglo-Norman Dictionary.  We received the excellent news this week that our AHRC application for funding to complete the remaining letters of the dictionary (and carry out more development work) was successful.  This week I made some further tweaks to the new blog pages, adding in the first image in each blog post to the right of the blog snippet on the blog summary page.  I also made the new blog pages live, and you can now access them here: https://anglo-norman.net/blog/.

I also made some updates to the bibliography system based on requests from the editors to separate out the display of links to the DEAF website from the actual URLs (previously just the URLs were displayed).  I updated the database, the DMS and the new bibliography page to add in a new ‘DEAF link text’ field for both main source text records and items within source text records.  I copied the contents of the DEAF field into this new field for all records, updated the DMS to add in the new fields when adding / editing sources, and updated the new bibliography page so that the text that gets displayed for the DEAF link uses the new field, whereas the actual link through to the DEAF website uses the original field.

I also continued to work on the facilities to upload batches of new or updated entry XML files to the DMS.  I created a new ‘holding’ table for uploaded entries and created a page that allows the user to drag and drop XML files into the browser, with files then getting uploaded, processed and added into this new table.  This uses a handy JavaScript library called Dropzone (https://www.dropzonejs.com/) that I previously used for the Scots Syntax Atlas CMS.  The initial version of the upload is working well, but I needed to know exactly how uploaded files should be fully processed before I could proceed further, which required some lengthy email exchanges with the editors.
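
The Dropzone wiring itself is fairly compact – a minimal sketch along these lines (the element IDs, upload URL and form fields are illustrative, not the actual DMS code):

```javascript
// Attach Dropzone to the upload area so entry XML files can be dragged in,
// sending the batch-level settings chosen in the form along with each file.
var uploadZone = new Dropzone('#entryUploadArea', {
    url: '/dms/upload-entry',   // placeholder endpoint
    acceptedFiles: '.xml',
    parallelUploads: 1,         // send files one at a time
    init: function() {
        this.on('sending', function(file, xhr, formData) {
            formData.append('lead', $('#leadInitials').val());
            formData.append('phase', $('#phaseStatement').val());
        });
        this.on('success', function(file, response) {
            $('#uploadLog').append('<li>' + file.name + ': ' + response + '</li>');
        });
    }
});
```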

The scripts I’d written when uploading the new ‘R’ dataset needed to make changes to the data to bring it into line with the data already in the system, as the ‘R’ data didn’t include some attributes that are necessary for the system to work with the XML files, namely:

In the <main_entry> tag: the ‘lead’ attribute, which is used to display the editor’s initials in the front end (e.g. “gdw”), and the ‘id’ attribute, which, although not used to uniquely identify the entries in my new system, is still used in the XML for things like cross-references and therefore is required and must be unique.  In the <sense> tag: the ‘n’ attribute, which increments from 1 within each part of speech and is used to identify senses in the front-end.  In the <senseInfo> tag: the ID attribute, which is used in the citation and translation searches, and the POS attribute, which is used to generate the summary information at the top of each entry page.  In the <attestation> tag: the ID attribute, which is used in the citation search.

We needed to decide how these will be handled in future – whether they will be manually added to the XML as the editors work on them or whether the upload script needs to add them in at the point of upload.  We also needed to consider updates to existing entries.  If an editor downloads an entry and then works on it (e.g. adding in a new sense or attestation) then the exported file will already include all of the above attributes, except for any new sections that are added.  In such cases should the new sections have the attributes added manually, or do I need to ensure my script checks for the existence of the attributes and only adds the missing ones as required?

We decided that I’d set up the systems to automatically check for the existence of the attributes and add them in if they’re not already present.  It will take more time to develop such a system but it will make it more robust and hopefully will result in fewer errors.  I’ll also add an option to specify the ‘lead’ initials for the batch of files that are being uploaded, but this will not overwrite the ‘lead’ attribute for any XML files in the batch that already have the attribute specified.

I’ll hopefully get a chance to work on this next week.  Thankfully this is the last week of home-schooling for us so I should have a bit more time from next week onwards.

Week Beginning 1st March 2021

There was quite a bit of work to be done for the Books and Borrowing project this week.  Several more ledgers had been digitised and needed tilesets and page records generated for them.  The former requires the processing and upload of many gigabytes of image files, which takes quite some time to complete, especially as the upload speed from my home computer to the server never gets beyond about 56Kb per second.  However, I just end up leaving my PC on overnight and generally the upload has completed by the morning.  Generating page records generally involves me updating a script to change image filename parts and page numbers, and to specify the first and last page, after which the script does the rest, but there are some quirks that need to be sorted out manually.  For the Wigtown data some of the images were not sequentially numbered, which meant I couldn’t rely on my script to generate the correct page structure.  For one of the Edinburgh ledgers the RA had already manually created some pages and added more than a hundred borrowing records to them, so I had to figure out a way to incorporate these.  The page images are a double spread (so two pages per image) but the pages the RA had made were individual, so what I needed to do was to remove the manual pages, generate a new set and then update the page references for each of the borrowing records so they appeared on the correct new page.

Also this week I continued to migrate the blogs over to the new Anglo-Norman Dictionary website, a process which I managed to complete.  The new blog isn’t live yet, as I asked for feedback from the Editors before I replaced the link to the old blog site, and there are a couple of potential tweaks that I need to make before we’re ready to go.  I also had a chat with the Dictionary of the Scots Language people about migrating to a new DNS provider and the impact this might have on email addresses.

The rest of my week was spent working on proposals for two new projects, one for Kirsteen McCue and the other for Wendy Anderson.  This involved reading through all of the documentation, making notes and beginning to write the required Data Management Plans.  For Wendy’s proposal we also had a Zoom meeting with partners in Dundee and for Kirsteen’s proposal I had an email discussion with partners at the British Library.  There’s not really much more I can say about the work I’m doing for these projects, but I’ll be continuing to work on the respective DMPs next week.