Week Beginning 5th April 2021

This week began with Easter Monday, which was a holiday.  I’d also taken Tuesday and Thursday off to cover some of the Easter school holidays so it was a two-day working week for me.  I spent some of this time continuing to download and process images of library register books for the Books and Borrowing project, including 14 from St Andrews and several further books from Edinburgh.  I was also in communication with one of the people responsible for the Dictionary of the Scots Language’s new editor interface regarding the export of new data from this interface and importing it into the DSL’s website.  I was sent a ZIP file containing a sample of the data for SND and DOST, plus a sample of the bibliographical data, with some information on the structure of the files and some points for discussion.

I looked through all of the files and considered how I might be able to incorporate the data into the systems that I created for the DSL’s website.  I should be able to run the new dictionary XML files through my upload script with only a few minor modifications required.  It’s also really great that the bibliographies and cross references are getting sorted via the new Editor interface.  One point of discussion is that the new editor interface has generated new IDs for the entries, and the old IDs are not included.  I reckoned that it would be good if the old IDs were included in the XML as well, just in case we ever need to match up the current data with older datasets.  I did notice that the old IDs already appeared to be included in the <url> fields, but after discussion we decided that it would be safer to include them as an attribute of the <entry> tag, e.g. <entry oldid="snd848"> or something like that, which is what will happen when I receive the full dataset.

There are also new labels for entries, stating when and how the entry was prepared.  The actual labels are stored in a spreadsheet and a numerical ID appears in the XML to reference a row in the spreadsheet.  This method of dealing with labels seems fine to me – I can update my system to use the labels from the spreadsheet and display the relevant labels depending on the numerical codes in the entry XML.  I reckon it’s probably better not to store the actual labels in the XML as this saves space and makes it easier to change the label text if required, since each label is then stored in a single place.
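
As a rough illustration of how such a lookup might work (purely a sketch – the column names and CSV export are assumptions, and my actual code will be PHP-based):

```python
import csv

def load_labels(path):
    """Load the label spreadsheet (exported as CSV) into a code -> text lookup."""
    with open(path, newline='', encoding='utf-8') as f:
        return {row['code']: row['label'] for row in csv.DictReader(f)}

def label_text(code, labels):
    """Return the display text for a numerical label code found in the entry XML."""
    return labels.get(code, f'[unknown label {code}]')

# Hypothetical usage: the XML stores only the code, the text lives in one place.
labels = load_labels('labels.csv')
print(label_text('3', labels))
```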

The bibliographies are looking good in the sample data, but I pointed out that it might be handy to have a reference to the old bibliographical IDs in the XML, if that’s possible.  There were also spurious xmlns="" attributes in the new XML, but these shouldn’t pose any problems and I said that it’s ok to leave them in.  Once I receive the full dataset with some tweaks (e.g. the inclusion of old IDs) I will do some further work on this.

I spent most of the rest of my available time working on the new Comparative Kingship place-names systems.  I completed work on the Scotland CMS, including adding in the required parishes and former parishes.  This means my place-name system has now been fully modernised and uses the Bootstrap framework throughout, which looks a lot better and works more effectively on all screen dimensions.

I also imported the data from GB1900 for the relevant parishes.  There are more than 10,000 names, although a lot of these could be trimmed out – lots of ‘F.P.’ for footpath etc.  It’s likely that the parishes listed cover a rather broader area than the study itself will.  All the names in and around St Andrews are in there, for example.  In order to generate an altitude for each of the names imported from GB1900 I had to run a script I’d written that passes the latitude and longitude for each name in turn to Google Maps, which then returns elevation data.  I had to limit the frequency of submissions to one every few seconds, otherwise Google blocks access, so it took rather a long time for the altitudes of more than 10,000 names to be gathered, but the process completed successfully.
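
In outline the altitude script works along these lines (a simplified Python sketch – the real script is PHP-based, the API key and delay are placeholders, and while the endpoint shown is the standard Google Elevation API one, the exact parameters should be checked against current documentation):

```python
import time
import requests

API_KEY = 'YOUR_GOOGLE_API_KEY'  # placeholder
ELEVATION_URL = 'https://maps.googleapis.com/maps/api/elevation/json'

def fetch_altitude(lat, lng):
    """Ask the Google Elevation API for the elevation (in metres) of one point."""
    resp = requests.get(ELEVATION_URL, params={'locations': f'{lat},{lng}', 'key': API_KEY})
    resp.raise_for_status()
    results = resp.json().get('results', [])
    return results[0]['elevation'] if results else None

def add_altitudes(names):
    """names is a list of dicts with 'lat' and 'lng'; altitude is added in place."""
    for name in names:
        name['altitude'] = fetch_altitude(name['lat'], name['lng'])
        time.sleep(3)  # throttle requests so Google doesn't block access
```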

Also this week I dealt with an issue with the SCOTS corpus, which had broken (the database had gone offline), and helped Raymond at Arts IT Support to investigate why the Anglo-Norman Dictionary server had been blocking uploads to the dictionary management system when thousands of files were added to the upload form.  It turns out that while the Glasgow IP address range had been added to the whitelist, the VPN’s IP address range hadn’t, which is why uploads were being blocked.

Next week I’m also taking a couple of days off to cover the Easter school holidays, and will no doubt continue with the DSL and Comparative Kingship projects then.

Week Beginning 29th March 2021

This was a four-day week due to Good Friday.  I spent a couple of these days working on a new place-names project called Comparative Kingship that involves Aberdeen University.  I had several email exchanges with members of the project team about how the website and content management systems for the project should be structured and set up the subdomain where everything will reside.  This is a slightly different project as it will involve place-name surveys in Scotland and Ireland that will be recorded in separate systems.  This is because slightly different data needs to be recorded for each survey, and Ireland has a different grid reference system to Scotland.  For these reasons I’ll need to adapt my existing CMS that I’ve used on several other place-name projects, which will take a little time.  I decided to take the opportunity to modernise the CMS whilst redeveloping it.  I created the original version of the CMS back in 2016, with elements of the interface based on older projects than this, and the interface now looks pretty dated and doesn’t work so well on touchscreens.  I’m migrating the user interface to the Bootstrap user interface framework, which looks more modern and works a lot better on a variety of screen sizes.  It is going to take some time to complete this migration, as I need to update all of the forms used in the CMS, but I made good progress this week and I’m probably about half-way through the process.  After this I’ll still need to update the systems to reflect the differences in the Scottish and Irish data, which will probably take several more days, especially if I need to adapt the system of automatically generating latitude, longitude and altitude from a grid reference to work with Irish grid references.

I also continued with the development of the Dictionary Management System for the Anglo-Norman Dictionary, fixing some issues relating to how sense numbers are generated (but uncovering further issues that still need to be addressed) and fixing a bug whereby older ‘history’ entries were not getting associated with new versions of entries that were uploaded.  I also created a simple XML preview facility, which allows the editor to paste their entry XML into a text area and for this to then be rendered as it would appear in the live site.  I also made a large change to how the ‘upload XML entries’ feature works.  Previously editors could attach any number of individual XML files to the form (even thousands) and these would then get uploaded.  However, I encountered an issue with the server rejecting so many file uploads in such a short period of time and blocking access to the PC that sent the files.  To get around this I investigated allowing a ZIP file containing XML files to be uploaded instead.  Upon upload my script would then extract the ZIP and process all of the XML files contained therein.  It turns out that this approach worked very well – no more issues with the server rejecting files and the processing is much speedier as it all happens in a batch rather than the script being called each time a single file is uploaded.  I tested the ZIP approach by zipping up all 3,179 XML files from the recent R data update and the ZIP file was uploaded and processed in a few seconds, with all entries making their way into the holding area.  However, with this approach there is no feedback in the ‘Upload Log’ until the server-side script has finished processing all of the files in the ZIP, at which point all updates appear in the log at the same time, so there may be a wait of 20-30 seconds (if it’s a big ZIP file) before it looks like anything has happened.  Despite this I’d say that with this update the DMS should now be able to handle full letter updates.
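
In outline the server-side handling of a ZIP upload is something like the following (a Python sketch of the logic only – the actual DMS script is PHP, and process_entry_xml() is a hypothetical stand-in for the existing per-entry processing):

```python
import zipfile
from pathlib import Path

def process_uploaded_zip(zip_path, extract_dir, process_entry_xml):
    """Extract an uploaded ZIP of entry XML files and process them in one batch."""
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(extract_dir)
    log = []
    for xml_file in sorted(Path(extract_dir).rglob('*.xml')):
        result = process_entry_xml(xml_file.read_text(encoding='utf-8'))
        log.append(f'{xml_file.name}: {result}')
    # The whole log is returned to the browser once the batch has finished,
    # which is why nothing appears in the 'Upload Log' until the end.
    return log
```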

Also this week I added a ‘name of the month’ feature to the homepage of the Iona place-names project (https://iona-placenames.glasgow.ac.uk/) and continued to process the register images for the Books and Borrowing project.  I also spoke to Marc Alexander about Data Management Plans for a new project he’s involved with.

Week Beginning 22nd March 2021

I continued to develop the ‘Dictionary Management System’ for the Anglo-Norman Dictionary this week, following on from the work I began last week to allow the editors to drag and drop sets of entry XML files into the system.  I updated the form to add another option, called ‘Phase Statements for existing records’, underneath the phase statement selection.  Here the editor can choose whether to retain existing statements or replace them.  If ‘retain’ is selected then any XML entries attached to the form that either have an existing entry ID in their filename or have a slug that matches an existing entry in the system will retain whatever phase statement the existing entry has, no matter what phase statement is selected in the form.  The phase statement selected in the form will still be applied to any XML entries attached to the form that don’t have an existing entry in the system.  Selecting ‘replace existing statements’ will ignore all phase statements of existing entries and will overwrite them with whatever phase statement is selected in the form.  I also updated the system so that it extracts the earliest date for an entry at the point of upload.  I added two new columns to the holding area (for the earliest date and the date that is displayed for this) and ensured that the display date appears on the ‘review’ page too.  In addition, I added in an option to download the XML of an entry in the holding area, if it needs further work.
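
The retain/replace logic boils down to something like this (a minimal sketch with hypothetical field and function names; the real DMS code is PHP):

```python
def phase_for_upload(existing_entry, form_phase, mode):
    """Decide which phase statement an uploaded entry should receive.

    existing_entry: the matching entry (matched on the ID in the filename or on
    the slug), or None if there is no existing entry.
    mode: 'retain' or 'replace', as selected on the upload form.
    """
    if existing_entry is not None and mode == 'retain':
        return existing_entry['phase']   # keep whatever the existing entry has
    return form_phase                    # new entries, or 'replace' mode
```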

I ran a large-scale upload test, comprising around 3,200 XML files from the ‘R’ data, to see how the system would cope with this, but unfortunately I ran into difficulties with the server rejecting too many requests in a short space of time and only about 600 of the files made it through.  I asked Arts IT Support to see whether the server limits can be removed for this script, but haven’t heard anything back yet.  I ran into a similar issue when processing files for the Mull and Ulva place-names project in January last year and Raymond was able to update the whitelist for the Apache module mod_evasive that was blocking such uploads, and I’m hoping he’ll be able to do something similar this time.  Alternatively, I’ll need to try and throttle the speed of uploads in the browser.

In the meantime, I continued with the scripts for publishing entries that had been uploaded to the holding area, using a test version of the site that I set up on my local PC to avoid messing up the live database.  I updated the ‘holding area’ page quite significantly.  At the top of the page is a box for publishing selected items, and beneath this is the table containing the holding items.  Each row now features a checkbox, and there is an option above the table to select / deselect all rows on the page (so currently up to 200 entries can be published in one batch, as 200 is the page limit).  The ‘preview’ button has been replaced with an ‘eye’ icon, but the preview page works in the same way as before.  I was intending to add the ‘publish’ options to the preview page but I’ve moved them to the holding area page instead to allow multiple entries to be selected for publication at any one time.

Selecting one or more items for publication and then pressing the ‘publish selected holdings’ button runs some JavaScript that grabs the ID of each holding item and then submits this to a script on the server via AJAX, and the server-side script then processes each selected item for publication in turn.  I limited the processing of this to one item per second to hopefully avoid the server rejecting requests.  Rather a lot happens when an item is published.  The holding item is copied to the live entry table and then its XML is analysed to extract and store the data needed for searching: citations, attestation dates and word counts of every word in each citation; translations and word counts of every word in each translation; semantic and usage labels (including adding new labels to the system if the XML contains new ones); word forms and their types (lemma, variant, deviant); parts of speech; and cross references in xref entries.
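
A very rough sketch of the sort of extraction that happens at publication time (the element and attribute names follow the descriptions in these posts, but the exact structure is assumed and heavily simplified here, and the real code is PHP):

```python
from lxml import etree

def extract_search_data(entry_xml):
    """Pull out the parts of an entry's XML that are stored separately for searching."""
    root = etree.fromstring(entry_xml.encode('utf-8'))

    citations = []
    for att in root.iter('attestation'):
        words = ' '.join(att.itertext()).split()
        citations.append({'id': att.get('ID'), 'words': words})

    return {
        'citations': citations,  # indexed word by word for the citation search
        'pos': sorted({si.get('POS') for si in root.iter('senseInfo') if si.get('POS')}),
        # semantic / usage labels (element name assumed for illustration)
        'labels': sorted({lab.text.strip() for lab in root.iter('label') if lab.text}),
    }
```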

If there is an existing live entry that matches the current entry (either because of the stored ‘Existing ID’ or because it has the same slug as the holding item) then this entry is deactivated in the database, its XML is copied to the ‘history’ table and associated with the new item record and all search data for the live entry as mentioned above is deleted.  At this point the holding item record is deleted and the server-side script finishes executing, returning its output to the JavaScript, which then adds a row to the ‘publication log’ on the holding entries page; decreases the count of the number of holding entries on the page by one and removes the row containing the holding item from the table on the page.

Once all of the selected items are published there is one final task that the page performs, which is to completely regenerate the cross references data.  This is something that unfortunately needs to be done after each batch (even if it’s only one record) because cross references rely on database IDs and when a new version of an existing entry is published it receives a new ID.  This means any existing cross references to that item will no longer work.  The publication log will state that the regeneration is taking place and then after about 30 seconds another statement will say it is complete.  I tested this process on my local PC, publishing single items, a few items and entire pages (200 items) at a time and all seemed to be working fine so I then copied the new scripts to the server.

Also this week I continued with the processing of library registers for the Books and Borrowing project.  These are coming in rather quickly now and I’m getting a bit of a backlog.  This is because I have to download the image files, then process them to generate tilesets, and then upload all of the images and their tilesets to the server.  It’s the tilesets that are the real sticking point, as these consist of thousands of small files.  I’m only getting an upload speed of about 70KB/s and I’m having to upload many gigabytes of data.  I did a test where I zipped up some of the images and uploaded this zip file instead and was getting a speed of around 900KB/s, and as it looks like I can get command-line access to the server I’m going to investigate whether zipping up the files, uploading them and then unzipping them on the server will be a quicker process.  I also had to spend some time sorting out connection issues to the server as the Stirling VPN wasn’t letting me connect.  It turned out that they had switched to multi-factor authentication and I needed to set this up before I could continue.
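
If the command-line access works out, the zip-then-upload approach would amount to something like this (a sketch with placeholder paths; the final unzip would happen on the server itself):

```python
import shutil
from pathlib import Path

def zip_tileset(tileset_dir, out_dir):
    """Bundle a tileset directory (thousands of small tile files) into one ZIP
    archive, which uploads far faster than sending each tile individually."""
    tileset_dir = Path(tileset_dir)
    archive = shutil.make_archive(str(Path(out_dir) / tileset_dir.name), 'zip', tileset_dir)
    return archive  # upload this single file, then unzip it on the server

# e.g. zip_tileset('tilesets/edinburgh_ledger_3_page_042', 'to_upload')  # paths hypothetical
```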

Also this week I wrote a summary of the work I’ve done so far for the Place-names of Iona project for a newsletter they’re putting together, spoke to people about the new ‘Comparative Kingship’ place-names project I’m going to be involved with, spoke to the Scots Language Policy people about setting up a mailing list for the project (it turns out that the University has software to handle this, available here: https://www.gla.ac.uk/myglasgow/it/emaillists/) and fixed an issue relating to the display of citations that have multiple dates for the DSL.

Week Beginning 15th March 2021

My son returned to school on Monday this week, marking an end to the home-schooling that began after the Christmas holidays.  It’s quite a relief to no longer have to split my day between working and home-schooling after so long.  This week I continued with some Data Management Plan related activities, completing a DMP for the metaphor project involving Duncan of Jordanstone College of Art and Design in Dundee and drafting a third version of the DMP for Kirsteen McCue’s proposal following a Zoom call with her on Wednesday.

I also spent some further time on the Books and Borrowing project, creating tilesets and page records for several new volumes.  In fact, we ran out of space on the server.  The project is digitising around 20,000 pages of library records from 1750-1830 and we’re approaching 5,000 pages so far.  I’d originally suggested that we’d need about 60GB of server space for the images (3MB per image x 20,000).  However, the JPEGs we’ve been receiving from the digitisation units have been generated at maximum quality / minimum compression and are around 9MB each, so my estimates were out.  Dropping the JPEG quality setting down from 12 to 10 would result in 3MB files so I could do this to save space if required.  However, there is another issue.  The tilesets I’m generating for each image so that they can be zoomed and panned like a Google Map are taking up as much as 18MB per image.  So we may need a minimum of 540GB of space (possibly 600GB to be safe): 9MB×20,000 for the JPEGs plus 18MB×20,000 for the tilesets.  This is an awful lot of space, and storing image tilesets isn’t actually necessary these days if an IIIF server (https://iiif.io/about/) could be set up.  IIIF is now well established as the best means of hosting images online and it would be hugely useful to use.  Rather than generating and hosting thousands of tilesets at different zoom levels we could store just one image per page on the server and the IIIF server would serve up the necessary subsection at the required zoom level based on the request from the client.  The issue is that people in charge of servers don’t like having to support new software.  I entered into discussions with Stirling’s IT people about the possibility of setting up an IIIF server, and these talks are currently ongoing, so in the meantime I still need to generate the tilesets.

Also this week I discussed a couple of issues with the Thesaurus of Old English with Jane Roberts.  A search was bringing back some word results but when loading the category browser no content was being displayed.  Some investigations uncovered that these words were in subcategories of ’02.03.03.03.01’ but there was no main category with that number in the system.  A subcategory needs a main category in order to display in the tree browser and as none was available nothing was displaying.  Looking at the underlying database I discovered that while there was no ’02.03.03.03.01’ main category there were two ’02.03.03.03.01|01’ subcategories: ‘A native people’ and ‘Natives of a country’.  I bumped the former up from subcategory to main category and the search results then worked.

I spent the rest of the week continuing with the development of the Anglo-Norman Dictionary.  I made the new bibliography pages live this week (https://anglo-norman.net/bibliography/), which also involved updating the ‘cited source’ popup in the entry page so that it displays all of the new information.  For example, go to this page: https://anglo-norman.net/entry/abanduner  and click on the ‘A-N Med’ link to see a record with multiple items in it.  I also updated the advanced search for citations so that the ‘Citation siglum’ drop-down list uses the new data too.

After that I continued to update the Dictionary Management System.  I updated the ‘View / Download Entry’ page so that the ‘Phase’ of the entry can be updated if necessary.  In the ‘Phase’ section of the page all of the phases are now listed as radio buttons, with the entry’s phase checked.  If you need to change the entry’s phase you can select a different radio button and press the ‘Update Phase’ button.  I also added facilities to manage phase statements via the DMS.  In the menu there’s now an ‘Add Phase’ button, through which you can add a new phase, and a ‘Browse Phases’ button which lists all of the active phases, the number of entries assigned to each, and an option to edit the phase statement.  If there’s a phase statement that has no associated entries you’ll find an option to delete it here too.

I’m still working on the facilities to upload and manage XML entry files via the DMS.  I’ve added in a new menu item labelled ‘Upload Entries’ that, when pressed, loads a page through which you can upload entry XML files.  There’s a text box where you can supply the lead editor initials to be added to the batch of files you upload (any files that already have a ‘lead’ attribute will not be affected) and an option to select the phase statement that should be applied to the batch of files.  Below this area is a section where you can either click to open a file browser and select files to upload or drag and drop files from Windows Explorer (or other file browser).  When files are attached they will be processed, with the results shown in the ‘Update log’ section below the upload area.  Uploaded files are kept entirely separate from the live dictionary until they’ve been reviewed and approved (I haven’t written these sections yet).  The upload process will generate all of the missing attributes I mentioned last week – ‘lead’ initials, the various ID fields, POS, sense numbers etc.  If any of these are present the system won’t overwrite them so it should be able to handle various versions of files.  The system does not validate the XML files – the editors will need to ensure that the XML is valid before it is uploaded.  However, the ‘preview’ option (see below) will quickly let you know if your file is invalid as the entry won’t display properly.  Note also that you can change the ‘lead’ and the phase statement between batches – you can drag and drop a set of files with one lead and statement selected, then change these and upload another batch.  You can of course choose to upload a single file too.

When XML files are uploaded, in the ‘update log’ there will be links directly through to a preview of the entry, but you can also find all entries that have been uploaded but not yet published on the website in the ‘Holding Area’, which is linked to in the DMS menu.  There are currently two test files in this.  The holding area lists the information about the XML entries that have been uploaded but not yet published, such as the IDs, the slug, the phase statement etc.  There is also an option to delete the holding entry.  The last two columns in the table are links to any live entry.  The first links to the entry as specified by the numerical ID in the XML filename, which will be present in the filename of all XML files exported via the DMS’s ‘Download Entry’ option.  This is the ‘existing ID’ column in the table.  The second linking column is based on the ‘slug’ of the holding entry (generated from the ‘lemma’ in the XML).  The ‘slug’ is unique in the data so if a holding entry has a link in this column it means it will overwrite this entry if it’s made live.  For XML files exported via the DMS and then uploaded, both ‘live entry’ links should be the same, unless the editor has changed the lemma.  For new entries both these columns should be blank.

The ‘Review’ button opens up a preview of the uploaded holding entry in the interface of the live site.  This allows the editors to proofread the new entry to ensure that the XML is valid and that everything looks right.  You can return to the holding area from this page by pressing on the button in the left-hand column.  Note that this is just a preview – it’s not ‘live’ and no-one else can see it.

There’s still a lot I need to do.  I’ll be adding in an option to publish an entry in the holding area, at which point all of the data needed for searching will be generated and stored and the existing live entry (if there is one) will be moved to the ‘history’ table.  I also maybe need to extract the earliest date information to display in the preview and in the holding area.  This information is only extracted when the data for searching is generated, but I guess it would be good to see it in the holding area / preview too.  I also need to add in a preview of cross reference entries as these don’t display yet.  I should probably also add in an option to allow the editors to view / download the holding entry XML as they might want to check how the upload process has changed this.  So still lots to tackle over the coming weeks.

Week Beginning 8th March 2021

It was another Data Management Plan-heavy week this week.  I created an initial version of a DMP for Kirsteen McCue’s project at the start of the week and then participated in a Zoom call with Kirsteen and other members of the proposed team on Thursday where the plan was discussed.  I also continued to think through the technical aspects of the metaphor-related proposal involving Wendy and colleagues at Duncan of Jordanstone College of Art and Design in Dundee and reviewed another DMP that Katherine Forsyth in Celtic had asked me to look at.

Other than that, I arranged for Joanna Kopaczyk’s ‘The Future of Scots’ project website to be moved to its top-level ‘ac.uk’ domain, and it can now be found here: https://scotslanguagepolicy.ac.uk/.  Marc Alexander had also contacted me about a weird bug he’d encountered in the Historical Thesaurus.  One of the category pages was failing to display properly and after investigation I figured out that it was an issue with the timeline data for one of the words on this page, which was causing the JavaScript to break.  I pulled out the JSON embedded in the page and the data for the word seemed to be missing a closing ‘}’, which was causing the error.  It turned out that someone had entered the dates the wrong way round for the word.  It was listed as ‘a1400-c1386’.  My dates system had plucked out the dates and correctly ordered them, but this meant the system was left with ‘1400’ with a joining ‘-‘ and then nothing after it, which resulted in the JSON being malformed.  I swapped the dates around (both in the new dates table and in the display date) and everything started working as it should again.  It was a relief to know that it was an issue with the data rather than my code.
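
The fix here was to the data rather than the code, but a more defensive version of the date-range handling would look something like this simplified sketch (not the actual Historical Thesaurus code):

```python
import re

def build_range(display_date):
    """Extract the numeric years from a display date such as 'a1400-c1386',
    order them, and return a well-formed (start, end) pair."""
    years = [int(y) for y in re.findall(r'\d{3,4}', display_date)]
    if not years:
        return None
    return (min(years), max(years))

print(build_range('a1400-c1386'))  # (1386, 1400) rather than a dangling '1400-'
```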

Also this week I spent a bit of time working on the Books and Borrowing project, generating more page image tilesets and their corresponding pages for two more of the Edinburgh ledgers and adding an ‘Events’ page to the project website and giving more members of the project team permission to edit the site.  I also had an email chat with Thomas Clancy about the Iona project and created a ‘Call for Papers’ page including submission form on the project website (it’s not live yet, though).

I spent the rest of my week continuing to work on the Anglo-Norman Dictionary.  We received the excellent news this week that our AHRC application for funding to complete the remaining letters of the dictionary (and carry out more development work) was successful.  This week I made some further tweaks to the new blog pages, adding in the first image in the blog post to the right of the blog snippet on the blog summary page.  I also made the new blog pages live, and you can now access them here: https://anglo-norman.net/blog/.

I also made some updates to the bibliography system based on requests from the editors to separate out the display of links to the DEAF website from the actual URLs (previously just the URLs were displayed).  I updated the database, the DMS and the new bibliography page to add in a new ‘DEAF link text’ field for both main source text records and items within source text records.  I copied the contents of the DEAF field into this new field for all records, updated the DMS to add in the new fields when adding / editing sources, and updated the new bibliography page so that the text that gets displayed for the DEAF link uses the new field, whereas the actual link through to the DEAF website uses the original field.

I also continued to work on the facilities to upload batches of new or updated entry XML files to the DMS.  I created a new ‘holding’ table for uploaded entries and created a page that allows the user to drag and drop XML files into the browser, with files then getting uploaded, processed and added into this new table.  This uses a handy JavaScript library called Dropzone (https://www.dropzonejs.com/) that I previously used for the Scots Syntax Atlas CMS.  The initial version of the upload is working well, but I needed to know exactly how uploaded files should be fully processed before I could proceed further, which required some lengthy email exchanges with the editors.

The scripts I’d written when uploading the new ‘R’ dataset needed to make changes to the data to bring it into line with the data already in the system, as the ‘R’ data didn’t include some attributes that were necessary for the system to work with the XML files, namely:

- In the <main_entry> tag: the ‘lead’ attribute, which is used to display the editor’s initials in the front end (e.g. “gdw”), and the ‘id’ attribute, which although not used to uniquely identify the entries in my new system is still used in the XML for things like cross-references and therefore is required and must be unique.
- In the <sense> tag: the ‘n’ attribute, which increments from 1 within each part of speech and is used to identify senses in the front-end.
- In the <senseInfo> tag: the ID attribute, which is used in the citation and translation searches, and the POS attribute, which is used to generate the summary information at the top of each entry page.
- In the <attestation> tag: the ID attribute, which is used in the citation search.

We needed to decide how these will be handled in future – whether they will be manually added to the XML as the editors work on them or whether the upload script needs to add them in at the point of upload.  We also needed to consider updates to existing entries.  If an editor downloads an entry and then works on it (e.g. adding in a new sense or attestation) then the exported file will already include all of the above attributes, except for any new sections that are added.  In such cases should the new sections have the attributes added manually, or do I need to ensure my script checks for the existence of the attributes and only adds the missing ones as required?

We decided that I’d set up the systems to automatically check for the existence of the attributes and add them in if they’re not already present.  It will take more time to develop such a system but it will make it more robust and hopefully will result in fewer errors.  I’ll also add an option to specify the ‘lead’ initials for the batch of files that are being uploaded, but this will not overwrite the ‘lead’ attribute for any XML files in the batch that already have the attribute specified.
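
As a rough indication of the approach I have in mind (a Python sketch only – the real scripts are PHP, the element and attribute names follow the list above, the assumed nesting of <senseInfo> within <sense> is illustrative, and next_id() is a hypothetical stand-in for however unique IDs are actually allocated):

```python
from itertools import count
from lxml import etree

def ensure_attributes(entry_xml, lead_initials, next_id):
    """Fill in any attributes missing from an uploaded entry, leaving existing
    values untouched."""
    root = etree.fromstring(entry_xml.encode('utf-8'))

    main = root if root.tag == 'main_entry' else root.find('.//main_entry')
    if main is not None:
        if 'lead' not in main.attrib:
            main.set('lead', lead_initials)  # don't overwrite an existing lead
        if 'id' not in main.attrib:
            main.set('id', next_id())

    # 'n' increments from 1 within each part of speech
    counters = {}
    for sense in root.iter('sense'):
        info = sense.find('.//senseInfo')
        pos = info.get('POS', '') if info is not None else ''
        counter = counters.setdefault(pos, count(1))
        n = next(counter)                    # advance within this part of speech
        if 'n' not in sense.attrib:
            sense.set('n', str(n))

    # senseInfo and attestation records need IDs for the citation/translation searches
    for tag in ('senseInfo', 'attestation'):
        for el in root.iter(tag):
            if 'ID' not in el.attrib:
                el.set('ID', next_id())

    return etree.tostring(root, encoding='unicode')
```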

I’ll hopefully get a chance to work on this next week.  Thankfully this is the last week of home-schooling for us so I should have a bit more time from next week onwards.

Week Beginning 1st March 2021

There was quite a bit of work to be done for the Books and Borrowing project this week.  Several more ledgers had been digitised and needed tilesets and page records generated for them.  The former requires the processing and upload of many gigabytes of image files, which takes quite some time to complete, especially as the upload speed from my home computer to the server never gets beyond about 56Kb per second.  However, I just end up leaving my PC on overnight and generally the upload has completed by the morning.  Generating page records generally involves updating a script to change the image filename parts and page numbers and to specify the first and last page; the script then does the rest, but there are some quirks that need to be sorted out manually.  For the Wigtown data some of the images were not sequentially numbered, which meant I couldn’t rely on my script to generate the correct page structure.  For one of the Edinburgh ledgers the RA had already manually created some pages and had added more than a hundred borrowing records to them so I had to figure out a way to incorporate these.  The page images are a double spread (so two pages per image) but the pages the RA had made were individual, so what I needed to do was to remove the manual pages, generate a new set and then update the page references for each of the borrowing records so they appeared on the correct new page.
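
In outline the page-generation step amounts to something like this (a simplified sketch; the real script is PHP and the filename pattern shown is hypothetical):

```python
def generate_pages(ledger_id, first_page, last_page, image_pattern):
    """Build page records for a ledger, linking each page to its image and to
    the previous/next page.  image_pattern is e.g. 'EUL_ledger3_{:04d}.jpg'
    (a made-up example)."""
    pages = []
    for number in range(first_page, last_page + 1):
        pages.append({
            'ledger_id': ledger_id,
            'page_number': number,
            'image': image_pattern.format(number),
            'prev_page': number - 1 if number > first_page else None,
            'next_page': number + 1 if number < last_page else None,
        })
    return pages
```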

Also this week I continued to migrate the blogs over to the new Anglo-Norman Dictionary website, a process which I managed to complete.  The new blog isn’t live yet, as I asked for feedback from the Editors before I replaced the link to the old blog site, and there are a couple of potential tweaks that I need to make before we’re ready to go.  I also had a chat with the Dictionary of the Scots Language people about migrating to a new DNS provider and the impact this might have on email addresses.

The rest of my week was spent working on proposals for two new projects, one for Kirsteen McCue and the other for Wendy Anderson.  This involved reading through all of the documentation, making notes and beginning to write the required Data Management Plans.  For Wendy’s proposal we also had a Zoom meeting with partners in Dundee and for Kirsteen’s proposal I had an email discussion with partners at the British Library.  There’s not really much more I can say about the work I’m doing for these projects, but I’ll be continuing to work on the respective DMPs next week.

Week Beginning 22nd February 2021

I had a couple of Zoom meetings this week, the first on Monday with the Historical Thesaurus team and members of the Oxford English Dictionary’s team to discuss how our two datasets will be aligned and updated in future.  It was an interesting meeting, but there’s still a lot of uncertainty regarding how the datasets can be tracked and connected as future updates are made, at least some of which will probably only become apparent when we get new data to integrate.

My second Zoom meeting was on Tuesday with the Place-Names of Iona project to discuss how we will be working with the QGIS package that team members will be using to access some of the archaeological data and Lidar maps, and also to discuss the issue of 10 digit grid references and the potential change from the old OSGB-36 means of generating latitude and longitude from grid references to the new WGS84 method.  It was a productive meeting and we decided that we would switch over to WGS84 and I would update the CMS to incorporate the new library for generating latitude and longitude from grid references.

I spent some time later in the week implementing this change, meaning that when a member of the project team adds or edits a place-name and supplies a grid reference the latitude and longitude generated use the new system.  As I mentioned a couple of weeks ago, the new library (see http://www.movable-type.co.uk/scripts/latlong-os-gridref.html) allows 6, 8 or 10 digit grid references to be used and is JavaScript based, meaning as soon as the user enters the grid reference the latitude and longitude are generated.  I updated my scripts so that these values immediately appear in the relevant boxes in the form, and also integrated the Google Maps service that generates altitude data from the latitude and longitude, populating the altitude box in the form and also displaying a Google Map showing the exact location that the entered grid reference has produced, in case further tweaks are required.  I’m pretty happy with how the new system is working out.

Also this week I continued to work on the Books and Borrowing project, generating image tilesets for the scans of several volumes of ledgers from Edinburgh University Library and writing scripts to generate pages in the Content Management System, creating ‘next’ and ‘previous’ links as required and associating the relevant images.  I also had an email correspondence about some of the querying methods we will develop for the data, such as collocation information.

I also gave some feedback on a data management plan for a project I’m involved with, had a chat with Wendy Anderson about a possible future project she’s trying to set up and spent some time making updates to the underlying data of the Interactive Map of Burns Suppers that launched last month.  I didn’t have the time to do a huge amount of work on the Anglo-Norman Dictionary this week, but I still managed to migrate some of the project’s old blog posts to our new site over the course of the week.

Finally, I made some updates to the bibliography system for the Dictionary of the Scots Language, updating the new system so it works in a similar manner to the live site.  I added ‘Author’ and ‘Title’ to the drop-down items when searching for both, to help differentiate them, and a search where the user ignores the drop-down options and manually submits the form now works as it does in the live site.  I also fixed the issue with selecting ‘Montgomerie, Norah & William’ resulting in a 404 error.  This was caused by the ampersand.  There were some issues with other non-alphanumeric characters that I’ve fixed too, including slashes and apostrophes.
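
The fix essentially comes down to making sure such characters are URL-encoded before the request is made; a minimal sketch (the URL pattern shown is illustrative, not the DSL’s actual route):

```python
from urllib.parse import urlencode

def bib_search_url(author):
    """Build a search URL with the author safely encoded, so characters such as
    '&', '/' and apostrophes can't break the request."""
    return 'https://dsl.ac.uk/bibliography/?' + urlencode({'author': author})

print(bib_search_url('Montgomerie, Norah & William'))
# https://dsl.ac.uk/bibliography/?author=Montgomerie%2C+Norah+%26+William
```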

Week Beginning 15th February 2021

I spent quite a bit of time this week continuing to work on the Anglo-Norman Dictionary, creating a new ‘bibliography’ page that will replace the existing ‘source texts’ page and uses the new source text management scripts that I added to the new content management system recently.  This required rather a lot of updates as I needed to update the API to use the new source texts table and also to incorporate source text items as required, which took some time.  I then created the new ‘bibliography’ page which uses the new source text data.  There is new introductory text and each item features the new fields as requested by the editors.  ‘Dean’ references always appear, the title and author are in bold and ‘details’ and ‘notes’ appear when present.  If a source text has one or more items these are listed in numeric order, in a slightly smaller font and indented.  Brackets for page numbers are added in.  I also had to change the way the source texts were ordered as previously the list was ordered by the ‘slug’ but with the updates to the data it sometimes happens that the ‘slug’ doesn’t begin with the same letter as the siglum text and this was messing up the order and the alphabetical buttons.  Now the list is ordered by the siglum text stripped of any tags and all seems to be working fine.  I will still need to update the links from dictionary items to the bibliography when the new page goes live, and update the search facilities too, but I’ll leave this until we’re ready to launch the new page.

During the week I made a number of further tweaks to the new bibliography page based on feedback from the editors.  One big thing was to change the way the page was split up in order to allow links to specific bibliographical items to be added.  Previously the selection of a section of the bibliography based on the initial letter was handled in the browser via JavaScript.  This made it fast to switch between letters, but meant that it was not possible to easily link to a specific section of the bibliography.  I changed this so that the selection was handled on the server side.  This does mean that each time a letter is pressed the whole page needs to reload, which is a bit slower, but it also means you can bookmark a specific letter, e.g. bibliographies beginning with ‘T’.  It also means it’s possible to link to a specific item within a page.  Each item in the page has an ID in the HTML consisting of ‘bib-’ plus the item’s slug.  To link to this section of the page you can add a link consisting of the page URL, a hash, and then this ID.  Then when the page loads it will jump down to the relevant section.

I also had to change the way items within bibliographical entries were ordered.  These were previously ordered on the ‘numeral’ field, which contained a Roman numeral.  I’d written a bit of a hack to ensure that these were ordered correctly up to 20, but it turns out that there are some entries with more than 60 items, and some of them have non-standard numerals, such as ‘IXa’.  I decided that it would be too complicated to use the ‘numeral’ field for ordering as the contents are likely to be too inconsistent for a computer to automatically order successfully.  I therefore created a new ‘itemorder’ column in the database that holds a numerical value that decides the order of the items.  I wrote a little script that populates this field for the items already in the system and for any bibliographical entry with 20 or fewer items the order should be correct without manual intervention.  For the handful of entries with more than 20 items the editors will have to manually update the order.  I updated the DMS so that the new ‘item order’ field appears when you add or edit items, and this will need to be used for each item to rearrange the items into the order they should be in.  The new bibliography page uses the new itemorder field so updates are reflected on this page.

I also needed to update the system to correctly process multiple DEAF links, which I’d forgotten to do previously, made some changes to the ordering of items (e.g. so that entries with a number appear before entries with the same text but without a number) and added in an option to hide certain fields by adding a special character into the field.  Also for the AND I updated the XML of an entry and continued to migrate blog posts from the old blog to our new system.

I then began work on the pages of the CMS that will be used for uploading, viewing and downloading entries.  I added an option to the CMS that allows the editors to choose an entry to view all of the data stored about it and to download its XML for editing.  This consists of a textbox into which an entry’s slug can be entered.  After entering the slug and pressing ‘Go’ a page loads that lists all of the data stored about the entry in the system, such as its ID, the ID from the old system, last editor and date of last edit.  You can also access the XML of the entry if you scroll down to the ‘XML’ section of the page.  The XML is hidden in a collapsed section of the page and if you click on the header it expands.  I’ve added in styles to make it easier to read, using a very nice JavaScript library called prism.js (https://prismjs.com/).  There is also a button to download the XML.  Pressing on this prompts you to save the file, and the filename consists of the entryorder plus the entry ID.  This section of the page will also keep a record of all previous versions of the XML when a new version is uploaded into the system (once I develop the upload feature).  This will allow you to access, check and download older versions of the XML, if some mistake has been made when uploading a new version.

Beneath the XML section you can view all of the information that is extracted from the XML and used in the system for search and display purposes: forms, parts of speech, cross references, labels, citations and translations.  This is to enable the editors to check that the data extracted and used by the system is correct.  I could possibly add in options for you to edit this data, but any edits made would then be overwritten the next time an XML file is uploaded for the entry, so I’m not sure how useful this would be.  I think it would be better to limit the editing of this information to via a new XML file upload only.

However, we may want to make some of the information in this page directly editable, specifically some of the fields in the first table on the page.  The editors may want to change the lemma or homonym number, or the slug or entry order.  Similarly the editors may want to manually override the earliest date for the entry (although this would then be overwritten when a new XML version is uploaded) or change the ‘phase’ information.

The scripts to upload a new XML entry are going to take some time to get working, but at least for now you can view and download entries as required.

Also this week I dealt with a few queries about the Symposium for Seventeenth-Century Scottish Literature, which was taking place online this week and for which I had set up the website.  I also spoke to Arts IT Support about getting a test server set up for the Historical Thesaurus.  I spent a bit of time working for the Books and Borrowing project, processing images for a ledger from Edinburgh University Library, uploading these to the server and generating page records and links between pages for the ledger.  I also gave some advice to the Scots Language Policy RA about how to use the University’s VPN, spoke to Jennifer Smith about her SCOSYA follow-on funding proposal and had a chat with Thomas Clancy about how we will use GIS systems in the Iona project.

Week Beginning 8th February 2021

I was on holiday from Monday to Wednesday this week to cover the school half-term, so only worked on Thursday and Friday.  On Thursday I had a Zoom call with the Historical Thesaurus team to discuss further imports of new data from the OED and how to export our data (such as the revised category hierarchy) in a format that the OED team would be able to use.  We have a meeting with the OED the week after next so it was good to go over some of the issues and refresh my memory about where things were left off as it’s been several months since I last did any major work on the HT.  As a result of the meeting I also did some further work, namely exporting the current version of the online database and making it available for Fraser to download and access on his own PC, and updating some of the earlier scripts I’d created to generate statistics about the unmatched categories and words so that they used the most recent versions of the database.

Also this week I made some further tweaks to the SCOSYA website and created a user account for a researcher who is going to work with some of the data that is only available in the project’s CMS rather than the public website.  I also read through a new funding proposal that Wendy Anderson is involved with and gave her some feedback on that, and reported a couple of issues with expired SSL certificates that were affecting some websites.

I spent some time on the Books and Borrowing project on two data-related tasks.  The first was to look through the new set of digitised images from Edinburgh University Library and decide what we should do with them.  Each image is of an open book, featuring both recto and verso pages in one image.  We may need to split these up into individual images, or we may just create page records that cover both pages.  I alerted the project PI Katie Halsey to the issue and the team will make a decision about which approach to take next week.  The second task was to look through the data from Selkirk library that another project had generated.  We had previously imported data for Selkirk that another researcher had compiled a few years before our project began, but recently discovered that this data did not include several thousand borrowing records of French prisoners of war, as the focus of the researcher was on Scottish borrowers.  We need these missing records and another project has agreed to let us use their data.  I had intended to completely replace the database I’d previously ingested with this new data, but on closer inspection of the new data I have a number of reservations about doing so.

The data from the other project has been compiled in an Excel spreadsheet and as far as I can tell there is no record of the ledger volume or page that each borrowing record was originally located on.  In the data we already have there is a column for ‘source ref’, containing the ledger volume (e.g. ‘volume 1’) and a column for ‘page number’, containing a unique ID for each page in the spreadsheet (e.g. ‘1010159r’).  Looking through the various sheets in the new spreadsheet there is nothing comparable to this, which is vital for our project, as borrowing records must be associated with page records, which in turn must be associated with a ledger.  It also would make it extremely difficult to trace a record back to the original physical record.

Another issue is that in our existing data the researcher has very handily used unique identifiers for readers (e.g. ‘brodie_james’), borrowing records (e.g. ‘1’) and books (e.g. ‘adam_view_religion’) that tie the various records together very nicely.  The new project’s data does not appear to use any unique identifiers to connect bits of data together.  For example, there are three ‘John Anderson’ borrowers and in the data we’re currently using these are differentiated by their IDs as ‘anderson_john’, ‘anderson_john2’ and ‘anderson_john3’.  This means it’s easy to tell which borrower appears in the borrowing records.  In the new project’s data three different fields are required to identify the borrower:  surname, forename and residence.  This data is stored in separate columns in the ‘All loans’ sheet (e.g. ‘Anderson’, ‘John’, ‘Cramalt’), but in the ‘Members’ sheet everything is joined together in one ‘Name’ field, e.g. ‘Anderson, John (Cramalt)’.  This lack of unique identifiers combined with the inconsistent manner of recording name and place will make it very difficult to automatically join up records and I’ve flagged this up with Katie for further discussion with the team.  It’s looking like we may want to try and identify the POW records from the new project’s data and amalgamate these with the data we already have, rather than replacing everything.
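
If we do end up trying to match records automatically, the gist would be normalising both representations into a comparable key, something like this sketch (which assumes the ‘Members’ names are consistently formatted as ‘Surname, Forename (Residence)’, which may well not hold across the whole dataset):

```python
import re

def key_from_columns(surname, forename, residence):
    """Build a comparable key from the separate columns in the 'All loans' sheet."""
    return tuple(part.strip().lower() for part in (surname, forename, residence))

def key_from_member_name(name):
    """Parse a 'Members' sheet name such as 'Anderson, John (Cramalt)' into the same key."""
    match = re.match(r'^(?P<surname>[^,]+),\s*(?P<forename>[^(]+?)\s*(?:\((?P<residence>[^)]*)\))?$', name)
    if not match:
        return None
    return tuple((match.group(g) or '').strip().lower()
                 for g in ('surname', 'forename', 'residence'))

assert key_from_columns('Anderson', 'John', 'Cramalt') == key_from_member_name('Anderson, John (Cramalt)')
```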

I also spent a bit of time on the Anglo-Norman Dictionary this week, making some changes to homonym numbers for a few entries and manually updating a couple of commentaries.  I also worked for the Dictionary of the Scots Language, preparing the SND and DOST datasets for import into the new editing system that the project is now going to use.  This was a little trickier than anticipated as initially I zipped up the data that I’d exported from the old editing system in November when I worked on the new ‘V4’ version of the online API, but we realised that this still contained duplicates that I’d stripped out when uploading the data into the new online database.  So instead I exported the XML from the online database, but it turned out that during the upload process a section of the entry XML was being removed.  This section (<meta>) contained all of the forms and URLs and my upload process exported these to a separate table and reformatted the XML so that it matched the structure that was defined during the creation of the first version of the API.  However, the new editing system requires this <meta> section, so the data I’d prepared was not usable.  Instead I took the XML exported from the old editing system back in November and ran it through the script I’d written to strip out duplicates, then prepared the resulting XML dataset for transfer.  It looks like this approach has worked, but I’ll find out more next week.

Week Beginning 1st February 2021

I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.

The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects.  For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet).  I also added commentaries to more than 260 entries, which took some time to test.  I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML.  Without doing this it was possible to add the tags in, but this would give errors in the editing software.  I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required.  I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system.  It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.

For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data.  These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed.  I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.

For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS.  The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR.  The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.

As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2, 4, 6, 8 or 10 digits.  Information about the library, including text boxes where you can enter an NGR and see the results, can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary looking formulae.

The library is written in JavaScript, which runs in the client’s browser, whereas the previous library was written in PHP, which runs on the server.  This means I needed to change the way the CMS works.  Previously you’d enter an NGR and then, when the form was submitted to the server, the PHP library would generate the latitude and longitude; now the latitude and longitude need to be generated in the browser as soon as the NGR is entered into the textbox, and two further textboxes for latitude and longitude will appear in the form and will be automatically populated with the results.

This does mean the person filling out the form can see the generated latitude and longitude and also tweak it if required before submitting the form, which is a potentially useful thing.  I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further.  I also still need to work on the format of the latitude and longitude as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848) with ‘W’ and ‘S’ figures being negatives.
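
Converting the library’s output into the purely decimal form we store is straightforward; a small sketch of the sort of helper I have in mind (shown in Python for illustration, although the conversion will actually happen in JavaScript in the browser):

```python
def to_decimal(value):
    """Convert a value such as '6.420848° W' into a signed decimal (-6.420848),
    treating W and S as negative."""
    value = value.strip()
    sign = -1 if value[-1].upper() in ('W', 'S') else 1
    number = value.rstrip('NSEWnsew ').replace('°', '').strip()
    return sign * float(number)

print(to_decimal('6.420848° W'))   # -6.420848
print(to_decimal('56.325744° N'))  #  56.325744
```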

However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team.  The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014.  Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section.  Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems.  The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.

The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system.  I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system.  This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22W/@56.3258889,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22W/@56.325744,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848
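
To illustrate the scale of the difference outside the CMS, the two datums can be compared with the pyproj library (a sketch using an illustrative easting/northing rather than a record from the project; the CMS itself will use the JavaScript library mentioned above):

```python
from pyproj import Transformer

# EPSG:27700 = OS National Grid (OSGB36 datum); EPSG:4277 = OSGB36 lat/long;
# EPSG:4326 = WGS84 lat/long (what GPS and the new library default to).
to_osgb36 = Transformer.from_crs('EPSG:27700', 'EPSG:4277', always_xy=True)
to_wgs84 = Transformer.from_crs('EPSG:27700', 'EPSG:4326', always_xy=True)

easting, northing = 126200, 724000   # illustrative point only

lon_old, lat_old = to_osgb36.transform(easting, northing)
lon_new, lat_new = to_wgs84.transform(easting, northing)

print(f'OSGB36: {lat_old:.6f}, {lon_old:.6f}')
print(f'WGS84:  {lat_new:.6f}, {lon_new:.6f}')
```

Depending on which transformation grids pyproj has available the exact figures may differ slightly from the JavaScript library’s output, but the roughly 100-metre offset between the two datums is clearly visible.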

So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’.  As I say, this will need further discussion before I implement any changes.

Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.

Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.