Week Beginning 27th September 2021

I had two Zoom calls on Monday this week.  The first was with the Burns people to discuss the launch of the website for the ‘letters and poems’ part of ‘Editing Burns’, to complement the existing ‘Prose and song’ website (https://burnsc21.glasgow.ac.uk/).  The new website will launch in January with some video content and blogs, plus I will be working on a content management system for managing the network of Burns’ letter correspondents, which I will put together some time in November, assuming the team can send me some sample data by then.  This system will eventually power the ‘Burns letter writing trail’ interactive maps that I’ll create for the new site sometime next year.

My second Zoom call was for the Books and Borrowing project to discuss adding data from a new source to the database.  The call gave us an opportunity to discuss the issues with the data that I’d highlighted last week.  It was good to catch up with the team again and to discuss the issues with the researcher who had originally prepared the spreadsheet containing the data.  We managed to address all of the issues and the researcher is going to spend a bit of time adapting the spreadsheet before sending it to me to be batch uploaded into our system.

I spent some further time this week investigating the issue of some of the citation dates in the Anglo-Norman Dictionary being wrong, as discussed last week.  The issue affects some 4309 entries where at least one citation features the form only in a variant text.  This means that the citation date should not be the date of the manuscript in the citation, but the date when the variant of the manuscript was published.  Unfortunately this situation was never flagged in the XML, and there has never been any means of flagging it.  The variant date should only ever be used when the form of the word in the main manuscript is not directly related to the entry in question but the form in the variant text is.  The problem is that it cannot be automatically ascertained whether it is the form in the main manuscript or the form in the variant text that is the relevant one, as there is so much variation in forms.

For example, in the entry https://anglo-norman.net/entry/bochet_1 there is a form ‘buchez’ in a citation and then two variant texts for this where the forms are ‘huchez’ and ‘buistez’.  None of these forms are listed in the entry’s XML as variants so it’s not possible for a script to automatically deduce which is the correct date to use (the closest is ‘buchet’).  In this case the main citation form and its corresponding date should be used.  Whereas in the entry https://anglo-norman.net/entry/babeder the main citation form is ‘gabez’ while the variant text has ‘babedez’, and so this is the form and corresponding date that needs to be used.  It would be difficult for a script to automatically deduce this.  In this case a Levenshtein test (which measures how many letters need to be changed to turn one string into another) could work, but the results would still need to be manually checked.
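To give a flavour of what such a test involves, here’s a minimal JavaScript sketch (purely illustrative – it is not the script I’d actually run against the XML, and the headword and forms are simply taken from the example above):

```javascript
// Minimal sketch: rank citation forms by Levenshtein distance from the headword.
// The headword and forms are just the example discussed above.
function levenshtein(a, b) {
  const m = a.length, n = b.length;
  const d = Array.from({ length: m + 1 }, (_, i) => [i, ...Array(n).fill(0)]);
  for (let j = 0; j <= n; j++) d[0][j] = j;
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost);
    }
  }
  return d[m][n];
}

const headword = 'bochet';
const forms = ['buchez', 'huchez', 'buistez']; // main citation form plus the two variant forms
const ranked = forms
  .map(f => ({ form: f, distance: levenshtein(headword, f) }))
  .sort((a, b) => a.distance - b.distance);

console.log(ranked); // 'buchez' comes out closest, but a human would still need to confirm it
```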

The editor wanted me to focus on those entries where the date issue affects the earliest date for an entry, as these are the most important cases: the issue results in an incorrect date being displayed for the entry in the header and the browse feature.  I wrote a script that finds all entries that feature ‘<varlist’ somewhere in the XML (the previously exported 4309 entries).  It then goes through all attestations (in all sense, subsense and locution sense and subsense sections) to pick out the one with the earliest date, exactly as the code for publishing an entry does.  It then checks the quotation XML of the attestation with the earliest date for the presence of ‘<varlist’ and, if it finds this, outputs information for the entry, consisting of the slug, the earliest date as recorded in the database, the earliest date of the attestation as found by the script, the ID of the attestation and the XML of the quotation.  The script has identified 1549 entries that have a varlist in the earliest citation, all of which will need to be edited.
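In simplified terms the check is as follows (a sketch only – the data structure here is invented for illustration and is not the actual database schema or publication code):

```javascript
// Sketch of the check described above, using an invented, simplified entry structure.
function earliestAttestation(entry) {
  // In the real script attestations are gathered from all sense, subsense and
  // locution sections; here they are already flattened into one array.
  return entry.attestations.reduce(
    (earliest, att) => (!earliest || att.date < earliest.date ? att : earliest),
    null
  );
}

function flagEntryIfVarlistAffectsEarliestDate(entry) {
  const earliest = earliestAttestation(entry);
  if (earliest && earliest.quotationXml.includes('<varlist')) {
    // The same sort of information my script outputs for manual checking.
    return {
      slug: entry.slug,
      storedEarliestDate: entry.earliestDate,
      computedEarliestDate: earliest.date,
      attestationId: earliest.id,
      quotationXml: earliest.quotationXml
    };
  }
  return null;
}
```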

However, every citation has a date associated with it and this is used in the advanced search where users have the option to limit their search to years based on the citation date.  Only updating citations that affect the entry’s earliest date won’t fix this, as there will still be many citations with varlists that haven’t been updated and will still therefore use the wrong date in the search.  Plus any future reordering of citations would require all citations with varlists to be updated to get entries in the correct order.  Fixing the earliest citations with varlists in entries based on the output of my script will fix the earliest date as used in the header of the entry and the ‘browse’ feature only, but I guess that’s a start.

Also this week I sorted out some access issues for the RNSN site, submitted the request for a new top-level ‘ac.uk’ domain for the STAR project and spent some time discussing the possibilities for managing access to videos of the conference sessions for the Iona place-names project.  I also updated the page about the Scots Dictionary for Schools app on the DSL website (https://dsl.ac.uk/our-publications/scots-dictionary-for-schools-app/) after it won the award for ‘Scots project of the year’.

I also spent a bit of time this week learning about the statistical package R (https://www.r-project.org/).  I downloaded and installed the package and the R Studio GUI and spent some time going through a number of tutorials and examples in the hope that this might help with the ‘Speak for Yersel’ project.

For a few years now I’ve been meaning to investigate using a spider / radar chart for the Historical Thesaurus, but I never found the time.  I unexpectedly found myself with some free time this week due to ‘Speak for Yersel’ not needing anything from me yet so I thought I’d do some investigation.  I found a nice looking d3.js template for spider / radar charts here: http://bl.ocks.org/nbremer/21746a9668ffdf6d8242  and set about reworking it with some HT data.

My idea was to use the chart to visualise the distribution of words in one or more HT categories across different parts of speech in order to quickly ascertain the relative distribution and frequency of words.  I wanted to get an overall picture of the makeup of the categories initially, but to then break this down into different time periods to understand how categories changed over time.

As an initial test I chose the categories 02.04.13 Love and 02.04.14 Hatred, and in this initial version I looked only at the specific contents of the categories – no subcategories and no child categories.  I manually extracted counts of the words across the various parts of speech and then manually split them up into words that were active in four broad time periods: OE (up to 1149), ME (1150-1449), EModE (1450-1799) and ModE (1800 onwards) and then plotted them on the spider / radar chart, as you can see in this screenshot:

You can quickly move through the different time periods plus the overall picture using the buttons above the visualisation, and I think the visualisation does a pretty good job of giving you a quick and easy-to-understand impression of how the two categories compare and evolve over time, allowing you to see, for example, how the numbers of nouns and adverbs for love and hate are pretty similar in OE:

but by ModE the number of nouns for Love has dropped dramatically, as has the number of adverbs for Hate:

We are of course dealing with small numbers of words here, but even so it’s much easier to use the visualisation to compare different categories and parts of speech than it is to use the HT’s browse interface.  Plus if such a visualisation was set up to incorporate all words in child categories and / or subcategories it could give a very useful overview of the makeup of different sections of the HT and how they develop over time.

There are some potential pitfalls to this visualisation approach, however.  The scale currently changes based on the largest word count in the chosen period, meaning that unless you’re paying attention you might get the wrong impression of the number of words.  I could fix the scale at the largest count across all periods, but that would then make it harder to make out details in periods that have far fewer words.  Also, I suspect most categories are going to have many more nouns than other parts of speech, and a large spike of nouns can make it harder to see what’s going on with the other axes.  Another thing to note is that the order of the axes is fairly arbitrary but can have a major impact on how someone may interpret the visualisation.  If you look at the OE chart the ‘Hate’ area looks massive compared to the ‘Love’ area, but this is purely because there is only one ‘Love’ adjective compared to 5 for ‘Hate’.  If the adverb axis had come after the noun one instead, the shapes of ‘Love’ and ‘Hate’ would have been more similar.  You don’t necessarily appreciate at first glance that ‘Love’ and ‘Hate’ have very similar numbers of nouns in OE, which is concerning.  However, I think the visualisations have potential for the HT and I’ve emailed the other HT people to see what they think.
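For anyone curious, the data behind the charts is just a count per part of speech for each category in each period, and the scale question comes down to which maximum value gets passed to the chart.  Here’s a rough sketch (the counts are made up, and the {axis, value} shape is simply how I understand the radar chart template expects its series):

```javascript
// Invented counts for one period, in roughly the shape the radar chart template takes:
// one array of {axis, value} objects per series (here, 'Love' and 'Hate').
const periods = {
  OE: [
    [{ axis: 'noun', value: 12 }, { axis: 'adjective', value: 1 }, { axis: 'adverb', value: 3 }, { axis: 'verb', value: 7 }],  // Love
    [{ axis: 'noun', value: 11 }, { axis: 'adjective', value: 5 }, { axis: 'adverb', value: 4 }, { axis: 'verb', value: 6 }]   // Hate
  ]
  // ME, EModE and ModE would follow the same shape.
};

// Current behaviour: the scale is based on the largest count in the chosen period.
const perPeriodMax = Math.max(...periods.OE.flat().map(d => d.value));

// Alternative: fix the scale at the largest count across all periods, which keeps the
// axes comparable between periods at the cost of detail in the sparser ones.
const fixedMax = Math.max(...Object.values(periods).flat(2).map(d => d.value));
```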


Week Beginning 24th May 2021

I had my first dose of the Covid vaccine on Tuesday morning this week (the AstraZeneca one), so I lost a bit of time whilst going to get that done.  Unfortunately I had a bit of a bad reaction to it and ended up in bed all day Wednesday with a pretty nasty fever.  I had Covid in October last year but only experienced mild symptoms and wasn’t even off work for a day with it, so in my case the cure has been much worse than the disease.  However, I was feeling much better again by Thursday, so I guess I lost a total of about a day and a half of work, which is a small price to pay if it helps to ensure I don’t catch Covid again and (what would be worse) pass it on to anyone else.

In terms of work this week I continued to work on the Anglo-Norman Dictionary, beginning with a few tweaks to the data builder that I had completed last week.  I’d forgotten to add a bit of processing to the MS date that was present in the Text Date section to handle fractions, so I added that in.  I also updated the XML output so that ‘pref’ and ‘suff’ only appear if they have content now, as the empty attributes were causing issues in the XML editor.

I then began work on the largest outstanding task I still have to tackle for the project: the migration of the textbase texts to the new site.  There are about 80 lengthy XML digital editions on the old site that can be searched and browsed, and I need to ensure these are also available on the new site.  I managed to grab a copy of all of the source XML files and I tracked down a copy of the script that the old site used to process the files.  At least I thought I had.  It turned out that this file actually references another file that must do most of the processing, including the application of an XSLT file to transform the XML into HTML, which is the thing I really could do with getting access to.  Unfortunately this file was not in the data from the server that I had been given access to, which somewhat limited what I could do.  I still have access to the old site and whilst experimenting with the old textbase I managed to make it display an error message that gives the location of the file: [DEBUG: Empty String at /var/and/reduce/and-fetcher line 486. ].  With this location available I asked Heather, the editor who has access to the server, if she might be able to locate this file and others in the same directory.  She had to travel to her University in order to be able to access the server, but once she did she was able to track the necessary directory down and get a copy to me.  This also included the XSLT file, which will help a lot.

I wrote a script to process all of the XML files, extracting titles, bylines, imprints, dates, copyright statements and splitting each file up into individual pages.  I then updated the API to create the endpoints necessary to browse the texts and navigate through the pages, for example the retrieval of summary data for all texts, or information about a specified text, or information about a specific page (including its XML).  I also began working on a front-end for the textbase, which is still very much in progress.  Currently it lists all texts with options to open a text at the first available page or select a page from a drop-down list of pages.  There are also links directly into the AND bibliography and DEAF where applicable, as the following screenshot demonstrates:

It is also possible to view a specific page, and I’ve completed work on the summary information about the text and a navbar that makes it possible to navigate through the pages (or jump directly to a different page entirely).  What I haven’t yet tackled is the processing of the XML, which is going to be tricky and which I hope to delve into next week.  Below is a screenshot of the page view as it currently looks, with the raw XML displayed.

I also investigated and fixed an issue the editor Geert spotted, whereby the entire text of an entry was appearing in bold.  The issue was caused by an empty <link_form/> tag.  In the XSLT each <link_form> becomes a bold tag <b> with the content of the link form in the middle.  As there was no content it became a self-closed tag <b/>, which is valid in XML but not valid in HTML, where it was treated as an opening tag with no corresponding closing tag, resulting in the remainder of the page all being bold.  I got around this by placing the space that preceded the bold tag (" <b></b>") within the bold tag instead ("<b> </b>"), meaning the tag is no longer considered empty and the XSLT doesn’t self-close it, but ideally if there is no <link_form> then the tag should just be omitted, which would also solve the problem.
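If you want to see the difference for yourself, this little snippet (which can be pasted into a browser console) parses the same string once as XML and once as HTML:

```javascript
// The same string parsed as XML and as HTML behaves very differently.
const snippet = '<p>headword <b/> rest of the entry</p>';
const parser = new DOMParser();

// As XML, <b/> is a complete, empty element.
const asXml = parser.parseFromString(snippet, 'application/xml');
console.log(asXml.documentElement.querySelector('b').childNodes.length); // 0

// As HTML, <b/> is treated as an opening <b> tag, so the rest of the paragraph
// ends up inside it (i.e. displayed in bold).
const asHtml = parser.parseFromString(snippet, 'text/html');
console.log(asHtml.querySelector('b').textContent); // " rest of the entry"
```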

I also looked into an issue with the proofreader that Heather encountered.  When she uploaded a ZIP file with around 50 entries in it some of the entries wouldn’t appear in the output, but would just display their title.  The missing entries seemed to be random, with no clear reason why some were missing.  After some investigation I realised what the problem was: each time an XML file was processed for display the DTD referenced in the file was checked.  When processing lots of files all at once this exceeded the maximum number of file requests the server allows from a specific client and temporarily blocked access to the DTD, causing the processing of some of the XML files to silently fail.  The maximum number would be reached at a different point each time, thus meaning a different selection of entries would be blank.  To fix this I updated the proofreader script to remove the reference to the DTD from the XML files in the uploaded ZIP before they are processed for display.  The DTD isn’t actually needed for the display of the entry – all it does is specify the rules for editing it.  With the DTD reference removed it looks like all entries are getting properly displayed.
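The fix itself lives in the proofreader script on the server, but the idea is simply to strip the DOCTYPE declaration from each file before it’s processed for display, along these lines (a sketch only – the regular expression and filename here are illustrative rather than the actual code):

```javascript
// Remove the DOCTYPE declaration (and hence the external DTD reference) from an
// XML string before it is processed for display.
function stripDoctype(xml) {
  return xml.replace(/<!DOCTYPE[^>[]*(\[[^\]]*\])?[^>]*>/i, '');
}

const example = '<?xml version="1.0"?>\n<!DOCTYPE entry SYSTEM "entry.dtd">\n<entry>...</entry>';
console.log(stripDoctype(example));
// The DTD only defines editing rules, so removing the reference doesn't affect display.
```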

Also this week I gave some further advice to Luca Guariento about a proposal he’s working on, fixed a small display issue with the Historical Thesaurus and spoke to Craig Lamont about the proposal he’s putting together.  Other than that I spent a bit of time on the Dictionary of the Scots Language, creating four different mockups of how the new ‘About this entry’ box could look and investigating why some of the bibliographical links in entries in the new front-end were not working.  The problem was being caused by the reworking of cref contents that the front-end does in order to ensure only certain parts of the text become a link.  In the XML the bib ID is applied to the full cref (e.g. <cref refid="bib018594"><geo>Sc.</geo> <date>1775</date> <title>Weekly Mag.</title> (9 Mar.) 329: </cref>) but we wanted the link to only appear around titles and authors rather than the full text.  The issue with the missing links was cropping up where there is no author or title for the link to be wrapped around (e.g. <cit><cref refid="bib017755"><geo>Ayr.</geo><su>4</su> <date>1928</date>: </cref><q>The bag’s fu’ noo’ we’ll sadden’t.</q></cit>).  In such cases the link wasn’t appearing anywhere.  I’ve updated this now so that if no author or title is found then the link gets wrapped around the <geo> tag instead, and if there is no <geo> tag the link gets wrapped around the whole <cref>.
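The decision about what to wrap the link around boils down to a simple fallback order, which the following sketch illustrates (this is not the actual front-end code, and the link URL is invented for the example):

```javascript
// Decide which element inside a <cref> the bibliography link should wrap around:
// title or author first, then <geo>, and finally the whole <cref> as a last resort.
function linkTarget(crefElement) {
  return crefElement.querySelector('title')
      || crefElement.querySelector('author')
      || crefElement.querySelector('geo')
      || crefElement;
}

function wrapInLink(crefElement) {
  const target = linkTarget(crefElement);
  const link = document.createElement('a');
  link.href = '/bibliography/' + crefElement.getAttribute('refid'); // URL pattern invented
  target.replaceWith(link);
  link.appendChild(target);
}
```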

I also fixed a couple of advanced search issues that had been encountered with the new (and as yet not publicly available) site.  There was a 404 error that was being caused by a colon in the title.  The selected title gets added into the URL and colons are special characters in URLs, which was causing a problem.  However, I updated the scripts to allow colons to appear and the search now works.  It also turned out that the full-text searches were searching the contents of the <meta> tag in the entries, which is not something that we want.  I knew there was some other reason why I stripped the <meta> section out of the XML and this is it.  The contents of <meta> end up in the free-text search and are therefore both searchable and returned in the snippets.  To fix this I updated my script that generates the free-text search data to remove <meta> before the free-text search is generated.  This doesn’t remove it permanently, just in the context of the script executing.  I regenerated the free-text data and it no longer includes <meta>, and I then passed this on to Arts IT Support who have the access rights to update the Solr collection.  With this in place the advanced search no longer does anything with the <meta> section.
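For illustration, the two fixes amount to something like the following (not the actual project code – in practice the title handling was done by allowing colons through rather than encoding, and the <meta> removal happens in the script that generates the Solr data):

```javascript
// 1. One way of handling special characters such as colons: encode the selected
//    title before it is added to the URL.
const title = 'Weekly Mag.: a selection'; // invented example title
const searchUrl = '/advanced-search/title/' + encodeURIComponent(title);

// 2. Strip the <meta> section from the entry XML before the free-text search data
//    is generated, so its contents are neither searchable nor returned in snippets.
//    This only affects the copy used for indexing, not the stored XML.
function stripMetaForIndexing(entryXml) {
  return entryXml.replace(/<meta>[\s\S]*?<\/meta>/g, '');
}
```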

Week Beginning 10th May 2021

I continued to work on updates to the Anglo Norman Dictionary for most of this week, looking at fixing the bad citation dates in entries that were causing the display of ‘earliest date’ to be incorrect.  A number of the citation dates have a proper date in text form (e.g. s.xii/xiii) but have incorrect ‘post’ and ‘pre’ attributes (e.g. ‘00’ and ‘99’).  The system uses these ‘post’ and ‘pre’ attributes for date searching and for deciding which is the earliest date for an entry, and if one of these bad dates was encountered it was considering it to be the earliest date.  Initially I thought there were only a few entries that had ended up with an incorrect earliest date, because I was searching the database for all earliest dates that were less than 1000.  However, I then realised that the bulk of the entries with incorrect earliest dates had the earliest date field set to ‘null’ and in database queries ‘null’ is not considered less than 1000 but a separate thing entirely and so such entries were not being found.  I managed to identify several hundred entries that needed their dates fixed and wrote a script to do so.

It was slightly more complicated than a simple ‘find and replace’ as the metadata about the entry needed to be regenerated too – e.g. the dates extracted from the citations that are used in the advanced search and the earliest date display for entries.  I managed to batch correct several hundred entries using the script and also adapted it to look for other bad dates that needed fixing too.

I also created a new feature for the Dictionary Management System: an entry proofreader.  It allows an editor to attach a ZIP file containing XML entries and it then displays all of these in a similar manner to the live site, only with all entries on one long page.  The editor can then select all of the text, copy it and then paste it into Word and the major formatting elements will be retained (bold, italic, superscript etc.).  I tested the feature by zipping up 3,178 XML entries and although it took a few minutes to process, the page displayed properly and I was able to copy the text to Word (resulting in a 1,029 page Word file).  After finishing the initial version of the script I had to tweak it a bit, as I wrote the HTML and JavaScript with the expectation that there would be one dictionary item on the page and some aspects were not working when there were multiple items and needed updating.  I also ensured that links to sources in entries work.  In the actual dictionary they open a pop-up, which clearly isn’t going to work in Word so instead I made the link go to the relevant item in the bibliography page (e.g. https://anglo-norman.net/bibliography/B#bib-Best).  Links to other dictionaries, labels and other AND entries also all now work from Word.

In addition, cogrefs appear before variants and deviants, commentaries appear (as full text, not cut off), Xrefs at the bottom now have the ‘see also’ text above them as in the live site, editor initials now appear where they exist and numerals only appear where there is more than one sense in a POS.

Also this week I did some further work for the Dictionary of the Scots Language based on feedback after my upload of data from the DSL’s new editing system.  There was a query about the ‘slug’ used for referencing an entry in a URL.  When the new data is processed by the import script the ‘slug’ is generated from the first <url> entry in the XML.  If this <url> begins ‘dost’ or ‘snd’ it means a headword is not present in the <url> and therefore the new system ID is taken as the new ‘slug’ instead.  All <url> forms are also stored as alternative ‘slugs’ that can still be used to access the entry.  I checked the new database and there are 3258 entries that have a ‘slug’ beginning with ‘dost’ or ‘snd’, i.e. they have the new ID as their ‘slug’ because they had an old ID as their first <url> in the XML.  I checked a couple of these and they don’t seem to have the headword as a <url>, e.g. ‘beit’ (dost00052776) only has the old ID (twice) as URLs: <url>dost2543</url><url>dost2543</url>, ‘well-fired’ (snd00090568) only has the old ID (twice) as URLs: <url>sndns4098</url><url>sndns4098</url>.  I’ve asked the editors what should be done about this.
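The rule for choosing the ‘slug’ is simple enough to sketch (illustrative only):

```javascript
// Take the first <url> from the entry XML as the slug, but fall back to the new
// system ID when that <url> is just an old DOST/SND ID rather than a headword.
function chooseSlug(firstUrl, newSystemId) {
  const isOldId = /^(dost|snd)/i.test(firstUrl);
  return isOldId ? newSystemId : firstUrl;
}

console.log(chooseSlug('dost2543', 'dost00052776')); // 'dost00052776'
console.log(chooseSlug('beit', 'dost00052776'));     // 'beit'
```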

Also this week I wrote a script to generate a flat CSV from the Historical Thesaurus’s relational database structure, joining the lexeme and category tables together and appending entries from the new ‘date’ table as additional columns as required.  It took a little while to write the script and then a bit longer to run it, resulting in a 241MB CSV file.

I also gave some advice to Craig Lamont in Scottish Literature about a potential bid he’s putting together, and spoke to Luca about a project he’s been asked to write a DMP for.  I also looked through some journals that Gerry Carruthers is hoping to host at Glasgow and gave him an estimate of the amount of time it would take to create a website based on the PDF contents of the old journal items.

Week Beginning 8th March 2021

It was another Data Management Plan heavy week this week.  I created an initial version of a DMP for Kirsteen McCue’s project at the start of the week and then participated in a Zoom call with Kirsteen and other members of the proposed team on Thursday where the plan was discussed.  I also continued to think through the technical aspects of the metaphor-related proposal involving Wendy and colleagues at Duncan Jordanstone College of Art and Design at Dundee and reviewed another DMP that Katherine Forsyth in Celtic had asked me to look at.

Other than that, I arranged for Joanna Kopaczyk’s ‘The Future of Scots’ project website to be moved to its top-level ‘ac.uk’ domain, and it can now be found here: https://scotslanguagepolicy.ac.uk/.  Marc Alexander had also contacted me about a weird bug he’d encountered in the Historical Thesaurus.  One of the category pages was failing to display properly and after investigation I figured out that it was an issue with the timeline data for one of the words on this page, which was causing the JavaScript to break.  I pulled out the JSON embedded in the page and the data for the word seemed to be missing a closing ‘}’, which was causing the error.  It turned out that someone had entered the dates the wrong way round for the word.  It was listed as ‘a1400-c1386’.  My dates system had plucked out the dates and correctly ordered them, but this meant the system was left with ‘1400’ with a joining ‘-’ and then nothing after it, which resulted in the JSON being malformed.  I swapped the dates around (both in the new dates table and in the display date) and everything started working as it should again.  It was a relief to know that it was an issue with the data rather than my code.

Also this week I spent a bit of time working on the Books and Borrowing project, generating more page image tilesets and their corresponding pages for two more of the Edinburgh ledgers, adding an ‘Events’ page to the project website and giving more members of the project team permission to edit the site.  I also had an email chat with Thomas Clancy about the Iona project and created a ‘Call for Papers’ page, including a submission form, on the project website (it’s not live yet, though).

I spent the rest of my week continuing to work on the Anglo-Norman Dictionary.  We received the excellent news this week that our AHRC application for funding to complete the remaining letters of the dictionary (and carry out more development work) was successful.  This week I made some further tweaks to the new blog pages, adding in the first image in the blog post to the right of the blog snippet on the blog summary page.  I also made the new blog pages live, and you can now access them here: https://anglo-norman.net/blog/.

I also made some updates to the bibliography system based on requests from the editors to separate out the display of links to the DEAF website from the actual URLs (previously just the URLs were displayed).  I updated the database, the DMS and the new bibliography page to add in a new ‘DEAF link text’ field for both main source text records and items within source text records.  I copied the contents of the DEAF field into this new field for all records, updated the DMS to add in the new fields when adding / editing sources and updated the new bibliography page so that the text that gets displayed for the DEAF link uses the new field, whereas the actual link through to the DEAF website uses the original field.

I also continued to work on the facilities to upload batches of new or updated entry XML files to the DMS.  I created a new ‘holding’ table for uploaded entries and created a page that allows the user to drag and drop XML files into the browser, with files then getting uploaded, processed and added into this new table.  This uses a handy JavaScript library called Dropzone (https://www.dropzonejs.com/) that I previously used for the Scots Syntax Atlas CMS.  The initial version of the upload is working well, but I needed to know exactly how uploaded files should be fully processed before I could proceed further, which required some lengthy email exchanges with the editors.
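Setting up Dropzone is pleasingly simple – the sketch below is roughly the shape of it, though the element ID, endpoint URL and option values here are invented rather than copied from the DMS:

```javascript
// Basic Dropzone setup for dragging and dropping XML entry files.
const uploader = new Dropzone('#xml-upload', {
  url: '/dms/upload-entries', // server-side script that processes each file into the holding table
  acceptedFiles: '.xml',
  parallelUploads: 5,
  addRemoveLinks: true
});

uploader.on('success', (file, response) => {
  // The response from the server could be used to show each file's status in the browser.
  console.log(file.name, response);
});
```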

The scripts I had written when uploading the new ‘R’ dataset needed to make changes to the data to bring it into line with the data already in the system, as the ‘R’ data didn’t include some attributes that were necessary for the system to work with the XML files, namely:

- In the <main_entry> tag: the ‘lead’ attribute, which is used to display the editor’s initials in the front end (e.g. “gdw”), and the ‘id’ attribute, which, although not used to uniquely identify the entries in my new system, is still used in the XML for things like cross-references and therefore is required and must be unique.
- In the <sense> tag: the ‘n’ attribute, which increments from 1 within each part of speech and is used to identify senses in the front-end.
- In the <senseInfo> tag: the ID attribute, which is used in the citation and translation searches, and the POS attribute, which is used to generate the summary information at the top of each entry page.
- In the <attestation> tag: the ID attribute, which is used in the citation search.

We needed to decide how these will be handled in future – whether they will be manually added to the XML as the editors work on them or whether the upload script needs to add them in at the point of upload.  We also needed to consider updates to existing entries.  If an editor downloads an entry and then works on it (e.g. adding in a new sense or attestation) then the exported file will already include all of the above attributes, except for any new sections that are added.  In such cases should the new sections have the attributes added manually, or do I need to ensure my script checks for the existence of the attributes and only adds the missing ones as required?

We decided that I’d set up the systems to automatically check for the existence of the attributes and add them in if they’re not already present.  It will take more time to develop such a system but it will make it more robust and hopefully will result in fewer errors.  I’ll also add an option to specify the ‘lead’ initials for the batch of files that are being uploaded, but this will not overwrite the ‘lead’ attribute for any XML files in the batch that already have the attribute specified.
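As a rough illustration of the check-and-add approach, here’s a sketch for just one of the attributes – numbering <sense> tags within each part of speech – using the browser’s DOM tools rather than the server-side code the system will actually use, and assuming, purely for the example, that the part of speech is available as an attribute on each sense:

```javascript
// Sketch: add an 'n' attribute to any <sense> element that doesn't already have one,
// incrementing within each part of speech. The 'pos' attribute is assumed for this
// example; the real upload script will determine the part of speech from the XML
// structure and will also need to respect any numbering already present.
function addMissingSenseNumbers(xmlString) {
  const doc = new DOMParser().parseFromString(xmlString, 'application/xml');
  const counters = {};
  doc.querySelectorAll('sense').forEach(sense => {
    const pos = sense.getAttribute('pos') || 'unknown';
    counters[pos] = (counters[pos] || 0) + 1;
    if (!sense.hasAttribute('n')) {
      sense.setAttribute('n', String(counters[pos]));
    }
  });
  return new XMLSerializer().serializeToString(doc);
}
```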

I’ll hopefully get a chance to work on this next week.  Thankfully this is the last week of home-schooling for us so I should have a bit more time from next week onwards.

Week Beginning 22nd February 2021

I had a couple of Zoom meetings this week, the first of which, on Monday, was with the Historical Thesaurus team and members of the Oxford English Dictionary’s team to discuss how our two datasets will be aligned and updated in future.  It was an interesting meeting, but there’s still a lot of uncertainty regarding how the datasets can be tracked and connected as future updates are made, at least some of which will probably only become apparent when we get new data to integrate.

My second Zoom meeting was on Tuesday with the Place-Names of Iona project to discuss how we will be working with the QGIS package that team members will be using to access some of the archaeological data and Lidar maps, and also to discuss the issue of 10 digit grid references and the potential change from the old OSGB-36 means of generating latitude and longitude from grid references to the new WGS84 method.  It was a productive meeting and we decided that we would switch over to WGS84 and I would update the CMS to incorporate the new library for generating latitude and longitude from grid references.

I spent some time later in the week implementing this change, meaning that when a member of the project team adds or edits a place-name and supplies a grid reference the latitude and longitude are now generated using the new system.  As I mentioned a couple of weeks ago, the new library (see http://www.movable-type.co.uk/scripts/latlong-os-gridref.html) allows 6, 8 or 10 digit grid references to be used and is JavaScript based, meaning as soon as the user enters the grid reference the latitude and longitude are generated.  I updated my scripts so that these values immediately appear in the relevant boxes in the form, and also integrated the Google Maps service that generates altitude data from the latitude and longitude, populating the altitude box in the form and displaying a Google Map showing the exact location that the entered grid reference has produced, in case further tweaks are required.  I’m pretty happy with how the new system is working out.
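The altitude lookup itself uses the Google Maps Elevation service once a latitude and longitude are available – something along these lines, although the form field ID here is invented and this is a simplified sketch rather than the CMS code:

```javascript
// Look up the altitude for a generated latitude and longitude and populate the form.
// Assumes the Google Maps JavaScript API has been loaded on the page.
function populateAltitude(lat, lng) {
  const elevator = new google.maps.ElevationService();
  elevator.getElevationForLocations(
    { locations: [{ lat: lat, lng: lng }] },
    (results, status) => {
      if (status === 'OK' && results[0]) {
        document.getElementById('altitude').value = results[0].elevation.toFixed(1);
      }
    }
  );
}
```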

Also this week I continued to work on the Books and Borrowing project, generating image tilesets for the scans of several volumes of ledgers from Edinburgh University Library and writing scripts to generate pages in the Content Management System, creating ‘next’ and ‘previous’ links as required and associating the relevant images.  I also had an email correspondence about some of the querying methods we will develop for the data, such as collocation information.

I also gave some feedback on a data management plan for a project I’m involved with, had a chat with Wendy Anderson about a possible future project she’s trying to set up and spent some time making updates to the underlying data of the Interactive Map of Burns Suppers that launched last month.  I didn’t have the time to do a huge amount of work on the Anglo-Norman Dictionary this week, but I still managed to migrate some of the project’s old blog posts to our new site over the course of the week.

Finally, I made some updates to the bibliography system for the Dictionary of the Scots Language, updating the new system so it works in a similar manner to the live site.  I added ‘Author’ and ‘Title’ to the drop-down items when searching for both, to help differentiate them, and a search where the user ignores the drop-down options and manually submits the search now works as it does in the live site.  I also fixed the issue with selecting ‘Montgomerie, Norah & William’ resulting in a 404 error.  This was caused by the ampersand.  There were some issues with other non-alphanumeric characters that I’ve fixed too, including slashes and apostrophes.

Week Beginning 15th February 2021

I spent quite a bit of time this week continuing to work on the Anglo-Norman Dictionary, creating a new ‘bibliography’ page that will replace the existing ‘source texts’ page and uses the new source text management scripts that I added to the new content management system recently.  This required rather a lot of updates as I needed to update the API to use the new source texts table and also to incorporate source text items as required, which took some time.  I then created the new ‘bibliography’ page which uses the new source text data.  There is new introductory text and each item features the new fields as requested by the editors.  ‘Dean’ references always appear, the title and author are in bold and ‘details’ and ‘notes’ appear when present.  If a source text has one or more items these are listed in numeric order, in a slightly smaller font and indented.  Brackets for page numbers are added in.  I also had to change the way the source texts were ordered as previously the list was ordered by the ‘slug’ but with the updates to the data it sometimes happens that the ‘slug’ doesn’t begin with the same letter as the siglum text and this was messing up the order and the alphabetical buttons.  Now the list is ordered by the siglum text stripped of any tags and all seems to be working fine.  I will still need to update the links from dictionary items to the bibliography when the new page goes live, and update the search facilities too, but I’ll leave this until we’re ready to launch the new page.

During the week I made a number of further tweaks to the new bibliography page based on feedback from the editors.  One big thing was to change the way the page was split up in order to allow links to specific bibliographical items to be added.  Previously the selection of a section of the bibliography based on the initial letter was handled in the browser via JavaScript.  This made it fast to switch between letters, but meant that it was not possible to easily link to a specific section of the bibliography.  I changed this so that the selection was handled on the server side.  This does mean that each time a letter is pressed the whole page needs to reload, which is a bit slower, but it also means you can bookmark a specific letter, e.g. bibliographies beginning with ‘T’.  It also means it’s possible to link to a specific item within a page.  Each item in the page has an ID in the HTML consisting of ‘bib-’ plus the item’s slug.  To link to this section of the page you can add a link consisting of the page URL, a hash, and then this ID.  Then when the page loads it will jump down to the relevant section.

I also had to change the way items within bibliographical entries were ordered.  These were previously ordered on the ‘numeral’ field, which contained a Roman numeral.  I’d written a bit of a hack to ensure that these were ordered correctly up to 20, but it turns out that there are some entries with more than 60 items, and some of them have non-standard numerals, such as ‘IXa’.  I decided that it would be too complicated to use the ‘numeral’ field for ordering as the contents are likely to be too inconsistent for a computer to automatically order successfully.  I therefore created a new ‘itemorder’ column in the database that holds a numerical value that decides the order of the items.  I wrote a little script that populates this field for the items already in the system and for any bibliographical entry with 20 or fewer items the order should be correct without manual intervention.  For the handful of entries with more than 20 items the editors will have to manually update the order.  I updated the DMS so that the new ‘item order’ field appears when you add or edit items, and this will need to be used for each item to rearrange the items into the order they should be in.  The new bibliography page uses the new itemorder field so updates are reflected on this page.

I also needed to update the system to correctly process multiple DEAF links, which I’d forgotten to do previously, made some changes to the ordering of items (e.g. so that entries with a number appear before entries with the same text but without a number) and added in an option to hide certain fields by adding a special character into the field.  Also for the AND I updated the XML of an entry and continued to migrate blog posts from the old blog to our new system.

I then began work on the pages of the CMS that will be used for uploading, viewing and downloading entries.  I added an option to the CMS that allows the editors to choose an entry to view all of the data stored about it and to download its XML for editing.  This consists of a textbox into which an entry’s slug can be entered.  After entering the slug and pressing ‘Go’ a page loads that lists all of the data stored about the entry in the system, such as its ID, the ID from the old system, last editor and date of last edit.  You can also access the XML of the entry if you scroll down to the ‘XML’ section of the page.  The XML is hidden in a collapsed section of the page and if you click on the header it expands.  I’ve added in styles to make it easier to read, using a very nice JavaScript library called prism.js (https://prismjs.com/).  There is also a button to download the XML.  Pressing on this prompts you to save the file, and the filename consists of the entryorder plus the entry ID.  This section of the page will also keep a record of all previous versions of the XML when a new version is uploaded into the system (once I develop the upload feature).  This will allow you to access, check and download older versions of the XML, if some mistake has been made when uploading a new version.
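Displaying the XML with Prism is just a matter of putting the escaped XML inside a code element with the appropriate class, roughly as follows (the container ID is invented for this sketch):

```javascript
// General pattern for displaying an entry's XML with Prism highlighting
// (prism.js and its stylesheet are assumed to be included on the page).
function showEntryXml(xmlString) {
  const pre = document.createElement('pre');
  const code = document.createElement('code');
  code.className = 'language-markup'; // Prism's class for HTML/XML
  code.textContent = xmlString;       // textContent so the XML is escaped rather than parsed
  pre.appendChild(code);
  document.getElementById('xml-section').appendChild(pre);
  Prism.highlightElement(code);
}
```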

Beneath the XML section you can view all of the information that is extracted from the XML and used in the system for search and display purposes: forms, parts of speech, cross references, labels, citations and translations.  This is to enable the editors to check that the data extracted and used by the system is correct.  I could possibly add in options for you to edit this data, but any edits made would then be overwritten the next time an XML file is uploaded for the entry, so I’m not sure how useful this would be.  I think it would be better to limit the editing of this information to via a new XML file upload only.

However, we may want to make some of the information in this page directly editable, specifically some of the fields in the first table on the page.  The editors may want to change the lemma or homonym number, or the slug or entry order.  Similarly the editors may want to manually override the earliest date for the entry (although this would then be overwritten when a new XML version is uploaded) or change the ‘phase’ information.

The scripts to upload a new XML entry are going to take some time to get working, but at least for now you can view and download entries as required. Here’s a screenshot of how the facility works:

Also this week I dealt with a few queries about the Symposium for Seventeenth-Century Scottish Literature, which was taking place online this week and for which I had set up the website.  I also spoke to Arts IT Support about getting a test server set up for the Historical Thesaurus.  I spent a bit of time working for the Books and Borrowing project, processing images for a ledger from Edinburgh University Library, uploading these to the server and generating page records and links between pages for the ledger.  I also gave some advice to the Scots Language Policy RA about how to use the University’s VPN, spoke to Jennifer Smith about her SCOSYA follow-on funding proposal and had a chat with Thomas Clancy about how we will use GIS systems in the Iona project.

Week Beginning 8th February 2021

I was on holiday from Monday to Wednesday this week to cover the school half-term, so only worked on Thursday and Friday.  On Thursday I had a Zoom call with the Historical Thesaurus team to discuss further imports of new data from the OED and how to export our data (such as the revised category hierarchy) in a format that the OED team would be able to use.  We have a meeting with the OED the week after next so it was good to go over some of the issues and refresh my memory about where things were left off as it’s been several months since I last did any major work on the HT.  As a result of the meeting I also did some further work, namely exporting the current version of the online database and making it available for Fraser to download and access on his own PC, and updating some of the earlier scripts I’d created to generate statistics about the unmatched categories and words so that they used the most recent versions of the database.

Also this week I made some further tweaks to the SCOSYA website and created a user account for a researcher who is going to work with some of the data that is only available in the project’s CMS rather than the public website.  I also read through a new funding proposal that Wendy Anderson is involved with and gave her some feedback on that, and I reported a couple of issues with expired SSL certificates that were affecting some websites.

I spent some time on the Books and Borrowing project on two data-related tasks.  First was to look through the new set of digitised images from Edinburgh University Library and decide what we should do with them.  Each image is of an open book, featuring both recto and verso pages in one image.  We may need to split these up into individual images, or we may just create page records that cover both pages.  I alerted the project PI Katie Halsey to the issue and the team will make a decision about which approach to take next week.  The second task was to look through the data from Selkirk library that another project had generated.  We had previously imported data for Selkirk that another researcher had compiled a few years before our project began, but recently discovered that this data did not include several thousand borrowing records of French prisoners of war, as the focus of the researcher was on Scottish borrowers.  We need these missing records and another project has agreed to let us use their data.  I had intended to completely replace the database I’d previously ingested with this new data, but on closer inspection of the new data I have a number of reservations about doing so.

The data from the other project has been compiled in an Excel spreadsheet and as far as I can tell there is no record of the ledger volume or page that each borrowing record was originally located on.  In the data we already have there is a column for ‘source ref’, containing the ledger volume (e.g. ‘volume 1’) and a column for ‘page number’, containing a unique ID for each page in the spreadsheet (e.g. ‘1010159r’).  Looking through the various sheets in the new spreadsheet there is nothing comparable to this, which is vital for our project, as borrowing records must be associated with page records, which in turn must be associated with a ledger.  It also would make it extremely difficult to trace a record back to the original physical record.

Another issue is that in our existing data the researcher has very handily used unique identifiers for readers (e.g. ‘brodie_james’), borrowing records (e.g. ‘1’) and books (e.g. ‘adam_view_religion’) that tie the various records together very nicely.  The new project’s data does not appear to use any unique identifiers to connect bits of data together.  For example, there are three ‘John Anderson’ borrowers and in the data we’re currently using these are differentiated by their IDs as ‘anderson_john’, ‘anderson_john2’ and ‘anderson_john3’.  This means it’s easy to tell which borrower appears in the borrowing records.  In the new project’s data three different fields are required to identify the borrower:  surname, forename and residence.  This data is stored in separate columns in the ‘All loans’ sheet (e.g. ‘Anderson’, ‘John’, ‘Cramalt’), but in the ‘Members’ sheet everything is joined together in one ‘Name’ field, e.g. ‘Anderson, John (Cramalt)’.  This lack of unique identifiers combined with the inconsistent manner of recording name and place will make it very difficult to automatically join up records and I’ve flagged this up with Katie for further discussion with the team.  It’s looking like we may want to try and identify the POW records from the new project’s data and amalgamate these with the data we already have, rather than replacing everything.

I also spent a bit of time on the Anglo-Norman Dictionary this week, making some changes to homonym numbers for a few entries and manually updating a couple of commentaries.  I also worked for the Dictionary of the Scots Language, preparing the SND and DOST datasets for import into the new editing system that the project is now going to use.  This was a little trickier than anticipated as initially I zipped up the data that I’d exported from the old editing system in November when I worked on the new ‘V4’ version of the online API, but we realised that this still contained duplicates that I’d stripped out when uploading the data into the new online database.  So instead I exported the XML from the online database, but it turned out that during the upload process a section of the entry XML was being removed.  This section (<meta>) contained all of the forms and URLs and my upload process exported these to a separate table and reformatted the XML so that it matched the structure that was defined during the creation of the first version of the API.  However, the new editing system requires this <meta> section so that data I’d prepared was not usable.  Instead I took the XML exported from the old editing system back in November and ran it through the script I’d written to strip out duplicates, then prepared the resulting XML dataset for transfer.  It looks like this approach has worked, but I’ll find out more next week.

Week Beginning 1st February 2021

I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.

The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects.  For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet).  I also added commentaries to more than 260 entries, which took some time to test.  I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML.  Without doing this it was possible to add the tags in, but this would give errors in the editing software.  I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required.  I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system.  It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.

For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data.  These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed.  I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.

For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS.  The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR.  The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.

As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2,4,6,8 or 10 digits.  Information about the library, including text boxes where you can enter an NGR and see the results can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary looking formulae.

The library is written in JavaScript, which runs in the client’s browser, whereas the previous library was written in PHP, which runs on the server.  This means I needed to change the way the CMS works.  Previously you’d enter an NGR and then, when the form was submitted to the server, the PHP library would generate the latitude and longitude.  Now the latitude and longitude need to be generated in the browser as soon as the NGR is entered into the textbox, and two further textboxes for latitude and longitude appear in the form and are then automatically populated with the results.


This does mean the person filling out the form can see the generated latitude and longitude and also tweak it if required before submitting the form, which is a potentially useful thing.  I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further.  I also still need to work on the format of the latitude and longitude as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848) with ‘W’ and ‘S’ figures being negatives.
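The conversion itself should be straightforward – something like this sketch, which turns the library’s compass-point output into a signed decimal:

```javascript
// Convert a value like "6.420848° W" into a signed decimal (-6.420848),
// with west and south treated as negative.
function toSignedDecimal(value) {
  const match = value.match(/^([\d.]+)°?\s*([NSEW])$/i);
  if (!match) return null;
  const decimal = parseFloat(match[1]);
  return /[WS]/i.test(match[2]) ? -decimal : decimal;
}

console.log(toSignedDecimal('6.420848° W'));  // -6.420848
console.log(toSignedDecimal('56.325744° N')); // 56.325744
```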

However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team.  The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014.  Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section.  Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems.  The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.

The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system.  I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system.  This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22W/@56.3258889,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22W/@56.325744,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848

So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’.  As I say, this will need further discussion before I implement any changes.

Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.

Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.


Week Beginning 26th October 2020

This was something of an odd week as I tested positive for Covid.  I’m not entirely sure how I managed to get it, but I’d noticed on Friday last week that I’d lost my sense of taste and thought it would be sensible to get tested, and the result came back positive.  I’d been feeling a bit under the weather last week and this continued throughout this week too, but thankfully the virus never affected my chest or throat and I managed to more or less work all week.  However, with our household in full-on isolation our son was off school all week, and will be all next week, which did impact on the work I could do.

My biggest task of the week was to complete the work in preparation for the launch of the second edition of the Historical Thesaurus.  This included fixing the full-size timelines to ensure that words that have been updated to have post-1945 end dates display properly.  As we had changed the way these were stored to record the actual end date rather than ‘9999’, the end points of the dates on the timeline were stopping short and not having a pointy end to signify ‘current’.  New words that only had post-1999 dates were also not displaying properly.  Thankfully I managed to get these issues sorted.  I also updated the search terms to fix some of the unusual characters that had not migrated over properly but had been replaced by question marks.  I then updated the advanced search options to provide two checkboxes that allow a user to limit their search to new words or updated words (or both), which is quite handy, as it means you can find out all of the new words in a particular decade, for example all of the new words that have a first date some time in the 1980s:


https://ht.ac.uk/category-selection/?word=&label=&category=&year=&startf=1980&endf=1989&startl=&endl=&twoEdNew=Y


I also tweaked the text that appears beside the links to the OED and added the Thematic Heading codes to the drop-down section of the main category.  We also had to do some last-minute renumbering of categories, which affected several hundred categories and subcategories in ‘01.02’, and we manually moved a couple of other categories to new locations; after that we were all set for the launch.  The new second edition is now fully available, as you can see from the above link.

Other than that I worked on a few other projects this week.  I helped to migrate a WordPress site for Bryony Randall’s Imprints of New Modernist Editing project, which is now available here: https://imprintsarteditingmodernism.glasgow.ac.uk/ and responded to a query from Lisa Kelly in TFTS about software purchases.

I spent the rest of the week continuing with the redevelopment of the Anglo-Norman Dictionary website.  I updated my script that extracts citations and their dates, which I’d started to work on last week.  I figured out why my script was not extracting all citations (it was only picking out the citations from the first sense and subsense in each entry rather than all senses) and managed to get all citations out.  With dates extracted for each entry I was then able to store the earliest date for each entry and update the ‘browse’ facility to display this date alongside the headword.

With this in place I moved on to looking at the advanced search options. I created the tab-based interface for the various advanced search options and implemented searches for headwords and citations.  The headword search works in a very similar way to the quick search – you can enter a term and use wildcards or double quotes for an exact search.  You can also combine this with a date search.  This allows you to limit your results to only those entries that have a citation in the year or range of years you specify.  I would imagine entering a range of years would be more useful than a single year.  You can also omit the headword and just specify a citation year to find all entries with a citation in the year or range, e.g. all entries with a citation in 1210.

The citation search is also in place and this works rather differently.  As mentioned in the overview document, this works in a similar (but not identical) way to the old ‘concordance search of citations’.  You can search for a word or a part of a word using the same wildcards as for the headword and limiting your search to particular citation dates.  When you submit the search this then loads an intermediary page that lists all of the word forms in citations that your search matches, plus a count of the number of citations each form is in.  From this page you can then select a specific form and view the results.  So, for example, a search for words beginning with ‘tre’ with a citation date between 1200 and 1250 lists 331 forms found in citations, and you can then choose a specific form, e.g. ‘tref’, to see the results.  The citation results include all of the citations for an entry that include the word, with the word highlighted in yellow.  I still need to think about how this might work better, as currently there is no quick way to get back to the intermediary list of forms.  But progress is being made.

Week Beginning 19th October 2020

I was back at work this week after having a lovely holiday the previous week.  It was a pretty busy week, mostly spent continuing to work on the preparations for the second edition of the Historical Thesaurus, which needs to be launched before the end of the month.  I updated the OED date extraction script that formats all of the OED dates as we need them in the HT, including making full entries in the HT dates table, generating the ‘full date’ text string that gets displayed on the website and generating cached first and last dates that are used for searching.  I’d somehow managed to get the plus and dash connectors the wrong way round in my previous version of the script (a plus should be used where there is a gap of more than 150 years, otherwise it’s a dash) so I fixed this.  I also stripped out dates that were within a 150 year time span, which really helped to make the full date text more readable.  I also updated the category browser so that the category’s thematic heading is displayed in the drop-down section.

Fraser had made some suggested changes to the script I’d written to figure out whether an OED lexeme was new or already in the system so I made some changes to this and regenerated the output.  I also made further tweaks to the date extraction script so that we record the actual final date in the system rather than converting it to ‘9999’ and losing this information that will no doubt be useful in future.  I then worked on the post-1999 lexemes, which followed a similar set of processes.

With this all in place I could then run a script that would actually import the new lexemes and their associated dates into the HT database.  This included changelog codes, new search terms and new dates (cached firstdate and lastdate, fulldate and individual entries in the dates table).  A total of 11116 new words were added, although I subsequently noticed there were a few duplicates that had slipped through the net.  With these stripped out we had a total of 804,830 lexemes in the HT, and it’s great to have broken through the 800,000 mark.  Next week I need to fix a few things (e.g. the fullsize timelines aren’t set up to cope with post-1945 dates that don’t end in ‘9999’ if they’re current) but we’re mostly now good to launch the second edition.

Also this week I worked on setting up a website for the ‘Symposium for Seventeenth-Century Scottish Literature’ for Roslyn Potter in Scottish Literature and set up a subdomain for an art catalogue website for Bryony Randall’s ‘Imprints of the New Modernist Editing’ project.  I also helped Megan Coyer out with an issue she was having in transcribing multi-line brackets in Word and travelled to the University to collect a new, higher-resolution monitor and some other peripherals to make working from home more pleasant.  I also fixed a couple of bugs in the Books and Borrowing CMS, including one that was resulting in BC dates of birth and death for authors being lost when data was edited.  I also spent some time thinking about the structure for the Burns Correspondence system for Pauline Mackay, resulting in a long email with a proposed database structure.  I met with Thomas Clancy and Alasdair Whyte to discuss the CMS for the Iona place-names project (it now looks like this is going to have to be a completely separate system from Alasdair’s existing Mull / Ulva system) and replied to Simon Taylor about a query he had regarding the Place-names of Fife data.

I also found some time to continue with the redevelopment of the Anglo-Norman Dictionary website.  I updated the way cognate references were processed to enable multiple links to be displayed for each dictionary.  I also added in a ‘Cite this entry’ button, which now appears in the top right of the entry and, when clicked on, opens a pop-up where citation styles will appear (they’re not there yet).  I updated the left-hand panel to make it ‘sticky’: if you scroll down a long entry the panel stays visible on screen (unless you’re viewing on a narrow screen like a mobile phone, in which case the left-hand panel appears full-width before the entry).  I also added in a top bar that appears when you scroll down the screen that contains the site title, the entry headword and the ‘cite’ button.  I then began working on extracting the citations, including their dates and text, which will be used for search purposes.  I ran an extraction script that extracted about 60,000 citations, but I realised that this was not extracting all of the citations and further work will be required to get this right next week.