Week Beginning 27th September 2021

I had two Zoom calls on Monday this week.  The first was with the Burns people to discuss the launch of the website for the ‘letters and poems’ part of ‘Editing Burns’, to complement the existing ‘Prose and song’ website (https://burnsc21.glasgow.ac.uk/).  The new website will launch in January with some video content and blogs.  I will also be working on a content management system for managing the network of Burns’ letter correspondents, which I will put together some time in November, assuming the team can send me some sample data by then.  This system will eventually power the ‘Burns letter writing trail’ interactive maps that I’ll create for the new site sometime next year.

My second Zoom call was for the Books and Borrowing project to discuss adding data from a new source to the database.  The call gave us an opportunity to discuss the issues with the data that I’d highlighted last week.  It was good to catch up with the team again and to discuss the issues with the researcher who had originally prepared the spreadsheet containing the data.  We managed to address all of the issues and the researcher is going to spend a bit of time adapting the spreadsheet before sending it to me to be batch uploaded into our system.

I spent some further time this week investigating the issue of some of the citation dates in the Anglo-Norman Dictionary being wrong, as discussed last week.  The issue affects some 4309 entries where at least one citation features the form only in a variant text.  In these cases the citation date should not be the date of the manuscript in the citation, but the date of the variant text in which the form appears.  Unfortunately this situation was never flagged in the XML, as there was never any means of flagging it.  The variant date should only ever be used when the form of the word in the main manuscript is not directly related to the entry in question but the form in the variant text is.  The problem is that it cannot be automatically ascertained whether it is the form in the main manuscript or the form in the variant text that is the relevant one, as there is so much variation in forms.

For example, in the entry https://anglo-norman.net/entry/bochet_1 there is a form ‘buchez’ in a citation, and then two variant texts for this where the form is ‘huchez’ and ‘buistez’.  None of these forms are listed in the entry’s XML as variants, so it’s not possible for a script to automatically deduce which is the correct date to use (the closest listed form is ‘buchet’).  In this case the main citation form and its corresponding date should be used.  Whereas in the entry https://anglo-norman.net/entry/babeder the main citation form is ‘gabez’ while the variant text has ‘babedez’, and it is this form and its corresponding date that need to be used.  It would be difficult for a script to automatically deduce this.  A Levenshtein test (which measures how many letters need to be changed to turn one string into another) could work here, but the results would still need to be manually checked.
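
To illustrate the sort of check I have in mind, here’s a minimal sketch using PHP’s built-in levenshtein() function.  The forms are taken from the ‘babeder’ example above, but the logic and variable names are purely illustrative and any output would still need manual review:

// Sketch: compare the main citation form and each variant form against the
// headword, and flag cases where a variant looks closer to the headword.
$headword = 'babeder';
$mainForm = 'gabez';
$variantForms = array('babedez');

foreach ($variantForms as $variant) {
    // levenshtein() counts the single-character edits needed to turn one string into the other
    if (levenshtein($headword, $variant) < levenshtein($headword, $mainForm)) {
        echo "$variant looks closer to $headword than $mainForm does - variant date may apply\n";
    }
}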

The editor wanted me to focus on those entries where the date issue affects the earliest date for an entry, as these are the most important: the issue results in an incorrect date being displayed for the entry in the header and in the browse feature.  I wrote a script that finds all entries that feature ‘<varlist’ somewhere in the XML (the previously exported 4309 entries).  It then goes through all attestations (in all sense, subsense and locution sense and subsense sections) to pick out the one with the earliest date, exactly as the code for publishing an entry does.  It then checks the quotation XML of the attestation with the earliest date for the presence of ‘<varlist’, and if it finds this it outputs information for the entry, consisting of the slug, the earliest date as recorded in the database, the earliest date of the attestation as found by the script, the ID of the attestation and the XML of the quotation.  The script has identified 1549 entries that have a varlist in the earliest citation, all of which will need to be edited.
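
In outline, the check for each entry works roughly like this.  The element and variable names below are simplified assumptions rather than the actual AND schema:

// Simplified sketch: find the attestation with the earliest date in an entry
// and report the entry if that attestation's quotation contains a <varlist>.
$xml = simplexml_load_string($entryXml);
$earliestDate = null;
$earliestAtt = null;

// In the real script senses, subsenses and locution (sub)senses are all traversed.
foreach ($xml->xpath('//attestation') as $att) {
    $date = (int)$att->date; // assumed element name for the citation date
    if ($earliestDate === null || $date < $earliestDate) {
        $earliestDate = $date;
        $earliestAtt = $att;
    }
}

if ($earliestAtt !== null && strpos($earliestAtt->quotation->asXML(), '<varlist') !== false) {
    // $slug and $dbEarliestDate come from the entry's database record.
    // Output: slug, earliest date in the database, earliest date found here,
    // attestation ID and the quotation XML.
    echo implode("\t", array($slug, $dbEarliestDate, $earliestDate,
        (string)$earliestAtt['id'], $earliestAtt->quotation->asXML())) . "\n";
}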

However, every citation has a date associated with it and this is used in the advanced search where users have the option to limit their search to years based on the citation date.  Only updating citations that affect the entry’s earliest date won’t fix this, as there will still be many citations with varlists that haven’t been updated and will still therefore use the wrong date in the search.  Plus any future reordering of citations would require all citations with varlists to be updated to get entries in the correct order.  Fixing the earliest citations with varlists in entries based on the output of my script will fix the earliest date as used in the header of the entry and the ‘browse’ feature only, but I guess that’s a start.

Also this week I sorted out some access issues for the RNSN site, submitted the request for a new top-level ‘ac.uk’ domain for the STAR project and spent some time discussing the possibilities for managing access to videos of the conference sessions for the Iona place-names project.  I also updated the page about the Scots Dictionary for Schools app on the DSL website (https://dsl.ac.uk/our-publications/scots-dictionary-for-schools-app/) after it won the award for ‘Scots project of the year’.

I also spent a bit of time this week learning about the statistical package R (https://www.r-project.org/).  I downloaded and installed the package and the R Studio GUI and spent some time going through a number of tutorials and examples in the hope that this might help with the ‘Speak for Yersel’ project.

For a few years now I’ve been meaning to investigate using a spider / radar chart for the Historical Thesaurus, but I never found the time.  I unexpectedly found myself with some free time this week due to ‘Speak for Yersel’ not needing anything from me yet so I thought I’d do some investigation.  I found a nice looking d3.js template for spider / radar charts here: http://bl.ocks.org/nbremer/21746a9668ffdf6d8242  and set about reworking it with some HT data.

My idea was to use the chart to visualise the distribution of words in one or more HT categories across different parts of speech in order to quickly ascertain the relative distribution and frequency of words.  I wanted to get an overall picture of the makeup of the categories initially, but to then break this down into different time periods to understand how categories changed over time.

As an initial test I chose the categories 02.04.13 Love and 02.04.14 Hatred, and in this initial version I looked only at the specific contents of the categories – no subcategories and no child categories.  I manually extracted counts of the words across the various parts of speech and then manually split them up into words that were active in four broad time periods: OE (up to 1149), ME (1150-1449), EModE (1450-1799) and ModE (1800 onwards) and then plotted them on the spider / radar chart, as you can see in this screenshot:

You can quickly move through the different time periods plus the overall picture using the buttons above the visualisation, and I think the visualisation does a pretty good job of giving you a quick and easy-to-understand impression of how the two categories compare and evolve over time, allowing you to see, for example, how the numbers of nouns and adverbs for love and hate are pretty similar in OE:

but by ModE the number of nouns for Love has dropped dramatically, as has the number of adverbs for Hate:

We are of course dealing with small numbers of words here, but even so it’s much easier to use the visualisation to compare different categories and parts of speech than it is to use the HT’s browse interface.  Plus if such a visualisation was set up to incorporate all words in child categories and / or subcategories it could give a very useful overview of the makeup of different sections of the HT and how they develop over time.

There are some potential pitfalls to this visualisation approach, however.  The scale currently changes based on the largest word count in the chosen period, meaning that unless you’re paying attention you might get the wrong impression of the number of words.  I could fix the scale to the largest count overall, but that would then make it harder to make out details in periods that have far fewer words.  Also, I suspect most categories are going to have many more nouns than other parts of speech, and a large spike of nouns can make it harder to see what’s going on with the other axes.  Another thing to note is that the order of the axes is fairly arbitrary but can have a major impact on how someone may interpret the visualisation.  If you look at the OE chart the ‘Hate’ area looks massive compared to the ‘Love’ area, but this is purely because there is only one ‘Love’ adjective compared to 5 for ‘Hate’.  If the adverb axis had come after the noun one instead, the shapes of ‘Love’ and ‘Hate’ would have been more similar.  You don’t necessarily appreciate at first glance that ‘Love’ and ‘Hate’ have very similar numbers of nouns in OE, which is concerning.  However, I think the visualisations have potential for the HT and I’ve emailed the other HT people to see what they think.


Week Beginning 30th August 2021

This week I completed work on the proximity search of the Anglo-Norman textbase.  Thankfully the performance issues I’d feared might crop up haven’t occurred at all.  The proximity search allows you to search for term 1 up to 10 words to the left or right of term 2 using ‘after’ or ‘before’.  If you select ‘after or before’ then (as you might expect) the search looks 10 words in each direction.  This ties in nicely with the KWIC display, which shows 10 words either side of your term.  As mentioned last week, unless you search for exact terms (surrounded by double quotes) you’ll reach an intermediary page that lists all possible matching forms for terms 1 and 2.  Select one of each and you can press the ‘Continue’ button to perform the actual search.  What this does is find all occurrences of term 2 (term 2 is the fixed anchor point; it’s term 1 that can be variable in position), then for each of these it checks the necessary words before or after (or before and after) the term for the presence of term 1.  When generating the search words I had generated and stored the position at which each word appears on the page, which made it relatively easy to pinpoint nearby words.  What is trickier is dealing with words near the beginning or the end of a page, as in such cases the next or previous page must also be looked at.  I hadn’t previously generated a total count of the number of words on a page, which was needed to ascertain whether a word was close to the end of the page, so I ran a script that generated and stored the word count for each page.  The search seems to be working as it should for words near the beginning and end of a page.
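
Stripping out the page-boundary handling, the core of the check looks something like the following sketch.  The function and variable names are illustrative rather than the production code:

// Sketch: $pageWords maps word position => word for a page, $term2Positions
// holds the stored positions of term 2 on that page, $distance is 1-10 and
// $direction is 'before', 'after' or 'both' (relative to term 2).
function proximityMatches($pageWords, $term2Positions, $term1, $distance, $direction) {
    $matches = array();
    foreach ($term2Positions as $pos) {
        $start = ($direction === 'after') ? $pos + 1 : $pos - $distance;
        $end = ($direction === 'before') ? $pos - 1 : $pos + $distance;
        for ($i = $start; $i <= $end; $i++) {
            if ($i === $pos) {
                continue; // skip term 2 itself
            }
            // Positions that fall off the start or end of the page would need
            // the previous or next page's words checked here instead.
            if (isset($pageWords[$i]) && $pageWords[$i] === $term1) {
                $matches[] = $pos;
                break;
            }
        }
    }
    return $matches;
}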

The results page is displayed in the same way as the regular search, complete with KWIC and sorting options.  Both terms 1 and 2 are bold, and if you sort the results the relevant numbered word left or right of term 2 is highlighted, as with the regular search.  When you click through to the actual text all occurrences of both term 1 and term 2 are highlighted (not just those in close proximity), but the page centres on the part of the text that meets the criteria, so hopefully this isn’t a problem – it is quite useful to see other occurrences of the terms after all.  There are still some tweaks I need to make to the search based on feedback I received during the week, and I’ll look at these next week, but on the whole the search facility (and the textbase facility in general) is just about ready to launch, which is great as it’s the last big publicly facing feature of the AND that I needed to develop.

Also this week I spent some time working on the Books and Borrowing project.  I created a new user account for someone who will be working for the project and I also received the digitised images for another library register, this time from the NLS.  I downloaded these and then uploaded them to the server, associating the images with the page records that were already in the system.  The process was a little more complicated and time consuming than I’d anticipated as the register has several blank pages in it that are not in our records but have been digitised.  Therefore the number of page images didn’t match up with the number of pages, plus page images were getting associated with the wrong page.  I had to manually look through the page images and delete the blanks, but I was still off by one image.  I then had to manually check through the contents of the images to compare them with the transcribed text to see where the missing image should have gone.  Thankfully I managed to track it down and reinstate it (it had one very faint record on it, which I hadn’t noticed when viewing and deleting blank thumbnails).  With that in place all images and page records aligned and I could make the associations in the database.  I also sent Gerry McKeever the zipped up images (several gigabytes) for a couple of the St Andrews registers as he prefers to have the complete set when working on the transcriptions.

I had a meeting with Gerry Carruthers and Pauline McKay this week to discuss further developments of the ‘phase 2’ Burns website, which they are hoping to launch in the new year, and also to discuss the hosting of the Scottish theatre studies journal that Gerry is sorting out.

I spent the rest of the week working on mockups for the two websites for the STAR speech and language therapy project.  Firstly there’s the academic site, which is going to sit alongside Seeing Speech and Dynamic Dialects and as such should have the same interface as these sites.  Therefore I’ve made a site that is pretty much identical in terms of the overall theme.  I added in a new ‘site tab’ for the site that sits at the top of the page and have added in the temporary logo as a site logo and favicon (the latter may need a dark background to make it stand out).  I created menu items for all of the items in Eleanor Lawson’s original mockup image.  These all work, leading to empty pages for now, and I added the star logo to the ‘Star in-clinic’ menu item as in the mockup too.  In the footer I made a couple of tweaks to the layout – the logos are all centre aligned and have a white border.  I added in the logo for Strathclyde and have only included the ESRC logo, but can add others in if required.  The actual content of the homepage is identical to Seeing Speech for now – I haven’t changed any images or text.

For the clinic website I’ve taken Eleanor’s mockup as a starting point again and have so far made two variations.  I will probably work on at least one more version (with multiple variations) next week.  I haven’t added in the ‘site tabs’ to either version as I didn’t want to clutter things up, and I’m imagining that there will be a link somewhere to the STAR academic site for those that want it, and from there people would be able to find Seeing Speech and Dynamic Dialects.  The first version of the mockup has a top-level menu bar (we will need such a menu listing the pages the site features, otherwise people may get confused), then the main body of the page is blue, as in the mockup.  I used the same logo, and the font for the header is this Google font: https://fonts.google.com/?query=rampart+one&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  Other headers on the page use this font: https://fonts.google.com/specimen/Annie+Use+Your+Telescope?query=annie&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  I added in a thick dashed border under the header.  The intro text is just some text I’ve taken from one of the Seeing Speech pages, and the images are still currently just the ones in the mockup.  Hovering over an image causes the same dashed border to appear.  The footer is a kind of pink colour, which is supposed to suggest those blue and pink rubbers you used to get in schools.

The second version uses the ‘rampart one’ font just for ‘STAR’ in the header, with the other font used for the rest of the text.  The menu bar is moved to underneath the header and the dashed line is gone.  The main body of the page is white rather than continuing the blue of the header, and ‘rampart one’ is used for the in-page headers.  The images now have rounded edges, as do the text blocks in the images.  Hovering over an image brings up a red border, the same shade as used in the active menu item.  The pink footer has been replaced with the blue from the navbar.  Both versions are ‘responsive’ and work on all screen sizes.

I’ll be continuing to work on the mockups next week.

Week Beginning 22nd February 2021

I had a couple of Zoom meetings this week.  The first, on Monday, was with the Historical Thesaurus team and members of the Oxford English Dictionary’s team to discuss how our two datasets will be aligned and updated in future.  It was an interesting meeting, but there’s still a lot of uncertainty regarding how the datasets can be tracked and connected as future updates are made, at least some of which will probably only become apparent when we get new data to integrate.

My second Zoom meeting was on Tuesday with the Place-Names of Iona project to discuss how we will be working with the QGIS package that team members will be using to access some of the archaeological data and Lidar maps, and also to discuss the issue of 10 digit grid references and the potential change from the old OSGB-36 means of generating latitude and longitude from grid references to the new WGS84 method.  It was a productive meeting and we decided that we would switch over to WGS84 and I would update the CMS to incorporate the new library for generating latitude and longitude from grid references.

I spent some time later in the week implementing this change, meaning that when a member of the project team adds or edits a place-name and supplies a grid reference the latitude and longitude generated use the new system.  As I mentioned a couple of weeks ago, the new library (see http://www.movable-type.co.uk/scripts/latlong-os-gridref.html) allows 6, 8 or 10 digit grid references to be used and is JavaScript-based, meaning as soon as the user enters the grid reference the latitude and longitude are generated.  I updated my scripts so that these values immediately appear in the relevant boxes in the form, and also integrated the Google Maps service that generates altitude data from the latitude and longitude, populating the altitude box in the form and also displaying a Google Map showing the exact location that the entered grid reference has produced, so that further tweaks can be made if required.  I’m pretty happy with how the new system is working out.

Also this week I continued to work on the Books and Borrowing project, generating image tilesets for the scans of several volumes of ledgers from Edinburgh University Library and writing scripts to generate pages in the Content Management System, creating ‘next’ and ‘previous’ links as required and associating the relevant images.  I also had an email correspondence about some of the querying methods we will develop for the data, such as collocation information.

I also gave some feedback on a data management plan for a project I’m involved with, had a chat with Wendy Anderson about a possible future project she’s trying to set up and spent some time making updates to the underlying data of the Interactive Map of Burns Suppers that launched last month.  I didn’t have the time to do a huge amount of work on the Anglo-Norman Dictionary this week, but I still managed to migrate some of the project’s old blog posts to our new site over the course of the week.

Finally, I made some updates to the bibliography system for the Dictionary of the Scots Language, updating the new system so it works in a similar manner to the live site.  I added ‘Author’ and ‘Title’ to the drop-down items when searching for both to help differentiate them and a search for an item when the user ignores the drop-down options and manually submits the search now works as it does in the live site.  I also fixed the issue with selecting ‘Montgomerie, Norah & William’ resulting in a 404 error.  This was caused by the ampersand.  There were some issues with other non-alphanumeric characters that I’ve fixed too, including slashes and apostrophes.

Week Beginning 1st February 2021

I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.

The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects.  For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet).  I also added commentaries to more than 260 entries, which took some time to test.  I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML.  Without doing this it was possible to add the tags in, but this would give errors in the editing software.  I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required.  I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system.  It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.

For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data.  These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed.  I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.

For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS.  The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR.  The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.

As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2, 4, 6, 8 or 10 digits.  Information about the library, including text boxes where you can enter an NGR and see the results, can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary-looking formulae.

The library is written in JavaScript, which runs in the client’s browser, whereas the previous library was written in PHP, which runs on the server.  This means I needed to change the way the CMS works.  Previously you’d enter an NGR and the PHP library would generate the latitude and longitude when the form was submitted to the server; now the latitude and longitude need to be generated in the browser as soon as the NGR is entered into the textbox, and two further textboxes for latitude and longitude appear in the form and are automatically populated with the results.


This does mean the person filling out the form can see the generated latitude and longitude and also tweak it if required before submitting the form, which is a potentially useful thing.  I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further.  I also still need to work on the format of the latitude and longitude as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848) with ‘W’ and ‘S’ figures being negatives.
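
Whether this conversion ends up happening in the browser or on the server, the logic is simple enough.  Here’s a quick PHP sketch (the function name and examples are just illustrative):

// Sketch: turn a value like '6.420848° W' into a signed decimal (-6.420848),
// with 'W' and 'S' values becoming negative.
function compassToDecimal($value) {
    $decimal = floatval($value); // reads the leading numeric portion
    if (preg_match('/[WS]\s*$/i', trim($value))) {
        $decimal = -$decimal;
    }
    return $decimal;
}

echo compassToDecimal('6.420848° W');  // -6.420848
echo compassToDecimal('56.325744° N'); // 56.325744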

However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team.  The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014.  Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section.  Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems.  The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.

The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system.  I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system.  This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22W/@56.3258889,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22W/@56.325744,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848

So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’.  As I say, this will need further discussion before I implement any changes.

Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.

Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.


Week Beginning 18th January 2021

I worked on many different projects this week, with most of my time being split between the Dictionary of the Scots Language, the Anglo-Norman Dictionary, the Books and Borrowing project and the Scots Language Policy project.  For the DSL I began investigating adding the bibliographical data to the new API and developing bibliographical search facilities.  Ann Ferguson had sent me spreadsheets containing the current bibliographical data for DOST and SND and I migrated this data into a database and began to think about how the data needs to be processed in order to be used on the website.  At the moment links to bibliographies from SND entries are not appearing in the new version of the API, while DOST bibliographical links do appear but don’t lead anywhere.  Fixing the latter should be fairly straightforward but the former looks to be a bit trickier.

For SND on the live site, using the original V1 API, it looks like the bibliographical links are stored in a database table and these are then injected into the XML entries whenever an entry is displayed.  A column in the table contains the order in which the citation appears in the entry, and this is how the system knows which bibliographical ID to assign to which link in the entry.  This raises some questions about what happens when an entry is edited.  If the order of the citations in the XML is changed, or a new citation is added, then all of the links to the bibliographies will be out of sync.  Plus, unless the database table is edited, no new bibliographical links will ever display.  It is possible that the data in the bibliographical links table is already out of date, and we are going to need to try and find a way to add these bibliographical links into the actual XML entries rather than retaining the old system of storing them separately and then injecting them each time the entry is requested.  I emailed Ann for further discussion about these points.  Also this week I made a few updates to the live DSL website, changing the logos that are used and making ‘Dictionary’ in the title plural.

For the AND this week I added in the missing academic articles that Geert had managed to track down and then began focusing on updating the source texts and working with the commentaries for the R data.  The commentaries were sent to me in two Word files, and although we had hoped to be able to work out a mechanism for automatically extracting these and adding them to their corresponding entries it looks like this will be very difficult to achieve with any accuracy.  I concluded that I could split the entries up in Geert’s document based on the ‘**’ characters between commentaries and possibly split Heather’s up based on blank lines.  I could possibly retain the formatting (bold, italic, superscript text etc) and convert this to HTML, although even this would be tricky, time consuming and error-prone.  The commentaries include links to other entries in bold, and I would possibly be able to automatically add in links to other entries based on entries appearing in bold in the commentaries, but again this would be highly error-prone as bold text is used for things other than entries, and sometimes the entry number follows a hash while at other times it’s superscript.  It would also be difficult to automatically ascertain which entry a commentary belongs to as there is some inconsistency here too – e.g. the commentary for ‘remuement’ is listed as ‘[remuement]??’ and there are other occasions where the entry doesn’t appear on its own on a line – e.g. ‘Retaillement xref with recelement’ and ‘Reverdure—Geert says to omit’.  Then there are commentaries that are all crossed out, e.g. ‘resteot’.  We decided that attempting to automatically process the commentaries would not be feasible and instead the editors would add them to the entry XML files manually, adding the tags for bold, italic, superscript and other formatting as required.  Geert added commentaries to two entries to see how this would work and it worked very well.

For the source texts, we had originally discussed the editors editing these via a spreadsheet that I’d generated from the online data last year, but I decided it would be better if I just started work on the new online Dictionary Management System (DMS) and created the means of adding, listing and editing the source texts as the first thing that can be managed via the new DMS.  This seemed preferable to establishing a new, temporary workflow that may take some time to set up and may end up not being used for very long.  I therefore created the login and initial pages for the DMS (by repurposing earlier content management systems I’d created).  I then set up database tables for holding the new source text data, which includes multiple potential items for each source and a range of new fields that the original source text data does not contain.  With this in place I created the DMS pages for browsing the source texts and deleting them, and I’m midway through writing the scripts for editing existing and adding new source texts.  I aim to have this finished next week.

For the Books and Borrowing project I continued to make refinements to the CMS: reducing the number of books and borrowers displayed per page from 500 to 200 to speed up page loads; adding in the day of the week that books were borrowed and returned, based on the date information already in the system; removing tab characters from edition titles as these were causing some issues for the system; replacing the editor’s notes rich text box with a plain text area to save space on the edit page; and adding a new field to the borrowing record that allows the editor to note when certain items appear for display only and should otherwise be overlooked, for example when generating stats.  This is to be used for duplicate lines and lines that are crossed out.  I also had a look through the new sample data from Craigston that was sent to us this week.

For the Scots Language Policy project I set up the project’s website, including the user interface, adding in fonts, plugins, initial page structure, site graphics, logos etc.  Also this week I fixed an issue with song downloads on the Burns website (the plugin that controls the song downloads is very old and had broken; I needed to install a newer version and upgrade the song data for the downloads to work again).  I also continued my email conversation with Rachel Fletcher about a project she’s putting together and created a user account to allow Simon Taylor to access the Ayr Placenames CMS.

Week Beginning 11th January 2021

This was my first full week back of the year, although it was also the first week of a return to homeschooling, which made working a little trickier than usual.  I also had a dentist’s appointment on Tuesday and lost some time to that due to my dentist being near the University rather than where I live.  However, despite these challenges I was able to achieve quite a lot this week.  I had two Zoom calls, the first on Monday to discuss a new ESRC grant that Jane Stuart-Smith is putting together with colleagues at Strathclyde while the second on Wednesday was with a partner in Joanna Kopaczyk’s new RSC funded project about Scots Language Policy to discuss the project’s website and the survey they’re going to put out.  I also made a few tweaks to the DSL website, replied to Kirsteen McCue about the AHRC proposal she’s currently putting together, replied to a query regarding the technologies behind the Scots Syntax Atlas, made a few further updates to the Burns Supper map and replied to a query from Rachel Fletcher in English Language about lemmatising Old English.

Other than these various tasks I split my time between the Anglo-Norman Dictionary and the Books and Borrowing projects.  For the former I completed adding explanatory notes to all of the ‘Introducing the AND’ pages.  This was a very time consuming task as there were probably about 150 explanatory notes in total to add in, each appearing in a Bootstrap dialog box, and each requiring me to copy the note from the old website, add in any required HTML formatting, find and check all of the links to AND entries on the old site and add these in as required.  It was pretty tedious to do, but it feels great to get it done, as the notes were previously just giving 404 errors on the new live site, and I don’t like having such things on a site I’m responsible for.  I also migrated the academic articles from the old site to the new one (https://anglo-norman.net/articles/) which also required some manual formatting of the content.  There are five other articles that I haven’t managed to migrate yet as they are full of character encoding errors on the old site.  Geert is looking for copies of these articles that actually work and I’ll add them in once he’s able to get them to me.  I also began migrating the blog posts to the new site.  Currently the blog is hosted on Blogspot and there are 55 entries, but we’d like these to be an internal part of the new site.  Migrating these is going to take some time as it means copying the text (which thankfully retains formatting) and then manually saving and embedding any images in the posts.  I’m just going to do a few of these a week until they’re all done and so far I’ve migrated seven.  I also needed to look into how the blogs page works in the WordPress theme I created for the AND, as to start with the page was just listing the full text of every post rather than giving summaries and links through to the full text of each.  After some investigation I figured out that in my theme there is a script called ‘home.php’ and this is responsible for displaying all of the blog posts on the ‘blog’ page.  It in turn calls another template called ‘content-blog.php’ which was previously set to display the full content of each post.  Instead I set it to display the title as a link through to the full post, the date and then an excerpt from the full post, which can be accessed through a handy WordPress function called ‘the_excerpt()’.
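
For the record, the summary view in ‘content-blog.php’ now boils down to something like the sketch below.  The WordPress template functions are the real ones, but the surrounding markup is simplified and not the theme’s actual HTML:

<?php // content-blog.php (simplified): summary view used on the blog listing page ?>
<article id="post-<?php the_ID(); ?>">
    <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
    <p class="post-date"><?php echo get_the_date(); ?></p>
    <?php the_excerpt(); // prints a short excerpt instead of the full post ?>
</article>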

For the Books and Borrowing project I made some improvements and fixes to the Content Management System.  I’d been meaning to enhance the CMS for some time, but due to other commitments to other projects I didn’t have the time to delve into it.  It felt good to find the time to return to the project this week.

I updated the ‘Books’ and ‘Borrowers’ tabs when viewing a library in the CMS.  I added in pagination to speed up the loading of the pages.  Pages are now split into 500 record blocks and you can navigate between pages using the links above and below the tables.  For some reason the loading of the page is still a bit slow on the Stirling server whereas it was fine on the Glasgow server I was using for test purposes.  I’m not entirely sure why as I’d copied the database over too – presumably the Stirling server is slower.  However, it is still a massive improvement on the speed of the page previously.
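
There’s nothing fancy about the pagination itself; it’s essentially a limit and offset on the query, along the lines of the sketch below.  The table and column names are illustrative rather than the project’s actual schema:

// Sketch: 500 records per block, with the block number passed in the URL.
$perPage = 500;
$page = isset($_GET['page']) ? max(1, intval($_GET['page'])) : 1;
$offset = ($page - 1) * $perPage;

// $perPage and $offset are server-side integers, so they can be dropped
// straight into the query; the library ID is still bound as a parameter.
$sql = 'SELECT * FROM books WHERE library_id = :lib ORDER BY title LIMIT ' . $perPage . ' OFFSET ' . $offset;
$stmt = $pdo->prepare($sql);
$stmt->execute(array(':lib' => $libraryId));
$books = $stmt->fetchAll(PDO::FETCH_ASSOC);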

I also changed the way tables scroll horizontally.  Previously if a table was wider than the page a scrollbar appeared above and below the table, but this was rather awkward to use if you were looking at the middle of the table (you had to scroll up or down to the beginning or end of the table, then use the horizontal scrollbar to move the table along a bit, then navigate back to the section of the page you were interested in).  Now the scrollbar just appears at the bottom of the browser window and can always be accessed no matter where in the table you are.

I also removed the editorial notes from tables by default to reduce clutter, and added in a button for showing / hiding the editors’ notes near the top of each page.  I also added a limit option in the ‘Books’ and ‘Borrowers’ pages within a library to limit the displayed records to only those found in a specific ledger.  I added in a further option to display those records that are not currently associated with any ledgers too.

I then deleted the ‘original borrowed date’ and ‘original returned date’ fields from the St Andrews data as these were no longer required, removing both the additional fields themselves and all of the data they contained.

It had been noted that the book part numbers were not being listed numerically.  As part numbers can contain text as well as numbers (e.g. ‘Vol. II’), this field in the database needed to be set as text rather than an integer.  Unfortunately the database doesn’t order numbers correctly when they are contained in a non-numerical field  – instead all the ones come first (1, 10, 11) then all the twos (2, 20, 22) etc.  However, I managed to find a way to ensure that the numbers are ordered correctly.
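
One way of doing this in MySQL (I won’t claim this is exactly what the CMS does, but it’s the general idea) is to sort on a numeric cast of the column first and then fall back to the text itself:

// Sketch: order on the value cast to an integer first, so '2' comes before '10',
// then on the raw text so non-numeric values such as 'Vol. II' still sort sensibly.
// 'book_items' and 'part_number' are illustrative names.
$sql = 'SELECT * FROM book_items ORDER BY CAST(part_number AS UNSIGNED), part_number';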

I also fixed the ‘Add another Edition/Work to this holding’ button that was not working.  This was caused by the Stirling server running a different version of PHP that doesn’t allow functions to have variable numbers of arguments.  The autocomplete function was also not working at edition level and I investigated this.  The issue was being caused by tab characters appearing in edition titles, and I updated my script to ensure these characters are stripped out before the data is formatted as JSON.
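
The fix for the autocomplete is essentially a one-liner: strip the tabs out of the titles before the response is encoded.  Something like this, with illustrative variable names:

// Replace tab characters in edition titles with spaces before building
// the JSON response for the autocomplete.
$title = trim(str_replace("\t", ' ', $editionTitle));
echo json_encode(array('id' => $editionId, 'label' => $title));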

There may be further tweaks to be made – I’ll need to hear back from the rest of the team before I know more, but for now I’m up to date with the project.  Next week I intend to get back into some of the larger and trickier outstanding AND tasks (of which there are, alas, many) and to begin working towards adding the DSL bibliography data into the new version of the API.

Week Beginning 4th January 2021

This was my first week back after the Christmas holidays, and I only worked the Thursday and the Friday.  We’re back in full lockdown and homeschooling again now, so it’s not the best of starts to the new year.  I spent my two days this week catching up with emails and finishing off some outstanding tasks from last year.  I spoke to Joanna Kopaczyk about her new RSE funded project that I need to set up a website for, and I had a chat with the DSL people about the outstanding tasks that still need to be tackled for the Dictionary of the Scots Language.  I also added a few more Burns Suppers to the Supper Map that I created over the past year for Paul Malgrati in Scottish Literature, which was a little time consuming as the data is contained in a spreadsheet featuring more than 70 columns.

I spent the remainder of the week continuing to work on the new Anglo-Norman Dictionary site, which we launched just before Christmas.  The editors, Geert and Heather, had spotted some issues with the site whilst using it so I had a few more things to add to my ‘to do’ list, some of which I ticked off.  One such thing was that entries with headwords that consisted of multiple words weren’t loading.  This required an update to the way the API handles variables passed in URL strings, and after I implemented that such entries then loaded successfully.

A bigger issue was the fact that some citations were not appearing in the entries.  This took some time to investigate but I eventually tracked down the problem.  I’d needed to write a script that reordered all of the citations in every sense in every entry by date, as previously the citations were not in date order.  However, when looking at the entries that had missing citations it appeared that where a sense has more than one citation in the same year only one of these citations was appearing.  This is because within each sense I was placing the citations in an array with the year as the key, e.g.:

$citation["1134"] = $citation1;
$citation["1362"] = $citation2;
$citation["1247"] = $citation3;

I was then reordering the array based on the key to get things in year order.  But where there were multiple citations in a single year for a sense this approach wasn’t working as the array key needs to be unique.  So if there were two ‘1134’ citations only one was being retained.  To fix this I updated the reordering script to add a further incrementing number to the key, so if there are two ‘1134’ citations the key for the first is ‘1134-1’ and the second is ‘1134-2’.  This ensures all citations for a year are retained and the sorting by key still works.  After implementing the fix and rerunning the citation ordering script I updated the XML in the online database and the missing citations are now thankfully appearing online.
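
In simplified form, the reworked ordering looks something like this (getCitationYear() is an assumed helper standing in for the real date-extraction code):

// Build the array with 'year-counter' keys so that several citations from the
// same year no longer overwrite each other, then sort by key.
$citations = array();
$counters = array();
foreach ($attestations as $attestation) {
    $year = getCitationYear($attestation); // assumed helper
    if (!isset($counters[$year])) {
        $counters[$year] = 0;
    }
    $counters[$year]++;
    $citations[$year . '-' . $counters[$year]] = $attestation;
}
// Natural sorting keeps '1134-2' before '1134-10' and the years in numeric order.
ksort($citations, SORT_NATURAL);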

I ended the week by continuing to work through the ancillary pages of the dictionary, focusing on the ‘Introducing the AND’ pages (https://anglo-norman.net/introducing-the-and/).  I’d managed to get the main content of the pages in place before Christmas, but explanatory notes and links were not working.  There are about 50 explanatory notes in the ‘Magna Carta’ page and I needed to copy all of these from the old site and add them to a Bootstrap dialog pop-up, which was rather time-consuming.  I also had to update the links through to the dictionary entries as although I’d added redirects to ensure the old links worked, some of the links in these pages didn’t feature an entry number where one was required.  For example on the page about food there was a link to ‘pere’ but the dictionary contains three ‘pere’ entries and the correct one is actually the third (the fruit pear).  I still need to fix links and explanatory notes in the two remaining pages of the introduction, which I will try to get sorted next week.

Week Beginning 14th December 2020

This was my last week before the Christmas holidays, and it was a four-day week as I’d taken Friday off to use up some unspent holidays.  Despite only being four days long it was a very hectic week, as I had lots of loose ends to tie up before the launch of the new Anglo-Norman Dictionary website on Wednesday.  This included tweaking the appearance of ‘Edgloss’ tags to ensure they always have brackets (even if they don’t in the XML), updating the forms to add line breaks between parts of speech and updating the source texts pop-ups and source texts page to move the information about the DEAF website.

I also added in a lot of the ancillary page data, including the help text, various essays, the ‘history’ page, copyright and privacy pages, the memorial lectures and the multi-section ‘introduction to the AND’.  I didn’t quite manage to get all of the links working in the latter and I’ll need to return to this next year.  I also overhauled the homepage and footer, adding in the project’s Twitter feed, a new introduction and adding links to Twitter and Facebook to the footer.

I also identified and fixed an error with the label translations, which were sometimes displaying the wrong translation.  My script that extracted the labels was failing to grab the sense ID for subsenses.  This ID is only used to pull out the appropriate translation, but because of the failure the ID of the last main sense was being used instead.  I therefore had to update my script and regenerate the translation data.  I also updated the label search to add in citations as well as translations.  This means the search results page can get very long as both labels and translations are applied at sense level, so we end up with every citation in a matching sense listed, but apparently this is what’s wanted.

I also fixed the display of ‘YBB’ sources, which for some unknown reason are handled differently to all other sources in the system and fixed the issue with deviant forms and their references and parts of speech.

On Wednesday we made the site live, replacing the old site with the new one, which you can now access here:  https://anglo-norman.net/.  It wasn’t entirely straightforward to get the DNS update working, but we got there in the end, and after making some tweaks to paths and adding in Google Analytics the site was ready to use, which is quite a relief.  There is still a lot of work to do on the site, but I’m very happy with the progress I’ve made with the site since I began the redevelopment in October.

Also this week I set up a new website for phase two of the ‘Editing Burns for the 21st Century’ project and upgraded all of the WordPress sites I manage to the most recent version.  I also arranged a meeting with Jane Stuart-Smith to discuss a new project in the New Year, replied to Kirsteen McCue about a proposal she’s finishing off, replied to Simon Taylor about a new place-name project he wants me to be involved with and replied to Carolyn Jess-Cooke about a project of hers that will be starting next year.

That’s all for 2020.  Here’s hoping 2021 is not going to be quite so crazy!

Week Beginning 7th December 2020

I spent most of the week working on the Anglo-Norman Dictionary as we’re planning on launching this next week and there was still much to be done before that.  One of the big outstanding tasks was to reorder all of the citations in all senses within all entries so they are listed by their date.  This was a pretty complex task as each entry may contain any number of senses of up to four different types:  main senses, subsenses and then main senses and subsenses within locutions.  My script needed to be able to extract the dates for each citation within each of these blocks, figure out their date order, rearrange the citations by this order and then overwrite the XML section with the reordered data.  Any loss or mangling of the data would be disastrous, and with almost 60,000 entries being updated it would not be possible to manually check that everything worked in all circumstances.

Updating the XML proved to be a little tricky as I had been manipulating the data with PHP’s simplexml functions and these don’t include a facility to replace a child node.  This meant that I couldn’t tell the script to identify a sense and replace its citations with a new block.  In addition, the XML was not structured to include a ‘citations’ element that contained all of the individual citations for an entry but instead just listed each citation as an ‘attestation’ element within the sense, so it wasn’t straightforwardly possible to replace the block of citations with an updated block.  Instead I needed to reconstruct the sense XML in its entirety, including both the complete set of citations and all other elements and attributes contained within the sense, such as IDs, categories and labels.  With a completely new version of the sense XML stored in memory by the script I then needed to write this back to the XML, and for this I needed to use PHP’s DOM manipulation functions because (as mentioned earlier) simplexml has no means of identifying and replacing a child node.

I managed to get a version of my script working and all seemed to be well with the entries I was using for test purposes so I ran the script on the full dataset and replaced the data on the website (ensuring that I kept a record of the pre-reordered data handy in case of any problems).  When the editors reviewed the data they noticed that while the reordering had worked successfully for some senses, it had not reordered others.  This was a bit strange and I therefore had to return to my script to figure out what had gone wrong.  I noticed that only the citations in the first sense / subsense / locution sense / locution subsense had been reordered, with others being skipped.  But when I commented out the part of the script that updated the XML all senses were successfully being picked out.  This seemed strange to me as I didn’t see why the act of identifying senses should be affected by the writing of data.  After some investigation I discovered that with PHP’s simplexml implementation if you iterate through nodes using a ‘foreach’ and then update the item picked out by the loop (so for example in ‘foreach($sense as $s)’ updating $s) then subsequent iterations fail.  It would appear that updating $s in this example changes the XML string that’s loaded into memory which then means the loop reckons it’s reached the end of the matching elements and stops.  My script had different loops for going through senses / subsenses / locution senses / locution subsenses which is why the first of each type was being updated while others weren’t.  After I figured this out I updated my script to use a ‘for’ loop instead of a ‘foreach’ and stored $s within the scope of the loop only and this worked.  With the change in place I reran the script on the full dataset and uploaded it to the website and all thankfully appears to have worked.
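
To illustrate the difference in simplified form (replaceCitations() stands in for the code that rewrites a sense’s citation block):

// Problem case: updating the node picked out by a foreach alters the XML held
// in memory, and the loop then stops after the first sense.
foreach ($xml->sense as $s) {
    replaceCitations($s);
}

// Working version: a counted for loop, with the sense only fetched (and $s only
// existing) inside the body of the loop.
$senseCount = count($xml->sense);
for ($i = 0; $i < $senseCount; $i++) {
    $s = $xml->sense[$i];
    replaceCitations($s);
}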

For the rest of the week I worked through my ‘to do’ list, ticking items off. I updated the ‘Blog’ menu item to point to the existing blog site (this will eventually be migrated across).  The ‘Textbase’ menu item now loads a page stating that this feature will be added in 2021.  I managed to implement the ‘source texts’ page as it turns out that I’d already developed much of the underpinnings for this page whilst developing other features.  As with citation popups, it links into the advanced search and also to the DEAF website.  I figured out how to ensure that words with accented characters in citation searches now appear separately in the list from their non-accented versions.  E.g. a search for ‘apres*’ now has ‘apres (28)’ separate from ‘après (4)’ and ‘aprés (2229)’.  We may need to think about the ordering, though, as accented characters are currently appearing at the end of the list.  I also made the words lower case here – they were previously being transformed into upper case.  Exact searches (surrounded by quotes) are still accent-sensitive.  This is required so that the link through the list of forms to the search results works (otherwise the results display all accented and non-accented forms).  I also ensured that word highlighting in snippets in results now works as it should with accented characters, and upper case initial letters are now retained too.

I added in an option to return to the list of forms (i.e. the intermediate page) from the search results.  In addition to ‘Refine your search’ there is also a ‘Select another form’ button and I ensured that the search results page still appears when there is only one search result for citation and translation searches now.  I also figured out why multiple words were sometimes being returned in the citation and translation searches.  This was because what looked like spaces between words in the XML were sometimes not regular spaces but non-breaking space characters (\u00a0).  As my script split up citations and translations on spaces these were not being picked up as divisions between words.  I needed to update my script to deal with these characters and then regenerate all of the citation and translation data again in order to fix this.
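
The change itself is small: split on non-breaking spaces as well as ordinary whitespace.  Roughly:

// Split citation / translation text into words on ordinary whitespace and on
// non-breaking spaces (U+00A0), which look identical to spaces in the XML.
$words = preg_split('/[\s\x{00A0}]+/u', $text, -1, PREG_SPLIT_NO_EMPTY);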

I also ensured that when conducting a label search the matching labels in an entry page are now highlighted and the page automatically scrolls down to the first matching label.  I also made several tweaks to the XSLT, ensuring that where there are no dates for citations the text ‘TBD’ appears instead and ensuring a number of tags that were not getting properly transformed were handled.

Also this week I made some final changes to the interactive map of Burns Suppers, including tweaking the site icon so it looks a bit nicer, adding a ‘read more’ button to the intro text, fixing the scrolling issue on small screens and updating the text to show 17 filters.  I fixed the issue with the attendance filter and have also updated the layout of the filters so they look better on both monitors and mobile devices.

My other main task of the week was to restructure the Mapping Metaphor website based on suggestions for REF from Wendy and Carole.  This required a lot of work as the visualisations needed to be moved to different URLs and the Old English map, which was previously a separate site in a subdirectory, needed to be amalgamated with the main site.

I removed the top-level tabs that linked between MM, MMOE and MetaphorIC and also the ‘quick search’ box.  The ‘metaphor of the day’ page now displays both a main and an OE connection and the ‘Metaphor Map of English’ / ‘Metaphor Map of Old English’ text in the header has been removed.  I reworked the navigation bar in order to allow a sub-navigation bar to appear.  It is now positioned within the header and is centre-aligned.  ‘Home’ now features introductory text rather than the visualisation.  ‘About the project’ now has the new secondary menu rather than the old left-panel menu.  This is because the secondary menu on the map pages couldn’t have links in the left-hand panel as it’s already used for something else.  It’s better to have the sub-menu displaying consistently across different sections of the site.  I updated the text within several ‘About’ pages and ‘How to Use’, which also now has the new secondary menu.  The main metaphor map is now in the ‘Metaphor Map of English’ menu item.  This has sub-menu items for ‘search’ and ‘browse’.  The OE metaphor map is now in the ‘Metaphor Map of Old English’ menu item.  It also has sub-menu items for ‘search’ and ‘browse’.  The OE pages retain their purple colour to make a clear distinction between the OE map and the main one.  MetaphorIC retains the top-level navigation bar but now only features one link back to the main MM site.  This is right-aligned to avoid getting in the way of the ‘Home’ icon that appears in the top left of sub-pages.  The new site replaced the old one on Friday and I also ensured that all of the old URLs (e.g. the ‘cite this’ links) continue to work.

Week Beginning 30th November 2020

I took Friday off again this week as I needed to go and collect a new pair of glasses from my opticians in the West End, which is quite a trek from my house. Although I’d taken the day off I ended up working for about three hours, as on Thursday Fraser Dallachy emailed me to ask about the location of the semantically tagged EEBO dataset that we’d worked on a couple of years ago. I didn’t have this at home but I was fairly certain I had it on a computer in my office, so I decided to take the opportunity to pop in and locate the data. I managed to find a 10GB tar.gz file containing the data on my desktop PC, along with the unzipped contents (more than 25,000 files) in another folder. I’d taken an empty external hard drive with me and began the process of copying the data, which took hours. I’d also remembered that I’d developed a website where the tagged data could be searched and that this was on the old Historical Thesaurus server, but unfortunately it no longer seemed to be accessible. I also couldn’t find the code or data for it on my desktop PC, but I remembered that I’d previously set up one of the four old desktop PCs I have sitting in my office as a server and that the system was running on this. It took me a while to get the old PC connected and working, but I managed to get it to boot up. It didn’t have a GUI installed so everything needed to be done at the command line, but I located the code and the database. I had planned to copy these to a USB stick, but the server wasn’t recognising USB drives (in either NTFS or FAT format) so I couldn’t actually get the data off the machine. I decided therefore to install Ubuntu Linux on a bootable USB stick and to get the old machine to boot into this rather than run the operating system on its hard drive. Thankfully this worked and I could then access the PC’s hard drive from the GUI that ran from the USB stick. I was able to locate the code and the data and copy them onto the external hard drive, which I then left somewhere that Fraser would be able to access it. Not a bad bit of work for a supposed holiday.

As with previous weeks I split my time mostly between the Anglo-Norman Dictionary and the Dictionary of the Scots Language. For the AND I finally updated the user interface. I added in the AND logo and updated the colour schemes to reflect the colours used in the logo. I’m afraid the colours used in the logo seem to be straight out of a late 1990s website, so unfortunately the new interface has that sort of feel about it too. The header area now has a white background as the logo needs a white background to work. The ‘quick search’ option is now better positioned and there is a new bar for the navigation buttons. The active navigation button and other site buttons are now the ‘A’ red, panels are generally the ‘N’ blue and the footer is the ‘D’ green. The main body is now slightly grey so that the entry area stands out from it. I replaced the header font (Cinzel) with Cormorant Garamond as this more closely resembles the font used in the logo.

The left-hand panel has been reworked so that entries are smaller and their dates are right-aligned. I also added stripes to make it easier to keep your eye on an entry and its date. The fixed header that appears when you scroll down a longer entry now features the AND logo. The ‘Top’ button that appears when you scroll down a long entry now appears to the right so it doesn’t interfere with the left-hand panel. The footer now only features the logos for Aberystwyth and AHRC and these appear on the right, with links to some pages on the left.

I have also updated the appearance of the ‘Try an Advanced Search’ button so that it only appears on the ‘quick search’ results page (which is what should have happened originally). I also removed the display of the semantic tags that had been added to the XML but still need to be edited out of it. I have also ticked a few more things off my ‘to do’ list, including replacing underscores with spaces in parts of speech and language tags and replacing ‘v.a.’ and ‘v.n.’ as requested. I also updated the autocomplete styles (when you type into the quick search box) so they fit in with the site a bit better.

I then began looking into reordering the citations in the entries so that they appear in date order within their senses, but I remembered that Geert wanted some dates to be batch processed and realised that this should be attempted first. I had a conversation with Geert about this, but the information he sent wasn’t structured well enough to be used, and it looks like the batch updating of dates will need to wait until after the launch. Instead I moved on to updating the source text pop-ups in the entries. These feature a link to the DEAF website and a link to search the AND for all other entries that cite the source.

On the old site the DEAF links went through to another page on the old site that included the DEAF text and then linked through to the DEAF website. I figured it would be better to cut out this middle stage and link directly through to DEAF. This meant figuring out which DEAF page should be linked to and formatting the link so that their page jumps to the right place. I also added an explanatory note underneath the link.
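Purely as an illustration of the idea (the base URL and fragment scheme below are placeholders, not DEAF’s actual addressing), the link just needs the right page plus a fragment so the browser jumps to the relevant place:

```typescript
// Placeholder sketch only: the base URL and fragment scheme are invented for
// illustration and are not DEAF's real addressing.
function buildDeafLink(deafSiglum: string): string {
  const base = "https://deaf.example.org/bibliography";
  // The fragment makes the browser jump to the relevant entry on the page.
  return `${base}#${encodeURIComponent(deafSiglum)}`;
}

console.log(buildDeafLink("SomeSiglum")); // "https://deaf.example.org/bibliography#SomeSiglum"
```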

This was pretty straightforward, but the ‘AND Citations’ link was not. On the old site clicking on this link ran a search that displayed the citations, and we had nothing comparable developed for the new site, so I needed to update the citation search to allow the user to search based on the sigla (source texts). This in turn meant updating my citations table to add a field for holding the citation siglum, regenerating the citations and citation search words, and then updating the API to allow a citation search to be limited by a siglum ID. I then updated the ‘Citations’ tab of the ‘Advanced Search’ page to add a new box for ‘citation siglum’. This is an autocomplete box – you type some text and a list of matching sigla is displayed, from which you can select one. This in turn meant updating the API to allow the sigla to be queried for the autocomplete. For example, type ‘a-n’ into the box and a list of all sigla containing this text is displayed. Select ‘A-N Falconry’ and you can then find all entries where this siglum appears. You can also combine this with citation text and date (although the latter won’t be much use).
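To give a sense of what’s involved (the table and column names below are assumptions for the example, not the real schema), the siglum autocomplete and the siglum-limited citation search boil down to two parameterised queries:

```typescript
// Illustrative sketch: the table and column names are assumptions, not the
// actual database schema behind the AND citation search.
interface SqlQuery {
  sql: string;
  params: (string | number)[];
}

// Autocomplete: find sigla whose text contains what the user has typed,
// e.g. typing "a-n" should match "A-N Falconry".
function siglaAutocompleteQuery(typed: string): SqlQuery {
  return {
    sql: "SELECT id, siglum_text FROM sigla WHERE siglum_text LIKE ? ORDER BY siglum_text LIMIT 20",
    params: [`%${typed}%`],
  };
}

// Citation search limited to a selected siglum, optionally combined with
// citation text, mirroring how the advanced search form combines its boxes.
function citationSearchQuery(siglumId: number, citationText?: string): SqlQuery {
  let sql = "SELECT entry_slug, citation_text FROM citations WHERE siglum_id = ?";
  const params: (string | number)[] = [siglumId];
  if (citationText) {
    sql += " AND citation_text LIKE ?";
    params.push(`%${citationText}%`);
  }
  return { sql, params };
}

console.log(siglaAutocompleteQuery("a-n"));
console.log(citationSearchQuery(42, "falcon"));
```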

I’ve also tweaked the search results tab on the entry page so that the up and down buttons don’t appear if you’re at the top or bottom of the results, and I’ve ensured that if you’re looking at an entry towards the end of the results a sufficient number of the results preceding it are still displayed. I’ve also ensured that the entry lemma and hom appear in the <title> of the web page (in the browser tab) so you can easily tell which tab contains which entry. Here’s a screenshot of the new interface:

For the DSL I spent some time answering emails about a variety of issues. I also completed my work on the issue of accents in the search, updating the search forms so that any accented characters a user enters are converted to their non-accented versions before the search runs, ensuring that someone searching for ‘Privacé’ will find all instances of ‘privace’ in the full text. I also tweaked the wording of the search results to remove the ‘supplementary’ text, as all supplementary items have now either been amalgamated or turned into main entries. In addition, I put in redirects from all of the URLs for the deleted child entries to their corresponding main entries. This was rather time-consuming to do, as I needed to go through each deleted child entry, get each of its URLs, then get the URL of the corresponding main entry and add these to a new database table. I then added a new endpoint to the V4 API that accepts a child URL, checks the database for a corresponding main URL and returns it, and updated the entry page so that the URL is passed to this new redirect-checking endpoint and, if it matches a deleted item, the page redirects to the proper URL.
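As a rough sketch of the two pieces (the endpoint path and response shape below are assumptions for illustration, not the actual V4 API), the accent folding decomposes the string and strips the combining marks, and the redirect check asks the API whether a main-entry URL exists for the current URL:

```typescript
// Accent folding: decompose to NFD and strip the combining marks, so a search
// for 'Privacé' is run as 'privace'.
function foldAccents(searchTerm: string): string {
  return searchTerm
    .normalize("NFD")
    .replace(/\p{Diacritic}/gu, "")
    .toLowerCase();
}

console.log(foldAccents("Privacé")); // "privace"

// Redirect check for deleted child entries. The endpoint path and response
// shape here are assumptions for the sketch, not the real V4 API.
async function redirectIfDeleted(entryUrl: string): Promise<void> {
  const response = await fetch(`/api/redirect?url=${encodeURIComponent(entryUrl)}`);
  const data: { mainUrl?: string } = await response.json();
  if (data.mainUrl) {
    window.location.replace(data.mainUrl);
  }
}
```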

Also this week I had a conversation with Wendy Anderson about updates to the Mapping Metaphor website.  I had thought these would just be some simple tweaks to the text of existing pages, but instead the site structure needs to be updated, which might prove to be tricky.  I’m hoping to be able to find the time to do this next week.

Finally, I continued to work on the Burns Supper map, adding in the remaining filters.  I also fixed a few dates, added in the introductory text and a ‘favicon’.  I still need to work on the layout a bit, which I’ll hopefully do next week, but the bulk of the work for the map is now complete.