Week Beginning 17th May 2021

I spent a lot of this week continuing with the Anglo-Norman Dictionary, including making some changes to the proofreader feature I created recently. I tweaked the output so that there is now a space between the siglum and ‘MS’, ‘edgloss’ is now wrapped in brackets, and there is now a blank paragraph before the ‘summary’ section and before the ‘cognate refs’ section to split things up a bit. I also added some characters (~~) before and after the ‘summary’ section to help divide it up, and added extra spaces before and after sense numbers along with square brackets around them (because the background styles that produce the round, black circle are not carried over into Word when the content is copied). I also added more spaces around the labels, added an extra line break before locutions and made the locution phrase appear in bold.

I also spent some time investigating some issues with the data. For example, a meaning was not getting displayed in the summary section of https://anglo-norman.net/entry/chaucer_3 because the part of speech labels didn’t quite match up (one was ‘subst.’, the other was ‘sbst.’). I also updated the entry display so that the ‘form section’ at the top of an entry gets displayed even if there is no ‘cognate refs’ section. My code repositions the ‘formSection’ so that it appears before ‘cognateRefs’, and as it was not finding this section it wasn’t repositioning the forms anywhere – instead they just disappeared. I therefore updated the code so that the forms are only repositioned if the ‘cognateRefs’ section is present, and this has fixed the matter.

I also responded to a request for data from a researcher at Humboldt-Universität zu Berlin who wanted information on entries that featured specific grammatical labels. The advanced search does not yet include a part of speech search, but I could generate the necessary data from the underlying database. I also ran a few queries to update further batches of bad dates in the system.

With all of this out of the way I then moved onto a more substantial task – creating a new ‘date builder’ feature for the Dictionary Management System. The old DMS featured such a tool, which allowed the editor to fill in some text boxes and have an XML form of the date (either text, manuscript or both) generated so that it could be copied and pasted into their XML editor. The old feature used a mixture of Perl scripts and JavaScript to generate the XML, spread over several thousand lines of code, but I wanted to handle it all in JavaScript in a (hopefully) more succinct way.

My initial version allows an editor to add Text and MS dates using the input boxes; pressing the ‘Generate XML’ button then populates the ‘XML’ box and also displays the date as it would appear on the site. I amalgamated the ‘proof’ and ‘Build XML’ options from the old DMS as it seemed more useful to just do both at the same time. There is also a ‘clear’ button that does what you’d expect it to do, and a ‘log’ that displays feedback about the date. E.g. if the date doesn’t conform to the expected pattern (yyyy / yyyy-yyyy / yyyy-yy / yyyy-y), if one of the characters isn’t a number, or if the date after the dash is earlier than the date before the dash, then a warning is displayed here. The XML area is editable so the content can be manually tweaked if need be. There is also a ‘Copy XML’ button to copy the contents of the XML area to the clipboard.
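The validation behind the log works along these lines. This is a minimal sketch rather than the actual DMS code (the function name and warning wording are mine), but it shows the pattern check, the expansion of shortened second dates and the earlier-than check described above.

```javascript
// Minimal sketch of the date builder's numeric validation (illustrative only:
// function name and messages are not the actual DMS code).
function validateNumericDate(dateText) {
  const warnings = [];
  // Expected patterns: yyyy, yyyy-yyyy, yyyy-yy or yyyy-y, digits only
  const match = dateText.match(/^(\d{4})(?:-(\d{4}|\d{1,2}))?$/);
  if (!match) {
    warnings.push('Date does not match yyyy / yyyy-yyyy / yyyy-yy / yyyy-y');
    return warnings;
  }
  const start = parseInt(match[1], 10);
  if (match[2] !== undefined) {
    // Expand shortened second dates, e.g. 1250-67 => 1267, 1250-7 => 1257
    const suffix = match[2];
    const end = parseInt(match[1].substring(0, 4 - suffix.length) + suffix, 10);
    if (end < start) {
      warnings.push('The date after the dash is earlier than the date before it');
    }
  }
  return warnings;
}

console.log(validateNumericDate('1250-67')); // [] – valid range 1250-1267
console.log(validateNumericDate('1250-47')); // warns that 1247 is earlier than 1250
console.log(validateNumericDate('12a0'));    // warns about the pattern
```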

What I didn’t realise was that non-numerical dates also need to be processed using the date builder, for example ‘s.xiii’, ‘s.xivex’ or ‘sxii/xiii’. I needed to update the date builder to handle seven different centuries, which can be joined in a range by either a dash or a slash, and 16 different suffixes, each of which changes how the numerical date should be generated from the century – all in addition to the three prefixes ‘a’, ‘b’ and ‘c’ that also change the generated date. Getting this to work was all very complicated, but by the end of the week I had a working version, all of which took up less than 500 lines of JavaScript. Below is a screenshot of the date builder in action:
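To give a flavour of the century handling, here is a minimal sketch of converting a century plus suffix into a numeric range. The suffix interpretations shown are an invented subset for illustration only; the real tool handles 16 suffixes and the ‘a’, ‘b’ and ‘c’ prefixes, each adjusting the generated dates in its own way.

```javascript
// Illustrative sketch of turning a century form (e.g. 's.xiii', 's.xiv' + 'ex')
// into a numeric year range. The suffix mappings below are an invented subset
// for illustration; the real date builder handles 16 suffixes plus the
// prefixes 'a', 'b' and 'c'.
const centuries = { xii: 1100, xiii: 1200, xiv: 1300, xv: 1400 };

const suffixes = {
  '':  base => [base + 1, base + 100],   // whole century
  in:  base => [base + 1, base + 25],    // assumed: early part of the century
  ex:  base => [base + 76, base + 100],  // assumed: late part of the century
  '1': base => [base + 1, base + 50],    // assumed: first half
  '2': base => [base + 51, base + 100],  // assumed: second half
};

function centuryToRange(century, suffix = '') {
  const base = centuries[century];
  const rule = suffixes[suffix];
  return (base === undefined || rule === undefined) ? null : rule(base);
}

console.log(centuryToRange('xiii'));      // [1201, 1300]
console.log(centuryToRange('xiv', 'ex')); // [1376, 1400]
```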

Also this week I set up some new user accounts for the Books and Borrowing project, I gave Luca Guariento some feedback about an AHRC proposal, I had to deal with the server and database going down a few times and I added a new publication to the SCOSYA website.

I also updated the DSL test site so that cross references in entries don’t use IDs (as found in the XML) but use ‘slugs’ (as we use on the site).  This required me to write a new API endpoint to return slugs from IDs and to update the JavaScript to find and replace cross reference IDs when an entry is loaded.  I also spoke to Rhona about the launch of the new DSL website, which is possibly going to be pushed back a bit now.
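As a rough illustration of the cross-reference change, the client-side logic is along these lines (the endpoint path and the attribute used to hold the XML ID are assumptions for the sketch rather than the actual DSL code):

```javascript
// Sketch of rewriting cross-reference links from XML IDs to slugs when an
// entry is loaded. The '/api/slug/' endpoint and the 'data-refid' attribute
// are assumptions for illustration, not the actual DSL implementation.
async function rewriteCrossReferences(entryElement) {
  const links = entryElement.querySelectorAll('a[data-refid]');
  for (const link of links) {
    const id = link.dataset.refid;
    try {
      const response = await fetch('/api/slug/' + encodeURIComponent(id));
      const data = await response.json();
      if (data.slug) {
        link.href = '/entry/' + data.slug;
      }
    } catch (err) {
      console.warn('Slug lookup failed for', id, err); // leave the link untouched
    }
  }
}
```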

Finally, I made some further tweaks to the Comparative Kingship content management systems for Scottish and Irish place-names. When I set up the two systems I’d forgotten to add the x-refs section into the form. The code was all there to handle them, but the section wasn’t appearing, so I updated both Scotland and Ireland so that x-refs now appear. I’d also noticed that some of the autogenerated lists that appear when you type into boxes in the Ireland site (e.g. xrefs) were pointing to the Scotland database and therefore bringing back the wrong data; I fixed this too.

I also added all of the sources from the Kirkcudbrightshire system to the Scotland CMS and replaced the Scotland elements database with the one from KCB as well, which required me to check the elements already associated with names to ensure they point to the same data. Thankfully all of them did except the existing name ‘Rhynie’, which had been newly added; its ID ended up referencing an entirely different element in the KCB database, but I fixed this. I also fixed a bug with the name and element deletion code that was preventing things from getting deleted.

Week Beginning 10th May 2021

I continued to work on updates to the Anglo-Norman Dictionary for most of this week, looking at fixing the bad citation dates in entries that were causing the display of ‘earliest date’ to be incorrect. A number of the citation dates have a proper date in text form (e.g. s.xii/xiii) but incorrect ‘post’ and ‘pre’ attributes (e.g. ‘00’ and ‘99’). The system uses these ‘post’ and ‘pre’ attributes for date searching and for deciding which is the earliest date for an entry, and if one of these bad dates was encountered it was treated as the earliest date. Initially I thought only a few entries had ended up with an incorrect earliest date, because I was searching the database for all earliest dates that were less than 1000. However, I then realised that the bulk of the entries with incorrect earliest dates had the earliest date field set to ‘null’, and in database queries ‘null’ is not considered less than 1000 but a separate thing entirely, so such entries were not being found. I managed to identify several hundred entries that needed their dates fixed and wrote a script to do so.

It was slightly more complicated than a simple ‘find and replace’ as the metadata about the entry needed to be regenerated too – e.g. the dates extracted from the citations that are used in the advanced search and the earliest date display for entries.  I managed to batch correct several hundred entries using the script and also adapted it to look for other bad dates that needed fixing too.

I also created a new feature for the Dictionary Management System: an entry proofreader. It allows an editor to attach a ZIP file containing XML entries, and it then displays all of these in a similar manner to the live site, only with all entries on one long page. The editor can then select all of the text, copy it and paste it into Word, and the major formatting elements will be retained (bold, italic, superscript etc.). I tested the feature by zipping up 3,178 XML entries and, although it took a few minutes to process, the page displayed properly and I was able to copy the text to Word (resulting in a 1,029-page Word file). After finishing the initial version of the script I had to tweak it a bit, as I had written the HTML and JavaScript with the expectation that there would be one dictionary item on the page, and some aspects were not working (and needed updating) when there were multiple items. I also ensured that links to sources in entries work. In the actual dictionary they open a pop-up, which clearly isn’t going to work in Word, so instead I made the link go to the relevant item in the bibliography page (e.g. https://anglo-norman.net/bibliography/B#bib-Best). Links to other dictionaries, labels and other AND entries also all now work from Word.

In addition, cogrefs appear before variants and deviants, commentaries appear (as full text, not cut off), xrefs at the bottom now have the ‘see also’ text above them as on the live site, editor initials now appear where they exist, and numerals only appear where there is more than one sense in a POS.

Also this week I did some further work for the Dictionary of the Scots Language based on feedback after my upload of data from the DSL’s new editing system. There was a query about the ‘slug’ used for referencing an entry in a URL. When the new data is processed by the import script the ‘slug’ is generated from the first <url> entry in the XML. If this <url> begins ‘dost’ or ‘snd’ it means a headword is not present in the <url> and therefore the new system ID is taken as the new ‘slug’ instead. All <url> forms are also stored as alternative ‘slugs’ that can still be used to access the entry. I checked the new database and there are 3,258 entries that have a ‘slug’ beginning with ‘dost’ or ‘snd’, i.e. they have the new ID as their ‘slug’ because they had an old ID as their first <url> in the XML. I checked a couple of these and they don’t seem to have the headword as a <url>: ‘beit’ (dost00052776) only has the old ID (twice) as URLs (<url>dost2543</url><url>dost2543</url>), and ‘well-fired’ (snd00090568) likewise only has the old ID (twice) as URLs (<url>sndns4098</url><url>sndns4098</url>). I’ve asked the editors what should be done about this.
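The decision rule for the ‘slug’ boils down to something like the following. The real import script is server-side, so this is just an illustrative sketch with made-up field names:

```javascript
// Sketch of the slug decision rule described above (field names are
// illustrative; the real import script is server-side).
function chooseSlug(entry) {
  const firstUrl = entry.urls[0] || '';
  // If the first <url> is an old-style ID ('dost...' or 'snd...') there is no
  // headword form to use, so fall back to the new system ID instead.
  if (/^(dost|snd)/.test(firstUrl)) {
    return entry.newId;
  }
  return firstUrl;
}

console.log(chooseSlug({ newId: 'dost00052776', urls: ['dost2543', 'dost2543'] }));
// => 'dost00052776'
console.log(chooseSlug({ newId: 'snd00090568', urls: ['well-fired'] }));
// => 'well-fired'
```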

Also this week I wrote a script to generate a flat CSV from the Historical Thesaurus’s relational database structure, joining the lexeme and category tables together and appending entries from the new ‘date’ table as additional columns as required.  It took a little while to write the script and then a bit longer to run it, resulting in a 241MB CSV file.

I also gave some advice to Craig Lamont in Scottish Literature about a potential bid he’s putting together, and spoke to Luca about a project he’s been asked to write a DMP for.  I also looked through some journals that Gerry Carruthers is hoping to host at Glasgow and gave him an estimate of the amount of time it would take to create a website based on the PDF contents of the old journal items.

Week Beginning 19th April 2021

It was a return to a full five-day week this week, after taking some days off to cover the Easter school holidays for the previous two weeks. The biggest task I tackled this week was to import the data from the Dictionary of the Scots Language’s new editing system into my online system. I’d received a sample of the data from the company responsible for the new editing system a couple of weeks ago, and we had agreed on a slightly updated structure after that. Last week I was sent the full dataset and I spent some time working with it this week. I set up a local version of the online system on my PC and tweaked the existing scripts I’d previously written to import the XML dataset generated by the old editing system. Thankfully the new XML was not massively different in structure to the old set, differing mostly in the addition of a few new attributes, such as ‘oldid’, which references the old ID of each entry, and ‘typeA’ and ‘typeB’, which contain numerical codes that denote which text should be displayed to note when the entry was published. With changes made to the database to store these attributes and updates to the import script to process them I was ready to go, and all 80,432 DOST and SND entries were successfully imported, including extracting all forms and URLs for use in the system.

I had a conversation with the DSL team about whether my ‘browse order’ would still be required, as the entries now appear to be ordered nicely by their new IDs.  Previously I ran a script to generate the dictionary order based on the alphanumeric characters in the headword and the ‘posnum’ that I generated based on the classification of parts of speech taken from a document written by Thomas Widmann when he worked for the DSL (e.g. all POS beginning ‘n.’ have a ‘posnum’ of 1, all POS beginning ‘ppl. adj.’ have a ‘posnum’ of 8).  Although the new data is now nicely ordered by the new ID field I wanted to check whether I should still be generating and using my browse order columns or whether I should just order things by ID.  I suggested that going forward it will not be possible to use the ID field as browse order, as whenever the editors add a new entry its ID will position it in the wrong place (unless the ID field is not static and is regenerated whenever a new entry is added).  My assumption was correct and we agreed to continue using my generated browse order.

In a related matter my script extracts the headword of each entry from the XML and this is used in my system and also to generate the browse order.  The headword is always taken to be the first <f> of type “form” within <meta> in the <entry>.  However, I noticed that there are five entries that have no <f> of type “form” and are therefore missing a headword, and are appearing first in the ‘browseorder’ because of this.  This is something that still needs to be addressed.

In our conversations, Ann Ferguson mentioned that my browse system wasn’t always getting the correct order where there were multiple identical headwords all within the same general part of speech. For example there are multiple noun ‘point’ entries in DOST – n. 1, n. 2 and n. 3. These were appearing in the ‘browse’ feature with n. 3 first. This is because (as per Thomas’s document) all entries with a POS starting with ‘n.’ are given a ‘posorder’ of 1. In cases such as ‘point’, where the headword is the same and there are several entries with a POS beginning ‘n.’, the order then falls back on the ID, and ‘Point n.3’ has the lowest ID, so appears first. I therefore updated the script that generates the browse order so that in such cases entries are ordered alphabetically by POS instead.
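The comparison now amounts to something like this (a JavaScript sketch of the rule rather than the actual generation script, which works over the database):

```javascript
// Sketch of the browse-order rule: headword first, then the part-of-speech
// group number ('posnum'), and, where both are identical, the full POS string
// alphabetically rather than the entry ID. Only two example posnum mappings
// are shown; the full classification comes from the POS document.
function posnum(pos) {
  if (pos.startsWith('ppl. adj.')) return 8;
  if (pos.startsWith('n.')) return 1;
  return 99; // placeholder for the remaining classifications
}

function browseCompare(a, b) {
  return a.headword.localeCompare(b.headword)
      || posnum(a.pos) - posnum(b.pos)
      || a.pos.localeCompare(b.pos); // new tie-break: 'n. 1' before 'n. 3'
}

const points = [
  { headword: 'point', pos: 'n. 3' },
  { headword: 'point', pos: 'n. 1' },
  { headword: 'point', pos: 'n. 2' },
];
console.log(points.sort(browseCompare).map(e => e.pos)); // ['n. 1', 'n. 2', 'n. 3']
```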

I also regenerated the data for the Solr full-text search, but I’ll need Arts IT Support to update this, and they haven’t got back to me yet. I then migrated all of the new data to the online server and also created a table for the ‘about’ text that will get displayed based on the ‘typeA’ and ‘typeB’ number in the entry. I then created a new version of the API that uses the new data and pulls in the necessary ‘about’ data. When I did this I noticed that some slugs (the identifier that will be used to reference an entry in a URL) are still coming out as old IDs because this is what is found in the <url> elements. So for example the entry ‘snd00087693’ had the slug ‘snds165’. After discussion we agreed that in such cases the slug should be the new ID, and I tweaked the import script and regenerated the data to make this the case. I then updated one of our test front-ends to use the new API, updating the XSLT to ensure that the <meta> tag that now appears in the XML is not displayed and updating bibliographical references and cross references to use the new ‘refid’ attribute. I also set up the entry page to display the ‘about’ text, although the actual placement and formatting of this text still needs to be decided upon. I then moved on to the bibliographical data, but this is going to take a bit longer to sort out, as the previous bib info was imported from a CSV.

Also this week I read through and gave feedback on a data management plan for a proposal Marc Alexander is involved with and created a new version of the DMP for the new metaphor proposal that Wendy Anderson is involved with. I also gave some advice to Gerry Carruthers about hosting some journal issues at Glasgow.

For the Books and Borrowing project I made some updates to the data of the 18th Century Borrowers pilot project, including fixing some issues with special characters, updating information relating to a few books and merging a couple of book records.  I also continued to upload the page images of the Edinburgh registers, finishing the upload of 16 registers and then generating the page records for all of the pages in the content management system.  I then started on the St Andrews registers.

I also participated in a Zoom call about GIS for the Place-names of Iona project, where we discussed the sort of data and maps that would appear in the QGIS system and how this would relate to the online CMS, and also tweaked the Call for Papers page of the website.

Finally, I continued to make updates to the content management systems for the Comparative Kingship project, adding in Irish versions of the classifications and some of the labels, changing some parishes, adding in the languages that are needed for the Irish system and removing the unnecessary place-names that were imported from the GB1900 dataset.  These are things like ‘F.P.’ for ‘footpath’.  A total of 2,276 names, with their parish references, historical forms and links to the OS source were deleted by a little script I wrote for the purpose.  I think I’m up to date with this project for the moment, so next week I intend to continue with the DSL bibliographical data import and to return to working on the Anglo-Norman Dictionary.

Week Beginning 22nd March 2021

I continued to develop the ‘Dictionary Management System’ for the Anglo-Norman Dictionary this week, following on with the work I began last week to allow the editors to drag and drop sets of entry XML files into the system.  I updated the form to add in another option underneath the selection of phase statement called ‘Phase Statements for existing records’.  Here the editor can choose whether to retain existing statements or replace them.  If ‘retain’ is selected then any XML entries attached to the form that either have an existing entry ID in their filename or have a slug that matches an existing entry in the system will retain whatever phase statement the existing entry has, no matter what phase statement is selected in the form.  The phase statement selected in the form will still be applied to any XML entries attached to the form that don’t have an existing entry in the system.  Selecting ‘replace existing statements’ will ignore all phase statements of existing entries and will overwrite them with whatever phase statement is selected in the form.  I also updated the system so that it extracts the earliest date for an entry at the point of upload.  I added two new columns to the holding area (for earliest date and the date that is displayed for this) and have ensured that the display date appears on the ‘review’ page too.  In addition, I added in an option to download the XML of an entry in the holding area, if it needs further work.

I ran a large-scale upload test, comprising around 3,200 XML files from the ‘R’ data, to see how the system would cope with this, but unfortunately I ran into difficulties with the server rejecting too many requests in a short space of time and only about 600 of the files made it through. I asked Arts IT Support to see whether the server limits can be removed for this script, but haven’t heard anything back yet. I ran into a similar issue when processing files for the Mull and Ulva place-names project in January last year and Raymond was able to update the whitelist for the Apache module mod_evasive that was blocking such uploads, and I’m hoping he’ll be able to do something similar this time. Alternatively, I’ll need to try and throttle the speed of uploads in the browser.

In the meantime, I continued with the scripts for publishing entries that had been uploaded to the holding area, using a test version of the site that I set up on my local PC to avoid messing up the live database.  I updated the ‘holding area’ page quite significantly.  At the top of the page is a box for publishing selected items, and beneath this is the table containing the holding items.  Each row now features a checkbox, and there is an option above the table to select / deselect all rows on the page (so currently up to 200 entries can be published in one batch as 200 is the page limit).    The ‘preview’ button has been replaced with an ‘eye’ icon but the preview page works in the same way as before.  I was intending to add the ‘publish’ options to this page but I’ve moved this to the holding area page instead to allow multiple entries to be selected for publication at any one time.

Selecting one or more items for publication and then pressing the ‘publish selected holdings’ button runs some JavaScript that grabs the ID of each holding item and then submits this to a script on the server via AJAX, and the server-side script then processes each selected item for publication in turn.  I limited the processing of this to one item per second to hopefully avoid the server rejecting requests.  Rather a lot happens when an item is published: The holding item is copied to the live entry table and then its XML is analysed to extract and store for search purposes: Citations, attestation dates and word counts of every word in each citation; translations and word counts of every word in each translation; semantic and usage labels (including adding new labels to the system if the XML contains new ones); word forms and their types (lemma, variant, deviant); parts of speech; cross references in xref entries.
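The client-side publication loop is roughly as follows. This is a sketch rather than the actual code: the endpoint URL is invented and the real script may package the IDs differently, but the one-per-second throttling works along these lines:

```javascript
// Sketch of publishing selected holding items via AJAX, throttled to roughly
// one item per second. Endpoint URL and response handling are assumptions
// for illustration, not the actual DMS code.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

// Stand-ins for the DOM updates the real page performs
function addToPublicationLog(result) { console.log('Published:', result); }
function removeHoldingRow(id) { console.log('Removed holding row', id); }

async function publishSelected(ids) {
  for (const id of ids) {
    const response = await fetch('/dms/publish-holding', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ holdingId: id })
    });
    const result = await response.json();
    addToPublicationLog(result);
    removeHoldingRow(id);
    await delay(1000); // throttle so the server is not flooded with requests
  }
}
```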

If there is an existing live entry that matches the current entry (either because of the stored ‘Existing ID’ or because it has the same slug as the holding item) then this entry is deactivated in the database, its XML is copied to the ‘history’ table and associated with the new item record and all search data for the live entry as mentioned above is deleted.  At this point the holding item record is deleted and the server-side script finishes executing, returning its output to the JavaScript, which then adds a row to the ‘publication log’ on the holding entries page; decreases the count of the number of holding entries on the page by one and removes the row containing the holding item from the table on the page.

Once all of the selected items are published there is one final task that the page performs, which is to completely regenerate the cross references data.  This is something that unfortunately needs to be done after each batch (even if it’s only one record) because cross references rely on database IDs and when a new version of an existing entry is published it receives a new ID.  This means any existing cross references to that item will no longer work.  The publication log will state that the regeneration is taking place and then after about 30 seconds another statement will say it is complete.  I tested this process on my local PC, publishing single items, a few items and entire pages (200 items) at a time and all seemed to be working fine so I then copied the new scripts to the server.

Also this week I continued with the processing of library registers for the Books and Borrowing project. These are coming in rather quickly now and I’m getting a bit of a backlog. This is because I have to download the image files, then process them to generate tilesets, and then upload all of the images and their tilesets to the server. It’s the tilesets that are the real sticking point, as these consist of thousands of small files. I’m only getting an upload speed of about 70KB/s and I’m having to upload many gigabytes of data. I did a test where I zipped up some of the images and uploaded this zip file instead and was getting a speed of around 900KB/s, and as it looks like I can get command-line access to the server I’m going to investigate whether zipping up the files, uploading them and then unzipping them on the server will be a quicker process. I also had to spend some time sorting out connection issues to the server as the Stirling VPN wasn’t letting me connect. It turned out that they had switched to multi-factor authentication and I needed to set this up before I could continue.

Also this week I wrote a summary of the work I’ve done so far for the Place-names of Iona project for a newsletter they’re putting together, spoke to people about the new ‘Comparative Kingship’ place-names project I’m going to be involved with, spoke to the Scots Language Policy people about setting up a mailing list for the project (it turns out that the University has software to handle this, available here: https://www.gla.ac.uk/myglasgow/it/emaillists/) and fixed an issue relating to the display of citations that have multiple dates for the DSL.

Week Beginning 1st February 2021

I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.

The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects.  For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet).  I also added commentaries to more than 260 entries, which took some time to test.  I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML.  Without doing this it was possible to add the tags in, but this would give errors in the editing software.  I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required.  I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system.  It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.

For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data.  These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed.  I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.

For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS.  The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR.  The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.

As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2,4,6,8 or 10 digits.  Information about the library, including text boxes where you can enter an NGR and see the results can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary looking formulae.

The library is written in JavaScript, which runs in the client’s browser, whereas the previous library was written in PHP, which runs on the server.  This means I needed to change the way the CMS works – previously you’d enter an NGR and then when the form was submitted to the server the PHP library would generate the latitude and longitude whereas now the latitude and longitude need to be generated in the browser as soon as the NGR is entered into the textbox, and two further textboxes for latitude and longitude will appear in the form and will then be automatically populated with the results.

 

This does mean the person filling out the form can see the generated latitude and longitude and also tweak it if required before submitting the form, which is a potentially useful thing.  I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further.  I also still need to work on the format of the latitude and longitude as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848) with ‘W’ and ‘S’ figures being negatives.
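Converting the library’s output into the stored format should be straightforward; here is a small sketch of the kind of helper needed (illustrative only):

```javascript
// Sketch of converting a compass-point value (e.g. '6.420848° W') into the
// signed decimal form we need to store (e.g. -6.420848), with 'W' and 'S'
// treated as negative.
function toSignedDecimal(value) {
  const match = String(value).match(/^([\d.]+)°?\s*([NSEW])$/i);
  if (!match) return parseFloat(value); // already a plain decimal
  const number = parseFloat(match[1]);
  const point = match[2].toUpperCase();
  return (point === 'W' || point === 'S') ? -number : number;
}

console.log(toSignedDecimal('6.420848° W'));  // -6.420848
console.log(toSignedDecimal('56.325744° N')); // 56.325744
```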

However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team.  The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014.  Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section.  Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems.  The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.

The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system.  I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system.  This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22W/@56.3258889,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22W/@56.325744,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848

So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’.  As I say, this will need further discussion before I implement any changes.

Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.

Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.

 

Week Beginning 25th January 2021

I headed into the University for the first time this year on Wednesday this week to collect a new iPad that I’d ordered and to get some files from my office.  It was great to see the old place again, but it did take quite a chunk out of my day to travel there and back, especially as I’m still home-schooling either a morning or an afternoon each day at the moment too.

As with last week, I mainly divided my time this week between the Dictionary of the Scots Language, the Anglo-Norman Dictionary and the Books and Borrowing project, with a few other bits and bobs added in as well.  For the DSL I retrieved the source code for my original Scots School Dictionary app from my office so we can host this somewhere on the DSL website.  This is because the DSL have commissioned someone else to make a new School Dictionary app, which launched this week, but doesn’t include an ‘English to Scots’ feature as the old app does, so we’re going to make the old app available as a website for those people who miss the feature.  I also made a few minor tweaks to the main DSL site, and then focussed on adding bibliography search facilities to the new version of the API, a task that I’d begun last week.

I created a new table for the bibliographical data that includes the various fields used for DOST (note, author, editor, date, longtitle etc) and a field for the XML data used for SND.  I then created two further tables for searching, one that contains every author and editor name for each item (for DOST there may be different names in the author, editor, longauthor and longeditor fields while for SND there may be any number of <author> tags) and the other containing every title for each item (DOST may have different text in title and longtitle while SND items can have any number of <title> tags).  These tables allow you to search for any variant author, editor or title and find the item.

I also created two additional fields in the bibliography table that contain the ‘display author’ and ‘display title’. These are the forms that get displayed in the search results before you click on an item to open the full bibliographical entry. I then updated the V4 API to add in facilities to search and retrieve the bibliographies. I didn’t have time to connect to this API and implement the search on the Sienna test site, which is something I hope to do next week, but the logic behind the search and display of bibliographies is all there. There is a predictive search that will be used to generate the autocomplete list, similar to how the live site currently works: you will be able to select whether your search is for authors, titles or both, and when you start typing in some text a list of matching items will appear. E.g. typing in ‘ham’ for authors in both dictionaries will display all items containing ‘ham’, and when you select an item this will then perform a search for that specific text. You will then be able to click on an item to view the full bibliography. This is a bit different to how the live site currently works, where if you enter ‘ham’ and select (for example) ‘Hamilton, J.’ from the autocomplete list you are taken directly to a page that lists all of the items for that author. However, we can’t do that any more as we no longer have unique identifiers that group bibliographical items by author. I may be able to do something similar with the page that comes up when you select an author, but this would have to rely on the name to group items together, and a name may not be unique.
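The client side of the predictive search will probably look something like the sketch below. The endpoint and its parameters are assumptions for illustration, not the actual V4 API:

```javascript
// Sketch of the predictive author/title search. The endpoint path and
// parameters are invented for illustration; the element IDs are assumptions too.
let debounceTimer;

document.getElementById('bib-search').addEventListener('input', function () {
  const term = this.value.trim();
  const type = document.querySelector('input[name="searchtype"]:checked').value; // 'author', 'title' or 'both'
  clearTimeout(debounceTimer);
  if (term.length < 3) return; // wait until there is enough text to be useful
  debounceTimer = setTimeout(async () => {
    const response = await fetch('/api/v4/bib/predictive?type=' + type +
                                 '&q=' + encodeURIComponent(term));
    const matches = await response.json();
    showAutocompleteList(matches); // the real page builds a clickable dropdown
  }, 250);
});

function showAutocompleteList(matches) {
  console.log(matches);
}
```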

For the AND I made some tweaks to the website, such as adding a link to the search page if you type some text into the ‘jump to entry’ option and no matching entries are found. I then spent the rest of my time continuing to develop the new content management system, specifically the pages for managing source texts. I finished work on this, adding in facilities to add, edit, browse and delete source texts from the database. I then migrated the DTD, which is referenced by the editors’ XML editor when they work on the entry XML files, to the new site. The DTD on the old server referenced several lists of things that are then used to populate drop-down lists of options in the XML editor. I migrated these too, making them dynamically generated from the underlying database rather than static lists, meaning that when (for example) new source texts are added to the CMS these will automatically become available when using the XML editor.

For the Books and Borrowing project I participated in the project’s Zoom call on Monday to discuss the project’s CMS and how to amalgamate the various duplicate author records that resulted from data uploads from different libraries.   After the call I made some required changes to the CMS, such as making the editor’s notes fields visible by default again, and worked on the duplicate authors matching script to add in further outputs when comparing the author names with Levenshtein ratings of 1 and 2.  I also reviewed some content that was sent to us from another library.

Also this week I responded to an email from James Caudle in Scottish Literature about a potential project he’s setting up, made a couple of changes to the Scots Language Policy website, made some tweaks to the menu structure for the Scots Syntax Atlas project and gave some advice to a post-grad student who had contacted me about setting up a corpus.

Week Beginning 11th January 2021

This was my first full week back of the year, although it was also the first week of a return to homeschooling, which made working a little trickier than usual.  I also had a dentist’s appointment on Tuesday and lost some time to that due to my dentist being near the University rather than where I live.  However, despite these challenges I was able to achieve quite a lot this week.  I had two Zoom calls, the first on Monday to discuss a new ESRC grant that Jane Stuart-Smith is putting together with colleagues at Strathclyde while the second on Wednesday was with a partner in Joanna Kopaczyk’s new RSC funded project about Scots Language Policy to discuss the project’s website and the survey they’re going to put out.  I also made a few tweaks to the DSL website, replied to Kirsteen McCue about the AHRC proposal she’s currently putting together, replied to a query regarding the technologies behind the Scots Syntax Atlas, made a few further updates to the Burns Supper map and replied to a query from Rachel Fletcher in English Language about lemmatising Old English.

Other than these various tasks I split my time between the Anglo-Norman Dictionary and the Books and Borrowing projects. For the former I completed adding explanatory notes to all of the ‘Introducing the AND’ pages. This was a very time-consuming task as there were probably about 150 explanatory notes in total to add in, each appearing in a Bootstrap dialog box, and each requiring me to copy the note from the old website, add in any required HTML formatting, find and check all of the links to AND entries on the old site and add these in as required. It was pretty tedious to do, but it feels great to get it done, as the notes were previously just giving 404 errors on the new live site, and I don’t like having such things on a site I’m responsible for. I also migrated the academic articles from the old site to the new one (https://anglo-norman.net/articles/), which also required some manual formatting of the content. There are five other articles that I haven’t managed to migrate yet as they are full of character encoding errors on the old site. Geert is looking for copies of these articles that actually work and I’ll add them in once he’s able to get them to me. I also began migrating the blog posts to the new site too. Currently the blog is hosted on Blogspot and there are 55 entries, but we’d like these to be an internal part of the new site. Migrating these is going to take some time as it means copying the text (which thankfully retains formatting) and then manually saving and embedding any images in the posts. I’m just going to do a few of these a week until they’re all done and so far I’ve migrated seven. I also needed to look into how the blogs page works in the WordPress theme I created for the AND, as to start with the page was just listing the full text of every post rather than giving summaries and links through to the full text of each. After some investigation I figured out that in my theme there is a script called ‘home.php’ and this is responsible for displaying all of the blog posts on the ‘blog’ page. It in turn calls another template called ‘content-blog.php’ which was previously set to display the full content of the blog. Instead I set it to display the title as a link through to the full post, the date and then an excerpt from the full blog, which can be accessed through a handy WordPress function called ‘the_excerpt()’.

For the Books and Borrowing project I made some improvements and fixes to the Content Management System. I’d been meaning to enhance the CMS for some time, but due to commitments to other projects I hadn’t had the time to delve into it. It felt good to find the time to return to the project this week.

I updated the ‘Books’ and ‘Borrowers’ tabs when viewing a library in the CMS.  I added in pagination to speed up the loading of the pages.  Pages are now split into 500 record blocks and you can navigate between pages using the links above and below the tables.  For some reason the loading of the page is still a bit slow on the Stirling server whereas it was fine on the Glasgow server I was using for test purposes.  I’m not entirely sure why as I’d copied the database over too – presumably the Stirling server is slower.  However, it is still a massive improvement on the speed of the page previously.

I also changed the way tables scroll horizontally.  Previously if a table was wider than the page a scrollbar appeared above and below the table, but this was rather awkward to use if you were looking at the middle of the table (you had to scroll up or down to the beginning or end of the table, then use the horizontal scrollbar to move the table along a bit, then navigate back to the section of the page you were interested in).  Now the scrollbar just appears at the bottom of the browser window and can always be accessed no matter where in the table you are.

I also removed the editorial notes from tables by default to reduce clutter, and added in a button for showing / hiding the editors’ notes near the top of each page.  I also added a limit option in the ‘Books’ and ‘Borrowers’ pages within a library to limit the displayed records to only those found in a specific ledger.  I added in a further option to display those records that are not currently associated with any ledgers too.

I then deleted the ‘original borrowed date’ and ‘original returned date’ fields from the St Andrews data as these were no longer required, removing both the additional fields themselves and all of the data they contained.

It had been noted that the book part numbers were not being listed numerically.  As part numbers can contain text as well as numbers (e.g. ‘Vol. II’), this field in the database needed to be set as text rather than an integer.  Unfortunately the database doesn’t order numbers correctly when they are contained in a non-numerical field  – instead all the ones come first (1, 10, 11) then all the twos (2, 20, 22) etc.  However, I managed to find a way to ensure that the numbers are ordered correctly.

I also fixed the ‘Add another Edition/Work to this holding’ button that was not working.  This was caused by the Stirling server running a different version of PHP that doesn’t allow functions to have variable numbers of arguments.  The autocomplete function was also not working at edition level and I investigated this.  The issue was being caused by tab characters appearing in edition titles, and I updated my script to ensure these characters are stripped out before the data is formatted as JSON.

There may be further tweaks to be made – I’ll need to hear back from the rest of the team before I know more, but for now I’m up to date with the project. Next week I intend to get back into some of the larger and trickier outstanding AND tasks (of which there are, alas, many) and to begin working towards adding the DSL bibliography data into the new version of the API.

Week Beginning 14th December 2020

This was my last week before the Christmas holidays, and it was a four-day week as I’d taken Friday off to use up some unspent holidays.  Despite only being four days long it was a very hectic week, as I had lots of loose ends to tie up before the launch of the new Anglo-Norman Dictionary website on Wednesday.  This included tweaking the appearance of ‘Edgloss’ tags to ensure they always have brackets (even if they don’t in the XML), updating the forms to add line breaks between parts of speech and updating the source texts pop-ups and source texts page to move the information about the DEAF website.

I also added in a lot of the ancillary page data, including the help text, various essays, the ‘history’ page, copyright and privacy pages, the memorial lectures and the multi-section ‘introduction to the AND’.  I didn’t quite manage to get all of the links working in the latter and I’ll need to return to this next year.  I also overhauled the homepage and footer, adding in the project’s Twitter feed, a new introduction and adding links to Twitter and Facebook to the footer.

I also identified and fixed an error with the label translations, which were sometimes displaying the wrong translation.  My script that extracted the labels was failing to grab the sense ID for subsenses.  This ID is only used to pull out the appropriate translation, but because of the failure the ID of the last main sense was being used instead.  I therefore had to update my script and regenerate the translation data.  I also updated the label search to add in citations as well as translations.  This means the search results page can get very long as both labels and translations are applied at sense level, so we end up with every citation in a matching sense listed, but apparently this is what’s wanted.

I also fixed the display of ‘YBB’ sources, which for some unknown reason are handled differently to all other sources in the system and fixed the issue with deviant forms and their references and parts of speech.

On Wednesday we made the site live, replacing the old site with the new one, which you can now access here:  https://anglo-norman.net/.  It wasn’t entirely straightforward to get the DNS update working, but we got there in the end, and after making some tweaks to paths and adding in Google Analytics the site was ready to use, which is quite a relief.  There is still a lot of work to do on the site, but I’m very happy with the progress I’ve made with the site since I began the redevelopment in October.

Also this week I set up a new website for phase two of the ‘Editing Burns for the 21st Century’ project and upgraded all of the WordPress sites I manage to the most recent version.  I also arranged a meeting with Jane Stuart-Smith to discuss a new project in the New Year, replied to Kirsteen McCue about a proposal she’s finishing off, replied to Simon Taylor about a new place-name project he wants me to be involved with and replied to Carolyn Jess-Cooke about a project of hers that will be starting next year.

That’s all for 2020.  Here’s hoping 2021 is not going to be quite so crazy!

Week Beginning 9th November 2020

I took Friday off this week as I had a dentist’s appointment across town in the West End and I decided to take the opportunity to do some Christmas shopping whilst all the shops in Glasgow are still open (there’s some talk of us having greater Covid restrictions imposed in the next week or so).  I spent a couple of days this week working on the Dictionary of the Scots Language, a project I’ve been meaning to return to for many months but have been too busy with other work to really focus on.  Thankfully in November with the launch of the second edition of the Historical Thesaurus out of the way I have a bit of time to get back into the outstanding DSL issues.

Rhona Alcorn had sent a list of outstanding tasks a while back and I spent some time going through this and commenting on each item. I then began to work through each item, starting with fixing cross references in our ‘V3’ test site (which features data that the editors have been working on in recent years). Cross references appear differently in the XML for this version so I needed to update the XSLT in order to make them work correctly. I then updated the full-text extraction script that prepares data for inclusion in the Solr search engine. Previously this was stripping out all of the XML tags in order to leave the plain text, but unfortunately there were occasions where entries contain words separated by tags but no spaces, meaning that when the tags were removed the words ended up joined together. I fixed this by adding a space character before every XML tag before the tags were stripped out. This resulted in plain text that often contained multiple spaces between words, but thankfully Solr ignores these when it indexes the text. I asked Raymond of Arts IT Support to upload the new text to the server and tested things out and all worked perfectly.
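The fix amounts to the following (a JavaScript sketch of the idea; the actual extraction script runs server-side and simply leaves the extra spaces in place for Solr to ignore):

```javascript
// Sketch of the tag-stripping fix: add a space before every tag first, so
// that words separated only by markup do not get joined together.
function xmlToPlainText(xml) {
  return xml
    .replace(/</g, ' <')      // ensure a space precedes every tag
    .replace(/<[^>]*>/g, '')  // strip the tags themselves
    .replace(/\s+/g, ' ')     // (optional) collapse the resulting runs of spaces
    .trim();
}

console.log(xmlToPlainText('word<b>another</b>more'));
// => 'word another more' (previously this came out as 'wordanothermore')
```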

After this I moved on to creating a new ordering for the ‘browse’ feature.  This new ordering takes into consideration parts of speech and ensures that supplemental entries appear below main entries.  It also correctly positions entries beginning with a yogh.  I’d created a script to generate the new browse order many months ago, so I could just tweak this and then use it to update the database.  After that I needed to make some updates to the V2 and V3 front-ends to use the new ordering fields, which took a little time, but it seems to have worked successfully.  I may need to tweak the ordering further, but will await feedback before I make any changes.

I then moved on to investigating searches for accented characters, which were apparently not working correctly. I noticed that the htaccess script was not set up to accept accented characters, so I updated this. However, the advanced headword search itself was finding forms with accented characters in them if the non-accented version was passed. The ‘privace’ example was redirecting to the entry page as only one result was matched, but if you perform a search for ‘*vace’ it finds and displays the accented headword in both V2 and V3, though not on the live site. Therefore I think this issue is now sorted. However, we should perhaps strip out accents from any submitted search terms, as allowing accented characters to be submitted (e.g. for *vacé) gives the impression that we allow accented characters to be searched for distinctly from their unaccented versions, and results including both accented and unaccented forms might confuse people.
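If we do go down the route of stripping accents from submitted terms, the normalisation is straightforward (a sketch, not something that has been implemented yet):

```javascript
// Sketch of stripping accents from a submitted search term so that 'vacé'
// and 'vace' are treated identically.
function stripAccents(term) {
  return term.normalize('NFD')                 // decompose letters + combining marks
             .replace(/[\u0300-\u036f]/g, ''); // remove the combining marks
}

console.log(stripAccents('*vacé')); // '*vace'
```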

The last DSL issue I looked at involved hiding superscript characters in certain circumstances (after ‘geo’ tags in ‘cref’ tags).  There are 3093 SND entries that include the text ‘</geo><su>’ or ‘</geo> <su>’ and I updated the XSLT file that transforms the XML into HTML to deal with these.  Previously it transformed the <su> tag into the HTML superscript tag <sup>.  I’ve updated it so that it now checks to see what the tag’s preceding sibling is.  If it’s a <geo> tag it now adds the class ‘noSup’ to the generated <sup>.  Currently I’ve set <sup> elements with this class to have a pink background so the editors can check to see how the match is performing, and once they’re happy with it I can update the CSS to hide ‘noSup’ elements.

Other than DSL work I also spent some time continuing to work on the redevelopment of the Anglo-Norman Dictionary and completed an initial version of the label search that I began working on last week.  The search form as discussed last week hasn’t changed, but it’s now possible to submit the search, navigate through the search results, return to the search form to make changes to your selection and view entries.  I have needed to overhaul how the search page works to accommodate the label search, which required some pretty major changes behind the scenes, but hopefully none of the other searches will have been affected by this.  You can select a single label and search for that, e.g. ‘archit.’ and if you then refine your search you will see that the label is ‘remembered’ in the form so you can add to it or remove it, for example if you’re interested in all of the entries that are labelled ‘archit.’ and ‘mil’.  As mentioned last week, adding or changing a citation year resets the boxes as different labels are displayed depending on the years chosen.  The chosen year is remembered by the form if you choose to refine your search and the labels and selected labels and Booleans are pulled in alongside the remembered year.  So for example if you want to find entries that feature a sense labelled ‘agricultural’ or ‘bot.’ that have a citation between 1400 and 1410 you can do this.  On the entry page both semantic and usage labels are now links that lead through to the search results for the label in question.  I’ve currently given both label types a somewhat garish pink colour, but this can be changed, or we could use two different colours for the two types.

Other than these projects, I fixed an issue with the 18th century Glasgow borrowers site (https://18c-borrowing.glasgow.ac.uk/) and made some tweaks to the place-names of Iona site, fixing the banner and creating Gaelic versions of the pages and menu items.  The site is not live yet, but I’m pretty happy with how it’s looking.  Here’s an image of the banner I created:

Also this week I spoke to Kirsteen McCue about the project she’s currently preparing a proposal for and I created a new version of the Burns Suppers map for Paul Malgrati.  This was rather tricky as his data is contained in a spreadsheet that has more than 2,500 rows and more than 90 columns, and it took some time to process this in a way that worked, especially as some fields contained carriage returns which resulted in lines being split where they shouldn’t be when the data was exported.  However, I got there in the end, and next week I hope to develop the filters for the data.

Week Beginning 2nd November 2020

I spent a lot of this week continuing to work on the redevelopment of the Anglo-Norman Dictionary website, focussing on the search facilities. I made some tweaks to the citation search that I’d developed last week, ensuring that the intermediate ‘pick a form’ screen appears even if only one search word is returned and updating the search results to omit forms and labels but to include the citation dates and siglums, the latter opening up pop-ups as with the entry pages. I also needed to regenerate the search terms as I’d realised that, due to a typo in my script, a number of punctuation marks that should have been stripped out were remaining, meaning some duplicate forms were being listed, sometimes with a punctuation mark such as a colon and other times ‘clean’.

I also realised that I needed to update the way apostrophes were being handled.  In my script these were just being stripped out, but this wasn’t working very well as forms like ‘s’oreille’ were then becoming ‘soreille’ when really it’s the ‘oreille’ part that’s important.  However, I couldn’t just split words up on an apostrophe and use the part on the right as apostrophes appear elsewhere in the citations too.  I managed to write a script that successfully split words on apostrophes and retained the sections on both sides as individual search word forms (if they are alphanumeric).  Whilst writing this script I also fixed an issue with how the data stripped of XML tags is processed.  Occasionally there are no spaces between a word and a tag that contains data, and when my script removed tags to generate the plain text required for extracting the search words this led to a word and the contents of the following tag being squashed together, resulting in forms such as ‘apresentDsxiii1’.  By adding spaces between tags I managed to get around this problem.
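The apostrophe handling now works roughly like this (a sketch of the rule; the actual extraction happens in the script that generates the search terms):

```javascript
// Sketch of generating search forms from citation words containing
// apostrophes: split on the apostrophe and keep each alphanumeric part,
// so "s'oreille" contributes both "s" and "oreille" rather than "soreille".
function apostropheForms(word) {
  if (!word.includes("'")) return [word];
  return word.split("'")
             .filter(part => /^[\p{L}\p{N}]+$/u.test(part)); // alphanumeric parts only
}

console.log(apostropheForms("s'oreille")); // ['s', 'oreille']
```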

With these tweaks in place I then moved onto the next advanced search option:  the English translations.  I extracted the translations from the XML and generated the unique words found in each (with a count of their occurrences), also storing the Sense IDs for the senses in which the translations were found so that I could connect the translations up to the citations found within the senses in order to enable a date search (i.e. limiting a search to only those translations that are in a sense that has a citation in a particular year or range of years).  The search works in a similar way to the citation search, in that you can enter a search term (e.g. ‘bread’) and this will lead you to an intermediary page that lists all words in translations that match ‘bread’.  You can then select one to view all of the entries with their translation that feature the word, with it highlighted.  If you supply a year or a range of years then the search connects to the citations and only returns translations for senses that have a citation date in the specified year or range.  This connects citations and translations via the ‘senseid’ in the XML.  So for example, if you only want to find translations containing ‘bread’ that have a citation between 1350 and 1400 you can do so.  There are still some tweaks that need to be done.  For example, one inconsistency we might need to address is that the number in brackets in the intermediary page refers to the number of translations / citations the word is found in, but when you click through to the full results the ‘matched results’ number will likely be different because this refers to matched entries, and an entry may contain more than one matching translation / citation.

I then moved onto the final advanced search option, the label search.  This proved to be a pretty tricky undertaking, especially when citation dates also have to be taken into consideration.  I didn’t manage to get the search working this week, but I did get the form where you can build your label query up and running on the advanced search page.  If you select the ‘Semantic & Usage Labels’ tab you should see a page with a ‘citation date’ box, a section on the left that lists the labels and a section on the right where your selection gets added.  I considered using tooltips for the semantic label descriptions, but decided against it as tooltips don’t work so well on touchscreens and I thought the information would be pretty important to see.  Instead the description (where available) appears in a smaller font underneath the label, with all labels appearing in a scrollable area.  The number on the right is the number of senses (not entries) that have the label applied to them, as you can see in the following screenshot:

As mentioned above, things are seriously complicated by the inclusion of citation dates.  Unlike with other search options, choosing a date or a range here affects the search options that are available.  E.g. if you select the years 1405-1410 then the labels used in this period and the number of times they are used differs markedly from the full dataset.  For this reason the ‘citation date’ field appears above the label section, and when you update the ‘citation date’ the label section automatically updates to only display labels and counts that are relevant to the years you have selected.  Removing everything from the ‘citation date’ resets the display of labels.

When you find labels you want to search for pressing on the label area adds it to the ‘selected labels’ section on the right.  Pressing on it a second time deselects the label and removes it from the ‘selected labels’ section.  If you select more than one label then a Boolean selector appears between the selected label and the one before, allowing you to choose AND, OR, or NOT, as you can see in the above screenshot.

I made a start on actually processing the search, but it’s not complete yet and I’ll have to return to this next week.  However, building complex queries is going to be tricky as without a formal querying language like SQL there are ambiguities that can’t automatically be sorted out by the interface I’m creating.  E.g. how should ‘X AND Y NOT Z OR B’ be interpreted?  Is it ‘(X AND Y) NOT (Z OR B)’ or ‘((X AND Y) NOT Z) OR B’ or ‘(X AND (Y NOT Z)) OR B’ etc.  Each would give markedly different results.  Adding more than two or possibly three labels is likely to lead to confusing results for people.
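One way to resolve the ambiguity would be to evaluate the chain of labels strictly left to right over sets of matching senses, so that ‘X AND Y NOT Z OR B’ is read as ‘((X AND Y) NOT Z) OR B’. Here is a sketch of that approach (not necessarily how the finished search will work):

```javascript
// Sketch of evaluating a label chain strictly left to right over sets of
// sense IDs. This is one way of resolving the ambiguity described above,
// not necessarily how the finished search will work.
function evaluateLabelQuery(terms, senseIdsForLabel) {
  // terms: [{ label: 'X' }, { op: 'AND', label: 'Y' }, ...]
  let result = new Set(senseIdsForLabel(terms[0].label));
  for (const term of terms.slice(1)) {
    const next = new Set(senseIdsForLabel(term.label));
    if (term.op === 'AND')      result = new Set([...result].filter(id => next.has(id)));
    else if (term.op === 'NOT') result = new Set([...result].filter(id => !next.has(id)));
    else if (term.op === 'OR')  next.forEach(id => result.add(id));
  }
  return result;
}

// Toy data standing in for the label-to-sense lookup
const toyData = { X: [1, 2, 3], Y: [2, 3, 4], Z: [3], B: [9] };
const lookup = label => toyData[label] || [];
console.log([...evaluateLabelQuery(
  [{ label: 'X' }, { op: 'AND', label: 'Y' }, { op: 'NOT', label: 'Z' }, { op: 'OR', label: 'B' }],
  lookup
)]); // [2, 9] – i.e. ((X AND Y) NOT Z) OR B
```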

Other than working on the AND I spent some time this week working on the Place-names of Iona project. We had a team meeting on Friday morning and after that I began work on the interface for the website. This involved the usual tasks: installing a theme, customising fonts, selecting colour schemes, adding in logos, and creating menus and an initial site structure. As with the Mull site, the Iona site is going to be bilingual (English and Gaelic) so I needed to set this up too. I also worked on the banner image, combining a lovely photo of Iona from Shutterstock with a map image from the NLS. It’s almost all in place now, but I’ll need to make a few further tweaks next week. I also set up the CMS for the project, as we have decided not to just share the Mull CMS. I migrated the CMS and all of its data across and then worked on a script that would pick out only those place-names from the Mull dataset that are of relevance to the Iona project. I did this by drawing a box around the island using this handy online interface: https://geoman.io/geojson-editor and then grabbing the coordinates. I needed to reverse the latitude and longitude of these due to GeoJSON using them the other way around from other systems, and then I plugged these into a nice little algorithm I discovered for working out which coordinates are within a polygon (see https://assemblysys.com/php-point-in-polygon-algorithm/). This resulted in about 130 names being identified, but I’ll need to tweak this next week to see if my polygon area needs to be increased.
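The point-in-polygon algorithm I used is in PHP, but the ray-casting idea behind it is easy to sketch in JavaScript (the coordinates below are illustrative only):

```javascript
// Ray-casting point-in-polygon test, sketched in JavaScript (the linked
// algorithm is PHP). Note that a polygon drawn in the GeoJSON editor stores
// coordinates as [longitude, latitude], so they need swapping before being
// compared against place-name latitude/longitude values.
function pointInPolygon(lat, lon, polygon) {
  // polygon: array of [lat, lon] pairs describing the boundary
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [latI, lonI] = polygon[i];
    const [latJ, lonJ] = polygon[j];
    const intersects = (lonI > lon) !== (lonJ > lon) &&
      lat < ((latJ - latI) * (lon - lonI)) / (lonJ - lonI) + latI;
    if (intersects) inside = !inside;
  }
  return inside;
}

// Roughly rectangular box around an island (illustrative coordinates only)
const box = [[56.36, -6.45], [56.36, -6.36], [56.29, -6.36], [56.29, -6.45]];
console.log(pointInPolygon(56.3258, -6.4198, box)); // true – inside the box
console.log(pointInPolygon(56.40, -6.30, box));     // false – outside
```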

For the remainder of the week I upgraded all of the WordPress sites I manage to the most recent version (I manage 39 such sites so this took a little while).  I also helped Simon Taylor to access the Berwickshire and Kirkcudbrightshire place-names systems again and fixed an access issue with the Books and Borrowing CMS.  I also looked into an issue with the DSL test sites as the advanced searches on each of these had stopped working.  This was caused by an issue with the Solr indexing server that thankfully Arts IT Support were able to address.

Next week I’ll continue with the AND redevelopment and also return to working on the DSL for the first time in quite a while.