Week Beginning 15th February 2021

I spent quite a bit of time this week continuing to work on the Anglo-Norman Dictionary, creating a new ‘bibliography’ page that will replace the existing ‘source texts’ page and uses the new source text management scripts that I recently added to the new content management system.  This required a fair amount of work, as I needed to update the API to use the new source texts table and to incorporate source text items, which took some time.  I then created the new ‘bibliography’ page, which uses the new source text data.  There is new introductory text and each item features the new fields requested by the editors.  ‘Dean’ references always appear, the title and author are in bold, and ‘details’ and ‘notes’ appear when present.  If a source text has one or more items these are listed in numeric order, in a slightly smaller font and indented, with brackets added around page numbers.  I also had to change the way the source texts were ordered: previously the list was ordered by the ‘slug’, but with the updates to the data it sometimes happens that the ‘slug’ doesn’t begin with the same letter as the siglum text, which was messing up the order and the alphabetical buttons.  Now the list is ordered by the siglum text stripped of any tags and all seems to be working fine.  I will still need to update the links from dictionary items to the bibliography when the new page goes live, and update the search facilities too, but I’ll leave this until we’re ready to launch the new page.
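
To give a rough idea of the new ordering, here is a minimal sketch of the approach (the array and field names are illustrative rather than the actual ones used in the system):

usort($sourceTexts, function ($a, $b) {
    // Compare the siglum text with any markup stripped out, ignoring case
    return strcasecmp(strip_tags($a['siglum']), strip_tags($b['siglum']));
});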

During the week I made a number of further tweaks to the new bibliography page based on feedback from the editors.  One big change was to the way the page is split up, in order to allow links to specific bibliographical items to be added.  Previously the selection of a section of the bibliography based on its initial letter was handled in the browser via JavaScript.  This made it fast to switch between letters, but meant that it was not possible to easily link to a specific section of the bibliography.  I changed this so that the selection is handled on the server side.  This does mean that each time a letter is pressed the whole page needs to reload, which is a bit slower, but it also means you can bookmark a specific letter, e.g. bibliographies beginning with ‘T’.  It also means it’s possible to link to a specific item within a page.  Each item in the page has an ID in the HTML consisting of ‘bib-‘ plus the item’s slug.  To link to a specific item you add the page URL, a hash, and then this ID; when the page loads it will jump down to the relevant section.

I also had to change the way items within bibliographical entries were ordered.  These were previously ordered on the ‘numeral’ field, which contained a Roman numeral.  I’d written a bit of a hack to ensure that these were ordered correctly up to 20, but it turns out that there are some entries with more than 60 items, and some of them have non-standard numerals, such as ‘IXa’.  I decided that it would be too complicated to use the ‘numeral’ field for ordering as the contents are likely to be too inconsistent for a computer to automatically order successfully.  I therefore created a new ‘itemorder’ column in the database that holds a numerical value that decides the order of the items.  I wrote a little script that populates this field for the items already in the system and for any bibliographical entry with 20 or fewer items the order should be correct without manual intervention.  For the handful of entries with more than 20 items the editors will have to manually update the order.  I updated the DMS so that the new ‘item order’ field appears when you add or edit items, and this will need to be used for each item to rearrange the items into the order they should be in.  The new bibliography page uses the new itemorder field so updates are reflected on this page.
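
As a rough illustration of what the population script does, here is a minimal sketch (the table, column and variable names are illustrative, and $db stands for a PDO database connection):

// Map the standard Roman numerals I to XX to integer positions
$romanOrder = ['I'=>1,'II'=>2,'III'=>3,'IV'=>4,'V'=>5,'VI'=>6,'VII'=>7,'VIII'=>8,'IX'=>9,'X'=>10,
    'XI'=>11,'XII'=>12,'XIII'=>13,'XIV'=>14,'XV'=>15,'XVI'=>16,'XVII'=>17,'XVIII'=>18,'XIX'=>19,'XX'=>20];
$items = $db->query('SELECT id, numeral FROM bibliography_items')->fetchAll(PDO::FETCH_ASSOC);
$update = $db->prepare('UPDATE bibliography_items SET itemorder = ? WHERE id = ?');
foreach ($items as $item) {
    $numeral = strtoupper(trim((string) $item['numeral']));
    // Anything that isn't a plain numeral up to XX (e.g. 'IXa') is left at 0
    // for the editors to set manually via the DMS.
    $update->execute([$romanOrder[$numeral] ?? 0, $item['id']]);
}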

I also needed to update the system to correctly process multiple DEAF links, which I’d forgotten to do previously, made some changes to the ordering of items (e.g. so that entries with a number appear before entries with the same text but without a number) and added in an option to hide certain fields by adding a special character into the field.  Also for the AND I updated the XML of an entry and continued to migrate blog posts from the old blog to our new system.

I then began work on the pages of the CMS that will be used for uploading, viewing and downloading entries.  I added an option to the CMS that allows the editors to choose an entry to view all of the data stored about it and to download its XML for editing.  This consists of a textbox into which an entry’s slug can be entered.  After entering the slug and pressing ‘Go’ a page loads that lists all of the data stored about the entry in the system, such as its ID, the ID from the old system, last editor and date of last edit.  You can also access the XML of the entry if you scroll down to the ‘XML’ section of the page.  The XML is hidden in a collapsed section of the page and if you click on the header it expands.  I’ve added in styles to make it easier to read, using a very nice JavaScript library called prism.js (https://prismjs.com/).  There is also a button to download the XML.  Pressing on this prompts you to save the file, and the filename consists of the entryorder plus the entry ID.  This section of the page will also keep a record of all previous versions of the XML when a new version is uploaded into the system (once I develop the upload feature).  This will allow you to access, check and download older versions of the XML, if some mistake has been made when uploading a new version.
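
For anyone interested, the prism.js highlighting just requires the escaped XML to be wrapped in a code element with an appropriate language class.  Here is a minimal sketch of the sort of markup the page outputs (the file paths and variable name are illustrative, and the exact language class depends on which Prism components are bundled):

<link rel="stylesheet" href="/libraries/prism/prism.css">
<!-- The entry XML is escaped so that it displays rather than being parsed as HTML -->
<pre><code class="language-xml"><?php echo htmlspecialchars($entry['xml']); ?></code></pre>
<script src="/libraries/prism/prism.js"></script>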

Beneath the XML section you can view all of the information that is extracted from the XML and used in the system for search and display purposes: forms, parts of speech, cross references, labels, citations and translations.  This is to enable the editors to check that the data extracted and used by the system is correct.  I could possibly add in options to edit this data, but any edits made would then be overwritten the next time an XML file is uploaded for the entry, so I’m not sure how useful this would be.  I think it would be better to limit the editing of this information to uploads of new XML files only.

However, we may want to make some of the information in this page directly editable, specifically some of the fields in the first table on the page.  The editors may want to change the lemma or homonym number, or the slug or entry order.  Similarly the editors may want to manually override the earliest date for the entry (although this would then be overwritten when a new XML version is uploaded) or change the ‘phase’ information.

The scripts to upload a new XML entry are going to take some time to get working, but at least for now you can view and download entries as required.

Also this week I dealt with a few queries about the Symposium for Seventeenth-Century Scottish Literature, which was taking place online this week and for which I had set up the website.  I also spoke to Arts IT Support about getting a test server set up for the Historical Thesaurus.  I spent a bit of time working for the Books and Borrowing project, processing images for a ledger from Edinburgh University Library, uploading these to the server and generating page records and links between pages for the ledger.  I also gave some advice to the Scots Language Policy RA about how to use the University’s VPN, spoke to Jennifer Smith about her SCOSYA follow-on funding proposal and had a chat with Thomas Clancy about how we will use GIS systems in the Iona project.

Week Beginning 8th February 2021

I was on holiday from Monday to Wednesday this week to cover the school half-term, so only worked on Thursday and Friday.  On Thursday I had a Zoom call with the Historical Thesaurus team to discuss further imports of new data from the OED and how to export our data (such as the revised category hierarchy) in a format that the OED team would be able to use.  We have a meeting with the OED the week after next so it was good to go over some of the issues and refresh my memory about where things were left off as it’s been several months since I last did any major work on the HT.  As a result of the meeting I also did some further work, namely exporting the current version of the online database and making it available for Fraser to download and access on his own PC, and updating some of the earlier scripts I’d created to generate statistics about the unmatched categories and words so that they used the most recent versions of the database.

Also this week I made some further tweaks to the SCOSYA website and created a user account for a researcher who is going to work with some of the data that is only available in the project’s CMS rather than the public website.  I also read through a new funding proposal that Wendy Anderson is involved with and gave her some feedback on that, and reported a couple of issues with expired SSL certificates that were affecting some websites.

I spent some time on the Books and Borrowing project on two data-related tasks.  First was to look through the new set of digitised images from Edinburgh University Library and decide what we should do with them.  Each image is of an open book, featuring both recto and verso pages in one image.  We may need to split these up into individual images, or we may just create page records that cover both pages.  I alerted the project PI Katie Halsey to the issue and the team will make a decision about which approach to take next week.  The second task was to look through the data from Selkirk library that another project had generated.  We had previously imported data for Selkirk that another researcher had compiled a few years before our project began, but recently discovered that this data did not include several thousand borrowing records of French prisoners of war, as the focus of the researcher was on Scottish borrowers.  We need these missing records and another project has agreed to let us use their data.  I had intended to completely replace the database I’d previously ingested with this new data, but on closer inspection of the new data I have a number of reservations about doing so.

The data from the other project has been compiled in an Excel spreadsheet and as far as I can tell there is no record of the ledger volume or page that each borrowing record was originally located on.  In the data we already have there is a column for ‘source ref’, containing the ledger volume (e.g. ‘volume 1’), and a column for ‘page number’, containing a unique ID for each page in the spreadsheet (e.g. ‘1010159r’).  Looking through the various sheets in the new spreadsheet there is nothing comparable to this, and this information is vital for our project, as borrowing records must be associated with page records, which in turn must be associated with a ledger.  The absence of this information would also make it extremely difficult to trace a record back to the original physical record.

Another issue is that in our existing data the researcher has very handily used unique identifiers for readers (e.g. ‘brodie_james’), borrowing records (e.g. ‘1’) and books (e.g. ‘adam_view_religion’) that tie the various records together very nicely.  The new project’s data does not appear to use any unique identifiers to connect bits of data together.  For example, there are three ‘John Anderson’ borrowers and in the data we’re currently using these are differentiated by their IDs as ‘anderson_john’, ‘anderson_john2’ and ‘anderson_john3’.  This means it’s easy to tell which borrower appears in the borrowing records.  In the new project’s data three different fields are required to identify the borrower:  surname, forename and residence.  This data is stored in separate columns in the ‘All loans’ sheet (e.g. ‘Anderson’, ‘John’, ‘Cramalt’), but in the ‘Members’ sheet everything is joined together in one ‘Name’ field, e.g. ‘Anderson, John (Cramalt)’.  This lack of unique identifiers combined with the inconsistent manner of recording name and place will make it very difficult to automatically join up records and I’ve flagged this up with Katie for further discussion with the team.  It’s looking like we may want to try and identify the POW records from the new project’s data and amalgamate these with the data we already have, rather than replacing everything.

I also spent a bit of time on the Anglo-Norman Dictionary this week, making some changes to homonym numbers for a few entries and manually updating a couple of commentaries.  I also worked for the Dictionary of the Scots Language, preparing the SND and DOST datasets for import into the new editing system that the project is now going to use.  This was a little trickier than anticipated as initially I zipped up the data that I’d exported from the old editing system in November when I worked on the new ‘V4’ version of the online API, but we realised that this still contained duplicates that I’d stripped out when uploading the data into the new online database.  So instead I exported the XML from the online database, but it turned out that during the upload process a section of the entry XML was being removed.  This section (<meta>) contained all of the forms and URLs and my upload process exported these to a separate table and reformatted the XML so that it matched the structure that was defined during the creation of the first version of the API.  However, the new editing system requires this <meta> section so the data I’d prepared was not usable.  Instead I took the XML exported from the old editing system back in November and ran it through the script I’d written to strip out duplicates, then prepared the resulting XML dataset for transfer.  It looks like this approach has worked, but I’ll find out more next week.

Week Beginning 1st February 2021

I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.

The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects.  For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet).  I also added commentaries to more than 260 entries, which took some time to test.  I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML.  Without doing this it was possible to add the tags in, but this would give errors in the editing software.  I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required.  I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system.  It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.

For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data.  These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed.  I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.

For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS.  The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR.  The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.

As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2, 4, 6, 8 or 10 digits.  Information about the library, including text boxes where you can enter an NGR and see the results, can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary looking formulae.

The library is written in JavaScript, which runs in the client’s browser, whereas the previous library was written in PHP, which runs on the server.  This means I needed to change the way the CMS works.  Previously you’d enter an NGR and the PHP library would generate the latitude and longitude when the form was submitted to the server; now the latitude and longitude are generated in the browser as soon as the NGR is entered into the textbox, and two further textboxes for latitude and longitude appear in the form and are automatically populated with the results.


This does mean the person filling out the form can see the generated latitude and longitude and also tweak it if required before submitting the form, which is a potentially useful thing.  I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further.  I also still need to work on the format of the latitude and longitude as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848) with ‘W’ and ‘S’ figures being negatives.
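
In the CMS this conversion will happen in the browser along with the rest of the new library’s output, but purely to illustrate the logic here is a small PHP-style sketch (the function name is made up):

function compassToDecimal($value) {
    // e.g. '6.420848° W' becomes -6.420848; 'W' and 'S' values become negative
    if (preg_match('/^([0-9.]+)\s*°?\s*([NSEW])$/u', trim($value), $matches)) {
        $decimal = (float) $matches[1];
        return in_array($matches[2], ['W', 'S']) ? -$decimal : $decimal;
    }
    return null;
}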

However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team.  The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014.  Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section.  Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems.  The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.

The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system.  I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system.  This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22W/@56.3258889,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22W/@56.325744,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848

So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’.  As I say, this will need further discussion before I implement any changes.

Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.

Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.


Week Beginning 25th January 2021

I headed into the University for the first time this year on Wednesday this week to collect a new iPad that I’d ordered and to get some files from my office.  It was great to see the old place again, but it did take quite a chunk out of my day to travel there and back, especially as I’m still home-schooling either a morning or an afternoon each day at the moment too.

As with last week, I mainly divided my time this week between the Dictionary of the Scots Language, the Anglo-Norman Dictionary and the Books and Borrowing project, with a few other bits and bobs added in as well.  For the DSL I retrieved the source code for my original Scots School Dictionary app from my office so we can host this somewhere on the DSL website.  This is because the DSL have commissioned someone else to make a new School Dictionary app, which launched this week, but doesn’t include an ‘English to Scots’ feature as the old app does, so we’re going to make the old app available as a website for those people who miss the feature.  I also made a few minor tweaks to the main DSL site, and then focussed on adding bibliography search facilities to the new version of the API, a task that I’d begun last week.

I created a new table for the bibliographical data that includes the various fields used for DOST (note, author, editor, date, longtitle etc) and a field for the XML data used for SND.  I then created two further tables for searching, one that contains every author and editor name for each item (for DOST there may be different names in the author, editor, longauthor and longeditor fields while for SND there may be any number of <author> tags) and the other containing every title for each item (DOST may have different text in title and longtitle while SND items can have any number of <title> tags).  These tables allow you to search for any variant author, editor or title and find the item.

I also created two additional fields in the bibliography table that contain the ‘display author’ and ‘display title’.  These are the forms that get displayed in the search results before you click on an item to open the full bibliographical entry.  I then updated the V4 API to add in facilities to search and retrieve the bibliographies.  I didn’t have time to connect to this API and implement the search on the Sienna test site, which is something I hope to do next week, but the logic behind the search and display of bibliographies is all there.  There is a predictive search that will be used to generate the autocomplete list, similar to how the live site currently works:  you will be able to select whether your search is for authors, titles or both, and when you start typing in some text a list of matching items will appear.  E.g. typing in ‘ham’ for authors in both dictionaries will display all items containing ‘ham’, and when you select an item this will then perform a search for that specific text.  You will then be able to click on an item to view the full bibliography.  This is a bit different to how the live site currently works, as there if you enter ‘ham’ and select (for example) ‘Hamilton, J.’ from the autocomplete list you are taken directly to a page that lists all of the items for the author.  However, we can’t do that any more as we no longer have unique identifiers that group bibliographical items by author.  I may be able to do something similar with the page that comes up when you select an author, but this would have to rely on the name to group items together and a name may not be unique.
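
Behind the scenes the predictive search is essentially a pattern match against the new name and title tables.  Here is a rough sketch of the sort of query involved (the table and column names are illustrative rather than the actual schema):

$stmt = $db->prepare(
    'SELECT DISTINCT b.id, b.display_author, b.display_title
     FROM bibliography b
     INNER JOIN bibliography_names n ON n.bib_id = b.id
     WHERE n.name LIKE ?
     ORDER BY b.display_author
     LIMIT 25'
);
$stmt->execute(['%' . $term . '%']);
$matches = $stmt->fetchAll(PDO::FETCH_ASSOC);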

For the AND I made some tweaks to the website, such as adding a link to the search page if you type some text into the ‘jump to entry’ option and no matching entries are found.  I then spent the rest of my time continuing to develop the new content management system, specifically the pages for managing source texts.  I finished work on this, adding in facilities to add, edit, browse and delete source texts from the database.  I then migrated the DTD to the new site, which is referenced by the editors’ XML editor when they work on the entry XML files.  The DTD on the old server referenced several lists of things that are then used to populate drop-down lists of options in the XML editor.  I migrated these too, making them dynamically generated from the underlying database rather than static lists, meaning that when (for example) new source texts are added to the CMS these will automatically become available when using the XML editor.

For the Books and Borrowing project I participated in the project’s Zoom call on Monday to discuss the project’s CMS and how to amalgamate the various duplicate author records that resulted from data uploads from different libraries.   After the call I made some required changes to the CMS, such as making the editor’s notes fields visible by default again, and worked on the duplicate authors matching script to add in further outputs when comparing the author names with Levenshtein ratings of 1 and 2.  I also reviewed some content that was sent to us from another library.

Also this week I responded to an email from James Caudle in Scottish Literature about a potential project he’s setting up, made a couple of changes to the Scots Language Policy website, made some tweaks to the menu structure for the Scots Syntax Atlas project and gave some advice to a post-grad student who had contacted me about setting up a corpus.

Week Beginning 18th January 2021

I worked on many different projects this week, with most of my time being split between the Dictionary of the Scots Language, the Anglo-Norman Dictionary, the Books and Borrowing project and the Scots Language Policy project.  For the DSL I began investigating adding the bibliographical data to the new API and developing bibliographical search facilities.  Ann Ferguson had sent me spreadsheets containing the current bibliographical data for DOST and SND and I migrated this data into a database and began to think about how the data needs to be processed in order to be used on the website.  At the moment links to bibliographies from SND entries are not appearing in the new version of the API, while DOST bibliographical links do appear but don’t lead anywhere.  Fixing the latter should be fairly straightforward but the former looks to be a bit trickier.

For SND, on the live site using the original V1 API it looks like the bibliographical links are stored in a database table and are then injected into the XML entries whenever an entry is displayed.  A column in the table contains the order in which the citation appears in the entry, and this is how the system knows which bibliographical ID to assign to which link in the entry.  This raises some questions about what happens when an entry is edited.  If the order of the citations in the XML is changed, or a new citation is added, then all of the links to the bibliographies will be out of sync.  Plus, unless the database table is edited no new bibliographical links will ever display.  It is possible that the data in the bibliographical links table is already out of date, and we are going to need to try and find a way to add these bibliographical links into the actual XML entries rather than retaining the old system of storing them separately and then injecting them each time the entry is requested.  I emailed Ann for further discussion about these points.  Also this week I made a few updates to the live DSL website, changing the logos that are used and making ‘Dictionary’ in the title plural.

For the AND this week I added in the missing academic articles that Geert had managed to track down and then began focusing on updating the source texts and working with the commentaries for the R data.  The commentaries were sent to me in two Word files, and although we had hoped to be able to work out a mechanism for automatically extracting these and adding them to their corresponding entries it looks like this will be very difficult to achieve with any accuracy.  I concluded that I could split the entries up in Geert’s document based on the ‘**’ characters between commentaries and possibly split Heather’s up based on blank lines.  I could possibly retain the formatting (bold, italic, superscript text etc) and convert this to HTML, although even this would be tricky, time consuming and error-prone.  The commentaries include links to other entries in bold, and I would possibly be able to automatically add in links to other entries based on entries appearing in bold in the commentaries, but again this would be highly error-prone as bold text is used for things other than entries, and sometimes the entry number follows a hash while at other times it’s superscript.  It would also be difficult to automatically ascertain which entry a commentary belongs to as there is some inconsistency here too – e.g. the commentary for ‘remuement’ is listed as ‘[remuement]??’ and there are other occasions where the entry doesn’t appear on its own on a line – e.g. ‘Retaillement xref with recelement’ and ‘Reverdure—Geert says to omit’.  Then there are commentaries that are all crossed out, e.g. ‘resteot’.  We decided that attempting to automatically process the commentaries would not be feasible and instead the editors would add them to the entry XML files manually, adding the tags for bold, italic, superscript and other formatting as required.  Geert added commentaries to two entries to see how this would work and it worked very well.

For the source texts, we had originally discussed the editors editing these via a spreadsheet that I’d generated from the online data last year, but I decided it would be better if I just start work on the new online Dictionary Management System (DMS) and create the means of adding, listing and editing the source texts as the first thing that can be managed via the new DMS.  This seemed preferable to establishing a new, temporary workflow that may take some time to set up and may end up not being used for very long.  I therefore created the login and initial pages for the DMS (by repurposing earlier content management systems I’d created).  I then set up database tables for holding the new source text data, which includes multiple potential items for each source and a range of new fields that the original source text data does not contain.  With this in place I created the DMS pages for browsing the source texts and deleting them, and I’m midway through writing the scripts for editing existing and adding new source texts.  I aim to have this finished next week.

For the Books and Borrowing project I continued to make refinements to the CMS, namely reducing the number of books and borrowers from 500 to 200 to speed up page loads, adding in the day of the week that books were borrowed and returned, based on the date information already in the system, removing tab characters for edition titles as these were causing some issues for the system, replacing the editor’s notes rich text box with a plain text area to save space on the edit page and adding a new field to the borrowing record that allows the editor to note when certain items appear for display only and should otherwise be overlooked, for example when generating stats.  This is to be used for duplicate lines and lines that are crossed out.  I also had a look through the new sample data from Craigston that was sent to us this week.

For the Scots Language Policy project I set up the project’s website, including the user interface, adding in fonts, plugins, initial page structure, site graphics, logos etc.  Also this week I fixed an issue with song downloads on the Burns website (the plugin that controls the song downloads is very old and had broken; I needed to install a newer version and upgrade the song data for the downloads to work again).  I also continued my email conversation with Rachel Fletcher about a project she’s putting together and created a user account to allow Simon Taylor to access the Ayr Placenames CMS.

Week Beginning 11th January 2021

This was my first full week back of the year, although it was also the first week of a return to homeschooling, which made working a little trickier than usual.  I also had a dentist’s appointment on Tuesday and lost some time to that due to my dentist being near the University rather than where I live.  However, despite these challenges I was able to achieve quite a lot this week.  I had two Zoom calls, the first on Monday to discuss a new ESRC grant that Jane Stuart-Smith is putting together with colleagues at Strathclyde while the second on Wednesday was with a partner in Joanna Kopaczyk’s new RSC funded project about Scots Language Policy to discuss the project’s website and the survey they’re going to put out.  I also made a few tweaks to the DSL website, replied to Kirsteen McCue about the AHRC proposal she’s currently putting together, replied to a query regarding the technologies behind the Scots Syntax Atlas, made a few further updates to the Burns Supper map and replied to a query from Rachel Fletcher in English Language about lemmatising Old English.

Other than these various tasks I split my time between the Anglo-Norman Dictionary and the Books and Borrowing projects.  For the former I completed adding explanatory notes to all of the ‘Introducing the AND’ pages.  This was a very time consuming task as there were probably about 150 explanatory notes in total to add in, each appearing in a Bootstrap dialog box, and each requiring me to copy the note from the old website, add in any required HTML formatting, find and check all of the links to AND entries on the old site and add these in as required.  It was pretty tedious to do, but it feels great to get it done, as the notes were previously just giving 404 errors on the new live site, and I don’t like having such things on a site I’m responsible for.  I also migrated the academic articles from the old site to the new one (https://anglo-norman.net/articles/), which also required some manual formatting of the content.  There are five other articles that I haven’t managed to migrate yet as they are full of character encoding errors on the old site.  Geert is looking for copies of these articles that actually work and I’ll add them in once he’s able to get them to me.  I also began migrating the blog posts to the new site.  Currently the blog is hosted on Blogspot and there are 55 entries, but we’d like these to be an internal part of the new site.  Migrating these is going to take some time as it means copying the text (which thankfully retains formatting) and then manually saving and embedding any images in the posts.  I’m just going to do a few of these a week until they’re all done and so far I’ve migrated seven.  I also needed to look into how the blogs page works in the WordPress theme I created for the AND, as to start with the page was just listing the full text of every post rather than giving summaries and links through to the full text of each.  After some investigation I figured out that in my theme there is a script called ‘home.php’ and this is responsible for displaying all of the blog posts on the ‘blog’ page.  It in turn calls another template called ‘content-blog.php’ which was previously set to display the full content of each post.  Instead I set it to display the title as a link through to the full post, the date and then an excerpt from the full blog, which can be accessed through a handy WordPress function called ‘the_excerpt()’.
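
As a simplified sketch of what the updated template now does (the real content-blog.php includes the theme’s own wrapper markup and classes):

<article <?php post_class(); ?>>
    <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
    <p class="post-date"><?php echo get_the_date(); ?></p>
    <?php the_excerpt(); // outputs a trimmed excerpt of the post content ?>
</article>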

For the Books and Borrowing project I made some improvements and fixes to the Content Management System.  I’d been meaning to enhance the CMS for some time, but due to other commitments to other projects I didn’t have the time to delve into it.  It felt good to find the time to return to the project this week.

I updated the ‘Books’ and ‘Borrowers’ tabs when viewing a library in the CMS.  I added in pagination to speed up the loading of the pages.  Pages are now split into 500 record blocks and you can navigate between pages using the links above and below the tables.  For some reason the loading of the page is still a bit slow on the Stirling server whereas it was fine on the Glasgow server I was using for test purposes.  I’m not entirely sure why as I’d copied the database over too – presumably the Stirling server is slower.  However, it is still a massive improvement on the speed of the page previously.
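
The pagination itself is straightforward: each page of the table just requests one block of records at a time, along the lines of this sketch (the table, column and variable names are illustrative):

$perPage = 500;
$offset = ($pageNum - 1) * $perPage;
$stmt = $db->prepare('SELECT * FROM books WHERE library_id = ? ORDER BY title LIMIT ? OFFSET ?');
$stmt->bindValue(1, $libraryId, PDO::PARAM_INT);
$stmt->bindValue(2, $perPage, PDO::PARAM_INT);
$stmt->bindValue(3, $offset, PDO::PARAM_INT);
$stmt->execute();
$books = $stmt->fetchAll(PDO::FETCH_ASSOC);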

I also changed the way tables scroll horizontally.  Previously if a table was wider than the page a scrollbar appeared above and below the table, but this was rather awkward to use if you were looking at the middle of the table (you had to scroll up or down to the beginning or end of the table, then use the horizontal scrollbar to move the table along a bit, then navigate back to the section of the page you were interested in).  Now the scrollbar just appears at the bottom of the browser window and can always be accessed no matter where in the table you are.

I also removed the editorial notes from tables by default to reduce clutter, and added in a button for showing / hiding the editors’ notes near the top of each page.  I also added a limit option in the ‘Books’ and ‘Borrowers’ pages within a library to limit the displayed records to only those found in a specific ledger.  I added in a further option to display those records that are not currently associated with any ledgers too.

I then deleted the ‘original borrowed date’ and ‘original returned date’ fields from the St Andrews data, as these were no longer required, removing both the fields themselves and all of the data they contained.

It had been noted that the book part numbers were not being listed numerically.  As part numbers can contain text as well as numbers (e.g. ‘Vol. II’), this field in the database needed to be set as text rather than an integer.  Unfortunately the database doesn’t order numbers correctly when they are contained in a non-numerical field  – instead all the ones come first (1, 10, 11) then all the twos (2, 20, 22) etc.  However, I managed to find a way to ensure that the numbers are ordered correctly.
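
The usual trick in MySQL, and a rough illustration of the sort of query involved (the table and column names are made up), is to cast the field to a number for the primary sort and fall back to the text itself:

// '1', '2', '10' sort numerically; values such as 'Vol. II' cast to 0 and
// then sort alphabetically among themselves.
$parts = $db->query('SELECT * FROM book_parts ORDER BY CAST(part_number AS UNSIGNED), part_number')
            ->fetchAll(PDO::FETCH_ASSOC);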

I also fixed the ‘Add another Edition/Work to this holding’ button that was not working.  This was caused by the Stirling server running a different version of PHP that doesn’t allow functions to have variable numbers of arguments.  The autocomplete function was also not working at edition level and I investigated this.  The issue was being caused by tab characters appearing in edition titles, and I updated my script to ensure these characters are stripped out before the data is formatted as JSON.

There may be further tweaks to be made – I’ll need to hear back from the rest of the team before I know more, but for now I’m up to date with the project.  Next week I intend to get back into some of the larger and trickier outstanding AND tasks (of which there are, alas, many) and to begin working towards adding the DSL bibliography data into the new version of the API.

Week Beginning 4th January 2021

This was my first week back after the Christmas holidays, and I only worked the Thursday and the Friday.  We’re back in full lockdown and homeschooling again now, so it’s not the best of starts to the new year.  I spent my two days this week catching up with emails and finishing off some outstanding tasks from last year.  I spoke to Joanna Kopaczyk about her new RSE funded project that I need to set up a website for, and I had a chat with the DSL people about the outstanding tasks that still need to be tackled for the Dictionary of the Scots Language.  I also added a few more Burns Suppers to the Supper Map that I created over the past year for Paul Malgrati in Scottish Literature, which was a little time consuming as the data is contained in a spreadsheet featuring more than 70 columns.

I spent the remainder of the week continuing to work on the new Anglo-Norman Dictionary site, which we launched just before Christmas.  The editors, Geert and Heather, had spotted some issues with the site whilst using it so I had a few more things to add to my ‘to do’ list, some of which I ticked off.  One such thing was that entries with headwords that consisted of multiple words weren’t loading.  This required an update to the way the API handles variables passed in URL strings, and after I implemented that such entries then loaded successfully.

A bigger issue was the fact that some citations were not appearing in the entries.  This took some time to investigate but I eventually tracked down the problem.  I’d needed to write a script that reordered all of the citations in every sense in every entry by date, as previously the citations were not in date order.  However, when looking at the entries that had missing citations it would appear that where a sense has more than one citation in the same year only one of these citations was appearing.  This is because within each sense I was placing the citations in an array with the year as the key, e.g.:

$citation[“1134”] = citation 1

$citation[“1362”] = citation 2

$citation[“1247”] = citation 3

I was then reordering the array based on the key to get things in year order.  But where there were multiple citations in a single year for a sense this approach wasn’t working as the array key needs to be unique.  So if there were two ‘1134’ citations only one was being retained.  To fix this I updated the reordering script to add a further incrementing number to the key, so if there are two ‘1134’ citations the key for the first is ‘1134-1’ and the second is ‘1134-2’.  This ensures all citations for a year are retained and the sorting by key still works.  After implementing the fix and rerunning the citation ordering script I updated the XML in the online database and the missing citations are now thankfully appearing online.
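
In code terms the fix is just a matter of making the keys unique before sorting.  Here is a simplified sketch, where $attestations stands for the citation elements within a sense and getCitationYear() is a hypothetical helper that pulls the year out of a citation:

$citations = [];
$countsPerYear = [];
foreach ($attestations as $att) {
    $year = getCitationYear($att); // e.g. '1134'
    $countsPerYear[$year] = ($countsPerYear[$year] ?? 0) + 1;
    // The first 1134 citation gets the key '1134-1', the second '1134-2', and so on
    $citations[$year . '-' . $countsPerYear[$year]] = $att;
}
ksort($citations, SORT_NATURAL); // orders by year, then by the added counter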

I ended the week by continuing to work through the ancillary pages of the dictionary, focusing on the ‘Introducing the AND’ pages (https://anglo-norman.net/introducing-the-and/).  I’d managed to get the main content of the pages in place before Christmas, but explanatory notes and links were not working.  There are about 50 explanatory notes in the ‘Magna Carta’ page and I needed to copy all of these from the old site and add them to a Bootstrap dialog pop-up, which was rather time-consuming.  I also had to update the links through to the dictionary entries as although I’d added redirects to ensure the old links worked, some of the links in these pages didn’t feature an entry number where one was required.  For example on the page about food there was a link to ‘pere’ but the dictionary contains three ‘pere’ entries and the correct one is actually the third (the fruit pear).  I still need to fix links and explanatory notes in the two remaining pages of the introduction, which I will try to get sorted next week.

Week Beginning 14th December 2020

This was my last week before the Christmas holidays, and it was a four-day week as I’d taken Friday off to use up some unspent holidays.  Despite only being four days long it was a very hectic week, as I had lots of loose ends to tie up before the launch of the new Anglo-Norman Dictionary website on Wednesday.  This included tweaking the appearance of ‘Edgloss’ tags to ensure they always have brackets (even if they don’t in the XML), updating the forms to add line breaks between parts of speech and updating the source texts pop-ups and source texts page to move the information about the DEAF website.

I also added in a lot of the ancillary page data, including the help text, various essays, the ‘history’ page, copyright and privacy pages, the memorial lectures and the multi-section ‘introduction to the AND’.  I didn’t quite manage to get all of the links working in the latter and I’ll need to return to this next year.  I also overhauled the homepage and footer, adding in the project’s Twitter feed, a new introduction and adding links to Twitter and Facebook to the footer.

I also identified and fixed an error with the label translations, which were sometimes displaying the wrong translation.  My script that extracted the labels was failing to grab the sense ID for subsenses.  This ID is only used to pull out the appropriate translation, but because of the failure the ID of the last main sense was being used instead.  I therefore had to update my script and regenerate the translation data.  I also updated the label search to add in citations as well as translations.  This means the search results page can get very long as both labels and translations are applied at sense level, so we end up with every citation in a matching sense listed, but apparently this is what’s wanted.

I also fixed the display of ‘YBB’ sources, which for some unknown reason are handled differently to all other sources in the system and fixed the issue with deviant forms and their references and parts of speech.

On Wednesday we made the site live, replacing the old site with the new one, which you can now access here:  https://anglo-norman.net/.  It wasn’t entirely straightforward to get the DNS update working, but we got there in the end, and after making some tweaks to paths and adding in Google Analytics the site was ready to use, which is quite a relief.  There is still a lot of work to do on the site, but I’m very happy with the progress I’ve made with the site since I began the redevelopment in October.

Also this week I set up a new website for phase two of the ‘Editing Burns for the 21st Century’ project and upgraded all of the WordPress sites I manage to the most recent version.  I also arranged a meeting with Jane Stuart-Smith to discuss a new project in the New Year, replied to Kirsteen McCue about a proposal she’s finishing off, replied to Simon Taylor about a new place-name project he wants me to be involved with and replied to Carolyn Jess-Cooke about a project of hers that will be starting next year.

That’s all for 2020.  Here’s hoping 2021 is not going to be quite so crazy!

Week Beginning 7th December 2020

I spent most of the week working on the Anglo-Norman Dictionary as we’re planning on launching this next week and there was still much to be done before that.  One of the big outstanding tasks was to reorder all of the citations in all senses within all entries so they are listed by their date.  This was a pretty complex task as each entry may contain any number of up to four different types of sense:  main senses, subsenses and then main senses and subsenses within locutions.  My script needed to be able to extract the dates for each citation within each of these blocks, figure out their date order, rearrange the citations by this order and then overwrite the XML section with the reordered data.  Any loss of or mangling of the data would be disastrous and with almost 60,000 entries being updated it would not be possible to manually check that everything worked in all circumstances.

Updating the XML proved to be a little tricky as I had been manipulating the data with PHP’s simplexml functions and these don’t include a facility to replace a child node.  This meant that I couldn’t tell the script to identify a sense and replace its citations with a new block.  In addition, the XML was not structured to include a ‘citations’ element that contained all of the individual citations for an entry but instead just listed each citation as an ‘attestation’ element within the sense, so it wasn’t straightforwardly possible to replace the block of citations with an updated block.  Instead I needed to reconstruct the sense XML in its entirety, including both the complete set of citations and all other elements and attributes contained within the sense, such as IDs, categories and labels.  With a completely new version of the sense XML stored in memory by the script I then needed to write this back to the XML, and for this I needed to use PHP’s DOM manipulation functions because (as mentioned earlier) simplexml has no means of identifying and replacing a child node.

I managed to get a version of my script working and all seemed to be well with the entries I was using for test purposes so I ran the script on the full dataset and replaced the data on the website (ensuring that I kept a record of the pre-reordered data handy in case of any problems).  When the editors reviewed the data they noticed that while the reordering had worked successfully for some senses, it had not reordered others.  This was a bit strange and I therefore had to return to my script to figure out what had gone wrong.  I noticed that only the citations in the first sense / subsense / locution sense / locution subsense had been reordered, with others being skipped.  But when I commented out the part of the script that updated the XML all senses were successfully being picked out.  This seemed strange to me as I didn’t see why the act of identifying senses should be affected by the writing of data.  After some investigation I discovered that with PHP’s simplexml implementation if you iterate through nodes using a ‘foreach’ and then update the item picked out by the loop (so for example in ‘foreach($sense as $s)’ updating $s) then subsequent iterations fail.  It would appear that updating $s in this example changes the XML string that’s loaded into memory which then means the loop reckons it’s reached the end of the matching elements and stops.  My script had different loops for going through senses / subsenses / locution senses / locution subsenses which is why the first of each type was being updated while others weren’t.  After I figured this out I updated my script to use a ‘for’ loop instead of a ‘foreach’ and stored $s within the scope of the loop only and this worked.  With the change in place I reran the script on the full dataset and uploaded it to the website and all thankfully appears to have worked.
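
Here is a simplified sketch of the change (rebuildSense() is a stand-in for the real logic that reconstructs and replaces a sense’s citations):

// Problematic pattern: updating $s inside the foreach meant later senses were skipped.
// foreach ($xml->sense as $s) { rebuildSense($s); }

// Working pattern: iterate by index and only hold the node inside the loop body.
$senseCount = count($xml->sense);
for ($i = 0; $i < $senseCount; $i++) {
    $s = $xml->sense[$i];
    rebuildSense($s);
}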

For the rest of the week I worked through my ‘to do’ list, ticking items off.  I updated the ‘Blog’ menu item to point to the existing blog site (this will eventually be migrated across).  The ‘Textbase’ menu item now loads a page stating that this feature will be added in 2021.  I managed to implement the ‘source texts’ page as it turns out that I’d already developed much of the underpinnings for this page whilst developing other features.  As with citation popups, it links into the advanced search and also to the DEAF website.  I figured out how to ensure that words with accented characters in citation searches now appear separately in the list from their non-accented versions, e.g. a search for ‘apres*’ now has ‘apres (28)’ separate from ‘après (4)’ and ‘aprés (2229)’.  We may need to think about the ordering, though, as accented characters are currently appearing at the end of the list.  I also made the words lower case here – they were previously being transformed into upper case.  Exact searches (surrounded by quotes) are still accent-sensitive.  This is required so that the link through from the list of forms to the search results works (otherwise the results display all accented and non-accented forms).  I also ensured that word highlighting in snippets in results now works as it should with accented characters, and upper case initial letters are now retained too.

I added in an option to return to the list of forms (i.e. the intermediate page) from the search results.  In addition to ‘Refine your search’ there is also a ‘Select another form’ button, and I ensured that the search results page still appears when there is only one search result for citation and translation searches.  I also figured out why multiple words were sometimes being returned in the citation and translation searches.  This was because what looked like spaces between words in the XML were sometimes not regular spaces but non-breaking space characters (\u00a0).  As my script split up citations and translations on spaces these were not being picked up as divisions between words.  I needed to update my script to deal with these characters and then regenerate all of the citation and translation data again in order to fix this.
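
The word-splitting fix is a one-liner once you know the character is there; something along these lines:

// Split on runs of ordinary whitespace or non-breaking spaces (U+00A0)
$words = preg_split('/[\s\x{00a0}]+/u', $citationText, -1, PREG_SPLIT_NO_EMPTY);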

I also ensured that when conducting a label search the matching labels in an entry page are now highlighted and the page automatically scrolls down to the first matching label.  I also made several tweaks to the XSLT, ensuring that where there are no dates for citations the text ‘TBD’ appears instead and ensuring a number of tags that were not getting properly transformed were handled.

Also this week I made some final changes to the interactive map of Burns Suppers, including tweaking the site icon so it looks a bit nicer and adding in a ‘read more’ button to the intro text and fixing the scrolling issue on small screens, plus updating the text to show 17 filters.  I fixed the issue with the attendance filter and have also updated the layout of the filters so they look better on both monitors and mobile devices.

My other main task of the week was to restructure the Mapping Metaphor website based on suggestions for REF from Wendy and Carole.  This required a lot of work as the visualisations needed to be moved to different URLs and the Old English map, which was previously a separate site in a subdirectory, needed to be amalgamated with the main site.

I removed the top-level tabs that linked between MM, MMOE and MetaphorIC and also the ‘quick search’ box.  The ‘metaphor of the day’ page now displays both a main and an OE connection and the ‘Metaphor Map of English’ / ‘Metaphor Map of Old English’ text in the header has been removed.  I reworked the navigation bar in order to allow a sub-navigation bar to appear.  It is now positioned within the header and is centre-aligned.  ‘Home’ now features introductory text rather than the visualisation.  ‘About the project’ now has the new secondary menu rather than the old left-panel menu.  This is because the secondary menu on the map pages couldn’t have links in the left-hand panel as it’s already used for something else.  It’s better to have the sub-menu displaying consistently across different sections of the site.  I updated the text within several ‘About’ pages and ‘How to Use’, which also now has the new secondary menu.  The main metaphor map is now in the ‘Metaphor Map of English’ menu item.  This has sub-menu items for ‘search’ and ‘browse’.  The OE metaphor map is now in the ‘Metaphor Map of Old English’ menu item.  It also has sub-menu items for ‘search’ and ‘browse’.  The OE pages retain their purple colour to make a clear distinction between the OE map and the main one.  MetaphorIC retains the top-level navigation bar but now only features one link back to the main MM site.  This is right-aligned to avoid getting in the way of the ‘Home’ icon that appears in the top left of sub-pages.  The new site replaced the old one on Friday and I also ensured that all of the old URLs continue to work (e.g. the ‘cite this’ links).

Week Beginning 30th November 2020

I took Friday off again this week as I needed to go and collect a new pair of glasses from my opticians in the West End, which is quite a trek from my house.  Although I’d taken the day off I ended up working for about three hours, as on Thursday Fraser Dallachy emailed me to ask about the location of the semantically tagged EEBO dataset that we’d worked on a couple of years ago.  I didn’t have this at home but I was fairly certain I had it on a computer in my office so I decided to take the opportunity to pop in and locate the data.  I managed to find a 10Gb tar.gz file containing the data on my desktop PC, along with the unzipped contents (more than 25,000 files) in another folder.  I’d taken an empty external hard drive with me and began the process of copying the data, which took hours.  I’d also remembered that I’d developed a website where the tagged data could be searched and that this was on the old Historical Thesaurus server, but unfortunately it no longer seemed to be accessible.  I also couldn’t seem to find the code or data for it on my desktop PC, but I remembered that previously I’d set up one of the four old desktop PCs I have sitting in my office as a server and the system was running on this.  It took me a while to get the old PC connected and working, but I managed to get it to boot up.  It didn’t have a GUI installed so everything needed to be done at the command line, but I located the code and the database.  I had planned to copy this to a USB stick, but the server wasn’t recognising USB drives (in either NTFS or FAT format) so I couldn’t actually get the data off the machine.  I decided therefore to install Ubuntu Linux on a bootable USB stick and to get the old machine to boot into this rather than run the operating system on the hard drive.  Thankfully this worked and I could then access the PC’s hard drive from the GUI that ran from the USB stick.  I was able to locate the code and the data and copy them onto the external hard drive, which I then left somewhere that Fraser would be able to access it.  Not a bad bit of work for a supposed holiday.

As with previous weeks, I split my time mostly between the Anglo-Norman Dictionary and the Dictionary of the Scots Language.  For the AND I finally updated the user interface.  I added in the AND logo and updated the colour schemes to reflect the colours used in the logo.  I’m afraid the colours used in the logo seem to be straight out of a late 1990s website, so unfortunately the new interface has that sort of feel about it too.  The header area now has a white background as the logo needs a white background to work.  The ‘quick search’ option is now better positioned and there is a new bar for the navigation buttons.  The active navigation button and other site buttons are now the ‘A’ red, panels are generally the ‘N’ blue and the footer is the ‘D’ green.  The main body is now slightly grey so that the entry area stands out from it.  I replaced the header font (Cinzel) with Cormorant Garamond as this more closely resembles the font used in the logo.

The left-hand panel has been reworked so that entries are smaller and their dates are right-aligned.  I also added stripes to make it easier to keep your eye on an entry and its date.  The fixed header that appears when you scroll down a longer entry now features the AND logo.  The ‘Top’ button that appears when you scroll down a long entry now appears to the right so it doesn’t interfere with the left-hand panel.  The footer now only features the logos for Aberystwyth and AHRC and these appear on the right, with links to some pages on the left.

I also updated the ‘Try an Advanced Search’ button so that it only appears on the ‘quick search’ results page (which is what should have happened originally).  I also removed the semantic tags that were being pulled in from the XML, although these still need to be edited out of the XML itself.  I have also ticked a few more things off my ‘to do’ list, including replacing underscores with spaces in parts of speech and language tags and replacing ‘v.a.’ and ‘v.n.’ as requested.  I also updated the autocomplete styles (when you type into the quick search box) so they fit in with the site a bit better.

I then began looking into reordering the citations in the entries so they appear in date order within their senses, but I remembered that Geert wanted some dates to be batch processed and realised that this should be attempted first.  I had a conversation with Geert about this, but the information he sent wasn’t well structured enough to be used and it looks like the batch updating of dates will need to wait until after the launch.  Instead I moved on to updating the source text pop-ups in the entry.  These feature a link to the DEAF website and a link to search the AND for all other entries that feature the source.

On the old site the DEAF links linked through to another page on the old site that included the DEAF text and then linked through to the DEAF website.  I figured it would be better to cut out this middle stage and link directly through to DEAF.  This meant figuring out which DEAF page should be linked to and formatting the link so their page jumps to the right place.  I also added in a note about the link under it.

This was pretty straightforward but the ‘AND Citations’ link was not.  On the old site clicking on this link ran a search that displayed the citations.  We had nothing comparable to this developed for the new site, so I needed to update the citation search to allow the user to search based on the sigla (source text).  This in turn meant updating my citations table to add a field for holding the citation siglum, regenerating the citations and citation search words, and then updating the API to allow a citation search to be limited by a siglum ID.  I then updated the ‘Citations’ tab of the ‘Advanced Search’ page to add a new box for ‘citation siglum’.  This is an autocomplete box – you type some text and a list of matching sigla is displayed, from which you can select one.  This in turn meant updating the API to allow the sigla to be queried for this autocomplete.  For example, type ‘a-n’ into the box and a list of all sigla containing this text is displayed.  Select ‘A-N Falconry’ and you can then find all entries where this siglum appears.  You can also combine this with citation text and date (although the latter won’t be much use).
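To give a rough idea of how the autocomplete hangs together: the box queries the API for matching sigla as you type and, once one is selected, its ID is sent along with the rest of the citation search so the API can limit the results to that source.  Below is a minimal TypeScript sketch of the browser side of this; the endpoint path, element IDs and response fields are invented for illustration rather than being the actual AND API:

    // Sketch only: '/api/sigla', the element IDs and the response shape are
    // hypothetical, not the real AND endpoints or markup.
    const input = document.getElementById('citation-siglum') as HTMLInputElement;
    const list = document.getElementById('siglum-suggestions') as HTMLUListElement;
    let chosenSiglumId: string | null = null;

    input.addEventListener('input', async () => {
      const q = input.value.trim();
      if (q.length < 2) { list.innerHTML = ''; return; }
      // Ask the API for all sigla containing the typed text, e.g. 'a-n'.
      const res = await fetch('/api/sigla?q=' + encodeURIComponent(q));
      const sigla: { id: string; text: string }[] = await res.json();
      list.innerHTML = '';
      for (const s of sigla) {
        const li = document.createElement('li');
        li.textContent = s.text;
        li.addEventListener('click', () => {
          input.value = s.text;
          chosenSiglumId = s.id; // sent along with the citation search request
          list.innerHTML = '';
        });
        list.appendChild(li);
      }
    });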

I’ve also tweaked the search results tab on the entry page so that the up and down buttons don’t appear if you’re at the top or bottom of the results, and I’ve ensured that if you’re looking at an entry towards the end of the results, a sufficient number of the results before it still appear.  I’ve also ensured that the entry lemma and hom appear in the <title> of the web page (in the browser tab) so you can easily tell which tab contains which entry.  Here’s a screenshot of the new interface:

For the DSL I spent some time answering emails about a variety of issues.  I also completed my work on the issue of accents in the search, updating the search forms so that any accented characters that a user adds in are converted to their non-accented version before the search runs, ensuring that someone searching for ‘Privacé’ will find all instances of ‘privace’ in the full text.  I also tweaked the wording of the search results to remove the ‘supplementary’ text from it, as all supplementary items have now either been amalgamated or turned into main entries.  I also put in redirects from all of the URLs for the deleted child entries to their corresponding main entries.  This was rather time-consuming to do, as I needed to go through each deleted child entry, get each of its URLs, get the main URL of the corresponding main entry and add these to a new database table, and then add a new endpoint to the V4 API that accepts a child URL, checks the database for a corresponding main URL and returns this.  Then I needed to update the entry page so that the URL is passed to this new redirect-checking endpoint and, if it matches a deleted item, the page redirects to the proper URL.
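Neither piece of this is particularly exotic: the accent handling just folds accented characters down to their plain equivalents before the query is sent, and the redirect check just asks the API whether the current URL belonged to a deleted child entry.  Here is a minimal TypeScript sketch of both; the ‘/api/v4/redirect’ path is made up for illustration and is not the real DSL endpoint:

    // Fold accented characters to their unaccented equivalents, so that
    // 'Privacé' becomes 'Privace' before the search runs.
    function foldAccents(term: string): string {
      return term.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
    }

    // Sketch of the redirect check: ask the API whether this URL belonged to a
    // deleted child entry and, if so, send the browser to the main entry instead.
    // '/api/v4/redirect' is a hypothetical path, not the real DSL endpoint.
    async function checkRedirect(currentUrl: string): Promise<void> {
      const res = await fetch('/api/v4/redirect?url=' + encodeURIComponent(currentUrl));
      const data: { mainUrl?: string } = await res.json();
      if (data.mainUrl) {
        window.location.replace(data.mainUrl);
      }
    }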

Also this week I had a conversation with Wendy Anderson about updates to the Mapping Metaphor website.  I had thought these would just be some simple tweaks to the text of existing pages, but instead the site structure needs to be updated, which might prove to be tricky.  I’m hoping to be able to find the time to do this next week.

Finally, I continued to work on the Burns Supper map, adding in the remaining filters.  I also fixed a few dates, added in the introductory text and a ‘favicon’.  I still need to work on the layout a bit, which I’ll hopefully do next week, but the bulk of the work for the map is now complete.