Week Beginning 22nd March 2021

I continued to develop the ‘Dictionary Management System’ for the Anglo-Norman Dictionary this week, following on from the work I began last week to allow the editors to drag and drop sets of entry XML files into the system. I updated the form to add another option, ‘Phase Statements for existing records’, underneath the phase statement selection. Here the editor can choose whether to retain existing statements or replace them. If ‘retain’ is selected then any XML entries attached to the form that either have an existing entry ID in their filename or have a slug that matches an existing entry in the system will keep whatever phase statement the existing entry has, no matter what statement is selected in the form; the statement selected in the form will still be applied to any attached XML entries that don’t have an existing entry in the system. Selecting ‘replace existing statements’ ignores the phase statements of existing entries and overwrites them with whatever statement is selected in the form. I also updated the system so that it extracts the earliest date for an entry at the point of upload, added two new columns to the holding area (for the earliest date and the date that is displayed for it) and ensured that the display date appears on the ‘review’ page too. In addition, I added an option to download the XML of an entry in the holding area if it needs further work.
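
For my own future reference, the ‘retain or replace’ logic boils down to something like the following rough sketch (the table and column names here are invented rather than copied from the actual DMS code):

<?php
// Decide which phase statement an uploaded XML entry should receive.
// $db is an existing PDO connection; $formPhase is the statement selected in
// the form; $mode is 'retain' or 'replace' from the new form option.
function phaseForUpload(PDO $db, $filenameId, $slug, $formPhase, $mode) {
    if ($mode === 'retain') {
        // Look for an existing entry via the ID in the filename or a matching slug.
        $stmt = $db->prepare("SELECT phase FROM entries WHERE id = ? OR slug = ? LIMIT 1");
        $stmt->execute([$filenameId, $slug]);
        $existingPhase = $stmt->fetchColumn();
        if ($existingPhase !== false) {
            return $existingPhase; // keep whatever the existing entry already has
        }
    }
    // 'replace' mode, or no existing entry: use the statement chosen in the form.
    return $formPhase;
}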

I ran a large-scale upload test, comprising around 3,200 XML files from the ‘R’ data, to see how the system would cope, but unfortunately I ran into difficulties with the server rejecting too many requests in a short space of time and only about 600 of the files made it through. I asked Arts IT Support whether the server limits can be removed for this script, but haven’t heard anything back yet. I ran into a similar issue when processing files for the Mull and Ulva place-names project in January last year and Raymond was able to update the whitelist for the Apache module mod_evasive that was blocking such uploads, and I’m hoping he’ll be able to do something similar this time. Alternatively, I’ll need to try to throttle the speed of uploads in the browser.

In the meantime, I continued with the scripts for publishing entries that had been uploaded to the holding area, using a test version of the site that I set up on my local PC to avoid messing up the live database. I updated the ‘holding area’ page quite significantly. At the top of the page is a box for publishing selected items, and beneath this is the table containing the holding items. Each row now features a checkbox, and there is an option above the table to select / deselect all rows on the page (so currently up to 200 entries can be published in one batch, as 200 is the page limit). The ‘preview’ button has been replaced with an ‘eye’ icon, but the preview page works in the same way as before. I had intended to add the ‘publish’ options to the preview page, but I’ve moved them to the holding area page instead to allow multiple entries to be selected for publication at any one time.

Selecting one or more items for publication and then pressing the ‘publish selected holdings’ button runs some JavaScript that grabs the ID of each holding item and submits it to a script on the server via AJAX, and the server-side script then processes each selected item for publication in turn. I limited this to one item per second to hopefully avoid the server rejecting requests. Rather a lot happens when an item is published: the holding item is copied to the live entry table and its XML is then analysed to extract and store, for search purposes, the citations, attestation dates and word counts of every word in each citation; the translations and word counts of every word in each translation; the semantic and usage labels (including adding new labels to the system if the XML contains new ones); the word forms and their types (lemma, variant, deviant); the parts of speech; and the cross references in xref entries.

If there is an existing live entry that matches the current entry (either because of the stored ‘Existing ID’ or because it has the same slug as the holding item) then this entry is deactivated in the database, its XML is copied to the ‘history’ table and associated with the new item record, and all of the search data for the live entry mentioned above is deleted. At this point the holding item record is deleted and the server-side script finishes executing, returning its output to the JavaScript, which then adds a row to the ‘publication log’ on the holding entries page, decreases the count of holding entries on the page by one and removes the row containing the holding item from the table.
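
Stripped of all the search-data extraction, the replacement step looks roughly like this (a simplified sketch – the table and column names are placeholders rather than the real schema):

<?php
// Publish one holding item, replacing any matching live entry.
// $db is an existing PDO connection.
function publishHoldingItem(PDO $db, $holdingId) {
    // Fetch the holding item.
    $stmt = $db->prepare("SELECT * FROM holding_entries WHERE id = ?");
    $stmt->execute([$holdingId]);
    $holding = $stmt->fetch(PDO::FETCH_ASSOC);

    // Copy it into the live entry table.
    $db->prepare("INSERT INTO entries (slug, xml, phase, active) VALUES (?, ?, ?, 1)")
       ->execute([$holding['slug'], $holding['xml'], $holding['phase']]);
    $newId = $db->lastInsertId();

    // If a live entry already exists for the stored ID or the same slug,
    // deactivate it and archive its XML against the new record.
    $stmt = $db->prepare("SELECT id, xml FROM entries WHERE (id = ? OR slug = ?) AND active = 1 AND id != ?");
    $stmt->execute([$holding['existing_id'], $holding['slug'], $newId]);
    if ($old = $stmt->fetch(PDO::FETCH_ASSOC)) {
        $db->prepare("UPDATE entries SET active = 0 WHERE id = ?")->execute([$old['id']]);
        $db->prepare("INSERT INTO entry_history (entry_id, xml) VALUES (?, ?)")
           ->execute([$newId, $old['xml']]);
        // ...the old entry's citation, translation, label and form search rows
        // would be deleted here, and the new entry's extracted and stored...
    }

    // Finally remove the item from the holding area.
    $db->prepare("DELETE FROM holding_entries WHERE id = ?")->execute([$holdingId]);
}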

Once all of the selected items are published there is one final task that the page performs, which is to completely regenerate the cross references data.  This is something that unfortunately needs to be done after each batch (even if it’s only one record) because cross references rely on database IDs and when a new version of an existing entry is published it receives a new ID.  This means any existing cross references to that item will no longer work.  The publication log will state that the regeneration is taking place and then after about 30 seconds another statement will say it is complete.  I tested this process on my local PC, publishing single items, a few items and entire pages (200 items) at a time and all seemed to be working fine so I then copied the new scripts to the server.

Also this week I continued with the processing of library registers for the Books and Borrowing project. These are coming in rather quickly now and I’m getting a bit of a backlog. This is because I have to download the image files, then process them to generate tilesets, and then upload all of the images and their tilesets to the server. It’s the tilesets that are the real sticking point, as these consist of thousands of small files. I’m only getting an upload speed of about 70KB/s and I’m having to upload many gigabytes of data. I did a test where I zipped up some of the images and uploaded the zip file instead, and was getting a speed of around 900KB/s. As it looks like I can get command-line access to the server, I’m going to investigate whether zipping up the files, uploading them and then unzipping them on the server will be a quicker process. I also had to spend some time sorting out connection issues to the server, as the Stirling VPN wasn’t letting me connect. It turned out that they had switched to multi-factor authentication and I needed to set this up before I could continue.

Also this week I wrote a summary of the work I’ve done so far for the Place-names of Iona project for a newsletter they’re putting together, spoke to people about the new ‘Comparative Kingship’ place-names project I’m going to be involved with, spoke to the Scots Language Policy people about setting up a mailing list for the project (it turns out that the University has software to handle this, available here: https://www.gla.ac.uk/myglasgow/it/emaillists/) and fixed an issue relating to the display of citations that have multiple dates for the DSL.

Week Beginning 1st February 2021

I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.

The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects.  For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet).  I also added commentaries to more than 260 entries, which took some time to test.  I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML.  Without doing this it was possible to add the tags in, but this would give errors in the editing software.  I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required.  I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system.  It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.

For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data.  These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed.  I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.

For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS.  The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR.  The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.

As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2, 4, 6, 8 or 10 digits. Information about the library, including text boxes where you can enter an NGR and see the results, can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary looking formulae.

The library is written in JavaScript, which runs in the client’s browser, whereas the previous library was written in PHP, which runs on the server. This means I needed to change the way the CMS works. Previously you’d enter an NGR and the PHP library would generate the latitude and longitude when the form was submitted to the server; now the latitude and longitude are generated in the browser as soon as the NGR is entered into the textbox, and two further textboxes for latitude and longitude appear in the form and are automatically populated with the results.

This does mean the person filling out the form can see the generated latitude and longitude and also tweak it if required before submitting the form, which is a potentially useful thing.  I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further.  I also still need to work on the format of the latitude and longitude as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848) with ‘W’ and ‘S’ figures being negatives.
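
The conversion itself should be simple enough once I get to it – whether it ends up in the JavaScript or on the server, the logic amounts to something like this (a PHP sketch, assuming the library outputs values in the ‘6.420848° W’ format):

<?php
// Turn a value such as '6.420848° W' or '56.325744° N' into a signed decimal.
function compassToDecimal($value) {
    if (!preg_match('/([0-9.]+)\s*°?\s*([NSEW])/u', $value, $m)) {
        return null; // unrecognised format
    }
    $decimal = floatval($m[1]);
    // South and West become negative; North and East stay positive.
    return ($m[2] === 'S' || $m[2] === 'W') ? -$decimal : $decimal;
}

echo compassToDecimal('6.420848° W'); // -6.420848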

However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team.  The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014.  Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section.  Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems.  The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.

The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system.  I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system.  This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22W/@56.3258889,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22W/@56.325744,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848

So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’.  As I say, this will need further discussion before I implement any changes.

Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.

Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.

Week Beginning 25 January 2021

I headed into the University for the first time this year on Wednesday this week to collect a new iPad that I’d ordered and to get some files from my office.  It was great to see the old place again, but it did take quite a chunk out of my day to travel there and back, especially as I’m still home-schooling either a morning or an afternoon each day at the moment too.

As with last week, I mainly divided my time this week between the Dictionary of the Scots Language, the Anglo-Norman Dictionary and the Books and Borrowing project, with a few other bits and bobs added in as well.  For the DSL I retrieved the source code for my original Scots School Dictionary app from my office so we can host this somewhere on the DSL website.  This is because the DSL have commissioned someone else to make a new School Dictionary app, which launched this week, but doesn’t include an ‘English to Scots’ feature as the old app does, so we’re going to make the old app available as a website for those people who miss the feature.  I also made a few minor tweaks to the main DSL site, and then focussed on adding bibliography search facilities to the new version of the API, a task that I’d begun last week.

I created a new table for the bibliographical data that includes the various fields used for DOST (note, author, editor, date, longtitle etc) and a field for the XML data used for SND.  I then created two further tables for searching, one that contains every author and editor name for each item (for DOST there may be different names in the author, editor, longauthor and longeditor fields while for SND there may be any number of <author> tags) and the other containing every title for each item (DOST may have different text in title and longtitle while SND items can have any number of <title> tags).  These tables allow you to search for any variant author, editor or title and find the item.

I also created two additional fields in the bibliography table that contain the ‘display author’ and ‘display title’. These are the forms that get displayed in the search results before you click on an item to open the full bibliographical entry. I then updated the V4 API to add in facilities to search and retrieve the bibliographies. I didn’t have time to connect to this API and implement the search on the Sienna test site, which is something I hope to do next week, but the logic behind the search and display of bibliographies is all there. There is a predictive search that will be used to generate the autocomplete list, similar to how the live site currently works: you will be able to select whether your search is for authors, titles or both, and when you start typing in some text a list of matching items will appear, e.g. typing in ‘ham’ for authors in both dictionaries will display all items containing ‘ham’, and when you select an item this will then perform a search for the specific text. You will then be able to click on an item to view the full bibliography. This is a bit different to how the live site currently works, as there if you enter ‘ham’ and select (for example) ‘Hamilton, J.’ from the autocomplete list you are taken directly to a page that lists all of the items for the author. However, we can’t do that any more as we no longer have unique identifiers that group bibliographical items by author. I may be able to do something similar with the page that comes up when you select an author, but this would have to rely on the name to group items together, and a name may not be unique.
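
As for the predictive search itself, it is essentially a couple of LIKE queries over the two new variant tables combined – something along these lines (a sketch using placeholder table and column names rather than the actual V4 schema):

<?php
// Return up to 20 author / title suggestions containing the typed text.
// $db is an existing PDO connection.
function bibliographyAutocomplete(PDO $db, $term, $type = 'both') {
    $parts = [];
    if ($type === 'authors' || $type === 'both') {
        $parts[] = "SELECT DISTINCT name AS label, 'author' AS type FROM bib_author_search WHERE name LIKE ?";
    }
    if ($type === 'titles' || $type === 'both') {
        $parts[] = "SELECT DISTINCT title AS label, 'title' AS type FROM bib_title_search WHERE title LIKE ?";
    }
    if (empty($parts)) {
        return [];
    }
    $stmt = $db->prepare(implode(' UNION ', $parts) . ' ORDER BY label LIMIT 20');
    $stmt->execute(array_fill(0, count($parts), '%' . $term . '%'));
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}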

For the AND I made some tweaks to the website, such as adding a link to the search page if you type some text into the ‘jump to entry’ option and no matching entries are found. I then spent the rest of my time continuing to develop the new content management system, specifically the pages for managing source texts. I finished work on this, adding in facilities to add, edit, browse and delete source texts from the database. I then migrated the DTD to the new site, which is referenced by the editors’ XML editor when they work on the entry XML files. The DTD on the old server referenced several lists of things that are then used to populate drop-down lists of options in the XML editor. I migrated these too, making them dynamically generated from the underlying database rather than static lists, meaning that when (for example) new source texts are added to the CMS they will automatically become available when using the XML editor.

For the Books and Borrowing project I participated in the project’s Zoom call on Monday to discuss the CMS and how to amalgamate the various duplicate author records that resulted from data uploads from different libraries. After the call I made some required changes to the CMS, such as making the editors’ notes fields visible by default again, and worked on the duplicate authors matching script to add in further outputs when comparing the author names with Levenshtein ratings of 1 and 2. I also reviewed some content that was sent to us from another library.

Also this week I responded to an email from James Caudle in Scottish Literature about a potential project he’s setting up, made a couple of changes to the Scots Language Policy website, made some tweaks to the menu structure for the Scots Syntax Atlas project and gave some advice to a post-grad student who had contacted me about setting up a corpus.

Week Beginning 11th January 2021

This was my first full week back of the year, although it was also the first week of a return to homeschooling, which made working a little trickier than usual.  I also had a dentist’s appointment on Tuesday and lost some time to that due to my dentist being near the University rather than where I live.  However, despite these challenges I was able to achieve quite a lot this week.  I had two Zoom calls, the first on Monday to discuss a new ESRC grant that Jane Stuart-Smith is putting together with colleagues at Strathclyde while the second on Wednesday was with a partner in Joanna Kopaczyk’s new RSC funded project about Scots Language Policy to discuss the project’s website and the survey they’re going to put out.  I also made a few tweaks to the DSL website, replied to Kirsteen McCue about the AHRC proposal she’s currently putting together, replied to a query regarding the technologies behind the Scots Syntax Atlas, made a few further updates to the Burns Supper map and replied to a query from Rachel Fletcher in English Language about lemmatising Old English.

Other than these various tasks I split my time between the Anglo-Norman Dictionary and the Books and Borrowing projects. For the former I completed adding explanatory notes to all of the ‘Introducing the AND’ pages. This was a very time consuming task as there were probably about 150 explanatory notes in total to add in, each appearing in a Bootstrap dialog box, and each requiring me to copy the note from the old website, add in any required HTML formatting, find and check all of the links to AND entries on the old site and add these in as required. It was pretty tedious to do, but it feels great to get it done, as the notes were previously just giving 404 errors on the new live site, and I don’t like having such things on a site I’m responsible for. I also migrated the academic articles from the old site to the new one (https://anglo-norman.net/articles/), which also required some manual formatting of the content. There are five other articles that I haven’t managed to migrate yet as they are full of character encoding errors on the old site. Geert is looking for copies of these articles that actually work and I’ll add them in once he’s able to get them to me. I also began migrating the blog posts to the new site. Currently the blog is hosted on Blogspot and there are 55 entries, but we’d like these to be an internal part of the new site. Migrating them is going to take some time as it means copying the text (which thankfully retains formatting) and then manually saving and embedding any images in the posts. I’m just going to do a few of these a week until they’re all done and so far I’ve migrated seven. I also needed to look into how the blogs page works in the WordPress theme I created for the AND, as to start with the page was just listing the full text of every post rather than giving summaries and links through to the full text of each. After some investigation I figured out that in my theme there is a script called ‘home.php’ that is responsible for displaying all of the blog posts on the ‘blog’ page. It in turn calls another template called ‘content-blog.php’, which was previously set to display the full content of each post. Instead I set it to display the title as a link through to the full post, the date and then an excerpt from the full blog, which can be accessed through a handy WordPress function called ‘the_excerpt()’.
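
In case it’s useful to anyone with a similar theme issue, the relevant part of ‘content-blog.php’ now amounts to little more than this (simplified from my actual template):

<article id="post-<?php the_ID(); ?>" <?php post_class(); ?>>
    <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
    <p class="post-date"><?php echo get_the_date(); ?></p>
    <?php the_excerpt(); // prints a trimmed summary of the post instead of the_content() ?>
</article>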

For the Books and Borrowing project I made some improvements and fixes to the Content Management System.  I’d been meaning to enhance the CMS for some time, but due to other commitments to other projects I didn’t have the time to delve into it.  It felt good to find the time to return to the project this week.

I updated the ‘Books’ and ‘Borrowers’ tabs when viewing a library in the CMS.  I added in pagination to speed up the loading of the pages.  Pages are now split into 500 record blocks and you can navigate between pages using the links above and below the tables.  For some reason the loading of the page is still a bit slow on the Stirling server whereas it was fine on the Glasgow server I was using for test purposes.  I’m not entirely sure why as I’d copied the database over too – presumably the Stirling server is slower.  However, it is still a massive improvement on the speed of the page previously.

I also changed the way tables scroll horizontally.  Previously if a table was wider than the page a scrollbar appeared above and below the table, but this was rather awkward to use if you were looking at the middle of the table (you had to scroll up or down to the beginning or end of the table, then use the horizontal scrollbar to move the table along a bit, then navigate back to the section of the page you were interested in).  Now the scrollbar just appears at the bottom of the browser window and can always be accessed no matter where in the table you are.

I also removed the editorial notes from tables by default to reduce clutter, and added in a button for showing / hiding the editors’ notes near the top of each page.  I also added a limit option in the ‘Books’ and ‘Borrowers’ pages within a library to limit the displayed records to only those found in a specific ledger.  I added in a further option to display those records that are not currently associated with any ledgers too.

I then deleted the ‘original borrowed date’ and ‘original returned date’ fields from the St Andrews data as these were no longer required, removing both the additional fields themselves and all of the data they contained.

It had been noted that the book part numbers were not being listed numerically.  As part numbers can contain text as well as numbers (e.g. ‘Vol. II’), this field in the database needed to be set as text rather than an integer.  Unfortunately the database doesn’t order numbers correctly when they are contained in a non-numerical field  – instead all the ones come first (1, 10, 11) then all the twos (2, 20, 22) etc.  However, I managed to find a way to ensure that the numbers are ordered correctly.
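
For the record, the trick is to order on the numeric value of the field first and fall back to the raw text second. One common way of doing this in MySQL (not necessarily the exact expression I settled on) looks like the following, with placeholder table and column names – purely textual values such as ‘Vol. II’ cast to zero and so simply fall through to the alphabetical comparison:

<?php
// Order part numbers stored as text so that 2 sorts before 10 rather than after 1.
// $db is an existing PDO connection; $holdingId identifies the book holding.
$stmt = $db->prepare(
    "SELECT * FROM book_parts
     WHERE holding_id = ?
     ORDER BY CAST(part_number AS UNSIGNED), part_number"
);
$stmt->execute([$holdingId]);
$parts = $stmt->fetchAll(PDO::FETCH_ASSOC);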

I also fixed the ‘Add another Edition/Work to this holding’ button that was not working.  This was caused by the Stirling server running a different version of PHP that doesn’t allow functions to have variable numbers of arguments.  The autocomplete function was also not working at edition level and I investigated this.  The issue was being caused by tab characters appearing in edition titles, and I updated my script to ensure these characters are stripped out before the data is formatted as JSON.
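
That clean-up amounts to something like this (a sketch – the real script assembles the autocomplete response slightly differently):

<?php
// Remove tabs and other stray whitespace from edition titles before the
// autocomplete data is encoded as JSON.
// $editions is assumed to be an array of rows fetched from the editions table.
$clean = array_map(function ($edition) {
    $edition['title'] = trim(preg_replace('/[\t\r\n]+/', ' ', $edition['title']));
    return $edition;
}, $editions);

header('Content-Type: application/json');
echo json_encode($clean);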

There may be further tweaks to be made – I’ll need to hear back from the rest of the team before I know more, but for now I’m up to date with the project. Next week I intend to get back into some of the larger and trickier outstanding AND tasks (of which there are, alas, many) and to begin working towards adding the DSL bibliography data into the new version of the API.

Week Beginning 14th December 2020

This was my last week before the Christmas holidays, and it was a four-day week as I’d taken Friday off to use up some unspent holidays.  Despite only being four days long it was a very hectic week, as I had lots of loose ends to tie up before the launch of the new Anglo-Norman Dictionary website on Wednesday.  This included tweaking the appearance of ‘Edgloss’ tags to ensure they always have brackets (even if they don’t in the XML), updating the forms to add line breaks between parts of speech and updating the source texts pop-ups and source texts page to move the information about the DEAF website.

I also added in a lot of the ancillary page data, including the help text, various essays, the ‘history’ page, copyright and privacy pages, the memorial lectures and the multi-section ‘introduction to the AND’.  I didn’t quite manage to get all of the links working in the latter and I’ll need to return to this next year.  I also overhauled the homepage and footer, adding in the project’s Twitter feed, a new introduction and adding links to Twitter and Facebook to the footer.

I also identified and fixed an error with the label translations, which were sometimes displaying the wrong translation.  My script that extracted the labels was failing to grab the sense ID for subsenses.  This ID is only used to pull out the appropriate translation, but because of the failure the ID of the last main sense was being used instead.  I therefore had to update my script and regenerate the translation data.  I also updated the label search to add in citations as well as translations.  This means the search results page can get very long as both labels and translations are applied at sense level, so we end up with every citation in a matching sense listed, but apparently this is what’s wanted.

I also fixed the display of ‘YBB’ sources, which for some unknown reason are handled differently to all other sources in the system and fixed the issue with deviant forms and their references and parts of speech.

On Wednesday we made the site live, replacing the old site with the new one, which you can now access here:  https://anglo-norman.net/.  It wasn’t entirely straightforward to get the DNS update working, but we got there in the end, and after making some tweaks to paths and adding in Google Analytics the site was ready to use, which is quite a relief.  There is still a lot of work to do on the site, but I’m very happy with the progress I’ve made with the site since I began the redevelopment in October.

Also this week I set up a new website for phase two of the ‘Editing Burns for the 21st Century’ project and upgraded all of the WordPress sites I manage to the most recent version.  I also arranged a meeting with Jane Stuart-Smith to discuss a new project in the New Year, replied to Kirsteen McCue about a proposal she’s finishing off, replied to Simon Taylor about a new place-name project he wants me to be involved with and replied to Carolyn Jess-Cooke about a project of hers that will be starting next year.

That’s all for 2020.  Here’s hoping 2021 is not going to be quite so crazy!

Week Beginning 9th November 2020

I took Friday off this week as I had a dentist’s appointment across town in the West End and I decided to take the opportunity to do some Christmas shopping whilst all the shops in Glasgow are still open (there’s some talk of us having greater Covid restrictions imposed in the next week or so).  I spent a couple of days this week working on the Dictionary of the Scots Language, a project I’ve been meaning to return to for many months but have been too busy with other work to really focus on.  Thankfully in November with the launch of the second edition of the Historical Thesaurus out of the way I have a bit of time to get back into the outstanding DSL issues.

Rhona Alcorn had sent a list of outstanding tasks a while back and I spent some time going through this and commenting on each item. I then began to work through each item, starting with fixing cross references in our ‘V3’ test site (which features data that the editors have been working on in recent years). Cross references appear differently in the XML for this version so I needed to update the XSLT in order to make them work correctly. I then updated the full-text extraction script that prepares data for inclusion in the Solr search engine. Previously this was stripping out all of the XML tags in order to leave the plain text, but unfortunately there were occasions where an entry contained words separated by tags but no spaces, meaning that when the tags were removed the words ended up joined together. I fixed this by adding a space character before every XML tag before the tags were stripped out. This resulted in plain text that often contained multiple spaces between words, but thankfully Solr ignores these when it indexes the text. I asked Raymond of Arts IT Support to upload the new text to the server and tested things out and all worked perfectly.
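
For the record, the tag-stripping fix is essentially a one-liner – something like this (a simplified sketch of the extraction step):

<?php
// Put a space in front of every tag so that words separated only by markup
// don't get glued together when the tags are removed.
function xmlToPlainText($xml) {
    $spaced = str_replace('<', ' <', $xml); // 'word<i>next' becomes 'word <i>next'
    return strip_tags($spaced);             // the extra spaces are harmless – Solr ignores them
}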

After this I moved on to creating a new ordering for the ‘browse’ feature.  This new ordering takes into consideration parts of speech and ensures that supplemental entries appear below main entries.  It also correctly positions entries beginning with a yogh.  I’d created a script to generate the new browse order many months ago, so I could just tweak this and then use it to update the database.  After that I needed to make some updates to the V2 and V3 front-ends to use the new ordering fields, which took a little time, but it seems to have worked successfully.  I may need to tweak the ordering further, but will await feedback before I make any changes.

I then moved on to investigating searches for accented characters, which were apparently not working correctly. I noticed that the htaccess script was not set up to accept accented characters so I updated this. However, the advanced headword search itself was finding forms with accented characters in them if the non-accented version was passed. The ‘privace’ example was redirecting to the entry page as only one result was matched, but if you perform a search for ‘*vace’ it finds and displays the accented headword in both V2 and V3, though not on the live site. Therefore I think this issue is now sorted. However, we should perhaps strip out accents from any submitted search terms, as allowing accented characters to be submitted (e.g. for *vacé) gives the impression that we allow accented characters to be searched for distinctly from their unaccented versions, and results including both accented and unaccented forms might confuse people.

The last DSL issue I looked at involved hiding superscript characters in certain circumstances (after ‘geo’ tags in ‘cref’ tags).  There are 3093 SND entries that include the text ‘</geo><su>’ or ‘</geo> <su>’ and I updated the XSLT file that transforms the XML into HTML to deal with these.  Previously it transformed the <su> tag into the HTML superscript tag <sup>.  I’ve updated it so that it now checks to see what the tag’s preceding sibling is.  If it’s a <geo> tag it now adds the class ‘noSup’ to the generated <sup>.  Currently I’ve set <sup> elements with this class to have a pink background so the editors can check to see how the match is performing, and once they’re happy with it I can update the CSS to hide ‘noSup’ elements.

Other than DSL work I also spent some time continuing to work on the redevelopment of the Anglo-Norman Dictionary and completed an initial version of the label search that I began working on last week. The search form as discussed last week hasn’t changed, but it’s now possible to submit the search, navigate through the search results, return to the search form to make changes to your selection and view entries. I needed to overhaul how the search page works to accommodate the label search, which required some pretty major changes behind the scenes, but hopefully none of the other searches have been affected by this. You can select a single label and search for that, e.g. ‘archit.’, and if you then refine your search you will see that the label is ‘remembered’ in the form so you can add to it or remove it, for example if you’re interested in all of the entries that are labelled ‘archit.’ and ‘mil’. As mentioned last week, adding or changing a citation year resets the boxes, as different labels are displayed depending on the years chosen. The chosen year is remembered by the form if you choose to refine your search, and the available labels, selected labels and Booleans are pulled in alongside the remembered year. So, for example, if you want to find entries that feature a sense labelled ‘agricultural’ or ‘bot.’ that have a citation between 1400 and 1410 you can do this. On the entry page both semantic and usage labels are now links that lead through to the search results for the label in question. I’ve currently given both label types a somewhat garish pink colour, but this can be changed, or we could use two different colours for the two types.

Other than these projects, I fixed an issue with the 18th century Glasgow borrowers site (https://18c-borrowing.glasgow.ac.uk/) and made some tweaks to the place-names of Iona site, fixing the banner and creating Gaelic versions of the pages and menu items.  The site is not live yet, but I’m pretty happy with how it’s looking.  Here’s an image of the banner I created:

Also this week I spoke to Kirsteen McCue about the project she’s currently preparing a proposal for and I created a new version of the Burns Suppers map for Paul Malgrati.  This was rather tricky as his data is contained in a spreadsheet that has more than 2,500 rows and more than 90 columns, and it took some time to process this in a way that worked, especially as some fields contained carriage returns which resulted in lines being split where they shouldn’t be when the data was exported.  However, I got there in the end, and next week I hope to develop the filters for the data.

Week Beginning 2nd November 2020

I spent a lot of this week continuing to work on the redevelopment of the Anglo-Norman Dictionary website, focussing on the search facilities. I made some tweaks to the citation search that I’d developed last week, ensuring that the intermediate ‘pick a form’ screen appears even if only one search word is returned and updating the search results to omit forms and labels but to include the citation dates and siglums, the latter opening up pop-ups as on the entry pages. I also needed to regenerate the search terms as I’d realised that, due to a typo in my script, a number of punctuation marks that should have been stripped out were remaining, meaning some duplicate forms were being listed, sometimes with a punctuation mark such as a colon and other times ‘clean’.

I also realised that I needed to update the way apostrophes were being handled.  In my script these were just being stripped out, but this wasn’t working very well as forms like ‘s’oreille’ were then becoming ‘soreille’ when really it’s the ‘oreille’ part that’s important.  However, I couldn’t just split words up on an apostrophe and use the part on the right as apostrophes appear elsewhere in the citations too.  I managed to write a script that successfully split words on apostrophes and retained the sections on both sides as individual search word forms (if they are alphanumeric).  Whilst writing this script I also fixed an issue with how the data stripped of XML tags is processed.  Occasionally there are no spaces between a word and a tag that contains data, and when my script removed tags to generate the plain text required for extracting the search words this led to a word and the contents of the following tag being squashed together, resulting in forms such as ‘apresentDsxiii1’.  By adding spaces between tags I managed to get around this problem.
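
The apostrophe handling ended up looking something like this (a cut-down sketch of the relevant part of the extraction script):

<?php
// Split a citation word on apostrophes and keep each side as a separate
// search form, provided it is purely alphanumeric.
function apostropheForms($word) {
    $forms = [];
    foreach (preg_split("/[’']/u", $word) as $part) {
        if ($part !== '' && preg_match('/^[\p{L}\p{N}]+$/u', $part)) {
            $forms[] = $part;
        }
    }
    return $forms;
}

print_r(apostropheForms("s’oreille")); // Array ( [0] => s [1] => oreille )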

With these tweaks in place I then moved onto the next advanced search option:  the English translations.  I extracted the translations from the XML and generated the unique words found in each (with a count of their occurrences), also storing the Sense IDs for the senses in which the translations were found so that I could connect the translations up to the citations found within the senses in order to enable a date search (i.e. limiting a search to only those translations that are in a sense that has a citation in a particular year or range of years).  The search works in a similar way to the citation search, in that you can enter a search term (e.g. ‘bread’) and this will lead you to an intermediary page that lists all words in translations that match ‘bread’.  You can then select one to view all of the entries with their translation that feature the word, with it highlighted.  If you supply a year or a range of years then the search connects to the citations and only returns translations for senses that have a citation date in the specified year or range.  This connects citations and translations via the ‘senseid’ in the XML.  So for example, if you only want to find translations containing ‘bread’ that have a citation between 1350 and 1400 you can do so.  There are still some tweaks that need to be done.  For example, one inconsistency we might need to address is that the number in brackets in the intermediary page refers to the number of translations / citations the word is found in, but when you click through to the full results the ‘matched results’ number will likely be different because this refers to matched entries, and an entry may contain more than one matching translation / citation.

I then moved onto the final advanced search option, the label search.  This proved to be a pretty tricky undertaking, especially when citation dates also have to be taken into consideration.  I didn’t manage to get the search working this week, but I did get the form where you can build your label query up and running on the advanced search page.  If you select the ‘Semantic & Usage Labels’ tab you should see a page with a ‘citation date’ box, a section on the left that lists the labels and a section on the right where your selection gets added.  I considered using tooltips for the semantic label descriptions, but decided against it as tooltips don’t work so well on touchscreens and I thought the information would be pretty important to see.  Instead the description (where available) appears in a smaller font underneath the label, with all labels appearing in a scrollable area.  The number on the right is the number of senses (not entries) that have the label applied to them, as you can see in the following screenshot:

As mentioned above, things are seriously complicated by the inclusion of citation dates.  Unlike with other search options, choosing a date or a range here affects the search options that are available.  E.g. if you select the years 1405-1410 then the labels used in this period and the number of times they are used differs markedly from the full dataset.  For this reason the ‘citation date’ field appears above the label section, and when you update the ‘citation date’ the label section automatically updates to only display labels and counts that are relevant to the years you have selected.  Removing everything from the ‘citation date’ resets the display of labels.

When you find a label you want to search for, pressing on the label area adds it to the ‘selected labels’ section on the right. Pressing on it a second time deselects the label and removes it from the ‘selected labels’ section. If you select more than one label then a Boolean selector appears between the selected label and the one before, allowing you to choose AND, OR, or NOT, as you can see in the above screenshot.

I made a start on actually processing the search, but it’s not complete yet and I’ll have to return to this next week.  However, building complex queries is going to be tricky as without a formal querying language like SQL there are ambiguities that can’t automatically be sorted out by the interface I’m creating.  E.g. how should ‘X AND Y NOT Z OR B’ be interpreted?  Is it ‘(X AND Y) NOT (Z OR B)’ or ‘((X AND Y) NOT Z) OR B’ or ‘(X AND (Y NOT Z)) OR B’ etc.  Each would give markedly different results.  Adding more than two or possibly three labels is likely to lead to confusing results for people.

Other than working on the AND I spent some time this week working on the Place-names of Iona project. We had a team meeting on Friday morning and after that I began work on the interface for the website. This involved the usual tasks of installing a theme, customising fonts, selecting colour schemes, adding in logos, creating menus and an initial site structure. As with the Mull site, the Iona site is going to be bilingual (English and Gaelic) so I needed to set this up too. I also worked on the banner image, combining a lovely photo of Iona from Shutterstock with a map image from the NLS. It’s almost all in place now, but I’ll need to make a few further tweaks next week. I also set up the CMS for the project, as we have decided not to just share the Mull CMS. I migrated the CMS and all of its data across and then worked on a script that would pick out only those place-names from the Mull dataset that are of relevance to the Iona project. I did this by drawing a box around the island using this handy online interface: https://geoman.io/geojson-editor and then grabbing the coordinates. I needed to reverse the latitude and longitude of these due to GeoJSON using them the other way around from other systems, and then I plugged them into a nice little algorithm I discovered for working out which coordinates are within a polygon (see https://assemblysys.com/php-point-in-polygon-algorithm/). This resulted in about 130 names being identified, but I’ll need to tweak this next week to see if my polygon area needs to be increased.
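
The point-in-polygon check is the standard ray-casting approach – the linked page explains it properly, but the core of it is only a few lines (a minimal sketch rather than the exact code I used):

<?php
// Ray casting: count how many polygon edges a horizontal ray from the point
// crosses; an odd number of crossings means the point is inside.
// $point and each vertex of $polygon are [longitude, latitude] pairs.
function pointInPolygon(array $point, array $polygon) {
    $inside = false;
    $n = count($polygon);
    for ($i = 0, $j = $n - 1; $i < $n; $j = $i++) {
        list($xi, $yi) = $polygon[$i];
        list($xj, $yj) = $polygon[$j];
        $crosses = (($yi > $point[1]) != ($yj > $point[1]))
            && ($point[0] < ($xj - $xi) * ($point[1] - $yi) / ($yj - $yi) + $xi);
        if ($crosses) {
            $inside = !$inside;
        }
    }
    return $inside;
}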

For the remainder of the week I upgraded all of the WordPress sites I manage to the most recent version (I manage 39 such sites so this took a little while).  I also helped Simon Taylor to access the Berwickshire and Kirkcudbrightshire place-names systems again and fixed an access issue with the Books and Borrowing CMS.  I also looked into an issue with the DSL test sites as the advanced searches on each of these had stopped working.  This was caused by an issue with the Solr indexing server that thankfully Arts IT Support were able to address.

Next week I’ll continue with the AND redevelopment and also return to working on the DSL for the first time in quite a while.

Week Beginning 26th October 2020

This was something of an odd week as I tested positive for Covid. I’m not entirely sure how I managed to get it, but I’d noticed on Friday last week that I’d lost my sense of taste and thought it would be sensible to get tested, and the result came back positive. I’d been feeling a bit under the weather last week and this continued throughout this week too, but thankfully the virus never affected my chest or throat and I managed to more or less work all week. However, with our household in full-on isolation our son was off school all week, and will be all next week, which did impact on the work I could do.

My biggest task of the week was to complete the work in preparation for the launch of the second edition of the Historical Thesaurus. This included fixing the full-size timelines to ensure that words that have been updated to have post-1945 end dates display properly. As we had changed the way these were stored to record the actual end date rather than ‘9999’, the end points of the dates on the timeline were stopping short and not having a pointy end to signify ‘current’. New words that only had post-1999 dates were also not displaying properly. Thankfully I managed to get these issues sorted. I also updated the search terms to fix some of the unusual characters that had not migrated over properly but had been replaced by question marks. I then updated the advanced search options to provide two checkboxes that allow a user to limit their search to new words or words that have been updated (or both), which is quite handy, as it means you can find all of the new words in a particular decade, for example all of the new words that have a first date some time in the 1980s:

https://ht.ac.uk/category-selection/?word=&label=&category=&year=&startf=1980&endf=1989&startl=&endl=&twoEdNew=Y

I also tweaked the text that appears beside the links to the OED and added the Thematic Heading codes to the drop-down section of the main category. We also had to do some last-minute renumbering of categories, which affected several hundred categories and subcategories in ’01.02’, and manually move a couple of other categories to new locations, and after that we were all set for the launch. The new second edition is now fully available, as you can see from the above link.

Other than that, I worked on a few other projects this week. I helped to migrate a WordPress site for Bryony Randall’s Imprints of New Modernist Editing project, which is now available here: https://imprintsarteditingmodernism.glasgow.ac.uk/ and responded to a query about software purchased from Lisa Kelly in TFTS.

I spent the rest of the week continuing with the redevelopment of the Anglo-Norman Dictionary website. I updated my script that extracts citations and their dates, which I’d started to work on last week. I figured out why my script was not extracting all citations (it was only picking out the citations from the first sense and subsense in each entry rather than from all senses) and managed to get all of the citations out. With dates extracted for each entry I was then able to store the earliest date for each entry and update the ‘browse’ facility to display this date alongside the headword.

With this in place I moved on to looking at the advanced search options. I created the tab-based interface for the various advanced search options and implemented searches for headwords and citations.  The headword search works in a very similar way to the quick search – you can enter a term and use wildcards or double quotes for an exact search.  You can also combine this with a date search.  This allows you to limit your results to only those entries that have a citation in the year or range of years you specify.  I would imagine entering a range of years would be more useful than a single year.  You can also omit the headword and just specify a citation year to find all entries with a citation in the year or range, e.g. all entries with a citation in 1210.

The citation search is also in place and this works rather differently. As mentioned in the overview document, it works in a similar (but not identical) way to the old ‘concordance search of citations’. You can search for a word or a part of a word using the same wildcards as for the headword search and limit your search to particular citation dates. When you submit the search this loads an intermediary page that lists all of the word forms in citations that your search matches, plus a count of the number of citations each form appears in. From this page you can then select a specific form and view the results. So, for example, a search for words beginning with ‘tre’ with a citation date between 1200 and 1250 lists 331 matching forms in citations, and you can then choose a specific form, e.g. ‘tref’, to see the results. The citation results include all of the citations for an entry that include the word, with the word highlighted in yellow. I still need to think about how this might work better, as currently there is no quick way to get back to the intermediary list of forms. But progress is being made.

Week Beginning 7th September 2020

This was a pretty busy week, involving lots of different projects. I set up the systems for a new place-name project focusing on Ayrshire this week, based on the system that I initially developed for the Berwickshire project and that has subsequently been used for Kirkcudbrightshire and Mull. It didn’t take too long to port the system over, but the PI also wanted it to be populated with data from the GB1900 crowdsourcing project. This project has transcribed every place-name on the GB1900 Ordnance Survey maps across the whole of the UK and is an amazing collection of data totalling some 2.5 million names. I had previously extracted a subset of names for the Mull and Ulva project so thankfully had all of the scripts needed to get the information for Ayrshire. Unfortunately what I didn’t have was the data in a database, as I’d previously extracted it on my PC at work. This meant that I had to run the extraction script again on my home PC, which took about three days to work through all of the rows in the monstrous CSV file. Once this was complete I could extract the names found in the Ayrshire parishes that the project will be dealing with, resulting in almost 4,000 place-names. However, this wasn’t the end of the process, as while the extracted place-names had latitude and longitude they didn’t have grid references or altitude. My place-names system is set up to generate these values automatically and I could customise the scripts to apply the generated data to each of the 4,000 places. Generating the grid reference was pretty straightforward but grabbing the altitude was less so, as it involved submitting a query to Google Maps and then inserting the returned value into my system using an AJAX call. I ran into difficulties with my script exceeding the allowed number of Google Maps queries and also the maximum number of page requests on our server, resulting in my PC getting blocked by the server and a ‘Forbidden’ error being displayed instead, but with some tweaking I managed to get everything working within the allowed limits.

I also continued to work on the Second Edition of the Historical Thesaurus.  I set up a new version of the website that we will work on for the Second Edition, and created new versions of the database tables that this new site connects to.  I also spent some time thinking about how we will implement some kind of changelog or ‘history’ feature to track changes to the lexemes, their dates and corresponding categories.  I had a Zoom call with Marc and Fraser on Wednesday to discuss the developments and we realised that the date matching spreadsheets I’d generated last week could do with some additional columns from the OED data, namely links through to the entries on the OED website and also a note to say whether the definition contains ‘(a)’ or ‘(also’ as these would suggest the entry has multiple senses that may need a closer analysis of the dates.

I then started to update the new front-end to use the new date structure that we will use for the Second Edition (with dates stored in a separate date table rather than split across almost 20 different date fields in the lexeme table). I updated the timeline visualisations (mini and full) to use this new date table, and although it took quite some time to get my head around, the resulting code is MUCH less complicated than the horrible code I had to write to deal with the old 20-odd date columns. For example, the code to generate the data for the mini timelines is about 70 lines long now as opposed to over 400 previously.

The timelines use the new data tables in the category browse and the search results.  I also spotted some dates weren’t working properly with the old system but are working properly now.  I then updated the ‘label’ autocomplete in the advanced search to use the labels in the new date table.  What I still need to do is update the search to actually search for the new labels and also to search the new date tables for both ‘simple’ and ‘complex’ year searches.  This might be a little tricky, and I will continue on this next week.

Also this week I gave Gerry McKeever some advice about preserving the data of his Regional Romanticism project, spoke to the DSL people about the wording of the search results page, gave feedback on and wrote some sections for Matthew Creasy’s Chancellor’s Fund proposal, gave feedback to Craig Lamont regarding the structure of a spreadsheet for holding data about the correspondence of Robert Burns and gave some advice to Rob Maslen about the stats for his ‘City of Lost Books’ blog.  I also made a couple of tweaks to the content management system for the Books and Borrowers project based on feedback from the team.

I spent the remainder of the week working on the redevelopment of the Anglo-Norman dictionary. I updated the search results page to style the parts of speech to make it clearer where one ends and the next begins. I also reworked the ‘forms’ section to add in a cut-off point for entries that have a huge number of forms. In such cases the long list is cut off and an ellipsis is added in, together with an ‘expand’ button. Pressing on this scrolls down the full list of forms and the button is replaced with a ‘collapse’ button. I also updated the search so that it no longer includes cross references (these are to be used for the ‘Browse’ list only) and the quick search now defaults to an exact match search whether you select an item from the auto-complete or not. Previously it performed an exact match if you selected an item but defaulted to a partial match if you didn’t. Now if you search for ‘mes’ (for example) and press enter or the search button your results are for “mes” (exactly). I suspect most people will select ‘mes’ from the list of options, which already did this, though. It is also still possible to use the question mark wildcard with an ‘exact’ search, e.g. “m?s” will find 14 entries that have three letter forms beginning with ‘m’ and ending in ‘s’.
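
Behind the scenes the wildcard handling is just a mapping onto SQL’s LIKE syntax – roughly as follows (a sketch; the real API does a little more than this):

<?php
// Convert the search syntax into a LIKE pattern: '?' matches a single
// character, '*' any run of characters, and "..." forces an exact match.
function buildLikePattern($term) {
    $term = trim($term);
    if (preg_match('/^[“"](.*)[”"]$/u', $term, $m)) {
        $term = $m[1]; // quoted: strip the quotes, no implicit wildcards
    }
    $pattern = str_replace(['%', '_'], ['\%', '\_'], $term); // escape LIKE's own wildcards
    return str_replace(['*', '?'], ['%', '_'], $pattern);    // map the search wildcards onto them
}

// e.g. buildLikePattern('m?s') => 'm_s', used as ... WHERE form LIKE 'm_s'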

I also updated the display of the parts of speech so that they are in order of appearance in the XML rather than alphabetically and I’ve updated the ‘v.a.’ and ‘v.n.’ labels as the editor requested.  I also updated the ‘entry’ page to make the ‘results’ tab load by default when reaching an entry from the search results page or when choosing a different entry in the search results tab.  In addition, the search result navigation buttons no longer appear in the search tab if all the results fit on the page and the ‘clear search’ button now works properly.  Also, on the search results page the pagination options now only appear if there is more than one page of results.

On Friday I began to process the entry XML for display on the entry page, which was pretty slow going, wading through the XSLT file that is used to transform the XML to HTML for display.  Unfortunately I can’t just use the existing XSLT file from the old site because we’re using the editor’s version of the XML and not the system version, and the two are structurally very different in places.

So far I’ve been dealing with forms and have managed to get the forms listed, with grammatical labels displayed where available, commas separating forms and semi-colons separating groups of forms.  Deviant forms are surrounded by brackets.  Where there are lots of forms the area is cut off, as with the search results.  I still need to add in references where these appear, which is what I’ll tackle next week.  Hopefully, now that I’ve started to get my head around the XML a bit, progress with the rest of the page will be a little speedier, but there will undoubtedly be many more complexities that will need to be dealt with.
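As a rough illustration of the punctuation rules described above, here is a small TypeScript sketch.  The data model (groups of forms, each form carrying a type) is an assumption made for the example, not the structure of the entry XML itself, and where the grammatical label sits relative to its forms is also guessed.

```typescript
interface Form {
  text: string;
  type: 'lemma' | 'variant' | 'deviant';
}

// A group of forms sharing a grammatical label (the label may be absent).
interface FormGroup {
  label?: string;
  forms: Form[];
}

// Join forms with commas, groups with semi-colons, and wrap deviant forms
// in brackets, roughly as described in the paragraph above.
function renderForms(groups: FormGroup[]): string {
  return groups
    .map(group => {
      const forms = group.forms
        .map(f => (f.type === 'deviant' ? `(${f.text})` : f.text))
        .join(', ');
      return group.label ? `${group.label} ${forms}` : forms;
    })
    .join('; ');
}
```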

Week Beginning 31st August 2020

I worked on many different projects this week, and the largest amount of my time went into the redevelopment of the Anglo-Norman Dictionary.  I processed a lot of the data this week and have created database tables and written extraction scripts to export labels, parts of speech, forms and cross references from the XML.  The extracted data will be used for search purposes, for display on the website in places such as the search results, and for navigating between entries.  The scripts will also be used when updating data in the new content management system for the dictionary once I write it.  I have extracted 85,397 parts of speech, 31,213 cross references, 150,077 forms and their types (lemma / variant / deviant) and 86,269 labels, each corresponding to one of 157 unique labels (usage or semantic), which I also extracted.
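The extraction boils down to walking the entry XML and pulling out the relevant elements.  The following TypeScript sketch shows the general shape using the browser’s DOMParser for brevity; the element and attribute names (pos, xref, form, label, type) are placeholders rather than the dictionary’s actual schema, and the real scripts run server-side and write to database tables rather than returning objects.

```typescript
interface ExtractedEntry {
  partsOfSpeech: string[];
  crossReferences: string[];
  forms: { text: string; type: string }[];
  labels: string[];
}

// Pull the searchable bits out of a single entry's XML (element names assumed).
function extractFromEntryXml(xml: string): ExtractedEntry {
  const doc = new DOMParser().parseFromString(xml, 'application/xml');
  const texts = (tag: string) =>
    Array.from(doc.getElementsByTagName(tag)).map(el => el.textContent?.trim() ?? '');
  return {
    partsOfSpeech: texts('pos'),
    crossReferences: texts('xref'),
    forms: Array.from(doc.getElementsByTagName('form')).map(el => ({
      text: el.textContent?.trim() ?? '',
      type: el.getAttribute('type') ?? 'lemma', // lemma / variant / deviant
    })),
    labels: texts('label'),
  };
}
```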

I have also finished work on the quick search feature, which is now fully operational.  This involved creating a new endpoint in the API for processing the search.  This includes the query for the predictive search (i.e. the drop-down list of possible options that appears as you type), which returns any forms that match what you’re typing in, and the query for the full quick search, which allows you to use ‘?’ and ‘*’ wildcards (and also “” for an exact match) and returns all of the data about each entry that is needed for the search results page.  For example, if you type ‘from’ into the ‘Quick Search’ box a drop-down list containing all matching forms will appear.  Note that these are forms rather than just headwords, so they include not only lemmas but also variants and deviants.  If you select a form that is associated with a single entry then the entry’s page will load.  If you select a form that is associated with more than one entry then the search results page will load.  You can also choose not to select an item from the drop-down list and instead search for whatever you’re interested in.  For example, enter ‘*ment’ and press enter or the search button to view all of the forms ending in ‘ment’, as the following screenshot demonstrates (note that this is not the final user interface but one purely for test purposes):

With this example you’ll see that the results are paginated, with 100 results per page.  You can browse through the pages using the ‘next’ and ‘previous’ buttons or select one of the pages to jump directly to it.  You can bookmark specific results pages too.  Currently the search results display the lemma and homonym number (if applicable) and show whether the entry is an xref or not.  Associated parts of speech appear after the lemma.  Each one currently has a tooltip, and we can add in descriptions of what each POS abbreviation means, although these might not be needed.  All of the variant / deviant forms are also displayed, as otherwise it can be quite confusing for users when the lemma does not match the term they entered but a form does.  All associated semantic / usage labels are also displayed.  I’m also intending to add in the earliest citation date and possibly translations to the results as well, but I haven’t extracted these yet.
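For what it’s worth, the pagination logic amounts to something like the following sketch.  The function name and 1-based page numbering are my own assumptions; 1-based pages are convenient here because the page number can go straight into a bookmarkable URL.

```typescript
const RESULTS_PER_PAGE = 100;

// Return the slice of results for a given (1-based) page, plus the page count.
function paginate<T>(results: T[], page: number): { pageResults: T[]; totalPages: number } {
  const totalPages = Math.max(1, Math.ceil(results.length / RESULTS_PER_PAGE));
  const current = Math.min(Math.max(1, page), totalPages); // clamp out-of-range pages
  const start = (current - 1) * RESULTS_PER_PAGE;
  return { pageResults: results.slice(start, start + RESULTS_PER_PAGE), totalPages };
}
```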

When you click on an entry from the search results this loads the corresponding entry page.  I have updated this to add in tabs to the left-hand column.  In addition to the ‘Browse’ tab there is a ‘Results’ tab and a ‘Log’ tab.  The latter doesn’t contain anything yet, but the former contains the search results.  This allows you to browse up and down the search results in the same way as the regular ‘browse’ feature, selecting another entry.  You can also return to the full results page.  I still need to do some tweaking to this feature, such as ensuring the ‘Results’ tab loads by default if coming from a search result.  The ‘clear’ option also doesn’t currently work properly.  I’ll continue with this next week.

For the Books and Borrowing project I spent a bit of time getting the page images for the Westerkirk library uploaded to the server and the page records created for each corresponding page image.  I also made some final tweaks to the Glasgow Students pilot website that Matthew Sangster and I worked on and this is now live and available here: https://18c-borrowing.glasgow.ac.uk/.

There are three new place-name related projects starting up at the moment and I spent some time creating initial websites for all of these.  I still need to add in the place-name content management systems for two of them, and I’m hoping to find some time to work on this next week.  I also spoke to Joanna Kopaczyk about a website for an RSE proposal she’s currently putting together and gave some advice to some people in Special Collections about a project that they are planning.

On Tuesday I had a Zoom call with the ‘Editing Robert Burns’ people to discuss developing the website for phase two of the Editing Robert Burns project.  We discussed how the website would integrate with the existing website (https://burnsc21.glasgow.ac.uk/) and discussed some of the features that would be present on the new site, such as an interactive map of Burns’ correspondence and a database of forged items.

I also had a meeting with the Historical Thesaurus people on Tuesday and spent some time this week continuing to work on the extraction of dates from the OED data, which will feed into a new second edition of the HT.  I fixed all of the ‘dot’ dates in the HT data.  These are cases where there isn’t a specific date and dots are used instead (e.g. ‘14..’); sometimes a specific year is given in the year attribute (e.g. 1432), while at other times a more general year is given (e.g. 1400).  We worked out a set of rules for dealing with these and I created a script to process them.  I then reworked my script that extracts dates for all lexemes matching a specific date pattern (YYYY-YYYY, where the first year might be Old English and the last year might be ‘Current’) and sent this to Fraser so that the team can decide which of these dates should be used in the new version of the HT.  Next week I’ll begin work on a new version of the HT website that uses an updated dataset, so we can compare the original dates with the newly updated ones.
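The date-pattern part of that script is essentially a matter of recognising the YYYY-YYYY shape described above.  Here is a minimal sketch; the exact tokens used in the OED data for Old English and current dates are assumptions (‘OE’ and ‘Current’), and the ‘dot’ dates are deliberately left to the separate set of rules mentioned above.

```typescript
// Match the simple date pattern: YYYY-YYYY, where the first part may be
// 'OE' (Old English) and the last may be 'Current'. Token spellings assumed.
const DATE_RANGE = /^(OE|\d{4})-(\d{4}|Current)$/;

function parseDateRange(value: string): { from: string; to: string } | null {
  const match = DATE_RANGE.exec(value.trim());
  return match ? { from: match[1], to: match[2] } : null;
}

// e.g. parseDateRange('OE-1500')      -> { from: 'OE', to: '1500' }
//      parseDateRange('1432-Current') -> { from: '1432', to: 'Current' }
//      parseDateRange('14..')         -> null (handled by the separate 'dot date' rules)
```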