Week Beginning 18th October 2021

I was back at work this week after having a lovely holiday in Northumberland last week.  I spent quite a bit of time in the early part of the week catching up with emails that had come in whilst I’d been off.  I fixed an issue with Bryony Randall’s https://imprintsarteditingmodernism.glasgow.ac.uk/ site, which was put together by an external contractor but which I have now inherited.  The site menu would not update via the WordPress admin interface, and after a bit of digging around in the source files for the theme it turned out that the theme doesn’t use the WordPress menu system at all: the menu that is editable from the WordPress admin interface is not the menu that’s visible on the public site.  That menu is generated in a file called ‘header.php’ and only pulls in pages / posts that have been given one of three specific categories: Commissioned Artworks, Commissioned Text or Contributed Text (which appear as ‘Blogs’).  Any post / page that is given one of these categories automatically appears in the menu, while any post / page that is assigned to a different category, or has no assigned category, doesn’t appear.  I added a new category to the ‘header’ file and the missing posts all automatically appeared in the menu.
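For reference, the kind of loop the theme’s ‘header.php’ appears to use looks something like the sketch below.  The category slugs and markup here are illustrative rather than copied from the actual theme file, which structures things slightly differently.

<?php
// Build the site menu from posts/pages in a fixed set of categories.
// The slugs below are assumptions for illustration, not the theme's real values.
$menu_categories = array( 'commissioned-artworks', 'commissioned-text', 'contributed-text' );
foreach ( $menu_categories as $slug ) {
    $category = get_category_by_slug( $slug );
    if ( ! $category ) { continue; }
    echo '<li class="menu-section">' . esc_html( $category->name ) . '<ul>';
    // Every published post in this category automatically becomes a menu item.
    $posts = get_posts( array( 'category' => $category->term_id, 'numberposts' => -1 ) );
    foreach ( $posts as $post ) {
        echo '<li><a href="' . esc_url( get_permalink( $post ) ) . '">' . esc_html( get_the_title( $post ) ) . '</a></li>';
    }
    echo '</ul></li>';
}
?>

Adding a new slug to that array (or its equivalent in the real file) is all that was needed to make the missing posts appear.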

I also updated the introductory texts in the mockups for the STAR websites and replied to a query from a student at Newcastle about making a place-names website.  I spoke to Simon Taylor about a talk he’s giving about the place-name database and gave him some information on the database and systems I’d created for the projects I’ve been involved with.  I also spoke to the Iona Place-names people about their conference and getting the website ready for it.

I also had a chat with Luca Guariento about a new project involving the team from the Curious Travellers project.  As this is based in Critical Studies Luca wondered whether I’d write the Data Management Plan for the project and I said I would.  I spent quite a bit of time during the rest of the week reading through the bid documentation, writing lists of questions to ask the PI, emailing the PI, experimenting with different technologies that the project might use and beginning to write the Plan, which I aim to complete next week.

The project is planning on running some pre-digitised images of printed books through an OCR package and I investigated this.  Google owns and uses a program called Tesseract to run OCR for Google Books and Google Docs, and it’s freely available (https://opensource.google/projects/tesseract).  It’s also part of Google Docs – if you upload an image of text into Google Drive and then open it in Google Docs, the image will be automatically OCRed.  I took a screenshot of one of the Welsh tour pages (https://viewer.library.wales/4690846#?c=&m=&s=&cv=32&manifest=https%3A%2F%2Fdamsssl.llgc.org.uk%2Fiiif%2F2.0%2F4690846%2Fmanifest.json&xywh=-691%2C151%2C4725%2C3632), cropped the text and opened it in Google Docs, and even on this relatively low resolution image the OCR results are pretty decent.  It managed to cope with most (but not all) long ‘s’ characters and there are surprisingly few errors – ‘Englija’ and ‘Lotty’ are a couple of examples, both caused by issues with the original print quality.  I’d say Tesseract is going to be suitable for the project.
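If the project ends up running Tesseract directly rather than going via Google Docs, the command-line tool is straightforward to script.  A minimal sketch of how I might batch it from PHP is below – the paths and the English language code are assumptions at this stage, and I haven’t tested this against the Welsh tour images yet.

<?php
// Run Tesseract over a folder of page images, writing one text file per image.
// Assumes the 'tesseract' binary is installed and on the PATH.
$images = glob( '/path/to/page-images/*.png' );
foreach ( $images as $image ) {
    $outBase = preg_replace( '/\.png$/', '', $image );
    // '-l eng' selects the English language model; Tesseract appends '.txt' itself.
    $cmd = 'tesseract ' . escapeshellarg( $image ) . ' ' . escapeshellarg( $outBase ) . ' -l eng';
    $output = array();
    exec( $cmd, $output, $status );
    if ( $status !== 0 ) {
        echo "OCR failed for $image\n";
    }
}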

I spent a bit of time working on the Speak For Yersel project.  We had a team meeting on Thursday to go through in detail how one of the interactive exercises will work.  This one will allow people to listen to a sound clip and then relisten to it, clicking whenever they hear something that identifies the speaker as coming from a particular location.  Before the meeting I’d prepared a document giving an overview of the technical specification of the feature and we had a really useful session discussing the feature and exactly how it should function.  I’m hoping to make a start on a mockup of the feature next week.

Also for the project I’d enquired with Arts IT Support as to whether the University held a licence for ArcGIS Online, which can be used to publish maps online.  It turns out that there is a University-wide licence for this, which is managed by the Geography department, and a very helpful guy called Craig MacDonell arranged for me and the other team members to be set up with accounts for it.  I spent a bit of time experimenting with the interface and managed to publish a test heatmap based on data from SCOSYA.  I can’t get it to work directly with the SCOSYA API as it stands, but after exporting and tweaking one of the sets of rating data as a CSV I pretty quickly managed to make a heatmap based on the ratings and publish it: https://glasgow-uni.maps.arcgis.com/apps/instant/interactivelegend/index.html?appid=9e61be6879ec4e3f829417c12b9bfe51.  This is just a really simple test, but we’d be able to embed such a map in our website and have it pull in data dynamically from CSVs generated in real-time and hosted on our server.
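The ‘CSV generated in real-time’ idea would just be a small server-side script that queries our database and emits the latitude / longitude and rating columns for ArcGIS to consume.  A rough sketch follows – the table, column names and connection details are invented for illustration, not the real SCOSYA schema.

<?php
// Emit current rating data as CSV for ArcGIS Online to pull in.
// Table, columns and credentials below are placeholders only.
header( 'Content-Type: text/csv; charset=utf-8' );
header( 'Content-Disposition: attachment; filename="ratings.csv"' );

$db   = new PDO( 'mysql:host=localhost;dbname=scosya_example', 'user', 'password' );
$rows = $db->query( 'SELECT location, latitude, longitude, rating FROM example_ratings' );

$out = fopen( 'php://output', 'w' );
fputcsv( $out, array( 'location', 'latitude', 'longitude', 'rating' ) );
foreach ( $rows as $row ) {
    fputcsv( $out, array( $row['location'], $row['latitude'], $row['longitude'], $row['rating'] ) );
}
fclose( $out );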

Also this week I had discussions with the Dictionaries of the Scots Language people about how dates will be handled.  Citation dates are being automatically processed to add in dates as attributes that can then be used for search purposes.  Where there are prefixes such as ‘a’ and ‘c’ the dates are going to be given ranges based on values for these prefixes.  We had a meeting to discuss the best way to handle this.  Marc had suggested that having a separate prefix attribute rather than hard-coding the resulting ranges would be best.  I agreed with Marc that having a ‘prefix’ attribute would be a good idea, not only because it means we can easily tweak the resulting date ranges at a later point rather than having them hard-coded, but also because it gives us an easy way to identify ‘a’, ‘c’ and ‘?’ dates if we ever want to.  If we only have the date ranges as attributes then picking out all ‘c’ dates (e.g. show me all citations that have a date between 1500 and 1600 that are ‘c’) would require looking at the contents of each date tag for the ‘c’ character, which is messier.

A concern was raised that not having the exact dates as attributes would require a lot more computational work for the search function, but I would envisage generating and caching the full date ranges when the data is imported into the API, so this wouldn’t be an issue.  However, there is a potential disadvantage to not including the full date range as attributes in the XML: if you ever want to use the XML files in another system and search the dates through it, the full ranges would not be present in the XML and would require processing before they could be used.  But whether the date range is included in the XML or not, I’d say it’s important to have the ‘prefix’ as an attribute, unless you’re absolutely sure that being able to easily identify dates that have a particular prefix isn’t important.

We decided that prefixes would be stored as attributes and that the date ranges for dates with a prefix would be generated whenever the data is exported from the DSL’s editing system, meaning editors wouldn’t have to deal with noting the date ranges and all the data would be fully usable without further processing as soon as it’s exported.
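As a concrete illustration of how the export step might expand a prefix into a range, something like the function below would do it.  The offsets used for ‘a’, ‘c’ and ‘?’ here are placeholders only – the actual ranges are an editorial decision still to be agreed.

<?php
// Turn a citation year plus an optional prefix into a from/to range at export time.
// The offsets below are illustrative; the real values would be agreed with the editors.
function date_range( $year, $prefix = '' ) {
    $offsets = array( 'a' => array( -10, 0 ), 'c' => array( -10, 10 ), '?' => array( -25, 25 ) );
    if ( isset( $offsets[ $prefix ] ) ) {
        return array( 'from' => $year + $offsets[ $prefix ][0], 'to' => $year + $offsets[ $prefix ][1] );
    }
    return array( 'from' => $year, 'to' => $year );
}

// e.g. with these placeholder offsets a citation marked 'c1500'
// would be exported with from="1490" and to="1510".
$range = date_range( 1500, 'c' );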

Also this week I was given access to a large number of images of registers from the Advocates Library that had been digitised by the NLS.  I downloaded these, batch-processed them to add in the register numbers as a prefix to the filenames, uploaded the images to our server, and created register records for each register and page records for each page.  The registers, pages and associated images can all now be accessed via our CMS.
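The renaming step was just a small batch script, along these lines – the directory layout and filename pattern shown here are simplified for illustration.

<?php
// Prefix every downloaded image with its register number before upload.
// Assumes one folder per register, e.g. downloads/register-12/0001.jpg and so on.
$registerDirs = glob( 'downloads/register-*', GLOB_ONLYDIR );
foreach ( $registerDirs as $dir ) {
    // Pull the register number out of the folder name.
    if ( ! preg_match( '/register-(\d+)$/', $dir, $matches ) ) { continue; }
    $registerNo = $matches[1];
    foreach ( glob( $dir . '/*.jpg' ) as $file ) {
        $newName = $dir . '/' . $registerNo . '_' . basename( $file );
        rename( $file, $newName );
    }
}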

My final task of the week was to continue work on the Anglo-Norman Dictionary.  I completed work on the script that identifies which citations have varlists and which may need to have their citation date updated based on one of the forms in the varlist.  The script retrieves all entries that have a <varlist> somewhere in them, grabs all of the forms in the <head> of the entry, then goes through every attestation (main sense and subsense plus locution sense and subsense) and picks out each one that has a <varlist> in it.

For each of these it then extracts the <aform> if there is one, or, if there isn’t, the final word before the <varlist>.  It runs a Levenshtein test on this ‘aform’ to ascertain how different it is from each of the <head> forms, logging the closest match (0 = exact match of one form, 1 = one character different from one of the forms, etc.).  It then picks out each <ms_form> in the <varlist> and runs the same Levenshtein test on each of these against all forms in the <head>.

If the score for the ‘aform’ is lower than or equal to the lowest score for an <ms_form> then the output is added to the ‘varlist-aform-ok’ spreadsheet.  If the score for one of the <ms_form> words is lower than the ‘aform’ score the output is added to the ‘varlist-vform-check’ spreadsheet.
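To give a flavour of that logic, here is a simplified sketch of the scoring and routing step.  The real script works on the full entry XML and outputs many more columns; this just shows how PHP’s built-in levenshtein() function can drive the decision between the two spreadsheets.

<?php
// Score an attestation's 'aform' and its varlist forms against the headword forms,
// then decide which spreadsheet the row belongs in.
function best_score( $word, $headForms ) {
    $scores = array_map( function ( $form ) use ( $word ) {
        return levenshtein( $word, $form );
    }, $headForms );
    return min( $scores );
}

function route_attestation( $aform, $msForms, $headForms ) {
    $aformScore = best_score( $aform, $headForms );
    $msScores   = array();
    foreach ( $msForms as $msForm ) {
        $msScores[ $msForm ] = best_score( $msForm, $headForms );
    }
    // If the aform is at least as close as every varlist form, it looks fine as it is.
    if ( $aformScore <= min( $msScores ) ) {
        return array( 'spreadsheet' => 'varlist-aform-ok', 'score' => $aformScore );
    }
    return array( 'spreadsheet' => 'varlist-vform-check', 'score' => min( $msScores ), 'var_scores' => $msScores );
}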

My hope is that by using the scores we can quickly ascertain which citations are OK and which need to be looked at, by ordering the rows by score and dealing with the lowest scores first.  In the first spreadsheet there are 2187 rows that have a score of 0, meaning the ‘aform’ exactly matches one of the <head> forms; I would imagine that these can safely be ignored.  There are a further 872 that have a score of 1, and we might want to have a quick glance through these to check they can be ignored, but I suspect most will be fine.  The higher the score, the greater the likelihood that the ‘aform’ is not the form that should be used for dating purposes and that one of the <varlist> forms should be used instead.  These would need to be checked and potentially updated.

The other spreadsheet contains rows where a <varlist> form has a lower score than the ‘aform’ – i.e. one of the <varlist> forms is closer to one of the <head> forms than the ‘aform’ is.  These are the ones that are more likely to have a date that needs to be updated.  The ‘Var forms’ column lists each var form and its corresponding score.  It is likely that the var form with the lowest score is the form from which we would need to pick the date.

In terms of what the editors could do with the spreadsheets: my plan was that we’d add an extra column – maybe called ‘update’ – to note whether a row needs updating or not, left blank for rows that look OK as they are and containing a ‘Y’ for rows that need to be updated.  For such rows the editors could manually update the XML column to add in the necessary date attributes.  Then I could process the spreadsheet in order to replace the quotation XML for any attestations that need updating.
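Processing the returned spreadsheet would then be fairly simple, roughly as sketched below.  The column names and the updateQuotationXml() helper are hypothetical at this stage – the real process would depend on how the edited spreadsheet comes back and how the quotation XML is stored.

<?php
// Read the edited spreadsheet and replace the quotation XML for rows marked 'Y'.
// Column names and the updateQuotationXml() helper are stand-ins, not final.
$handle = fopen( 'varlist-aform-ok-edited.csv', 'r' );
$header = fgetcsv( $handle );
while ( ( $row = fgetcsv( $handle ) ) !== false ) {
    $data = array_combine( $header, $row );
    if ( strtoupper( trim( $data['update'] ) ) === 'Y' ) {
        // The attestation ID and edited XML come straight from the spreadsheet columns.
        updateQuotationXml( $data['attestation_id'], $data['xml'] );
    }
}
fclose( $handle );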

For the ‘vform-check’ spreadsheet I could update my script to automatically extract the dates for the lowest-scoring form and attempt to automatically add in the required XML attributes for further manual checking, but I think this task will require quite a lot of manual checking from the outset, so it may be best to just manually edit the spreadsheet here too.

Week Beginning 4th October 2021

I spent a fair amount of time on the new ‘Speak for Yersel’ project this week, reading through materials produced by similar projects, looking into ArcGIS Online as a possible tool to use to create the map-based interface and thinking through some of the technical challenges the project will face.  I also participated in a project Zoom call on Thursday where we discussed the approaches we might take and clarified the sorts of outputs the project intends to produce.

I also had further discussions with Sofia from the Iona place-names project about their upcoming conference in December and how the logistics for this might work, as it’s going to be an online-only conference.  I had a Zoom call with Sofia on Thursday to go through these details, which really helped us to shape up a plan.  I also dealt with a request from another project that wants to set up a top-level ‘ac.uk’ domain, which makes three over the past couple of weeks, and made a couple of tweaks to the text of the Decadence and Translation website.

I had a chat with Mike Black about the new server that Arts IT Support are currently setting up for the Anglo-Norman Dictionary, and spoke to Eleanor Lawson about adding around 100 Gaelic videos to the Seeing Speech resource on a new dedicated page.

For the Books and Borrowing project I was sent a batch of images of a register from Dumfries Presbytery Library and needed to batch process them in order to fix the lighting levels and rename them prior to upload.  It took me a little time to figure out how to run a batch process in the ancient version of Photoshop I have.  After much hopeless Googling I found some pages from ‘Photoshop CS2 For Dummies’ on Google Books that discussed Photoshop Actions (see https://books.google.co.uk/books?id=RLOmw2omLwgC&lpg=PA374&dq=&pg=PA332#v=onepage&q&f=false), which made me realise that ‘Actions’, which I’d failed to find in any of the menus, were available via the tabs on the right of the screen, and that I could ‘record’ an action from there.  After running the images through the batch I uploaded them to the server and generated the page records for each corresponding page in the register.

I spent the rest of the week working on the Anglo-Norman Dictionary, considering how we might be able to automatically fix entries with erroneous citation dates caused by a varlist being present in the citation with a different date that should be used instead of the main citation date.  I had been wondering whether we could use a Levenshtein test (https://en.wikipedia.org/wiki/Levenshtein_distance) to automatically ascertain which citations may need manual editing, or even as a means of automatically adding in the new tags after testing.  I can already identify all entries that feature a varlist, so I can create a script that can iterate through all citations that have a varlist in each of these entries. If we can assume that the potential form in the main citation always appears as the word directly before the varlist then my script can extract this form and then each <ms_form> in the <varlist>.  I can also extract all forms listed in the <head> of the XML.

So for example for https://anglo-norman.net/entry/babeder my script would extract the term ‘gabez’ from the citation as it is the last word before <varlist>.  It would then extract ‘babedez’ and ‘bauboiez’ from the <varlist>.  There is only one form for this entry: <lemma>babeder</lemma> so this would get extracted too.  The script would then run a Levenshtein test on each possible option, comparing them to the form ‘babeder’, the results of which would be:

gabez: 4, babedez: 1, bauboiez: 4

The script would then pick out ‘babedez’ as the form to use (only one character different to the form ‘babeder’) and would then update the XML to note that the date from this <ms_form> is the one that needs to be used.

With a more complicated example such as https://anglo-norman.net/entry/bochet_1, which has multiple forms in <head>, the test would be run against each form and the lowest score for each variant would be used.  So for example for the citation where ‘buchez’ is the last word before the <varlist>, the two <ms_form> words would be extracted (huchez and buistez) and these plus ‘buchez’ would be compared against every form in <head>, with the overall lowest Levenshtein score getting logged.  The overall calculations in this case would be:

buchez: bochet = 2, boket = 4, bouchet = 2, bouket = 4, bucet = 2, buchet = 1, buket = 3, bokés = 5, boketes = 5, bochésç = 6, buchees = 2

huchez: bochet = 3, boket = 5, bouchet = 3, bouket = 5, bucet = 3, buchet = 2, buket = 4, bokés = 6, boketes = 6, bochésç = 7, buchees = 3

buistez: bochet = 5, boket = 5, bouchet = 5, bouket = 5, bucet = 4, buchet = 4, buket = 4, bokés = 6, boketes = 4, bochésç = 8, buchees = 4

This means ‘buchez’ would win with a score of 1, and in this case no <varlist> form would be marked.  If the main citation form and a varlist form both have the same lowest score then I guess we’d let the main citation form ‘win’, although in such cases the citation could be flagged for manual checking.  However, this algorithm does entirely depend on the main citation form being the word before the <varlist> tag, and the editor confirmed that this is not always the case.  Despite this I think the algorithm could correctly identify the majority of cases, and if the output was placed in a CSV it would then be possible for someone to quickly check through each citation, tick off those that should be automatically updated and manually fix the rest.  I made a start on the script that would work through all of the entries and output the CSV during the remainder of the week, but didn’t have the time to finish it.  I’m going to be on holiday next week but will continue with this when I return.
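To make the selection rule concrete, the decision step would look something like the sketch below.  This was written up as a proposal rather than finished code at this point; the tie-breaking behaviour is the one suggested above, and PHP’s built-in levenshtein() supplies the distance test.

<?php
// Given the word before the <varlist>, the <ms_form> words and the <head> forms,
// pick the candidate whose best Levenshtein score against the head forms is lowest.
// Ties go to the main citation form, flagged for manual checking.
function pick_dating_form( $mainForm, $msForms, $headForms ) {
    $lowest = function ( $word ) use ( $headForms ) {
        return min( array_map( function ( $form ) use ( $word ) {
            return levenshtein( $word, $form );
        }, $headForms ) );
    };
    $winner = $mainForm;
    $best   = $lowest( $mainForm );
    $flag   = false;
    foreach ( $msForms as $msForm ) {
        $score = $lowest( $msForm );
        if ( $score < $best ) {
            $winner = $msForm;
            $best   = $score;
        } elseif ( $score === $best && $winner === $mainForm ) {
            // Same score as the main citation form: keep the main form but flag it.
            $flag = true;
        }
    }
    return array( 'form' => $winner, 'score' => $best, 'check' => $flag );
}

// For bochet_1 above, pick_dating_form('buchez', array('huchez', 'buistez'), $headForms)
// returns 'buchez' with a score of 1, so no varlist form would be marked.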


Week Beginning 27th September 2021

I had two Zoom calls on Monday this week.  The first was with the Burns people to discuss the launch of the website for the ‘letters and poems’ part of ‘Editing Burns’, to complement the existing ‘Prose and song’ website (https://burnsc21.glasgow.ac.uk/).  The new website will launch in January with some video content and blogs, plus I will be working on a content management system for managing the network of Burns’ letter correspondents, which I will put together some time in November, assuming the team can send me some sample data by then.  This system will eventually power the ‘Burns letter writing trail’ interactive maps that I’ll create for the new site sometime next year.

My second Zoom call was for the Books and Borrowing project to discuss adding data from a new source to the database.  The call gave us an opportunity to discuss the issues with the data that I’d highlighted last week.  It was good to catch up with the team again and to discuss the issues with the researcher who had originally prepared the spreadsheet containing the data.  We managed to address all of the issues and the researcher is going to spend a bit of time adapting the spreadsheet before sending it to me to be batch uploaded into our system.

I spent some further time this week investigating the issue of some of the citation dates in the Anglo-Norman Dictionary being wrong, as discussed last week.  The issue affects some 4309 entries where at least one citation features the form only in a variant text.  This means that the citation date should not be the date of the manuscript in the citation, but the date when the variant of the manuscript was published.  Unfortunately this situation was never flagged in the XML, and there was never any means of flagging it.  The variant date should only ever be used when the form of the word in the main manuscript is not directly related to the entry in question but the form in the variant text is.  The problem is that it cannot be automatically ascertained when the form in the main manuscript is the relevant one and when the form in the variant text is, as there is so much variation in forms.

For example, in the entry https://anglo-norman.net/entry/bochet_1 there is a form ‘buchez’ in a citation and then two variant texts for this where the forms are ‘huchez’ and ‘buistez’.  None of these forms are listed in the entry’s XML as variants, so it’s not possible for a script to automatically deduce which is the correct date to use (the closest is ‘buchet’).  In this case the main citation form and its corresponding date should be used.  Whereas in the entry https://anglo-norman.net/entry/babeder the main citation form is ‘gabez’ while the variant text has ‘babedez’, and so this is the form and corresponding date that needs to be used.  It would be difficult for a script to automatically deduce this.  In this case a Levenshtein test (which tests how many letters need to be changed to turn one string into another) could work, but the results would still need to be manually checked.

The editor wanted me to focus on those entries where the date issue affects the earliest date for an entry, as these are the most important: the issue results in an incorrect date being displayed for the entry in the header and the browse feature.  I wrote a script that finds all entries that feature ‘<varlist’ somewhere in the XML (the previously exported 4309 entries).  It then goes through all attestations (in all sense, subsense and locution sense and subsense sections) to pick out the one with the earliest date, exactly as the code for publishing an entry does.  It then checks the quotation XML for the attestation with the earliest date for the presence of ‘<varlist’ and, if it finds this, outputs information for the entry, consisting of the slug, the earliest date as recorded in the database, the earliest date of the attestation as found by the script, the ID of the attestation and the XML of the quotation.  The script has identified 1549 entries that have a varlist in the earliest citation, all of which will need to be edited.
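The check itself is simple once the earliest attestation has been picked out – something along the lines of the sketch below, where findEarliestAttestation() is just a stand-in for the logic the publication code already uses and $entries represents the 4309 previously exported entries.

<?php
// For each entry, check whether the earliest-dated attestation's quotation
// contains a <varlist> and, if so, log it for editing.
// findEarliestAttestation() and the $entries structure are stand-ins for illustration.
$out = fopen( 'earliest-varlist-entries.csv', 'w' );
fputcsv( $out, array( 'slug', 'db_earliest_date', 'script_earliest_date', 'attestation_id', 'quotation_xml' ) );
foreach ( $entries as $entry ) {
    $attestation = findEarliestAttestation( $entry['xml'] );
    if ( strpos( $attestation['quotation_xml'], '<varlist' ) !== false ) {
        fputcsv( $out, array(
            $entry['slug'],
            $entry['earliest_date'],
            $attestation['date'],
            $attestation['id'],
            $attestation['quotation_xml'],
        ) );
    }
}
fclose( $out );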

However, every citation has a date associated with it and this is used in the advanced search where users have the option to limit their search to years based on the citation date.  Only updating citations that affect the entry’s earliest date won’t fix this, as there will still be many citations with varlists that haven’t been updated and will still therefore use the wrong date in the search.  Plus any future reordering of citations would require all citations with varlists to be updated to get entries in the correct order.  Fixing the earliest citations with varlists in entries based on the output of my script will fix the earliest date as used in the header of the entry and the ‘browse’ feature only, but I guess that’s a start.

Also this week I sorted out some access issues for the RNSN site, submitted the request for a new top-level ‘ac.uk’ domain for the STAR project and spent some time discussing the possibilities for managing access to videos of the conference sessions for the Iona place-names project.  I also updated the page about the Scots Dictionary for Schools app on the DSL website (https://dsl.ac.uk/our-publications/scots-dictionary-for-schools-app/) after it won the award for ‘Scots project of the year’.

I also spent a bit of time this week learning about the statistical package R (https://www.r-project.org/).  I downloaded and installed the package and the R Studio GUI and spent some time going through a number of tutorials and examples in the hope that this might help with the ‘Speak for Yersel’ project.

For a few years now I’ve been meaning to investigate using a spider / radar chart for the Historical Thesaurus, but I never found the time.  I unexpectedly found myself with some free time this week due to ‘Speak for Yersel’ not needing anything from me yet, so I thought I’d do some investigation.  I found a nice-looking d3.js template for spider / radar charts here: http://bl.ocks.org/nbremer/21746a9668ffdf6d8242 and set about reworking it with some HT data.

My idea was to use the chart to visualise the distribution of words in one or more HT categories across different parts of speech in order to quickly ascertain the relative distribution and frequency of words.  I wanted to get an overall picture of the makeup of the categories initially, but to then break this down into different time periods to understand how categories changed over time.

As an initial test I chose the categories 02.04.13 Love and 02.04.14 Hatred, and in this initial version I looked only at the specific contents of the categories – no subcategories and no child categories.  I manually extracted counts of the words across the various parts of speech and then manually split them up into words that were active in four broad time periods: OE (up to 1149), ME (1150-1449), EModE (1450-1799) and ModE (1800 onwards) and then plotted them on the spider / radar chart, as you can see in this screenshot:

You can quickly move through the different time periods plus the overall picture using the buttons above the visualisation, and I think the visualisation does a pretty good job of giving you a quick and easy to understand impression of how the two categories compare and evolve over time, allowing you to see, for example, how the number of nouns and adverbs for love and hate are pretty similar in OE:

but by ModE the number of nouns for Love has dropped dramatically, as has the number of adverbs for Hate:

We are of course dealing with small numbers of words here, but even so it’s much easier to use the visualisation to compare different categories and parts of speech than it is to use the HT’s browse interface.  Plus if such a visualisation was set up to incorporate all words in child categories and / or subcategories it could give a very useful overview of the makeup of different sections of the HT and how they develop over time.

There are some potential pitfalls to this visualisation approach, however.  The scale currently changes based on the largest word count in the chosen period, meaning that unless you’re paying attention you might get the wrong impression of the number of words.  I could fix the scale at the largest count across all periods, but that would then make it harder to make out details in periods that have far fewer words.  Also, I suspect most categories are going to have many more nouns than other parts of speech, and a large spike of nouns can make it harder to see what’s going on with the other axes.  Another thing to note is that the order of the axes is fairly arbitrary but can have a major impact on how someone may interpret the visualisation.  If you look at the OE chart the ‘Hate’ area looks massive compared to the ‘Love’ area, but this is purely because there is only one ‘Love’ adjective compared to five for ‘Hate’.  If the adverb axis had come after the noun one instead, the shapes of ‘Love’ and ‘Hate’ would have been more similar.  You don’t necessarily appreciate at first glance that ‘Love’ and ‘Hate’ have very similar numbers of nouns in OE, which is concerning.  However, I think the visualisations have potential for the HT and I’ve emailed the other HT people to see what they think.