Week Beginning 18th October 2021

I was back at work this week after a lovely holiday in Northumberland last week.  I spent quite a bit of time in the early part of the week catching up with emails that had come in whilst I’d been off.  I fixed an issue with Bryony Randall’s https://imprintsarteditingmodernism.glasgow.ac.uk/ site, which was put together by an external contractor but which I have now inherited.  The site menu would not update via the WordPress admin interface, and after a bit of digging around in the theme’s source files it became clear that the theme doesn’t display the WordPress menu anywhere: the menu that’s editable from the WordPress admin interface is not the menu that’s visible on the public site.  That menu is instead generated in a file called ‘header.php’ and only pulls in pages / posts that have been given one of three specific categories: Commissioned Artworks, Commissioned Text or Contributed Text (which appear as ‘Blogs’).  Any post / page given one of these categories automatically appears in the menu; any post / page assigned to a different category, or with no category at all, doesn’t.  I added a new category to the ‘header.php’ file and the missing posts all automatically appeared in the menu.

I also updated the introductory texts in the mockups for the STAR websites and replied to a query from a student at Newcastle about making a place-names website.  I spoke to Simon Taylor about a talk he’s giving about the place-name database and gave him some information on the database and systems I’d created for the projects I’ve been involved with.  I also spoke to the Iona Place-names people about their conference and about getting the website ready for it.

I also had a chat with Luca Guariento about a new project involving the team from the Curious Travellers project.  As this is based in Critical Studies, Luca wondered whether I’d write the Data Management Plan for the project and I said I would.  I spent quite a bit of time during the rest of the week reading through the bid documentation, writing lists of questions to ask the PI, emailing the PI, experimenting with different technologies that the project might use and beginning to write the Plan, which I aim to complete next week.

The project is planning on running some pre-digitised images of printed books through an OCR package and I investigated this.  Google maintains and uses an OCR engine called Tesseract for Google Books and Google Docs, and it’s freely available (https://opensource.google/projects/tesseract).  It’s also built into Google Docs – if you upload an image of text into Google Drive and then open it in Google Docs the image is automatically OCRed.  I took a screenshot of one of the Welsh tour pages (https://viewer.library.wales/4690846#?c=&m=&s=&cv=32&manifest=https%3A%2F%2Fdamsssl.llgc.org.uk%2Fiiif%2F2.0%2F4690846%2Fmanifest.json&xywh=-691%2C151%2C4725%2C3632), cropped it to the text and then opened it in Google Docs, and even on this relatively low resolution image the OCR results are pretty decent.  It managed to cope with most (but not all) long ‘s’ characters and there are surprisingly few errors – ‘Englija’ and ‘Lotty’ are a couple of examples, and these were caused by issues with the original print quality.  I’d say using Tesseract is going to be suitable for the project.
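
For bulk processing (rather than dropping images into Google Docs one at a time) something like the following would probably do the job.  This is just a rough sketch using the pytesseract wrapper around Tesseract, and the folder names are placeholders rather than anything agreed for the project:

# A rough sketch: run Tesseract over a folder of page images via the
# pytesseract wrapper.  Folder names are placeholders, nothing project-specific.
from pathlib import Path

from PIL import Image
import pytesseract

INPUT_DIR = Path("tour-page-images")
OUTPUT_DIR = Path("ocr-output")
OUTPUT_DIR.mkdir(exist_ok=True)

for image_path in sorted(INPUT_DIR.glob("*.png")):
    # image_to_string returns the recognised plain text for the image
    text = pytesseract.image_to_string(Image.open(image_path), lang="eng")
    (OUTPUT_DIR / (image_path.stem + ".txt")).write_text(text, encoding="utf-8")
    print(image_path.name, "-", len(text), "characters of OCRed text")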

I spent a bit of time working on the Speak For Yersel project.  We had a team meeting on Thursday to go through in detail how one of the interactive exercises will work.  This one will allow people to listen to a sound clip and then listen to it again, clicking whenever they hear something that identifies the speaker as coming from a particular location.  Before the meeting I’d prepared a document giving an overview of the technical specification of the feature and we had a really useful session discussing the feature and exactly how it should function.  I’m hoping to make a start on a mockup of the feature next week.

Also for the project I’d enquired with Arts IT Support as to whether the University held a license for ArcGIS Online, which can be used to publish maps online.  It turns out that there is a University-wide license, managed by the Geography department, and a very helpful guy called Craig MacDonell arranged for me and the other team members to be set up with accounts.  I spent a bit of time experimenting with the interface and managed to publish a test heatmap based on data from SCOSYA.  I can’t get it to work directly with the SCOSYA API as it stands, but after exporting and tweaking one of the sets of rating data as a CSV I pretty quickly managed to make a heatmap based on the ratings and publish it: https://glasgow-uni.maps.arcgis.com/apps/instant/interactivelegend/index.html?appid=9e61be6879ec4e3f829417c12b9bfe51.  This is just a really simple test, but we’d be able to embed such a map in our website and have it pull in data dynamically from CSVs generated in real-time and hosted on our server.
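
To give an idea of what the ‘tweaking’ amounts to: ArcGIS Online will happily ingest a plain CSV with latitude / longitude columns plus the value to be visualised, so the real-time CSVs could be produced by a small script along these lines.  The field names and sample values here are purely illustrative – they’re not the actual SCOSYA data structure:

# Sketch of turning rating data into a CSV that ArcGIS Online can ingest.
# The field names and sample values are illustrative, not the SCOSYA structure.
import csv

ratings = [
    {"location": "Aberdeen", "lat": 57.1497, "lng": -2.0943, "rating": 4.2},
    {"location": "Ayr", "lat": 55.4586, "lng": -4.6292, "rating": 2.8},
]

with open("ratings-heatmap.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["location", "lat", "lng", "rating"])
    writer.writeheader()
    writer.writerows(ratings)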

Also this week I had discussions with the Dictionaries of the Scots Language people about how dates will be handled.  Citation dates are being automatically processed to add in dates as attributes that can then be used for search purposes.  Where there are prefixes such as ‘a’ and ‘c’, the dates are going to be given ranges based on values for these prefixes.  We had a meeting to discuss the best way to handle this.  Marc had suggested that having a separate prefix attribute, rather than hard-coding the resulting ranges, would be best.  I agreed with Marc that having a ‘prefix’ attribute would be a good idea, not only because it means we can easily tweak the resulting date ranges at a later point rather than having them hard-coded, but also because it gives us an easy way to identify ‘a’, ‘c’ and ‘?’ dates if we ever want to.  If we only have the date ranges as attributes then picking out all ‘c’ dates (e.g. show me all citations that have a date between 1500 and 1600 that are ‘c’) would require looking at the contents of each date tag for the ‘c’ character, which is messier.

A concern was raised that not having the exact dates as attributes would require a lot more computational work for the search function, but I would envisage generating and caching the full date ranges when the data is imported into the API, so this wouldn’t be an issue.  There is, however, a potential disadvantage to not including the full date range as attributes in the XML: if you ever want to use the XML files in another system and search the dates there, the full ranges would not be present in the XML and would require processing before they could be used.  But whether or not the date range is included in the XML, I’d say it’s important to have the ‘prefix’ as an attribute, unless you’re absolutely sure that being able to easily identify dates that have a particular prefix isn’t important.

We decided that prefixes would be stored as attributes and that the date ranges for dates with a prefix would be generated whenever the data is exported from the DSL’s editing system, meaning editors wouldn’t have to deal with noting the date ranges and all the data would be fully usable without further processing as soon as it’s exported.
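
As a rough sketch of how the export-time range generation might look (the offsets used for each prefix here are placeholders – the actual values would be whatever the DSL editors settle on, and the whole point of storing the prefix is that these could be changed later without touching the data):

# A sketch of export-time range generation.  The offsets per prefix are
# placeholders (assumptions), not the values the DSL will actually use.
PREFIX_OFFSETS = {
    "a": (-10, 0),   # 'ante': up to N years before the stated date (assumed)
    "c": (-10, 10),  # 'circa': a window either side of the date (assumed)
    "?": (-25, 25),  # uncertain: a wider window (assumed)
}

def date_range(year, prefix=None):
    """Return the (from, to) values for a citation date with an optional prefix."""
    if prefix is None:
        return (year, year)
    before, after = PREFIX_OFFSETS[prefix]
    return (year + before, year + after)

print(date_range(1500, "c"))  # (1490, 1510) with the assumed offsets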

Also this week I was given access to a large number of images of registers from the Advocates Library that had been digitised by the NLS.  I downloaded these, batch processed them to add the register numbers as a prefix to the filenames, uploaded the images to our server, and created register records for each register and page records for each page.  The registers, pages and associated images can all now be accessed via our CMS.
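
The renaming is the sort of thing a few lines of script can handle.  The sketch below assumes (purely for illustration) one folder of JPEGs per register with the register number as the folder name, which may not match the exact layout of the files I received:

# Sketch of the batch renaming, assuming (for illustration only) one folder
# of JPEGs per register, with the register number as the folder name.
from pathlib import Path

REGISTERS_DIR = Path("advocates-registers")

for register_dir in sorted(p for p in REGISTERS_DIR.iterdir() if p.is_dir()):
    register_number = register_dir.name
    for image in sorted(register_dir.glob("*.jpg")):
        # e.g. 'page001.jpg' in folder '42' becomes '42_page001.jpg'
        image.rename(image.with_name(register_number + "_" + image.name))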

My final task of the week was to continue work on the Anglo-Norman Dictionary.  I completed work on the script that identifies which citations have varlists and which may need their citation date updated based on one of the forms in the varlist.  The script retrieves all entries that have a <varlist> somewhere in them, grabs all of the forms in the <head> of each entry, then goes through every attestation (main sense and subsense plus locution sense and subsense) and picks out each one that has a <varlist> in it.

For each of these it then extracts the <aform> if there is one, or, if there isn’t, the final word before the <varlist>.  It runs a Levenshtein test on this ‘aform’ to ascertain how different it is from each of the <head> forms, logging the closest match (0 = an exact match of one form, 1 = one character different from one of the forms, etc.).  It then picks out each <ms_form> in the <varlist> and runs the same Levenshtein test on each of these against all forms in the <head>.

If the score for the ‘aform’ is lower than or equal to the lowest score for an <ms_form>, the output is added to the ‘varlist-aform-ok’ spreadsheet.  If the score for one of the <ms_form> words is lower than the ‘aform’ score, the output is added to the ‘varlist-vform-check’ spreadsheet.
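
In outline the scoring works something like the following simplified sketch.  The XML handling is stripped out here and forms are just passed in as lists of strings; the function names are mine for illustration rather than anything in the actual script:

# Simplified sketch of the scoring: compare the 'aform' and each <ms_form>
# against all <head> forms and keep the lowest Levenshtein distance.

def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution
            ))
        previous = current
    return previous[-1]

def best_score(form, head_forms):
    """Lowest distance between this form and any of the <head> forms."""
    return min(levenshtein(form.lower(), h.lower()) for h in head_forms)

def classify(aform, ms_forms, head_forms):
    """Decide which spreadsheet a varlist attestation belongs in."""
    aform_score = best_score(aform, head_forms)
    varlist_score = min(best_score(m, head_forms) for m in ms_forms)
    # aform at least as close to a head form as any variant: probably fine
    return "varlist-aform-ok" if aform_score <= varlist_score else "varlist-vform-check"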

My hope is that by using the scores we can quickly ascertain which rows are ok and which need to be looked at, by ordering the rows by score and dealing with the lowest scores first.  In the first spreadsheet there are 2187 rows that have a score of 0, meaning the ‘aform’ exactly matches one of the <head> forms; I would imagine that these can safely be ignored.  There are a further 872 that have a score of 1, and we might want to have a quick glance through these to check they can be ignored, but I suspect most will be fine.  The higher the score, the greater the likelihood that the ‘aform’ is not the form that should be used for dating purposes and that one of the <varlist> forms should be used instead.  These would need to be checked and potentially updated.

The other spreadsheet contains rows where a <varlist> form has a lower score than the ‘aform’ – i.e. one of the <varlist> forms is closer to one of the <head> forms than the ‘aform’ is.  These are the ones that are more likely to have a date that needs to be updated.  The ‘Var forms’ column lists each var form and its corresponding score, and it is likely that the var form with the lowest score is the form whose date we would need to pick out.

In terms of what the editors could do with the spreadsheets: my plan was that we’d add an extra column – maybe called ‘update’ – to note whether a row needs to be updated, left blank for rows that they think look ok as they are and containing a ‘Y’ for rows that need to be updated.  For such rows they could manually update the XML column to add in the necessary date attributes.  Then I could process the spreadsheet in order to replace the quotation XML for any attestations that need to be updated.
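
Processing the completed spreadsheet should then be fairly mechanical – something like the sketch below, although the column names (‘update’, ‘attestation_id’, ‘xml’) are assumptions at this stage and would need to match whatever we agree on:

# Sketch of processing the edited spreadsheet: for each row flagged 'Y' in
# the 'update' column, take the (manually corrected) XML column and use it
# to replace the stored quotation XML.  The column names ('update',
# 'attestation_id', 'xml') are assumptions at this stage.
import csv

def rows_to_update(spreadsheet_path):
    with open(spreadsheet_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("update", "").strip().upper() == "Y":
                yield row["attestation_id"], row["xml"]

for attestation_id, xml in rows_to_update("varlist-aform-ok.csv"):
    # here the corrected XML would replace the quotation XML in the database
    print("Would replace quotation XML for attestation", attestation_id)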

For the ‘vform-check’ spreadsheet I could update my script to automatically extract the dates for the lowest-scoring form and attempt to automatically add in the required XML attributes for further manual checking, but I think this task will require quite a lot of manual checking from the outset, so it may be best to just manually edit the spreadsheet here too.