Week Beginning 3rd January 2022

This was my first week back after the Christmas holidays, and it was a three-day week.  I spent the days almost exclusively on the Books and Borrowing project.  We had received a further batch of images for 23 library registers from the NLS, which I needed to download from the NLS's server and process.  This involved renaming many thousands of images via a little script I'd written in order to give them more meaningful filenames, and stripping out several thousand images of blank pages that had been included but were not needed by the project.  I then needed to upload the images to the project's web server and generate all of the necessary register and page records in the CMS for each page image.

I also needed to update the way folio numbers were generated for the registers.  For the previous batch of images from the NLS I had just assigned the numerical part of the image's filename as the folio number, but it turns out that most of the images have a hand-written page number in the top-right which starts at 1 for the first actual page of borrowing records.  There are usually a few pages before this, and these need to be given Roman numerals as folio numbers.  I therefore had to write another script that would take into consideration the number of front-matter pages in each register, assign Roman numerals as folio numbers to them and then begin the numbering of borrowing record pages from 1, incrementing through the rest of the volume.
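
The numbering rule itself is simple enough to sketch.  Something along these lines illustrates the approach (a rough JavaScript sketch rather than the actual script, which works against the CMS database; lower-case Roman numerals and the 'frontMatterCount' parameter are assumptions):

// Convert a positive integer to a (lower-case) Roman numeral.
function toRoman(n) {
  const numerals = [[1000, 'm'], [900, 'cm'], [500, 'd'], [400, 'cd'], [100, 'c'], [90, 'xc'],
                    [50, 'l'], [40, 'xl'], [10, 'x'], [9, 'ix'], [5, 'v'], [4, 'iv'], [1, 'i']];
  let out = '';
  for (const [value, numeral] of numerals) {
    while (n >= value) { out += numeral; n -= value; }
  }
  return out;
}

// Front-matter pages get Roman numerals; borrowing-record pages are then
// numbered 1, 2, 3... through to the end of the volume.
function folioNumbers(pageCount, frontMatterCount) {
  const folios = [];
  for (let i = 1; i <= pageCount; i++) {
    folios.push(i <= frontMatterCount ? toRoman(i) : String(i - frontMatterCount));
  }
  return folios;
}

// e.g. folioNumbers(6, 2) -> ['i', 'ii', '1', '2', '3', '4']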

I guess it was inevitable with data of this sort, but I ran into some difficulties whilst processing it.  Firstly, there were some problems with the JPEG images the NLS had sent for two of the volumes.  These didn't match the TIFF images for the volumes, with each volume having an incorrect number of files.  Thankfully the NLS were able to quickly figure out what had gone wrong and supplied updated images.

The next issue cropped up when I began to upload the images to the server.  After uploading about 5GB of images the upload terminated, and soon after that I received emails from the project team saying they were unable to log into the CMS.  It turned out that the server had run out of storage.  Each time someone logs into the CMS the server needs a tiny amount of space to store a session variable, but there wasn't enough space for this, meaning it was impossible to log in successfully.  I emailed the IT people at Stirling (where the project server is located) to enquire about getting some further space allocated but I haven't heard anything back yet.  In the meantime I deleted the images from the partially uploaded volume, which freed up enough space to enable the CMS to function again.  I also figured out a way to free up some further space: the first batch of images from the NLS also included images of blank pages across 13 volumes – several thousand images.  It was only after uploading these and generating page records that we had decided to remove the blank pages, but I only removed the CMS records for these pages – the image files were still stored on the server.  I therefore wrote another script to identify and delete all of the blank page images from the first batch, which freed up 4-5GB of space on the server – enough to complete the upload of the second batch of registers from the NLS.  We will still need more space, though, as there are still many thousands of images left to add.
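
The clean-up script essentially compares what is on disk with what the CMS knows about.  A minimal Node.js sketch of the idea (not the actual script; it assumes the blank pages are exactly those image files that no longer have a corresponding page record in the CMS database):

const fs = require('fs');
const path = require('path');

// Delete every image in 'imageDir' whose filename is not in the list of
// filenames still referenced by page records, returning the bytes reclaimed.
function deleteOrphanedImages(imageDir, referencedFilenames) {
  const keep = new Set(referencedFilenames);
  let freed = 0;
  for (const file of fs.readdirSync(imageDir)) {
    if (!keep.has(file)) {
      const fullPath = path.join(imageDir, file);
      freed += fs.statSync(fullPath).size;
      fs.unlinkSync(fullPath);
    }
  }
  return freed;
}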

I also took the opportunity to update the folio numbers of the first batch of NLS registers to bring them into line with the updated method we’d decided on for the second batch (Roman numerals for front-matter and then incrementing page numbers from the first page of borrowing records).  I wrote a script to renumber all of the required volumes, which was mostly a success.

However, I also noticed that the automatically generated folio numbers often became out of step with the hand-written folio numbers found in the top-right corner of the images.  I decided to go through each of the volumes to identify all that became unaligned and to pinpoint exactly which page or pages the misalignment occurred on.  This took some time as there were 32 volumes that needed to be checked, and each time an issue was spotted I needed to work back through the pages and associated images from the last page until I found the point where the page numbers correctly aligned.  I discovered that there were numbering issues with 14 of the 32 volumes, mainly due to whoever wrote the numbers in getting muddled.  There are occasions where a number is missed, or a number is repeated.  In one volume the page numbers advance by 100 from one page to the next.  It should be possible for me to write a script that will update the folio numbers to bring them into alignment with the erroneous handwritten numbers (for example where a number is repeated these will be given 'a' and 'b' suffixes).  I didn't have time to write the script this week but will do so next week.

Also for the project this week I looked through the spreadsheet of borrowing records from the Royal High School of Edinburgh that one of the RAs has been preparing.  I had a couple of questions about the spreadsheet, and I’m hoping to be able to process it next week.  I also exported the records from one register for Gerry McKeever to work on, as these records now need to be split across two volumes rather than one.

Also this week I had an email conversation with Marc Alexander about a few issues, during which he noted that the Historical Thesaurus website was offline.  Further investigation revealed that the entire server was offline, meaning several other websites were down too.  I asked Arts IT Support to look into this, which took a little time as it was a physical issue with the hardware and they were all still working remotely.  However, the following day they were able to investigate and address the issue, which they reckon was caused by a faulty network port.

Week Beginning 20th December 2021

This was the last week before Christmas and it’s a four-day week as the University has generously given us all an extra day’s holiday on Christmas Eve.  I also lost a bit of time due to getting my Covid booster vaccine on Wednesday.  I was booked in for 9:50 and got there at 9:30 to find a massive queue snaking round the carpark.  It took an hour to queue outside, plus about 15 minutes inside, but I finally got my booster just before 11.  The after-effects kicked in during Wednesday night and I wasn’t feeling great on Thursday, but I managed to work.

My major task of the week was to deal with the new Innerpeffray data for the Books and Borrowing project.  I'd previously uploaded data from an existing spreadsheet in the early days of the project, but it turned out that there were quite a lot of issues with the data, and therefore one of the RAs has been creating a new spreadsheet containing reworked data.  The RA, Kit, got back to me this week after I'd checked some issues with her last week, and I therefore began the process of deleting the existing data and importing the new data.

It was a pretty torturous process, but I managed to finish deleting the existing Innerpeffray data and import the new data.  This required a fair amount of complex processing and checking via a script I wrote this week.  I managed to retain superscript characters in the transcriptions, something that proved to be very tricky as there is no way to find and replace superscript characters in Excel.  Eventually I ended up copying the transcription column into Word, then saving the table as HTML, stripping out all of the rubbish Word adds in when it generates an HTML file and then using this resulting file alongside the main spreadsheet file that I saved as a CSV.  After several attempts at running the script on my local PC, then fixing issues, then rerunning, I eventually reckoned the script was working as it should – adding page, borrowing, borrower, borrower occupation, book holding and book item records as required.  I then ran the script on the server and the data is now available via the CMS.
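
For anyone facing the same superscript problem, the gist of the approach can be sketched as follows (illustrative only: the filenames are made up, 'csv-parse' is just one of several CSV libraries that could be used, and the real import script does a great deal more checking than this):

const fs = require('fs');
const { parse } = require('csv-parse/sync');

// The cleaned-up Word HTML export of the transcription column: one <td> per
// spreadsheet row.  Strip every tag except <sup>...</sup> so the superscript
// markup survives.
const html = fs.readFileSync('transcriptions.html', 'utf8');
const cells = [...html.matchAll(/<td[^>]*>([\s\S]*?)<\/td>/gi)].map(m =>
  m[1].replace(/<(?!\/?sup\b)[^>]+>/gi, '').trim()
);

// The main spreadsheet saved as CSV; pair each row with its transcription.
const rows = parse(fs.readFileSync('innerpeffray.csv', 'utf8'), { columns: true });
rows.forEach((row, i) => {
  row.transcription = cells[i];
});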

There were a few normalised occupations that weren't right and I updated these.  There were also 287 standardised titles that didn't match any existing book holding records in Innerpeffray.  For these I created a new holding record and (where an ESTC number was present) linked it to a corresponding edition.

Also this week I completed work on the 'Guess the Category' quizzes for the Historical Thesaurus.  Fraser had got back to me about the spreadsheets of categories and lexemes that might cause offence and should therefore never appear in the quiz.  I added a new 'inquiz' column to both the category and lexeme tables, which has been set to 'N' for each matching category and lexeme.  I also updated the code behind the quiz so that only categories and lexemes with 'inquiz' set to 'Y' are picked up.
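
In outline the exclusion mechanism looks like this (a sketch only: table and column names other than 'inquiz' are assumptions, and 'db.query' stands in for whatever database layer the quiz code actually uses):

// One-off schema change (MySQL-style), defaulting everything to quiz-eligible:
//   ALTER TABLE category ADD COLUMN inquiz CHAR(1) NOT NULL DEFAULT 'Y';
//   ALTER TABLE lexeme   ADD COLUMN inquiz CHAR(1) NOT NULL DEFAULT 'Y';
// The flagged categories and lexemes are then set to 'N'.

// The quiz picks a category first, then a lexeme within it, and both queries
// filter on inquiz = 'Y', so lexemes in excluded categories are never reached.
async function pickQuizCategory(db) {
  const rows = await db.query(
    "SELECT * FROM category WHERE inquiz = 'Y' ORDER BY RAND() LIMIT 1");
  return rows[0];
}

async function pickQuizLexeme(db, categoryId) {
  const rows = await db.query(
    "SELECT * FROM lexeme WHERE catid = ? AND inquiz = 'Y' ORDER BY RAND() LIMIT 1",
    [categoryId]);
  return rows[0];
}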

The category exclusions are pretty major – a total of 17,111 categories are now excluded.  This is due to including child categories where noted, and 8,340 of these are within '03.08 Faith'.  For lexemes there are a total of 2,174 that are specifically noted as excluded based on both tabs of the spreadsheet (but note that all lexemes in excluded categories are excluded by default – a total of 69,099).  The quiz picks a category first and then a lexeme within it, so there should never be a case where a lexeme in an excluded category is displayed.  I also ensured that when a non-noun category is returned and there isn't a full trail of categories (because there isn't a parent in the same part of speech), the trail is populated from the noun categories instead.

The two quizzes (a main one and an Old English one) are now live and can be viewed here:

https://ht.ac.uk/guess-the-category/

https://ht.ac.uk/guess-the-oe-category/

Also this week I made a couple of tweaks to the Comparative Kingship place-names systems, adding in Pictish as a language and tweaking how ‘codes’ appear in the map.  I also helped Raymond migrate the Anglo-Norman Dictionary to the new server that was purchased earlier this year.  We had to make a few tweaks to get the site to work at a temporary URL but it’s looking good now.  We’ll update the DNS and make the URL point to the new server in the New Year.

That’s all for this year.  If there is anyone reading this (doubtful, I know) I wish you a merry Christmas and all the best for 2022!

Week Beginning 13th December 2021

My big task of the week was to return to working on the Speak For Yersel project after a couple of weeks when my services hadn't been required.  I had a meeting with PI Jennifer Smith and RA Mary Robinson on Monday where we discussed the current status of the project and the tasks I should focus on next.  Mary had finished work on the geographical areas we are going to use.  These are based on postcode areas, but a number of areas have been amalgamated.  We'll use these to register where a participant is from and also to generate a map marker representing their responses at a random location within their selected area, based on the research I did a few weeks ago into randomly positioning a marker within a polygon.

The original files that Mary sent me were two exports from ArcGIS, one as JSON and one as GeoJSON.  Unfortunately both files used a coordinate system other than latitude and longitude; the GeoJSON file didn't include any identifiers for the areas so couldn't really be used, and while the JSON file looked promising, when I tried to use it in Leaflet it gave me an 'invalid GeoJSON object' error.  Mary then sent me the original ArcGIS file to work with and I spent some time in ArcGIS figuring out how to export the shapefile data as GeoJSON with latitude and longitude.

Using ArcGIS I exported the data by typing 'to json' in the 'Geoprocessing' pane on the right of the map and then selecting 'Features to JSON'.  I selected 'output to GeoJSON' and also checked 'Project to WGS_1984', which converts the ArcGIS coordinates to latitude and longitude.  When not using the 'formatted JSON' option (which adds in line breaks and tabs) this gave me a file size of 115MB.  As a starting point I created a Leaflet map that uses this GeoJSON file, but I ran into a bit of a problem: the data takes a long time to load into the map – about 30-60 seconds for me – and the map feels a bit sluggish to navigate around even after it's loaded in, and this is without there being any actual data.  The map is going to be used by school children, potentially on low-spec mobile devices connecting to slow internet services (or even worse, mobile data that they may have to pay for per MB).  We may have to think about whether using these areas is going to be feasible.  An option might be to reduce the detail in the polygons, which would reduce the size of the JSON file.  The boundaries in the current file are extremely detailed and each twist and turn in the polygon requires a latitude / longitude pair in the data, and there are a lot of twists and turns.  The polygons we used in SCOSYA are much more simplified (see for example https://scotssyntaxatlas.ac.uk/atlas/?j=y#9.75/57.6107/-7.1367/d3/all/areas) but would still suit our needs well enough.  However, manually simplifying each and every polygon would be a monumental and tedious task, but perhaps there's a method in ArcGIS that could do this for us.  There's a tool called 'Simplify Polygon' (https://desktop.arcgis.com/en/arcmap/latest/tools/cartography-toolbox/simplify-polygon.htm) which might work.
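
For reference, the Leaflet side of this test is only a few lines (a sketch, assuming the export is served as 'areas.geojson' next to the page); it's the sheer size of the unsimplified file that causes the 30-60 second load and the sluggishness, not the code:

// Basic map with an OpenStreetMap base layer.
const map = L.map('map').setView([57.5, -4.5], 6);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// Fetch the exported GeoJSON and add the area polygons to the map.
fetch('areas.geojson')
  .then(response => response.json())
  .then(data => L.geoJSON(data).addTo(map));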

I spoke to Mary about this and she agreed to experiment with the 'Simplify Polygon' tool.  Whilst she worked on this I continued to work with the data.  I extracted all of the 411 areas and stored them in a database, together with all 954 postcode components that are related to these areas.  This will allow us to generate a drop-down list of options as the user types – e.g. type in 'G43' and the options 'G43 2' and 'G43 3' will appear, both of which are associated with 'Glasgow South'.
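
The lookup behind that drop-down is simple once the postcode components are in the database.  A sketch of the idea, assuming the 954 components have been pulled out as objects pairing a component with its area:

// e.g. components = [{ component: 'G43 2', area: 'Glasgow South' },
//                    { component: 'G43 3', area: 'Glasgow South' }, ...]
function lookupPostcode(components, typed) {
  const query = typed.trim().toUpperCase();
  if (!query) return [];
  return components.filter(c => c.component.toUpperCase().startsWith(query));
}

// lookupPostcode(components, 'G43') returns both 'G43 2' and 'G43 3',
// each linked to 'Glasgow South'.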

I also wrote a script to generate sample data for each of the 411 areas using the ‘turf.js’ script I’d previously used.  For each of the 411 areas a random number of markers between 0 and 100 are generated and stored in the database, each with a random rating of between 1 and 4.  This has resulted in 19946 sample ratings, which I then added to the map along with the polygonal area data, as you can see here:

Currently these are given the colours red=1, orange=2, light blue=3, dark blue=4, purely for test purposes.  As you can see, including almost 20,000 markers swamps the map when it’s zoomed out, but when you zoom in things look better.  I also realised that we might not even need to display the area boundaries to users.  They can be used in the background to work out where a marker should be positioned (as is the case with the map above) but perhaps they’re not needed for any other reasons?  It might be sufficient to include details of area in a popup or sidebar and if so we might not need to rework the areas at all.
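
The marker placement itself uses the approach I researched earlier: pick a random point in the polygon's bounding box and keep it only if it actually falls inside the polygon.  A minimal turf.js sketch (not the project's actual code; 'areaFeature' is an assumed GeoJSON feature for one of the 411 areas):

// Rejection sampling: random point in the bounding box, retried until it
// lands inside the polygon.
function randomPointInPolygon(areaFeature) {
  const bbox = turf.bbox(areaFeature);
  let point;
  do {
    point = turf.randomPoint(1, { bbox: bbox }).features[0];
  } while (!turf.booleanPointInPolygon(point, areaFeature));
  return point;
}

// Each sample marker also gets a random rating between 1 and 4.
const marker = randomPointInPolygon(areaFeature);
const rating = Math.floor(Math.random() * 4) + 1;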

However, whilst working on this Mary had created four different versions of the area polygons using four different algorithms.  These differ in how they simplify the polygons and therefore result in different boundaries – some missing out details such as lochs and inlets.  All four versions were considerably smaller in file size than the original, ranging from 4MB to 20MB.  I created new maps for each of the four simplified polygon outputs.  For each of these I regenerated new random marker data.  For algorithms 'DP' and 'VW' I limited the number of markers to between 0 and 20 per area, giving around 4,000 markers in each map.  For 'WM' and 'ZJ' I limited the number to between 0 and 50 per area, giving around 10,000 markers per map.

All four new maps look pretty decent to me, with even the smaller JSON files (‘DP’ and ‘VW’) containing a remarkable level of detail.  I think the ‘DP’ one might be the one to go for.  It’s the smallest (just under 4MB compared to 115MB for the original) yet also seems to have more detail than the others.  For example for the smaller lochs to the east of Loch Ness the original and ‘DP’ include the outline of four lochs while the other three only include two.  ‘DP’ also includes more of the smaller islands around the Outer Hebrides.

We decided that we don’t need to display the postcode areas on the map to users but instead we’ll just use these to position the map markers.  However, we decided that we do want to display the local authority area so people have a general idea of where the markers are positioned.  My next task was to add these in.  I downloaded the administrative boundaries for Scotland from here: https://raw.githubusercontent.com/martinjc/UK-GeoJSON/master/json/administrative/sco/lad.json as referenced on this website: https://martinjc.github.io/UK-GeoJSON/ and added them into my ‘DP’ sample map, giving the boundaries a dashed light green that turns a darker green when you hover over the area, as you can see from the screenshot below:

Also this week I added in a missing text to the Anglo-Norman Dictionary’s Textbase.  To do this I needed to pass the XML text through several scripts to generate page records and all of the search words and ‘keyword in context’ data for search purposes.  I also began to investigate replacing the Innerpeffray data for Books and Borrowing with a new dataset that Kit has worked on.  This is going to be quite a large and complicated undertaking and after working through the data I had a set of questions to ask Kit before I proceeded to delete any of the existing data.  Unfortunately she is currently on jury duty so I’ll need to wait until she’s available again before I can do anything further.  Also this week a huge batch of images became available to us from the NLS and I spent some time downloading these and moving them to an external hard drive as they’d completely filled up the hard drive of my PC.

I also spoke to Fraser about the new radar diagrams I had been working on for the Historical Thesaurus and also about the ‘guess the category’ quiz that we’re hoping to launch soon.  Fraser sent on a list of categories and words that we want to exclude from the quiz (anything that might cause offence) but I had some questions about this that will need clarification before I take things further.  I’d suggested to Fraser that I could update the radar diagrams to include not only the selected category but also all child categories and he thought this would be worth investigating so I spent some time updating the visualisations.

I was a little worried about the amount of processing that would be required to include child categories but thankfully things seem pretty speedy, even when multiple top-level categories are chosen.  See for example the visualisation of everything within ‘Food and drink’, ‘Faith’ and ‘Leisure’:

This brings back many tens of thousands of lexemes but doesn’t take too long to generate.  I think including child categories will really help make the visualisations more useful as we’re now visualising data at a scale that’s very difficult to get a grasp on simply by looking at the underlying words.  It’s interesting to note in the above visualisation how ‘Leisure’ increases in size dramatically throughout the time periods while ‘Faith’ shrinks in comparison (but still grows overall).  With this visualisation the ‘totals’ rather than the ‘percents’ view is much more revealing.

Week Beginning 6th December 2021

I spent a bit of time this week writing a second draft of a paper for DH2022 after receiving feedback from Marc.  This one targets 'short papers' (500-750 words) and I managed to get it submitted before the deadline on Friday.  Now I'll just need to see if it gets accepted – I should find out one way or the other in February.  I also made some further tweaks to the locution search for the Anglo-Norman Dictionary, ensuring that when a term appears more than once the result is repeated for each occurrence, appearing in the results grouped under each word that matches the term.  So for example 'quatre tempres, tens' now appears twice, once amongst the 'tempres' results and once amongst the 'tens' results.

I also had a chat with Heather Pagan about the Irish Dictionary eDIL (http://www.dil.ie/), whose team are hoping to rework the way they handle dates in a similar way to the AND.  I said that it would be difficult to estimate how much time it would take without seeing their current data structure and getting more of an idea of how they intend to update it: what updates would be required to their online resource to incorporate the updated date structure (such as enhanced search facilities), whether further updates to their resource would also be part of the process, and whether any back-end systems would also need to be updated to manage the new data (e.g. if they have a DMS like the AND).

Also this week I helped out with some issues with the Iona place-names website just before their conference started on Thursday.  Someone had reported that the videos of the sessions were only playing briefly and then cutting out, but they all seemed to work for me, having tried them on my PC in Firefox and Edge and on my iPad in Safari.  Eventually I managed to replicate the issue in Chrome on my desktop and in Chrome on my phone; it seemed to be an issue specifically related to Chrome, and didn't affect Edge, even though Edge is based on Chrome.  The video file plays and then cuts out due to the file being blocked on the server.  I can only assume that the way Chrome accesses the file is different to other browsers: it sends multiple requests to the server, which then blocks access due to too many requests being sent (the console in the browser shows a 403 Forbidden error).  Thankfully Raymond at Arts IT Support was able to increase the number of connections allowed per browser and this fixed the issue.  It's still a bit of a strange one, though.

I also had a chat with the DSL people about when we might be able to replace the current live DSL site with the ‘new’ site, as the server the live site is on will need to be decommissioned soon.  I also had a bit of a catch-up with Stevie Barrett, the developer in Celtic and Gaelic, and had a video call with Luca and his line-manager Kirstie Wild to discuss the current state of Digital Humanities across the College of Arts.  Luca does a similar job to me at college-level and it was good to meet him and Kirstie to see what’s been going on outside of Critical Studies.  I also spoke to Jennifer Smith about the Speak For Yersel project, as I’d not heard anything about it for a couple of weeks.  We’re going to meet on Monday to take things further.

I spent the rest of the week working on the radar diagram visualisations for the Historical Thesaurus, completing an initial version.  I’d previously created a tree browser for the thematic headings, as I discussed last week.  This week I completed work on the processing of data for categories that are selected via the tree browser.  After the data is returned the script works out which lexemes have dates that fall into the four periods (e.g. a word with dates 650-9999 needs to appear in all four periods).  Words are split by Part of speech, and I’ve arranged the axes so that N, V, Aj and Av appear first (if present), with any others following on.  All verb categories have also been merged.
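
The period logic boils down to a simple overlap test between a word's date range and the four fixed periods.  A small sketch of the idea (JavaScript; 9999 is the open-ended end date used in the data):

// The four broad periods: OE (up to 1149), ME (1150-1449),
// EModE (1450-1799) and ModE (1800 onwards).
const PERIODS = [
  { label: 'OE',    start: -Infinity, end: 1149 },
  { label: 'ME',    start: 1150,      end: 1449 },
  { label: 'EModE', start: 1450,      end: 1799 },
  { label: 'ModE',  start: 1800,      end: Infinity }
];

// A word is counted in every period its date range overlaps.
function periodsFor(firstDate, lastDate) {
  return PERIODS
    .filter(p => firstDate <= p.end && lastDate >= p.start)
    .map(p => p.label);
}

// periodsFor(650, 9999) -> ['OE', 'ME', 'EModE', 'ModE']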

I'm still not sure how widely useful these visualisations will be as they only really work for categories that have several parts of speech.  But there are some nice ones.  See for example a visualisation of 'Badness/evil', 'Goodness, acceptability' and 'Mediocrity', which shows words for 'Badness/evil' being much more prevalent in OE and ME while 'Mediocrity' barely registers, only for it and 'Goodness, acceptability' to grow in relative size in EModE and ModE:

I also added in an option to switch between visualisations which use total counts of words in each selected category's parts of speech and visualisations that use percentages.  With the latter the scale is fixed at a maximum of 100% across all periods and the points on the axes represent the percentage of the total words in a category that are in a part of speech in your chosen period.  This means categories of different sizes are easier to compare, but it does of course mean that the relative sizes of categories are not visualised.  I could also add a further option that fixes the scale at the maximum number of words in the largest POS, so the visualisation still represents relative sizes of categories but the scale doesn't fluctuate between periods (e.g. if there are 363 nouns for a category across all periods then the maximum on the scale would stay fixed at 363 across all periods, even if the maximum number of nouns in OE, for example, is 128).  Here's the above visualisation using the percentage scale:

The other thing I did was to add in a facility to select a specific category and turn off the others.  So for example if you’ve selected three categories you can press on a category to make it appear bold in the visualisation and to hide the other categories.  Pressing on a category a second time reverts back to displaying all.  Your selection is remembered if you change the scale type or navigate through the periods.  I may not have much more time to work on this before Christmas, but the next thing I’ll do is to add in access to the lexeme data behind the visualisation.  I also need to fix a bug that is causing the ModE period to be missing a word in its counts sometimes.


Week Beginning 29th November 2021

I participated in the UCU strike action on Wednesday to Friday this week, so it was a two-day working week for me.  During this time I gave some help to the students who are migrating the International Journal of Scottish Theatre and Screen and talked to Gerry Carruthers about another project he’s hoping to put together.  I also passed on information about the DNS update to the DSL’s IT people, added a link to the DSL’s new YouTube site to the footer of the DSL site and dealt with a query regarding accessing the DSL’s Google Analytics data.  I also spoke with Luca about arranging a meeting with him and his line manager to discuss digital humanities across the college and updated the listings for several Android apps that I created a few years ago that had been taken down due to their information being out of date.  As central IT services now manages the University Android account I hadn’t received notifications that this was going to take place.  Hopefully the updates have done the trick now.

Other than this I made some further updates to the Anglo-Norman Dictionary's locution search that I created last week.  This included changing the ordering to list results by the word that was searched for rather than by headword, changing the way the search works so that a wildcard search such as 'te*' now matches the start of any word in the locution phrase rather than just the first word, and fixing a number of bugs that had been spotted.

I spent the rest of my available time starting to work on an interactive version of the radar diagram for the Historical Thesaurus.  I'd made a static version of this a couple of months ago which looks at the words in an HT category by part of speech and visualises how the numbers of words in each POS change over time.  What I needed to do was find a way to allow users to select their own categories to visualise.  We had decided to use the broader Thematic Categories for the feature rather than regular HT categories, so my first task was to create a Thematic Category browser from 'AA The World' to 'BK Leisure'.  It took a bit of time to rework the existing HT category browser to work with thematic categories, and also to then enable the selection of multiple categories by pressing on the category name.  Selected categories appear to the right of the browser, and I added in an option to remove a selected category if required.  With this in place I began work on the code to actually grab and process the data for the selected categories.  This finds all lexemes and their associated dates in each HT category within each of the selected thematic categories.  For now the data is just returned and I'm still in the middle of processing the dates to work out which period each word needs to appear in.  I'll hopefully find some time to continue with this next week.  Here's a screenshot of the browser:

Week Beginning 15th November 2021

I had an in-person meeting for the Historical Thesaurus on Tuesday this week – the first such meeting I’ve had since the first lockdown began.  It was a much more enjoyable experience than Zoom-based calls and we had some good discussions about the current state of the HT and where we will head next.  I’m going to continue to work on my radar chart visualisations when I have the time and we will hopefully manage to launch a version of the quiz before Christmas.  There has also been some further work on matching categories and we’ll be looking into this in the coming months.

We also discussed the Digital Humanities conference, which will be taking place in Tokyo next summer.  This is always a really useful conference for me to attend and I wondered about writing a paper about the redevelopment of the Anglo-Norman Dictionary.  I’m not sure at this point whether we would be able to afford to send me to the conference, and the deadline for paper submission is the end of this month.  I did start looking through these blog posts and I extracted all of the sections that relate to the redevelopment of the site.  It’s almost 35,000 words over 74 pages, which shows you how much effort has gone into the redevelopment process.

I also had a meeting with Gerry Carruthers and others about the setting up of an archive for the International Journal of Scottish Theatre and Screen.  I’d set up a WordPress site for this and explored how the volumes, issues and articles could be migrated over from PDFs.  We met with the two students who will now do the work.  I spent the morning before the meeting preparing an instruction document for the students to follow and at the meeting I talked through the processes contained in the document.  Hopefully it will be straightforward for the students to migrate the PDFs, although I suspect it may take them an article or two before they get into the swing of things.

Also this week I fixed an issue with the search results tabs in the left-hand panel of the entry page on the DSL website.  There’s a tooltip on the ‘Up to 1700’ link, but on narrow screens the tooltip was ending up positioned over the link, and when you pressed on it the code was getting confused as to whether you’d pressed on the link or the tooltip.  I repositioned the tooltips so they now appear above the links, meaning they should no longer get in the way on narrow screens.  I also looked into an issue with the DSL’s Paypal account, which wasn’t working.  This turned out to be an issue on the Paypal side rather than with the links through from the DSL’s site.

I also had to rerun the varlist date scripts for the AND as we’d noticed that some quotations had a structure that my script was not set up to deal with.  The expected structure is something like this:

<quotation>ou ses orribles pates paracrosçanz <varlist><ms_var id="V-43aaf04a" usevardate="true"><ms_form>par acros</ms_form><ms_wit>BN</ms_wit><ms_date post="1300" pre="1399">s.xiv<sup>in</sup></ms_date></ms_var></varlist> e par ateinanz e par encrés temptacions</quotation>

Where there is one varlist in the quotation, containing one or more ms_var tags.  But the entry ‘purprestur’ has multiple separate varlists in the quotation:

<quotation>Endreit de purprestures voloms qe les nusauntes <varlist><ms_var id="V-66946b02"><ms_form>nusantes porprestures</ms_form><ms_wit>W</ms_wit><ms_date>s.xiv</ms_date></ms_var></varlist> soint ostez a coustages de ceux qi lé averount fet <varlist><ms_var id="V-67f91f67"><ms_form>des provours</ms_form><ms_wit>A</ms_wit><ms_date>s.xiv</ms_date></ms_var><ms_var id="V-ea466d5e"><ms_form>des fesours</ms_form><ms_wit>W</ms_wit><ms_date>s.xiv</ms_date></ms_var><ms_var id="V-88b4b5c2" usevardate="true"><ms_form>dé purpresturs</ms_form><ms_wit>M</ms_wit><ms_date post="1300" pre="1310">s.xiv<sup>in</sup></ms_date></ms_var><ms_var id="V-769400cd"><ms_form>des purpernours</ms_form><ms_wit>C</ms_wit><ms_date>s.xiv<sup>1/3</sup></ms_date></ms_var></varlist> </quotation>

I wasn't aware that this was a possibility, so my script wasn't set up to catch such situations.  It therefore only looked at the first <varlist>, and as the <ms_var> that needs to be used for dating isn't contained in this, it got missed.  I therefore updated the script and have run both spreadsheets through it again.  I also updated the DMS so that quotations with multiple varlists can be processed.
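
The fix amounts to walking over every <ms_var> in the quotation, regardless of which <varlist> it sits in, and using the date from the one flagged with usevardate.  A rough sketch of the logic (browser-style DOMParser used purely for illustration; the real scripts run server-side):

function variantDateForQuotation(quotationXml) {
  const doc = new DOMParser().parseFromString(quotationXml, 'text/xml');
  // getElementsByTagName picks up ms_var elements from every varlist in the
  // quotation, not just the first one.
  for (const msVar of doc.getElementsByTagName('ms_var')) {
    if (msVar.getAttribute('usevardate') === 'true') {
      const date = msVar.getElementsByTagName('ms_date')[0];
      return {
        post: date.getAttribute('post'),
        pre: date.getAttribute('pre'),
        display: date.textContent
      };
    }
  }
  return null; // no flagged variant: fall back to the main citation date
}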

Also this week I updated all of the WordPress sites I manage and helped set up the Our Heritage, Our Stories site, and had a further discussion with Sofia about the conference pages for the Iona place-names project.

I spent the rest of the week continuing to work on the mockups for the Speak For Yersel project, creating a further mockup of the grammar quiz that now features all of the required stages.  The 'word choice' type of question now has a slightly different layout, with buttons closer together in a block, and after answering the second question there is now an 'Explore the answers' button under the map.  Pressing on this loads the summary maps for each question, which are not live maps yet, and underneath the maps is a button for starting the quiz.  There isn't enough space to have a three-column layout for the quiz so I've placed the quiz above the summary maps.  The progress bar also gets reinstated for the quiz and I've added the text 'Use the maps below to help you' just to make it clearer what those buttons are for.  The 'Q1', 'Q2' IDs will probably need to be altered as they make it look like the map refers to a particular question in the quiz, which isn't the case.  It's possible to keep a map open between quiz questions, and when you press an answer button the ones you didn't press get greyed out.  If your choice is correct you get a tick, and if not you get a cross and the correct answer gets a tick.  The script keeps track of which questions have been answered correctly in the background, and I haven't implemented a timer yet.  After answering all of the questions (there don't need to be six – the code will work with any number) you can finish the section, which displays your score and the ranking.  Here is a screenshot of how the quiz currently looks:

Week Beginning 27th September 2021

I had two Zoom calls on Monday this week.  The first was with the Burns people to discuss the launch of the website for the ‘letters and poems’ part of ‘Editing Burns’, to complement the existing ‘Prose and song’ website (https://burnsc21.glasgow.ac.uk/).  The new website will launch in January with some video content and blogs, plus I will be working on a content management system for managing the network of Burns’ letter correspondents, which I will put together some time in November, assuming the team can send me on some sample data by then.  This system will eventually power the ‘Burns letter writing trail’ interactive maps that I’ll create for the new site sometime next year.

My second Zoom call was for the Books and Borrowing project to discuss adding data from a new source to the database.  The call gave us an opportunity to discuss the issues with the data that I’d highlighted last week.  It was good to catch up with the team again and to discuss the issues with the researcher who had originally prepared the spreadsheet containing the data.  We managed to address all of the issues and the researcher is going to spend a bit of time adapting the spreadsheet before sending it to me to be batch uploaded into our system.

I spent some further time this week investigating the issue of some of the citation dates in the Anglo-Norman Dictionary being wrong, as discussed last week.  The issue affects some 4,309 entries where at least one citation features the form only in a variant text.  This means that the citation date should not be the date of the manuscript in the citation, but the date of the variant manuscript.  Unfortunately this situation was never flagged in the XML, and there was no means of flagging it.  The variant date should only ever be used when the form of the word in the main manuscript is not directly related to the entry in question but the form in the variant text is.  The problem is that it cannot be automatically ascertained when the form in the main manuscript is the relevant one and when the form in the variant text is, as there is so much variation in forms.

For example, in the entry https://anglo-norman.net/entry/bochet_1 there is a form 'buchez' in a citation and then two variant texts for this where the form is 'huchez' and 'buistez'.  None of these forms are listed in the entry's XML as variants so it's not possible for a script to automatically deduce which is the correct date to use (the closest is 'buchet').  In this case the main citation form and its corresponding date should be used.  Whereas in the entry https://anglo-norman.net/entry/babeder the main citation form is 'gabez' while the variant text has 'babedez', and so this is the form and corresponding date that needs to be used.  It would be difficult for a script to automatically deduce this.  In this case a Levenshtein test (which tests how many letters need to be changed to turn one string into another) could work, but this would still need to be manually checked.

The editor wanted me to focus on those entries where the date issue affects the earliest date for an entry, as these are the most important: the issue results in an incorrect date being displayed for the entry in the header and the browse feature.  I wrote a script that finds all entries that feature '<varlist' somewhere in the XML (the previously exported 4,309 entries).  It then goes through all attestations (in all sense, subsense and locution sense and subsense sections) to pick out the one with the earliest date, exactly as the code for publishing an entry does.  It then checks the quotation XML for the attestation with the earliest date for the presence of '<varlist', and if it finds this it outputs information for the entry, consisting of the slug, the earliest date as recorded in the database, the earliest date of the attestation as found by the script, the ID of the attestation and then the XML of the quotation.  The script has identified 1,549 entries that have a varlist in the earliest citation, all of which will need to be edited.

However, every citation has a date associated with it and this is used in the advanced search where users have the option to limit their search to years based on the citation date.  Only updating citations that affect the entry’s earliest date won’t fix this, as there will still be many citations with varlists that haven’t been updated and will still therefore use the wrong date in the search.  Plus any future reordering of citations would require all citations with varlists to be updated to get entries in the correct order.  Fixing the earliest citations with varlists in entries based on the output of my script will fix the earliest date as used in the header of the entry and the ‘browse’ feature only, but I guess that’s a start.

Also this week I sorted out some access issues for the RNSN site, submitted the request for a new top-level ‘ac.uk’ domain for the STAR project and spent some time discussing the possibilities for managing access to videos of the conference sessions for the Iona place-names project.  I also updated the page about the Scots Dictionary for Schools app on the DSL website (https://dsl.ac.uk/our-publications/scots-dictionary-for-schools-app/) after it won the award for ‘Scots project of the year’.

I also spent a bit of time this week learning about the statistical package R (https://www.r-project.org/).  I downloaded and installed the package and the R Studio GUI and spent some time going through a number of tutorials and examples in the hope that this might help with the ‘Speak for Yersel’ project.

For a few years now I’ve been meaning to investigate using a spider / radar chart for the Historical Thesaurus, but I never found the time.  I unexpectedly found myself with some free time this week due to ‘Speak for Yersel’ not needing anything from me yet so I thought I’d do some investigation.  I found a nice looking d3.js template for spider / radar charts here: http://bl.ocks.org/nbremer/21746a9668ffdf6d8242  and set about reworking it with some HT data.

My idea was to use the chart to visualise the distribution of words in one or more HT categories across different parts of speech in order to quickly ascertain the relative distribution and frequency of words.  I wanted to get an overall picture of the makeup of the categories initially, but to then break this down into different time periods to understand how categories changed over time.

As an initial test I chose the categories 02.04.13 Love and 02.04.14 Hatred, and in this initial version I looked only at the specific contents of the categories – no subcategories and no child categories.  I manually extracted counts of the words across the various parts of speech and then manually split them up into words that were active in four broad time periods: OE (up to 1149), ME (1150-1449), EModE (1450-1799) and ModE (1800 onwards) and then plotted them on the spider / radar chart, as you can see in this screenshot:

You can quickly move through the different time periods plus the overall picture using the buttons above the visualisation, and I think the visualisation does a pretty good job of giving you a quick and easy to understand impression of how the two categories compare and evolve over time, allowing you to see, for example, how the number of nouns and adverbs for love and hate are pretty similar in OE:

but by ModE the number of nouns for Love has dropped dramatically, as has the number of adverbs for Hate:

We are of course dealing with small numbers of words here, but even so it’s much easier to use the visualisation to compare different categories and parts of speech than it is to use the HT’s browse interface.  Plus if such a visualisation was set up to incorporate all words in child categories and / or subcategories it could give a very useful overview of the makeup of different sections of the HT and how they develop over time.

There are some potential pitfalls to this visualisation approach, however.  The scale currently changes based on the largest word count in the chosen period, meaning that unless you're paying attention you might get the wrong impression of the number of words.  I could change it so that the scale is always fixed at the largest count, but that would then make it harder to make out details in periods that have far fewer words.  Also, I suspect most categories are going to have many more nouns than other parts of speech, and a large spike of nouns can make it harder to see what's going on with the other axes.  Another thing to note is that the order of the axes is fairly arbitrary but can have a major impact on how someone may interpret the visualisation.  If you look at the OE chart the 'Hate' area looks massive compared to the 'Love' area, but this is purely because there is only one 'Love' adjective compared to 5 for 'Hate'.  If the adverb axis had come after the noun one instead, the shapes of 'Love' and 'Hate' would have been more similar.  You don't necessarily appreciate on first glance that 'Love' and 'Hate' have very similar numbers of nouns in OE, which is concerning.  However, I think the visualisations have potential for the HT and I've emailed the other HT people to see what they think.


Week Beginning 24th May 2021

I had my first dose of the Covid vaccine on Tuesday morning this week (the AstraZeneca one), so I lost a bit of time whilst going to get that done.  Unfortunately I had a bit of a bad reaction to it and ended up in bed all day Wednesday with a pretty nasty fever.  I had Covid in October last year but only experienced mild symptoms and wasn’t even off work for a day with it, so in my case the cure has been much worse than the disease.  However, I was feeling much better again by Thursday, so I guess I lost a total of about a day and a half of work, which is a small price to pay if it helps to ensure I don’t catch Covid again and (what would be worse) pass it on to anyone else.

In terms of work this week I continued to work on the Anglo-Norman Dictionary, beginning with a few tweaks to the data builder that I had completed last week.  I’d forgotten to add a bit of processing to the MS date that was present in the Text Date section to handle fractions, so I added that in.  I also updated the XML output so that ‘pref’ and ‘suff’ only appear if they have content now, as the empty attributes were causing issues in the XML editor.

I then began work on the largest outstanding task I still have to tackle for the project: the migration of the textbase texts to the new site.  There are about 80 lengthy XML digital editions on the old site that can be searched and browsed, and I need to ensure these are also available on the new site.  I managed to grab a copy of all of the source XML files and I tracked down a copy of the script that the old site used to process the files.  At least I thought I had.  It turned out that this file actually references another file that must do most of the processing, including the application of an XSLT file to transform the XML into HTML, which is the thing I really could do with getting access to.  Unfortunately this file was not in the data from the server that I had been given access to, which somewhat limited what I could do.  I still have access to the old site and whilst experimenting with the old textbase I managed to make it display an error message that gives the location of the file: [DEBUG: Empty String at /var/and/reduce/and-fetcher line 486. ].  With this location available I asked Heather, the editor who has access to the server, if she might be able to locate this file and others in the same directory.  She had to travel to her University in order to be able to access the server, but once she did she was able to track the necessary directory down and get a copy to me.  This also included the XSLT file, which will help a lot.

I wrote a script to process all of the XML files, extracting titles, bylines, imprints, dates, copyright statements and splitting each file up into individual pages.  I then updated the API to create the endpoints necessary to browse the texts and navigate through the pages, for example the retrieval of summary data for all texts, or information about a specified texts, or information about a specific page (including its XML).  I also began working on a front-end for the textbase, which is still very much in progress.  Currently it lists all texts with options to open a text at the first available page or select a page from a drop-down list of pages.  There are also links directly into the AND bibliography and DEAF where applicable, as the following screenshot demonstrates:

It is also possible to view a specific page, and I've completed work on the summary information about the text and a navbar through which it's possible to navigate through the pages (or jump directly to a different page entirely).  What I haven't yet tackled is the processing of the XML, which is going to be tricky and which I hope to delve into next week.  Below is a screenshot of the page view as it currently looks, with the raw XML displayed.

I also investigated and fixed an issue the editor Geert spotted, whereby the entire text of an entry was appearing in bold.  The issue was caused by an empty <link_form/> tag.  In the XSLT each <link_form> becomes a bold tag <b> with the content of the link form in the middle.  As there was no content it became a self-closed tag <b/> which is valid in XML but not valid in HTML, where it was treated as an opening tag with no corresponding closing tag, resulting in the remainder of the page all being bold.  I got around this by placing the space that preceded the bold tag “ <b></b>” within the bold tag instead “<b> </b>” meaning the tag is no longer considered empty and the XSLT doesn’t self-close it, but ideally if there is no <link_form> then the tag should just be omitted, which would also solve the problem.

I also looked into an issue with the proofreader that Heather encountered.  When she uploaded a ZIP file with around 50 entries in it, some of the entries wouldn't appear in the output but would just display their title.  The missing entries would be random, without any clear reason as to why some were missing.  After some investigation I realised what the problem was: each time an XML file was processed for display the DTD referenced in the file was being checked.  When processing lots of files all at once this exceeded the maximum number of file requests the server allows from a specific client and temporarily blocked access to the DTD, causing the processing of some of the XML files to silently fail.  The maximum number would be reached at a different point each time, thus meaning a different selection of entries would be blank.  To fix this I updated the proofreader script to remove the reference to the DTD from the XML files in the uploaded ZIP before they are processed for display.  The DTD isn't actually needed for the display of the entry – all it does is specify the rules for editing it.  With the DTD reference removed it looks like all entries are getting properly displayed.
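
The actual change was tiny: strip the DOCTYPE declaration from each uploaded entry before it is transformed for display, so the DTD is never requested at all.  Something like this (a sketch; the proofreader itself is server-side):

// Remove the DOCTYPE declaration (including any internal subset) from an
// XML string before processing it for display.
function stripDtd(xml) {
  return xml.replace(/<!DOCTYPE[^>\[]*(\[[^\]]*\])?[^>]*>/i, '');
}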

Also this week I gave some further advice to Luca Guariento about a proposal he's working on, fixed a small display issue with the Historical Thesaurus and spoke to Craig Lamont about the proposal he's putting together.  Other than that I spent a bit of time on the Dictionary of the Scots Language, creating four different mockups of how the new 'About this entry' box could look and investigating why some of the bibliographical links in entries in the new front-end were not working.  The problem was being caused by the reworking of cref contents that the front-end does in order to ensure only certain parts of the text become a link.  In the XML the bib ID is applied to the full cref (e.g. <cref refid="bib018594"><geo>Sc.</geo> <date>1775</date> <title>Weekly Mag.</title> (9 Mar.) 329: </cref>) but we wanted the link to only appear around titles and authors rather than the full text.  The issue with the missing links was cropping up where there is no author or title for the link to be wrapped around (e.g. <cit><cref refid="bib017755"><geo>Ayr.</geo><su>4</su> <date>1928</date>: </cref><q>The bag's fu' noo' we'll sadden't.</q></cit>).  In such cases the link wasn't appearing anywhere.  I've updated this now so that if no author or title is found then the link gets wrapped around the <geo> tag instead, and if there is no <geo> tag the link gets wrapped around the whole <cref>.
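
The resulting fallback logic is easy to express: wrap the link around the title or author if either exists, otherwise around the <geo> tag, otherwise around the whole <cref>.  A sketch (DOM-style for illustration, with tag names taken from the description above; the real front-end applies this during its own transformation of the entry XML):

// Return the element the bibliographical link should be wrapped around.
function linkTargetForCref(crefElement) {
  for (const tag of ['title', 'author', 'geo']) {
    const el = crefElement.getElementsByTagName(tag)[0];
    if (el) return el;
  }
  return crefElement; // no title, author or geo: link the whole citation
}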

I also fixed a couple of advanced search issues that had been encountered with the new (and as yet not publicly available) site.  There was a 404 error that was being caused by a colon in the title.  The selected title gets added into the URL and colons are special characters in URLs, which was causing a problem.  However, I updated the scripts to allow colons to appear and the search now works.  It also turned out that the full-text searches were searching the contents of the <meta> tag in the entries, which is not something that we want.  I knew there was some other reason why I stripped the <meta> section out of the XML and this is it.  The contents of <meta> end up in the free-text search and are therefore both searchable and returned in the snippets.  To fix this I updated my script that generates the free-text search data to remove <meta> before the free-text search is generated.  This doesn’t remove it permanently, just in the context of the script executing.  I regenerated the free-text data and it no longer includes <meta>, and I then passed this on to Arts IT Support who have the access rights to update the Solr collection.  With this in place the advanced search no longer does anything with the <meta> section.

Week Beginning 10th May 2021

I continued to work on updates to the Anglo Norman Dictionary for most of this week, looking at fixing the bad citation dates in entries that were causing the display of ‘earliest date’ to be incorrect.  A number of the citation dates have a proper date in text form (e.g. s.xii/xiii) but have incorrect ‘post’ and ‘pre’ attributes (e.g. ‘00’ and ‘99’).  The system uses these ‘post’ and ‘pre’ attributes for date searching and for deciding which is the earliest date for an entry, and if one of these bad dates was encountered it was considering it to be the earliest date.  Initially I thought there were only a few entries that had ended up with an incorrect earliest date, because I was searching the database for all earliest dates that were less than 1000.  However, I then realised that the bulk of the entries with incorrect earliest dates had the earliest date field set to ‘null’ and in database queries ‘null’ is not considered less than 1000 but a separate thing entirely and so such entries were not being found.  I managed to identify several hundred entries that needed their dates fixed and wrote a script to do so.

It was slightly more complicated than a simple ‘find and replace’ as the metadata about the entry needed to be regenerated too – e.g. the dates extracted from the citations that are used in the advanced search and the earliest date display for entries.  I managed to batch correct several hundred entries using the script and also adapted it to look for other bad dates that needed fixing too.

I also created a new feature for the Dictionary Management System: an entry proofreader.  It allows an editor to attach a ZIP file containing XML entries and it then displays all of these in a similar manner to the live site, only with all entries on one long page.  The editor can then select all of the text, copy it and then paste it into Word and the major formatting elements will be retained (bold, italic, superscript etc.).  I tested the feature by zipping up 3,178 XML entries and although it took a few minutes to process, the page displayed properly and I was able to copy the text to Word (resulting in a 1,029 page Word file).  After finishing the initial version of the script I had to tweak it a bit, as I wrote the HTML and JavaScript with the expectation that there would be one dictionary item on the page and some aspects were not working when there were multiple items and needed updating.  I also ensured that links to sources in entries work.  In the actual dictionary they open a pop-up, which clearly isn’t going to work in Word so instead I made the link go to the relevant item in the bibliography page (e.g. https://anglo-norman.net/bibliography/B#bib-Best).  Links to other dictionaries, labels and other AND entries also all now work from Word.

In addition, cogrefs appear before variants and deviants, commentaries appear (as full text, not cut off), Xrefs at the bottom now have the 'see also' text above them as in the live site, editor initials now appear where they exist and numerals only appear where there is more than one sense in a POS.

Also this week I did some further work for the Dictionary of the Scots Language based on feedback after my upload of data from the DSL’s new editing system.  There was a query about the ‘slug’ used for referencing an entry in a URL.  When the new data is processed by the import script the ‘slug’ is generated from the first <url> entry in the XML.  If this <url> begins ‘dost’ or ‘snd’ it means a headword is not present in the <url> and therefore the new system ID is taken as the new ‘slug’ instead.  All <url> forms are also stored as alternative ‘slugs’ that can still be used to access the entry.  I checked the new database and there are 3258 entries that have a ‘slug’ beginning with ‘dost’ or ‘snd’, i.e. they have the new ID as their ‘slug’ because they had an old ID as their first <url> in the XML.  I checked a couple of these and they don’t seem to have the headword as a <url>, e.g. ‘beit’ (dost00052776) only has the old ID (twice) as URLs: <url>dost2543</url><url>dost2543</url>, ‘well-fired’ (snd00090568) only has the old ID (twice) as URLs: <url>sndns4098</url><url>sndns4098</url>.  I’ve asked the editors what should be done about this.
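
The rule for choosing the slug can be summed up in a few lines (a sketch of the logic as described, not the import script itself):

// If the first <url> is an old-style ID beginning 'dost' or 'snd', use the
// new system ID as the slug; otherwise use the first <url> (a headword form).
// All <url> values are kept as alternative slugs either way.
function chooseSlug(urls, newSystemId) {
  const first = (urls[0] || '').trim();
  const slug = first === '' || /^(dost|snd)/i.test(first) ? newSystemId : first;
  return { slug: slug, altSlugs: urls };
}

// chooseSlug(['dost2543', 'dost2543'], 'dost00052776').slug -> 'dost00052776'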

Also this week I wrote a script to generate a flat CSV from the Historical Thesaurus’s relational database structure, joining the lexeme and category tables together and appending entries from the new ‘date’ table as additional columns as required.  It took a little while to write the script and then a bit longer to run it, resulting in a 241MB CSV file.
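
The flattening approach was essentially a join plus a per-row lookup of dates.  Here is a rough sketch with assumed table and column names; the real Historical Thesaurus schema is more involved:

```python
# Sketch: join lexeme and category, then append that lexeme's dates as extra
# columns on each row. Table and column names are invented for illustration.
import csv
import sqlite3  # stand-in for the project's actual database

def export_flat_csv(conn, out_path):
    lexemes = conn.execute(
        "SELECT l.id, l.word, c.catid, c.heading "
        "FROM lexeme l JOIN category c ON l.catid = c.catid").fetchall()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        # Date columns are appended per row as required, so only the fixed
        # columns appear in the header here.
        writer.writerow(["lexeme_id", "word", "catid", "heading"])
        for lex_id, word, catid, heading in lexemes:
            dates = [row[0] for row in conn.execute(
                "SELECT label FROM date WHERE lexeme_id = ? ORDER BY id",
                (lex_id,))]
            writer.writerow([lex_id, word, catid, heading, *dates])
```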

I also gave some advice to Craig Lamont in Scottish Literature about a potential bid he’s putting together, and spoke to Luca about a project he’s been asked to write a DMP for.  I also looked through some journals that Gerry Carruthers is hoping to host at Glasgow and gave him an estimate of the amount of time it would take to create a website based on the PDF contents of the old journal items.

Week Beginning 8th March 2021

This was another Data Management Plan-heavy week.  I created an initial version of a DMP for Kirsteen McCue’s project at the start of the week and then participated in a Zoom call with Kirsteen and other members of the proposed team on Thursday where the plan was discussed.  I also continued to think through the technical aspects of the metaphor-related proposal involving Wendy and colleagues at Duncan Jordanstone College of Art and Design at Dundee and reviewed another DMP that Katherine Forsyth in Celtic had asked me to look at.

Other than that, I arranged for Joanna Kopaczyk’s ‘The Future of Scots’ project website to be moved to its top-level ‘ac.uk’ domain, and it can now be found here: https://scotslanguagepolicy.ac.uk/.  Marc Alexander also contacted me about a weird bug he’d encountered in the Historical Thesaurus.  One of the category pages was failing to display properly, and after some investigation I figured out that the timeline data for one of the words on the page was causing the JavaScript to break.  I pulled out the JSON embedded in the page and the data for the word was missing a closing ‘}’, which was causing the error.  It turned out that someone had entered the dates the wrong way round for the word: it was listed as ‘a1400-c1386’.  My dates system had plucked out the dates and put them in the correct order, which left ‘1400’ followed by a joining ‘-‘ with nothing after it, resulting in malformed JSON.  I swapped the dates around (both in the new dates table and in the display date) and everything started working as it should again.  It was a relief to know that the problem lay in the data rather than my code.
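
The following is not the actual Historical Thesaurus code, just an illustration of the kind of check that would flag such a record before the timeline JSON is generated:

```python
# Sketch: flag any display date whose first year is later than its second,
# e.g. 'a1400-c1386', so reversed ranges can be fixed in the data.
import re

def reversed_range(display_date):
    years = [int(y) for y in re.findall(r"\d{3,4}", display_date)]
    return len(years) == 2 and years[0] > years[1]

print(reversed_range("a1400-c1386"))  # True  -> needs the dates swapping
print(reversed_range("c1386-a1400"))  # False
```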

Also this week I spent a bit of time working on the Books and Borrowing project, generating more page image tilesets and their corresponding page records for two more of the Edinburgh ledgers, adding an ‘Events’ page to the project website and giving more members of the project team permission to edit the site.  I also had an email chat with Thomas Clancy about the Iona project and created a ‘Call for Papers’ page, including a submission form, on the project website (it’s not live yet, though).

I spent the rest of my week continuing to work on the Anglo-Norman Dictionary.  We received the excellent news this week that our AHRC application for funding to complete the remaining letters of the dictionary (and carry out more development work) was successful.  I made some further tweaks to the new blog pages, adding the first image in each blog post to the right of the blog snippet on the blog summary page.  I also made the new blog pages live, and you can now access them here: https://anglo-norman.net/blog/.

I also made some updates to the bibliography system based on requests from the editors to separate the display text for links to the DEAF website from the actual URLs (previously the URLs themselves were displayed).  I updated the database, the DMS and the new bibliography page to add a new ‘DEAF link text’ field for both main source text records and items within source text records.  I copied the contents of the DEAF field into this new field for all records, updated the DMS to include the new field when adding or editing sources, and updated the new bibliography page so that the text displayed for the DEAF link uses the new field, whereas the actual link through to the DEAF website uses the original field.
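
On the data side this amounted to a new column seeded from the old one.  A sketch with assumed table and column names (source_text, source_text_item, deaf, deaf_link_text); the real schema will differ:

```python
# Sketch: add the new 'DEAF link text' column to both tables and seed it with
# a straight copy of the existing DEAF field, so nothing changes until the
# editors start editing the display text.
def add_deaf_link_text(conn):
    for table in ("source_text", "source_text_item"):
        conn.execute(f"ALTER TABLE {table} ADD COLUMN deaf_link_text TEXT")
        conn.execute(f"UPDATE {table} SET deaf_link_text = deaf")
    conn.commit()
```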

I also continued to work on the facilities to upload batches of new or updated entry XML files to the DMS.  I created a new ‘holding’ table for uploaded entries and built a page that allows the user to drag and drop XML files into the browser, with the files then getting uploaded, processed and added into this new table.  This uses a handy JavaScript library called Dropzone (https://www.dropzonejs.com/) that I previously used for the Scots Syntax Atlas CMS.  The initial version of the upload is working well, but I needed to know exactly how uploaded files should be fully processed before I could proceed further, which required some lengthy email exchanges with the editors.
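
Once Dropzone has posted a file, the server-side handling amounts to validating the XML and parking it in the holding table.  Here is a sketch of that step, written in Python purely for illustration and with assumed table, column and attribute names:

```python
# Sketch: parse an uploaded XML file and insert it into a holding table for
# later processing. Schema and attribute names are assumptions.
import xml.etree.ElementTree as ET

def add_to_holding(conn, filename, xml_bytes):
    xml_string = xml_bytes.decode("utf-8")
    root = ET.fromstring(xml_string)   # rejects files that aren't valid XML
    entry_id = root.get("id")          # may be missing for brand-new entries
    conn.execute(
        "INSERT INTO holding_entry (filename, entry_id, xml, uploaded) "
        "VALUES (?, ?, ?, datetime('now'))",
        (filename, entry_id, xml_string))
    conn.commit()
```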

The scripts I wrote when uploading the new ‘R’ dataset needed to make changes to the data to bring it into line with the data already in the system, as the ‘R’ data didn’t include some attributes that are necessary for the system to work with the XML files, namely:

In the <main_entry> tag: the ‘lead’ attribute, which is used to display the editor’s initials in the front end (e.g. “gdw”), and the ‘id’ attribute, which, although not used to uniquely identify entries in my new system, is still used in the XML for things like cross-references and therefore must be present and unique.  In the <sense> tag: the ‘n’ attribute, which increments from 1 within each part of speech and is used to identify senses in the front end.  In the <senseInfo> tag: the ID attribute, which is used in the citation and translation searches, and the POS attribute, which is used to generate the summary information at the top of each entry page.  In the <attestation> tag: the ID attribute, which is used in the citation search.

We needed to decide how these attributes would be handled in future – whether they would be added manually to the XML as the editors work on entries or whether the upload script would need to add them at the point of upload.  We also needed to consider updates to existing entries.  If an editor downloads an entry and then works on it (e.g. adding a new sense or attestation) the exported file will already include all of the above attributes, except in any new sections that have been added.  In such cases should the new sections have the attributes added manually, or should my script check for the existence of the attributes and add only the missing ones as required?

We decided that I’d set up the system to check automatically for the existence of the attributes and add them in if they’re not already present.  It will take more time to develop, but it will make the upload process more robust and should result in fewer errors.  I’ll also add an option to specify the ‘lead’ initials for the batch of files being uploaded, although this will not overwrite the ‘lead’ attribute of any XML files in the batch that already have it specified.
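
The planned approach can be sketched as follows: check each required attribute and only fill in the ones that are missing, including the batch-level ‘lead’ initials.  The tag names follow the description above, but the attribute casing, where the part of speech is stored and the ID format are all assumptions:

```python
# Sketch of the agreed check-and-add approach for uploaded entries; not the
# production code, and all naming details are illustrative.
import itertools
import xml.etree.ElementTree as ET

def ensure_attributes(xml_string, batch_lead, next_id):
    root = ET.fromstring(xml_string)   # assumed to be the <main_entry> element
    if root.get("lead") is None:       # never overwrite an existing 'lead'
        root.set("lead", batch_lead)
    if root.get("id") is None:
        root.set("id", next(next_id))
    # Number senses from 1 within each part of speech, keeping any existing 'n'.
    counters = {}
    for sense in root.iter("sense"):
        pos = sense.get("pos", "")     # where POS lives is an assumption here
        counters[pos] = counters.get(pos, 0) + 1
        if sense.get("n") is None:
            sense.set("n", str(counters[pos]))
    # Give senseInfo and attestation elements IDs if they don't already have them.
    for tag in ("senseInfo", "attestation"):
        for el in root.iter(tag):
            if el.get("id") is None:
                el.set("id", next(next_id))
    return ET.tostring(root, encoding="unicode")

id_gen = (f"AND-{n}" for n in itertools.count(1))   # illustrative ID generator
# new_xml = ensure_attributes(entry_xml, "gdw", id_gen)
```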

I’ll hopefully get a chance to work on this next week.  Thankfully this is the last week of home-schooling for us so I should have a bit more time from next week onwards.