Week Beginning 20th June 2022

I completed an initial version of the Chambers Library map for the Books and Borrowing project this week.  It took quite a lot of time and effort to implement the subscription period range slider.  Filtering by a range when the data itself has a range of dates rather than a single date means we needed to decide which data gets returned and which doesn’t.  This is because the two ranges (the one chosen as a filter by the user and the one denoting the start and end of each borrower’s subscription) can overlap in many different ways.  For example, say the period chosen by the user is 05 1828 to 06 1829.  Which of the following borrowers should therefore be returned?

  1. Borrower’s range is 06 1828 to 02 1829: the range is fully within the selected period so should definitely be included.
  2. Borrower’s range is 01 1828 to 07 1828: the range starts before the selected period and ends within it.  Presumably should be included.
  3. Borrower’s range is 01 1828 to 09 1829: the range extends beyond the selected period in both directions.  Presumably should be included.
  4. Borrower’s range is 05 1829 to 09 1829: the range begins during the selected period and ends beyond it.  Presumably should be included.
  5. Borrower’s range is 01 1828 to 04 1828: the range is entirely before the selected period.  Should not be included.
  6. Borrower’s range is 07 1829 to 10 1829: the range is entirely after the selected period.  Should not be included.

Basically, if there is any overlap between the selected period and the borrower’s subscription period the borrower will be returned.  But this means most borrowers will be returned a lot of the time.  It’s a very different sort of filter to one that focuses purely on a single date – e.g. filtering the data to only those borrowers whose subscription period *begins* between 05 1828 and 06 1829.

Based on the above assumptions I began to write the logic that would decide which borrowers to include when the range slider is altered.  It was further complicated by having to deal with months as well as years.  Here’s the logic in full if you fancy getting a headache:

if(
    ((mapData[i].sYear > startYear || (mapData[i].sYear == startYear && mapData[i].sMonth >= startMonth)) && ((mapData[i].eYear == endYear && mapData[i].eMonth <= endMonth) || mapData[i].eYear < endYear))
    || ((mapData[i].sYear < startYear || (mapData[i].sYear == startYear && mapData[i].sMonth <= startMonth)) && ((mapData[i].eYear == endYear && mapData[i].eMonth >= endMonth) || mapData[i].eYear > endYear))
    || (((mapData[i].sYear == startYear && mapData[i].sMonth <= startMonth) || mapData[i].sYear > startYear) && ((mapData[i].eYear == endYear && mapData[i].eMonth <= endMonth) || mapData[i].eYear < endYear) && ((mapData[i].eYear == startYear && mapData[i].eMonth >= startMonth) || mapData[i].eYear > startYear))
    || (((mapData[i].sYear == startYear && mapData[i].sMonth >= startMonth) || mapData[i].sYear > startYear) && ((mapData[i].sYear == endYear && mapData[i].sMonth <= endMonth) || mapData[i].sYear < endYear) && ((mapData[i].eYear == endYear && mapData[i].eMonth >= endMonth) || mapData[i].eYear > endYear))
    || ((mapData[i].sYear < startYear || (mapData[i].sYear == startYear && mapData[i].sMonth <= startMonth)) && ((mapData[i].eYear == startYear && mapData[i].eMonth >= startMonth) || mapData[i].eYear > startYear))
)
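
For what it’s worth, all five of those cases boil down to the standard interval-overlap test: two ranges overlap if each one starts no later than the other ends.  Converting each year and month pair into a single month count makes the comparison much easier to read.  Here’s a minimal sketch of that equivalent check, using the same variable names as above:

// convert a year/month pair into a single comparable month count
function toMonths(year, month){
    return (year * 12) + month;
}

var selStart = toMonths(startYear, startMonth);
var selEnd = toMonths(endYear, endMonth);
var subStart = toMonths(mapData[i].sYear, mapData[i].sMonth);
var subEnd = toMonths(mapData[i].eYear, mapData[i].eMonth);

// the two ranges overlap if each one starts no later than the other ends
if(subStart <= selEnd && selStart <= subEnd){
    // include this borrower
}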

I also added the subscription period to the popups.  The only downside to the range slider is that the occupation marker colours change depending on how many occupations are present during a period, so you can’t always tell an occupation by its colour. I might see if I can fix the colours in place, but it might not be possible.

I also noticed that the jQuery UI sliders weren’t working very well on touchscreens so I installed the jQuery UI Touch Punch library to fix that (https://github.com/furf/jquery-ui-touch-punch).  I also made the library marker bigger and gave it a white border to more easily differentiate it from the borrower markers.

I then moved onto incorporating page images in the resource too.  Where a borrower has borrowing records, the relevant pages on which these records are found now appear as thumbnails in the borrower popup.  These are generated by the IIIF server based on dimensions passed to it, which is much nicer than having to generate and store thumbnails directly.  I also updated the popup to make it wider when required to give more space for the thumbnails.  Here’s a screenshot of the new thumbnails in action:
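
As an aside, requesting a resized image from a IIIF Image API server is just a matter of constructing the right URL – the request follows the pattern {identifier}/{region}/{size}/{rotation}/{quality}.{format} – so a thumbnail constrained to fit within 150×150 pixels looks something like the following (the server and identifier here are placeholders rather than the project’s real ones):

https://iiif.example.ac.uk/iiif/register12-page045/full/!150,150/0/default.jpg

Swapping the size segment for something like ‘600,’ (a 600-pixel-wide version) is all that’s needed to request a larger image for the zoomable view.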

Clicking on a thumbnail opens a further popup containing a zoomable / pannable image of the page.  This proved to be rather tricky to implement.  Initially I was going to open a popup in the page (outside of the map container) using a jQuery UI Dialog.  However, I realised that this wouldn’t work when the map was being viewed in full-screen mode, as nothing beyond the map container is visible in such circumstances.  I then considered opening the image in the borrower popup but this wasn’t really big enough.  I then wondered about extending the ‘Map options’ section and replacing the contents of this with the image, but this then caused issues for the contents of the ‘Map options’ section, which didn’t reinitialise properly when the contents were reinstated.  I then found a plugin for the Leaflet mapping library that provides a popup within the map interface (https://github.com/w8r/Leaflet.Modal) and decided to use this.  However, it’s all a little complex as the popup then has to include another mapping library called OpenLayers that enables the zooming and panning of the page image, all within the framework of the overall interactive map.  It is all working and I think it works pretty well, although I guess the map interface is a little cluttered, what with the ‘Map Options’ section, the map legend, the borrower popup and then the page image popup as well.  Here’s a screenshot with the page image open:
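
For reference, the zoomable / pannable image itself only needs a small amount of OpenLayers code: the page is treated as a static image in a simple pixel-based projection.  The snippet below is a rough sketch rather than the project’s actual code – it assumes the full ‘ol’ build is loaded, that the popup contains a div with the id ‘page-image-viewer’, and that imageUrl, imgWidth and imgHeight are already known (e.g. from the IIIF service):

var extent = [0, 0, imgWidth, imgHeight];
// a simple pixel-based projection so coordinates map directly onto the image
var projection = new ol.proj.Projection({
    code: 'page-image',
    units: 'pixels',
    extent: extent
});
var pageViewer = new ol.Map({
    target: 'page-image-viewer',
    layers: [
        new ol.layer.Image({
            source: new ol.source.ImageStatic({
                url: imageUrl,
                projection: projection,
                imageExtent: extent
            })
        })
    ],
    view: new ol.View({
        projection: projection,
        center: ol.extent.getCenter(extent),
        zoom: 1,
        maxZoom: 6
    })
});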

All that’s left to do now is add in the introductory text once Alex has prepared it and then make the map live.  We might need to rearrange the site’s menu to add in a link to the Chambers Map as it’s already a bit cluttered.

Also for the project I downloaded images for two further library registers for St Andrews that had previously been missed.  However, there are already records for the registers and pages in the CMS so we’re going to have to figure out which image corresponds to which page in the CMS.  One register has a different number of pages in the CMS compared to the image files so we need to work out how to align the start and end and whether there are any gaps or issues in the middle.  The other register is more complicated because the images are double pages whereas it looks like the page records in the CMS are for individual pages.  I’m not sure how best to handle this.  I could either try to batch process the images to chop them up or batch process the page records to join them together.  I’ll need to discuss this further with Gerry, who is dealing with the data for St Andrews.

Also this week I prepared for and gave a talk to a group of students from Michigan State University who were learning about digital humanities.  I talked to them for about an hour about a number of projects, such as the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/), the digital edition I’d created for New Modernist Editing (https://nme-digital-ode.glasgow.ac.uk/), the Historical Thesaurus (https://ht.ac.uk/), Books and Borrowing (https://borrowing.stir.ac.uk/) and TheGlasgowStory (https://theglasgowstory.com/).  It went pretty well and it was nice to be able to talk about some of the projects I’ve been involved with for a change.

I also made some further tweaks to the Gentle Shepherd Performances page, which is now ready to launch, and helped Geert out with a few changes to the WordPress pages of the Anglo-Norman Dictionary.  I also made a few tweaks to the WordPress pages of the DSL website and finally managed to get a hotel room booked for the DHC conference in Sheffield in September.  I also made a couple of changes to the new Gaelic Tongues section of the Seeing Speech website and had a discussion with Eleanor about the filters for Speech Star.  Fraser had been in touch about 500 Historical Thesaurus categories that had been newly matched to OED categories, so I created a little script to add these connections to the online database.

I also had a Zoom call with the Speak For Yersel team.  They had been testing out the resource at secondary schools in the North East and have come away with lots of suggested changes to the content and structure of the resource.  We discussed all of these and agreed that I would work on implementing the changes the week after next.

Next week I’m going to be on holiday, which I have to say I’m quite looking forward to.

Week Beginning 6th June 2022

I’d taken Monday off this week to have an extra-long weekend following the jubilee holidays on Thursday and Friday last week.  On Tuesday I returned to another meeting for Speak For Yersel and a list of further tweaks to the site, including many changes to three of the five activities and a new set of colours for the map marker icons, which make the markers much easier to differentiate.

I spent most of the week working on the Books and Borrowing project.  We’d been sent a new library register from the NLS and I spent a bit of time downloading the 700 or so images, processing them and uploading them into our system.  As usual, page numbers go a bit weird.  Page 632 is written as 634 and then after page 669 comes not 670 but 700!  I ran my script to bring the page numbers in the system into line with the oddities of the written numbers.  On Friday I downloaded a further library register which I’ll need to process next week.

My main focus for the project was the Chambers Library interactive map sub-site.  The map features the John Ainslie 1804 map from the NLS, and currently it uses the same modern map as I’ve used elsewhere in the front-end for consistency, although this may change.  The map defaults to having a ‘Map options’ pane open on the left, and you can open and close this using the button above it.  I also added a ‘Full screen’ button beneath the zoom buttons in the bottom right.  I also added this to the other maps in the front-end too. Borrower markers have a ‘person’ icon and the library itself has the ‘open book’ icon as found on other maps.

By default the data is categorised by borrower gender, with somewhat stereotypical (but possibly helpful) blue and pink colours differentiating the two.  There is one borrower with an ‘unknown’ gender and this is set to green.  The map legend in the top right allows you to turn on and off specific data groups.  The screenshot below shows this categorisation:

The next categorisation option is occupation, and this has some problems.  The first is there are almost 30 different occupations, meaning the legend is awfully long and so many different marker colours are needed that some of them are difficult to differentiate.  Secondly, most occupations only have a handful of people.  Thirdly, some people have multiple occupations, and if so these are treated as one long occupation, so we have both ‘Independent Means > Gentleman’ and then ‘Independent Means > Gentleman, Politics/Office Holders > MP (Britain)’.  It would be tricky to separate these out as the marker would then need to belong to two sets with two colours, plus what happens if you hide one set?  I wonder if we should just use the top-level categorisation for the groupings instead?  This would result in 12 groupings plus ‘unknown’, meaning the legend would be both shorter and narrower.  Below is a screenshot of the occupation categorisation as it currently stands:

The next categorisation is subscription type, which I don’t think needs any explanation.  I then decided to add in a further categorisation for number of borrowings, which wasn’t originally discussed but as I used the page I found myself looking for an option to see who borrowed the most, or didn’t borrow anything.  I added the following groupings, but these may change: 0, 1-10, 11-20, 21-50, 51-70, 70+ and have used a sequential colour scale (darker = more borrowings).  We might want to tweak this, though, as some of the colours are a bit too similar.  I haven’t added in the filter to select subscription period yet, but will look into this next week.

At the bottom of the map options is a facility to change the opacity of the historical map so you can see the modern street layout.  This is handy, for example, for figuring out why there is a cluster of markers in what appears to be a field: ‘Ainslie Place’ was presumably built after the historical map was produced.
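
The opacity control itself is only a couple of lines, since Leaflet’s tile and image overlay layers have a setOpacity() method.  A rough sketch, assuming the historical map layer is stored in a variable called histLayer and the control is a standard HTML range input with the id ‘opacity-slider’ (both assumptions for the sake of the example):

document.getElementById('opacity-slider').addEventListener('input', function(){
    // the slider runs from 0 (historical map fully transparent) to 100 (fully opaque)
    histLayer.setOpacity(this.value / 100);
});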

I decided not to include the marker clustering option in this map for now, as clustering would make it more difficult to analyse the categorisation: markers from multiple groupings would end up clustered together and lose their individual colours until the cluster is split.  Marker hover-overs display the borrower name and the pop-ups contain information about the borrower.  I still need to add in the borrowing period data, and also figure out how best to link out to information about the borrowings or page images.  The Chambers Library pin displays the same information as found in the ‘libraries’ page you’ve previously seen.

Also this week I responded to a couple of queries from the DSL people about Google Analytics and the icon that gets used for the site when posting on Facebook.  Facebook was picking out the University of Glasgow logo rather than the DSL one, which wasn’t ideal.  Apparently there’s a ‘meta’ tag that you need to add to the site header in order for Facebook to pick up the correct logo, as discussed here: https://stackoverflow.com/questions/7836753/how-to-customize-the-icon-displayed-on-facebook-when-posting-a-url-onto-wall
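
For the record, the tag in question is an Open Graph ‘og:image’ meta tag placed in the page header; something along these lines (with a placeholder image URL) tells Facebook which image to use when a link to the site is shared:

<meta property="og:image" content="https://www.example.com/images/dsl-logo.png" />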

I also created a new user for the Ayr place-names project and dealt with a couple of minor issues with the CMS that Simon Taylor had encountered.  I also investigated a certificate error with the ohos.ac.uk website and responded to a query about QR codes from fellow developer David Wilson.  Also, Craig Lamont in Scottish Literature got in touch about a spreadsheet listing Burns manuscripts that he’s been working on with a view to turning it into a searchable online resource, and I gave him some feedback about the structure of the spreadsheet.

Finally, I did a bit of work for the Historical Thesaurus, working on a further script to match up HT and OED categories based on suggestions by researcher Beth Beattie.  I found a script I’d produced in 2018 that ran pattern matching on headings and I adapted this to only look at subcats within 02.02 and 02.03, picking out all unmatched OED subcats from these (there are 627) and then finding all unmatched HT categories where our ‘t’ numbers match the OED path.  Previously the script used the HT oedmaincat column to link up OED and HT but this no longer matches (e.g. HT ‘smarten up’ has ‘t’ nums 02.02.16.02 which matches OED 02.02.16.02 ‘to smarten up’ whereas HT ‘oedmaincat’ is ‘02.04.05.02’).

The script lists the various pattern matches at the top of the page and the output is displayed in a table that can be copied and pasted into Excel.  Of the 627 OED subcats there are 528 that match an HT category.  However, some of them potentially match multiple HT categories.  These appear in red while one to one matches appear in green.  Some of these multiple matches are due to Levenshtein matches (e.g. ‘sadism’ and ‘sadist’) but most are due to there being multiple subcats at different levels with the exact same heading.  These can be manually tweaked in Excel and then I could run the updated spreadsheet through a script to insert the connections.  We also had an HT team meeting this week that I attended.

Week Beginning 16th May 2022

This week I finished off all of the outstanding work for the Speak For Yersel project. The other members of the team (Jennifer and Mary) are both on holiday so I worked through all of the tasks I had on my ‘to do’ list, although there will certainly be more to do once they are both back at work again.  The tasks I completed were a mixture of small tweaks and larger implementations.  I made tweaks to the ‘About’ page text and changed the intro text to the ‘more give your word’ exercise.  I then updated the age maps for this exercise, which proved to be pretty tricky and time-consuming to implement as I needed to pull apart a lot of the existing code.  Previously these maps showed ‘60+’ and ‘under 19’ data for a question, with different colour markers for each age group showing those who would say a term (e.g. ‘Scunnered’) and grey markers for each age group showing those who didn’t say the term.  We have completely changed the approach now.  The maps now default to showing ‘under 19’ data only, with different colours for each different term.  There is now an option in the map legend to switch to viewing the ‘60+’ data instead.  I added in the text ‘press to view’ to try and make it clearer that you can change the map.  Here’s a screenshot:

I also updated the ‘give your word’ follow-on questions so that they are now rated in a new final page that works the same way as the main quiz.  In the main ‘give your word’ exercise I updated the quiz intro text and I ensured that the ‘darker dots’ explanatory text has now been removed for all maps.  I tweaked a few questions to change their text or the number of answers that are selectable and I changed the ‘sounds about right’ follow-on ‘rule’ text and made all of the ‘rule’ words lower case.  I also made it so that when the user presses ‘check answers’ for this exercise a score is displayed to the right and the user is able to proceed directly to the next section without having to correct their answers.  They still can correct their answers if they want.

I then made some changes to the ‘She sounds really clever’ follow-on.  The index for this is now split into two sections, one for ‘stereotype’ data and one for ‘rating speaker’ data and you can view the speaker and speaker/listener results for both types of data.  I added in the option of having different explanatory text for each of the four perception pages (or maybe just two – one for stereotype data, one for speaker ratings) and when viewing the speaker rating data the speaker sound clips now appear beneath the map.  When viewing the speaker rating data the titles above the sliders are slightly different.  Currently when selecting the ‘speaker’ view the title is “This speaker from X sounds…” as opposed to “People from X sound…”.  When selecting the ‘speaker/listener’ view the title is “People from Y think this speaker from X sounds…” as opposed to “People from Y think people from X sound…”.  I also added a ‘back’ button to these perception follow-on pages so it’s easier to choose a different page.  Finally, I added some missing HTML <title> tags to pages (e.g. ‘Register’ and ‘Privacy’) and fixed a bug whereby the ‘explore more’ map sound clips weren’t working.

With my ‘Speak For Yersel’ tasks out of the way I could spend some time looking at other projects that I’d put on hold for a while.  A while back Eleanor Lawson contacted me about adding a new section to the Seeing Speech website where Gaelic speaker videos and data will be accessible, and I completed a first version this week.  I replicated the Speech Star layout rather than the /r/ & /l/ page layout as it seemed more suitable: the latter only really works for a limited number of records while the former works well with lots more (there are about 150 Gaelic records).  What this means is the data has a tabular layout and filter options.  As with Speech Star you can apply multiple filters and you can order the table by a column by clicking on its header (clicking a second time reverses the order).  I’ve also included the option to open multiple videos in the same window.  I haven’t included the playback speed options as the videos already include the clip at different speeds.  Here’s a screenshot of how the feature looks:

On Thursday I had a Zoom call with Laura Rattray and Ailsa Boyd to discuss a new digital edition project they are in the process of planning.  We had a really great meeting and their project has a lot of potential.  I’ve offered to give technical advice and write any technical aspects of the proposal as and when required, and their plan is to submit the proposal in the autumn.

My final major task for the week was to continue to work on the Ramsay ‘Gentle Shepherd’ data.  I overhauled the filter options that I implemented last week so they now work in a less confusing way when multiple types are selected.  I’ve also imported the updated spreadsheet, taking the opportunity to trim whitespace to cut down on strange duplicates in the filter options.  There are some typos you’ll need to fix in the spreadsheet, though (e.g. we have ‘Glagsgow’ and ‘Glagsow’), plus some dates still need to be fixed.

I then created an interactive map for the project and have incorporated the data for which there are latitude and longitude values.  As with the Edinburgh Gazetteer map of reform societies (https://edinburghgazetteer.glasgow.ac.uk/map-of-reform-societies/) the number of performances at a venue is displayed in the map marker.  Hover over a marker to see info about the venue.  Click on it to open a list of performances.  Note that when zoomed out it can be difficult to make out individual markers but we can’t really use clustering as on the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/) because this would get confusing: we’d have clustered numbers representing the number of markers in a cluster and then individual markers with a number representing the number of performances.  I guess we could remove the number of performances from the marker and just have this in the tooltip and / or popup, but it is quite useful to see all the numbers on the map.  Here’s a screenshot of how the map currently looks:
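
Displaying the performance count inside the marker is handled with a Leaflet divIcon, which simply renders a snippet of HTML at the marker’s position.  A rough sketch of the approach (the venue object, the CSS class and the buildPerformanceList() helper are illustrative rather than the actual project code):

var venueIcon = L.divIcon({
    className: 'venue-marker',                 // styled as a circle in the site CSS
    html: '<span>' + venue.performances.length + '</span>',
    iconSize: [30, 30]
});

L.marker([venue.lat, venue.lng], { icon: venueIcon })
    .bindTooltip(venue.name)                   // venue info on hover
    .bindPopup(buildPerformanceList(venue))    // list of performances on click
    .addTo(map);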

I still need to migrate all of this to the University’s T4 system, which I aim to tackle next week.

Also this week I had discussions about migrating an externally hosted project website to Glasgow for Thomas Clancy.  I received a copy of the files and database for the website and have checked over things and all is looking good.  I also submitted a request for a temporary domain and I should be able to get a version of the site up and running next week.  I also regenerated a list of possible duplicate authors in the Books and Borrowing system after the team had carried out some work to remove duplicates.  I will be able to use the spreadsheet I have now to amalgamate duplicate authors, a task which I will tackle next week.

Week Beginning 31st January 2022

I split my time over many different projects this week.  For the Books and Borrowing project I completed the work I started last week on processing the Wigtown data, writing a little script that amalgamated borrowing records that had the same page order number on any page.  These occurrences arose when multiple volumes of a book were borrowed by a person at the same time and each volume was recorded separately.  My script worked perfectly and many such records were amalgamated.

I then moved onto incorporating images of register pages from Leighton into the CMS.  This proved to be a rather complicated process for one of the four registers as around 30 pages for the register had already been manually created in the CMS and had borrowing records associated with them.  However, these pages had been created in a somewhat random order, starting at folio number 25 and mostly being in order down to 43, at which point the numbers are all over the place, presumably because the pages were created in the order that they were transcribed.    As it stands the CMS relies on the ‘page ID’ order when generating lists of pages as ‘Folio Number’ isn’t necessarily in numerical order (e.g. front / back matter with Roman numerals).  If out of sequence pages crop up a lot we may have to think about adding a new ‘page order’ column, or possibly use the ‘previous’ and ‘next’ IDs to ascertain the order pages should be displayed.  After some discussion with the team it looks like pages are usually created in page order and Leighton is an unusual case, so we can keep using the auto-incrementing page ID for listing pages in the contents page.  I therefore generated a fresh batch of pages for the Leighton register then moved the borrowing records from the existing mixed up pages to the appropriate new page, then deleted the existing pages so everything is all in order.

For the Speak For Yersel project I created a new exercise whereby users are presented with a map of Scotland divided into 12 geographical areas and there are eight map markers in a box in the sea to the east of Scotland.  Each marker is clickable, and clicking on it plays a sound file.  Each marker is also draggable and after listening to the sound file the user should then drag the marker to whichever area they think the speaker in the sound file is from.  After dragging all of the markers the user can then press a ‘check answers’ button to see which they got right, and press a ‘view correct locations’ button which animates the markers to their correct locations on the map.  It was a lot of fun making the exercise and I think it works pretty well.  It’s still just an initial version and no doubt we will be changing it, but here’s a screenshot of how it currently looks (with one answer correct and the rest incorrect):
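
Under the hood the exercise mostly relies on Leaflet’s built-in marker dragging.  A much simplified sketch of the idea, assuming each speaker object holds a sound clip URL, a starting position and the coordinates of the correct area (all of the names here are illustrative):

var marker = L.marker(speaker.startLatLng, { draggable: true }).addTo(map);

// clicking a marker plays its sound clip
marker.on('click', function(){
    new Audio(speaker.clipUrl).play();
});

// record where the user dropped the marker so it can be checked later
marker.on('dragend', function(){
    speaker.guessLatLng = marker.getLatLng();
});

// 'view correct locations' moves the marker to its true position
function showCorrectLocation(){
    marker.setLatLng(speaker.correctLatLng);
}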

For the Speech Star project I made some further changes to the speech database.  Videos no longer autoplay, as requested.  Also, the tables now feature checkboxes beside them.  You can select up to four videos by pressing on these checkboxes.  If you select more than four the earliest one you pressed is deselected, keeping a maximum of four no matter how many checkboxes you try to click on.  When at least one checkbox is pressed the tab contents will slide down and a button labelled ‘Open selected videos’ will appear.  If you press on this a wider popup will open, containing all of your chosen videos and the metadata about each.  This has required quite a lot of reworking to implement, but it seemed to be working well, until I realised that while the multiple videos load and play successfully in Firefox, in Chrome and MS Edge (which is based on Chrome) only the final video loads in properly, with only audio playing on the other videos.  I’ll need to investigate this further next week.  But here’s a screenshot of how things look in Firefox:
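
The selection logic itself is fairly simple: keep the IDs of the checked videos in an array in the order they were ticked, and when a fifth is added, untick the oldest.  A rough sketch of how this might be handled with jQuery, assuming the checkboxes share a ‘video-select’ class and carry the video ID in a data attribute (both assumptions for the sake of the example):

var selected = [];   // video IDs in the order they were checked

$('.video-select').on('change', function(){
    var id = $(this).data('video-id');
    if(this.checked){
        selected.push(id);
        if(selected.length > 4){
            // deselect the earliest choice so a maximum of four remain
            var oldest = selected.shift();
            $('.video-select[data-video-id="' + oldest + '"]').prop('checked', false);
        }
    } else {
        selected = selected.filter(function(v){ return v !== id; });
    }
    // show or hide the 'Open selected videos' button
    $('#open-selected').toggle(selected.length > 0);
});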

Also this week I spoke to Thomas Clancy about the Place-names of Iona project, including discussing how the front-end map will function (Thomas wants an option to view all data on a single map, which should work although we may need to add in clustering at higher zoom levels).  We also discussed how to handle external links and what to do about the elements database, which includes a lot of irrelevant elements from other projects.

I also had an email conversation with Ophira Gamliel in Theology about a proposal she’s putting together that will involve an interactive map, gave some advice to Diane Scott about cookie policy pages, worked with Raymond in Arts IT Support to fix an issue with a server update that was affecting the playback of videos on the Seeing Speech and Dynamic Dialects websites and updated a script that Fraser Dallachy needed access to for his work on a Scots Thesaurus.

Finally, I had some email conversations with the DSL people and made an update to the interface of the new DSL website to incorporate an ‘abbreviations’ button, which links to the appropriate DOST or SND abbreviations page.

Week Beginning 22nd November 2021

I spent a bit of time this week writing an abstract for the DH2022 conference.  I wrote about how I rescued the data for the Anglo-Norman Dictionary in order to create the new AND website.  The DH abstracts are actually 750-1000 words long so it took a bit of time to write.  I have sent it on to Marc for feedback and I’ll need to run it by the AND editors before submission as well (if it’s worth submitting).  I still don’t know whether there would be sufficient funds for me to attend the event, plus the acceptance rate for papers is very low, so I’ll just need to see how this develops.

Also this week I participated in a Zoom call for the DSL about user feedback and redeveloping the DSL website.  It was a pretty lengthy call, but it was interesting to be a part of.  Marc mentioned a service called Hotjar (https://www.hotjar.com/) that allows you to track how people use your website (e.g. tracking their mouse movements) and this seemed like an interesting way of learning about how an interface works (or doesn’t).  I also had a conversation with Rhona about the updates to the DSL DNS that need to be made to improve the security of their email systems.  Somewhat ironically, recent emails from their IT people had ended up in my spam folder and I hadn’t realised they were asking me for further changes to be made, which unfortunately has caused a delay.

I spoke to Gerry Carruthers about another new project he’s hoping to set up, and we’ll no doubt be having a meeting about this in the coming weeks.  I also gave some advice to the students who are migrating the IJOSTS articles to WordPress and made some updates to the Iona Placenames website in preparation for their conference.

For the Anglo-Norman Dictionary I fixed an issue with one of the textbase texts that had duplicate notes in one of its pages and then I worked on a new feature for the DMS that enables the editors to search the phrases contained in locutions in entries.  Editors can either match locution phrases beginning with a term (e.g. ta*), ending with a term (e.g. *de) or without a wildcard the term can appear anywhere in the phrase.  Other options found on the public site (e.g. single character wildcards and exact matches) are not included in this search.
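
The wildcard handling is the only slightly fiddly part; sketched here in JavaScript for brevity (the DMS itself is server-side, and searchTerm and locutionPhrases are illustrative names), the idea is simply to turn the three cases into an anchored regular expression and test it against each locution phrase:

// turn a search term with an optional leading or trailing * into a RegExp
function locutionPattern(term){
    var matchStart = term.slice(-1) === '*';   // e.g. ta*  => phrases beginning with 'ta'
    var matchEnd = term.charAt(0) === '*';     // e.g. *de  => phrases ending with 'de'
    // strip the wildcard and escape anything that is special in a regular expression
    var core = term.replace(/^\*|\*$/g, '').replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    if(matchStart){ return new RegExp('^' + core, 'i'); }
    if(matchEnd){ return new RegExp(core + '$', 'i'); }
    return new RegExp(core, 'i');              // no wildcard => match anywhere in the phrase
}

var pattern = locutionPattern(searchTerm);
var matches = locutionPhrases.filter(function(phrase){ return pattern.test(phrase); });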

The first time a search is performed the system needs to query all entries to retrieve only those that feature a locution.  These results are then stored in the session for use the next time a search is performed.  This means subsequent searches in a session should be quicker, and also means if the entries are updated between sessions to add or remove locutions the updates will be taken into consideration.

Search results work in a similar way to the old DMS option:  Any matching locution phrases are listed, together with their translations if present (if there are multiple senses / subsenses for a locution then all translations are listed, separated by a ‘|’ character).  Any cross references appear with an arrow and then the slug of the cross referenced entry.  There is also a link to the entry the locution is part of, which opens in a new tab on the live site.  A count of the total number of entries with locutions, the number of entries your search matched a phrase in and the total number of locutions is displayed above the results.

I spent the rest of the week working on the Speak For Yersel project.  We had a Zoom call on Monday to discuss the mockups I’d been working on last week and to discuss the user interface that Jennifer and Mary would like me to develop for the site (previous interfaces were just created for test purposes).  I spent the rest of my available time developing a further version of the grammar exercise with the new interface, that included logos, new fonts and colour schemes, sections appearing in different orders and an overall progress bar for the full exercise rather than individual ones for the questionnaire and the quiz sections.

I added in UoG and AHRC logos underneath the exercise area and added both ‘About’ and ‘Activities’ menu items, with ‘Activities’ as the active item.  The active state of the menu wasn’t mentioned in the document but I gave it a bottom border and made the text green not blue (but the difference is not hugely noticeable).  This is also used when hovering over a menu item.  I made the ‘Let’s go’ button blue not green to make it consistent with the navigation button in subsequent stages.  When a new stage loads the page now scrolls to the top, as on mobile phones the content was changing but the visible section remained as it was previously, meaning the user had to manually scroll up.  I also retained the ‘I would never say that!’ header in the top-left corner of all stages rather than having ‘activities’, so it’s clearer what activity the user is currently working on.  For the map in the quiz questions I’ve added the ‘Remember’ text above the map rather than above the answer buttons, as this seemed more logical, and on the quiz the map pane scrolls up and back down when the next question loads to make it clearer that it’s changed.  Also, the quiz score and feedback text now scroll down one after the other, and in the final ‘explore’ page the clicked-on menu item now remains highlighted to make it clearer which map is being displayed.  Here’s a screenshot of how the new interface looks:

Week Beginning 8th November 2021

I spent a bit of time this week working for the DSL.  I needed to act as the go-between for the DSL’s new IT people who are updating their email system and the University’s IT people who manage the DNS record on behalf of the DSL.  It took a few attempts before the required changes were successfully in place.  I also read through a document that had been prepared about automatically ‘fixing’ the DSL’s dates to make them machine readable, and gave some feedback on the many different procedures that will need to be performed on the various date forms to produce the desired structure.

I also looked into an issue with cross references within citations that work in the live site but are not functioning in the new site or in the DSL’s editing system.  After some investigation it seems like it’s another case of the original API ‘fixing’ the XML in some way each time it’s processed in order for these links to work.  The XML for ‘put_v’ stored in the original API is as follows:

<cit><cref><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

There is a <ref> tag but no other information in this tag.  This is the same for the XML exported from DPS and used in the new dsl site (which has an additional bibliographic reference in):

<cit><cref refid="bib013153"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

The XSLT for both the live and new sites doesn’t include anything to process a <ref> that doesn’t include any attributes so both the live and new sites shouldn’t be displaying a link through to ‘putting’.  But of course the live site does.  I had generated and stored the XML that the original API (which I did not develop) outputs whenever the live site asks for an entry.  When looking at this I found the following:

<cit><cref ref="db674"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref action="link" href="dost/putting">Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

You can see that the original API is injecting both a bibliographical cross-reference and the ‘putting’ reference.  The former we previously identified and sorted, but the latter unfortunately hasn’t been, although references that are not in citations do seem to have been fixed.  I updated the XSLT on the new dsl site to process the <ref> so the link now works; however, this is not an approach that can be relied upon, as all the XSLT is currently doing is taking the contents of the tag (Putting) and making a link out of it.  If the ‘slug’ of the entry doesn’t match the display form then the link is not going to work.  The original API includes a table containing cross references, but this doesn’t differentiate ones in citations from regular ones, and as the ‘putting_v’ entry contains 83 references it’s not going to be easy to pick out from this the ones that still need to be added.  This will need further discussion with the editors.

Continuing on a dictionary theme, I also did some further work for the Anglo-Norman Dictionary.  Last week I processed entries where a varlist date needed to be used as the citation date, but we noticed that the earliest date for entries hadn’t been updated in many cases where it should have been.  This week I figured out what went wrong.  My script only updated the entry’s date if the new date from the varlist was earlier than the existing earliest date for the entry.  This is obviously not what we want as in the majority of cases the varlist date will be later and should replace the earlier date that is erroneous.  Thankfully it was easy to pick out all of the entries that have a ‘usevardate’ and I then reran a corrected version of the script that checks and replaces an entry’s earliest date.

The editor spotted a couple of entries that still hadn’t been updated after this process and I then had to investigate them.  One of them had an error in the edited markup that was preventing the update from being applied.  For the other I realised that my code to update the XML wasn’t looking at all senses, just the first in each entry.  My script was attempting to loop through all senses as follows:

foreach($xml->main_entry->sense->attestation as $a){
    //process here
}

Which unfortunately only loops through all attestations in the first sense.  What I needed to do was:

foreach($xml->main_entry->sense as $s){
    foreach($s->attestation as $a){
        //process here
    }
}

As the sense that needed updating for ‘aspreté’ was the last one, the XML wasn’t getting changed.  This meant ‘usevardate’ wasn’t present in the XML, and therefore my update to regenerate the earliest dates didn’t catch this entry (despite all dates for citations being successfully updated in the database for the entry).  I then fixed my script and regenerated all the data again, including fixing the data so the ones with XML errors were updated.  I then ran a further spreadsheet containing entries that needed updated through the fixed script, resulting in a further 257 citations that had their dates updated.

Finally, I updated the Dictionary Management System so that ‘usevardate’ dates are taken into consideration when processing and publishing uploaded XML files.  If a ‘usevardate’ is found then this date is used for the attestation, which automatically affects the earliest date that is generated for the entry and also the dates used for attestations for search purposes.  I tried this out by downloading the XML for ‘admirable’, which features a ‘usevardate’.  I then edited the XML to remove the ‘usevardate’ before uploading and publishing this version.  As expected the dates for the attestation and the entry’s earliest date were affected by this change.  I then edited the XML to reinstate the ‘usevardate’ and uploaded and published this version, which took into consideration the ‘usevardate’ when generating the entry’s earliest date and attestation dates and returned the entry to the way it was before the test.

Also this week I set up a WordPress site that will be used for the archive of the International Journal of Scottish Theatre and Screen and migrated one of the issues to WordPress, which required me to do the following:

  1. Open the file in a PDF viewer for reference (e.g. Adobe Acrobat)
  2. Open the file in MS Word, which converts it into an editable format
  3. Create a WordPress page for the article with the article’s title as the page title and setting the page ‘parent’ as Volume 1
  4. Copy and paste the article contents from Word into WordPress
  5. Go through the article in WordPress, referencing the file in Acrobat, and manually fixing any issues that I spotted (e.g. fixing the display of headings, fixing some line breaks that were erroneously added). Footnotes proved to be particularly tricky as their layout was not handled very well by Word.  It’s possible that some footnotes are not quite right, especially with the ‘Trainspotting’ article that has more than 70(!) footnotes.
  6. Publish the WordPress page and update the ‘Volume 1’ page to add a link to it.

None of this was particularly difficult to do, but it was somewhat time-consuming.  There are a further 18 issues left to do (as far as I can tell), although some of these will take longer as they contain more articles, and some of these are more structurally complicated (e.g. including images).  Gerry Carruthers is getting a couple of students to do the rest and we have a meeting scheduled next week where I’ll talk through the process.

I also made some further tweaks to the WordPress site for the ‘Our Heritage, Our Stories’ site and dealt with renewing the domain for TheGlasgowStory.com site, which is now safe for a further nine years.  I also generated an Excel spreadsheet of the full lexical dataset from Mapping Metaphor for Wendy Anderson after she had a request for the data from some researchers in Germany.

I spent the rest of the week working for the Speak For Yersel project, continuing to generate mockups of the interactive exercises.  I completed an initial version of the overall structure for both the acceptability and word choice question types for the grammar exercise, so it will be possible to just ‘plug in’ any number of other questions that fit these templates.  What I haven’t done yet is incorporate the maps, the post-questionnaire ‘explore’ or the final quiz, as these need more content.  Here’s how things currently look:

I used another different font for the heading (Slackey), with the same one used for the ‘Question x of y’ text too.  I also used CSS gradients quite a bit in this version, as the team seemed quite keen on these.  There’s a subtle diagonal gradient in the header and footer backgrounds, and a more obvious top-to-bottom one in the answer buttons.  I used different combinations of colours too.  I created a progress bar, which works, but with only two questions in the system it’s not especially obvious what it does.  Rather than having people click an answer and then click a ‘next’ button to continue, I’ve made it so that clicking an answer automatically loads the next step: a panel with a ‘map’ (just a static image for now), plus a ‘next’ button if there is a next question.  Clicking the ‘next’ button slides up the map panel, loads the next question in and advances the progress bar.  Users will be accessing this on many different screen sizes and I’ve tested it out on my Android phone and my iPad in both portrait and landscape orientations, and all seems to work well to me.  However, the map panel will be displayed below rather than beside the questions on narrower screens.

I then began experimenting with randomly positioned markers in polygonal areas.  Initially I wanted to see whether this would be possible in ArcGIS, and a bit of Googling suggested it would be, see for example this post: http://gis.mtu.edu/?p=127 which is 10 years old, so the instructions don’t in any way match up to how things work in the current version of ArcGIS, but it at least showed it should be possible.  I loaded the desktop version of ArcGIS up via Glasgow Anywhere and after some experimentation and a fair bit of exasperation I managed to create a polygon shape and add 100 randomly placed marker points to it, which you can see here:

Something we will have to bear in mind is how such points will look when zoomed:

This is just 100 points over a pretty large geographical area.  We might end up with thousands of points, which might make this approach unusable.  Another issue is it took ArcGIS more than a minute to generate and process these 100 random points.  I don’t know how much of this is down to running the software via Glasgow Anywhere, but if we’re dealing with tens of polygons and hundreds or thousands of data points this is just not going to be feasible.

An issue of greater concern is that as far as I can tell (after more than an hour of investigation) the ‘create random points’ option is not available via ArcGIS Online, which is the tool we would need to use to generate maps to share online (if we choose to use ArcGIS).  The online version seems to be really pared back in terms of functionality compared to the desktop version and I just couldn’t see any way of incorporating the random points system.  However, I discovered a way of generating random points using Leaflet and another javascript based geospatial library called turf.js (http://turfjs.org/).  The information about how to go about it is here:  https://gis.stackexchange.com/questions/163044/mapbox-how-to-generate-a-random-coordinate-inside-a-polygon

I created a test map using the SCOSYA area for Campbeltown and the SCOSYA base map.  As a solution I’d say it’s working pretty well – it’s very fast and seems to do what we want it to.  You can view an example of the script output here:

The script generates 100 randomly placed markers each time you load the page.  At zoomed out levels the markers are too big, but I can make them smaller – this is just an initial test.  There is unfortunately going to be some clustering of markers as well, due to the nature of the random number generator.  This may give people the wrong impression.  I could maybe update the code to reject markers that are in too close proximity to another one, but I’d need to see about that.  I’d say it’s looking promising, anyway!
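
For anyone curious, the core of the approach from that Stack Exchange answer is very small: generate random points within the polygon’s bounding box and keep only those that actually fall inside the polygon, repeating until you have enough.  A rough sketch using current turf.js function names (turf.bbox, turf.randomPoint and turf.booleanPointInPolygon – older turf versions used turf.random and turf.inside instead; the areaPolygon and campbeltownArea variables are illustrative):

// areaPolygon is a GeoJSON Feature<Polygon> for one of the SCOSYA areas
function randomPointsInPolygon(areaPolygon, count){
    var bounds = turf.bbox(areaPolygon);
    var points = [];
    while(points.length < count){
        var candidate = turf.randomPoint(1, { bbox: bounds }).features[0];
        if(turf.booleanPointInPolygon(candidate, areaPolygon)){
            points.push(candidate);
        }
    }
    return points;
}

// add a small circle marker to the Leaflet map for each generated point
randomPointsInPolygon(campbeltownArea, 100).forEach(function(pt){
    var coords = pt.geometry.coordinates;          // GeoJSON order is [lng, lat]
    L.circleMarker([coords[1], coords[0]], { radius: 4 }).addTo(map);
});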

Week Beginning 25th October 2021

I came down with some sort of stomach bug on Sunday and was off work with it on Monday and Tuesday.  Thankfully I was feeling well again by Wednesday and managed to cram quite a lot into the three remaining days of the week.  I spent about a day working on the Data Management Plan for the new Curious Travellers proposal, sending out a first draft on Wednesday afternoon and dealing with responses to the draft during the rest of the week.  I also had some discussions with the Dictionaries of the Scots Language’s IT people about updating the DNS record regarding emails, responded to a query about the technology behind the SCOTS corpus, updated the images used in the mockups of the STAR website and created the ‘attendees only’ page for the Iona Placenames conference and added some content to it.  I also had a conversation with one of the Books and Borrowing researchers about trimming out the blank pages from the recent page image upload, and I’ll need to write a script to implement this next week.

My main task of the week was to develop a test version of the ‘where is the speaker from?’ exercise for the Speak For Yersel project.  This exercise involves the user listening to an audio clip and pressing a button each time they hear something that identifies the speaker as being from a particular area.  In order to create this I needed to generate my own progress bar that tracks the recording as it’s played, implement ‘play’ and ‘pause’ buttons, implement a button that when pressed grabs the current point in the audio playback and places a marker in the progress bar, and implement a means of extrapolating the exact times of the button press to specific sections of the transcription of the audio file so we can ascertain which section contains the feature the user noted.

It took quite some planning and experimentation to get the various aspects of the feature working, but I managed to complete an initial version that I’m pretty pleased with.  It will still need a lot of work but it demonstrates that we will be able to create such an exercise.  The interface design is not final, it’s just there as a starting point, using the Bootstrap framework (https://getbootstrap.com), the colours from the SCOSYA logo and a couple of fonts from Google Fonts (https://fonts.google.com).  There is a big black bar with a sort of orange vertical line on the right.  Underneath this is the ‘Play’ button and what I’ve called the ‘Log’ button (but we probably want to think of something better).  I’ve used icons from Font Awesome (https://fontawesome.com/) including a speech bubble icon in the ‘log’ button.

As discussed previously, when you press the ‘Play’ button the audio plays and the orange line starts moving across the black area.  The ‘Play’ button also turns into a ‘Pause’ button.  The greyed out ‘Log’ button becomes active when the audio is playing.  If you press the ‘Log’ button a speech bubble icon is added to the black area at the point where the orange ‘needle’ is.

For now the exact log times are outputted in the footer area.  Once the audio clip finishes the ‘Play’ button becomes a ‘Start again’ button.  Pressing on this clears the speech bubble icons and the footer and starts the audio from the beginning again.  The log is also processed.  Currently 1 second is taken off each click time to account for thinking and clicking.  I’ve extracted the data from the transcript of the audio and manually converted it into JSON data which is more easily processed by JavaScript.  Each ‘block’ consists of an ID, the transcribed content and the start and end times of the block in milliseconds.

For the time being for each click the script looks through the transcript data to find an entry where the click time is between the entry’s start and end times.  A tally of clicks for each transcript entry is then stored. This then gets outputted in the footer so you can see how things are getting worked out.  This is of course just test data – we’ll need smaller transcript areas for the real thing.  Currently nothing gets submitted to the server or stored – it’s all just processed in the browser.  I’ve tested the page out in several browsers in Windows, on my iPad and on my Android phone and the interface works perfectly well on mobile phone screens.  Below is a screenshot showing audio playback and four linguistic features ‘logged’:
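
In case it’s useful, the essentials of the logging mechanism fit into a few lines: grab the audio element’s currentTime when the ‘Log’ button is pressed, knock a second off for reaction time, and count the click against whichever transcript block that time falls into.  A stripped-down sketch (element IDs and the transcript variable are illustrative; the real version also draws the speech-bubble icons on the progress bar):

var audio = document.getElementById('clip');        // the <audio> element
var clicks = [];                                     // logged times in milliseconds

document.getElementById('log-btn').addEventListener('click', function(){
    // subtract one second to allow for thinking and clicking time
    clicks.push(Math.max(0, (audio.currentTime - 1) * 1000));
});

// transcript is an array of { id, content, start, end } blocks (times in ms)
function tallyClicks(transcript){
    var tally = {};
    clicks.forEach(function(time){
        transcript.forEach(function(block){
            if(time >= block.start && time <= block.end){
                tally[block.id] = (tally[block.id] || 0) + 1;
            }
        });
    });
    return tally;
}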

Also this week I had a conversation with the editor of the AND about updating the varlist dates.  I also updated the DTD to allow the new ‘usevardate’ attribute to be used to identify occasions where a varlist date should be used as the earliest citation date.  We also became aware that a small number of entries in the online dictionary are referencing an old DTD on the wrong server so I updated these.

Week Beginning 18th October 2021

I was back at work this week after having a lovely holiday in Northumberland last week.  I spent quite a bit of time in the early part of the week catching up with emails that had come in whilst I’d been off.  I fixed an issue with Bryony Randall’s https://imprintsarteditingmodernism.glasgow.ac.uk/ site, which was put together by an external contractor but which I have now inherited.  The site menu would not update via the WordPress admin interface, and after a bit of digging around in the source files for the theme it would appear that the theme doesn’t display the WordPress menu anywhere: the menu which is editable from the WordPress Admin interface is not the menu that’s visible on the public site.  That menu is generated in a file called ‘header.php’ and only pulls in pages / posts that have been given one of three specific categories: Commissioned Artworks, Commissioned Text or Contributed Text (which appear as ‘Blogs’).  Any post / page that is given one of these categories will automatically appear in the menu.  Any post / page that is assigned to a different category or has no assigned category doesn’t appear.  I added a new category to the ‘header’ file and the missing posts all automatically appeared in the menu.

I also updated the introductory texts in the mockups for the STAR websites and replied to a query about making a place-names website from a student at Newcastle.  I spoke to Simon Taylor about a talk he’s giving about the place-name database and gave him some information on the database and systems I’d created for the projects I’ve been involved with.  I also spoke to the Iona Place-names people about their conference and getting the website ready for this.

I also had a chat with Luca Guariento about a new project involving the team from the Curious Travellers project.  As this is based in Critical Studies Luca wondered whether I’d write the Data Management Plan for the project and I said I would.  I spent quite a bit of time during the rest of the week reading through the bid documentation, writing lists of questions to ask the PI, emailing the PI, experimenting with different technologies that the project might use and beginning to write the Plan, which I aim to complete next week.

The project is planning on running some pre-digitised images of printed books through an OCR package and I investigated this.  Google owns and uses a program called Tesseract to run OCR for Google Books and Google Docs and it’s freely available (https://opensource.google/projects/tesseract).  It’s part of Google Docs – if you upload an image of text into Google Drive then open it in Google Docs the image will be automatically OCRed.  I took a screenshot of one of the Welsh tour pages (https://viewer.library.wales/4690846#?c=&m=&s=&cv=32&manifest=https%3A%2F%2Fdamsssl.llgc.org.uk%2Fiiif%2F2.0%2F4690846%2Fmanifest.json&xywh=-691%2C151%2C4725%2C3632) and cropped the text and then opened it in Google Docs and even on this relatively low resolution image the OCR results are pretty decent.  It managed to cope with most (but not all) long ‘s’ characters and there are surprisingly few errors – ‘Englija’ and ‘Lotty’ are a couple and have been caused by issues with the original print quality.  I’d say using Tesseract is going to be suitable for the project.

I spent a bit of time working on the Speak For Yersel project.  We had a team meeting on Thursday to go through in detail how one of the interactive exercises will work.  This one will allow people to listen to a sound clip and then relisten to it in order to click whenever they hear something that identifies the speaker as coming from a particular location.  Before the meeting I’d prepared a document giving an overview of the technical specification of the feature and we had a really useful session discussing the feature and exactly how it should function.  I’m hoping to make a start on a mockup of the feature next week.

Also for the project I’d enquired with Arts IT Support as to whether the University held a license for ArcGIS Online, which can be used to publish maps online.  It turns out that there is a University-wide license for this which is managed by the Geography department and a very helpful guy called Craig MacDonell arranged for me and the other team members to be set up with accounts for it.  I spent a bit of time experimenting with the interface and managed to publish a test heatmap based on data from SCOSYA.  I can’t get it to work directly with the SCOSYA API as it stands, but after exporting and tweaking one of the sets of rating data as a CSV I pretty quickly managed to make a heatmap based on the ratings and publish it: https://glasgow-uni.maps.arcgis.com/apps/instant/interactivelegend/index.html?appid=9e61be6879ec4e3f829417c12b9bfe51 This is just a really simple test, but we’d be able to embed such a map in our website and have it pull in data dynamically from CSVs generated in real-time and hosted on our server.

Also this week I had discussions with the Dictionaries of the Scots Language people about how dates will be handled.  Citation dates are being automatically processed to add in dates as attributes that can then be used for search purposes.  Where there are prefixes such as ‘a’ and ‘c’ the dates are going to be given ranges based on values for these prefixes.  We had a meeting to discuss the best way to handle this.  Marc had suggested that having a separate prefix attribute rather than hard coding the resulting ranges would be best.  I agreed with Marc that having a ‘prefix’ attribute would be a good idea, not only because it means we can easily tweak the resulting date ranges at a later point rather than having them hard-coded, but also because it then gives us an easy way to identify ‘a’, ‘c’ and ‘?’ dates if we ever want to do this.  If we only have the date ranges as attributes then picking out all ‘c’ dates (e.g. show me all citations that have a date between 1500 and 1600 that are ‘c’) would require looking at the contents of each date tag for the ‘c’ character which is messier.

A concern was raised that not having the exact dates as attributes would require a lot more computational work for the search function, but I would envisage generating and caching the full date ranges when the data is imported into the API so this wouldn’t be an issue.  However, there is a potential disadvantage to not including the full date range as attributes in the XML, and this is that if you ever want to use the XML files in another system and search the dates through it the full ranges would not be present in the XML so would require processing before they could be used.  But whether the date range is included in the XML or not I’d say it’s important to have the ‘prefix’ as an attribute, unless you’re absolutely sure that being able to easily identify dates that have a particular prefix isn’t important.

We decided that prefixes would be stored as attributes and that the date ranges for dates with a prefix would be generated whenever the data is exported from the DSL’s editing system, meaning editors wouldn’t have to deal with noting the date ranges and all the data would be fully usable without further processing as soon as it’s exported.

Also this week I was given access to a large number of images of registers from the Advocates Library that had been digitised by the NLS.  I downloaded these, batch processed them to add in the register numbers as a prefix to the filenames, uploaded the images to our server, created register records for each register and page records for each page.  The registers, pages and associated images can all now be accessed via our CMS.
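
The renaming step can be done with a few lines of Node.js.  The following is a sketch rather than the exact process I used, and the folder structure (one folder per register, named after the register number) is an assumption for illustration rather than how the NLS files actually arrived:

// Sketch: prefix each image filename with its register number.
// Assumes one folder per register, named after the register number.
const fs = require('fs');
const path = require('path');

const baseDir = '/path/to/registers'; // hypothetical location of the downloaded images
fs.readdirSync(baseDir).forEach(registerNo => {
  const regDir = path.join(baseDir, registerNo);
  if (!fs.statSync(regDir).isDirectory()) return;
  fs.readdirSync(regDir).forEach(file => {
    // e.g. '0001.jpg' in register 12's folder becomes '12_0001.jpg'
    fs.renameSync(path.join(regDir, file), path.join(regDir, registerNo + '_' + file));
  });
});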

My final task of the week was to continue work on the Anglo-Norman Dictionary.  I completed work on the script that identifies which citations have varlists and which may need to have their citation date updated based on one of the forms in the varlist.  The script retrieves all entries that have a <varlist> somewhere in them.  It then grabs all of the forms in the <head> of the entry, goes through every attestation (main sense and subsense plus locution sense and subsense) and picks out each one that has a <varlist> in it.

For each of these it then extracts the <aform> if there is one, or, if there isn’t, the final word before the <varlist>.  It runs a Levenshtein test on this ‘aform’ to ascertain how different it is from each of the <head> forms, logging the closest match (0 = exact match of one form, 1 = one character different from one of the forms, etc.).  It then picks out each <ms_form> in the <varlist> and runs the same Levenshtein test on each of these against all forms in the <head>.

If the score for the ‘aform’ is lower than or equal to the lowest score for an <ms_form> then the output is added to the ‘varlist-aform-ok’ spreadsheet.  If the score for one of the <ms_form> words is lower than the ‘aform’ score, the output is added to the ‘varlist-vform-check’ spreadsheet.
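
For reference, here’s a simplified sketch of the scoring and routing logic described above.  The XML parsing is left out – ‘aform’, ‘msForms’ and ‘headForms’ are assumed to have already been extracted – and the function and variable names are my own for illustration rather than those in the actual script:

// Standard Levenshtein edit distance between two strings.
function levenshtein(a, b) {
  const d = [];
  for (let i = 0; i <= a.length; i++) d[i] = [i];
  for (let j = 0; j <= b.length; j++) d[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Lowest distance between a form and any of the <head> forms.
function bestScore(form, headForms) {
  return Math.min.apply(null, headForms.map(h => levenshtein(form, h)));
}

// Decide which spreadsheet an attestation's output should go to.
function routeAttestation(aform, msForms, headForms) {
  const aScore = bestScore(aform, headForms);
  const vScores = msForms.map(m => bestScore(m, headForms));
  return aScore <= Math.min.apply(null, vScores) ? 'varlist-aform-ok' : 'varlist-vform-check';
}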

My hope is that by using the scores we can quickly ascertain which are ok and which need to be looked at, by ordering the rows by score and dealing with the lowest scores first.  In the first spreadsheet there are 2187 rows that have a score of 0.  This means the ‘aform’ exactly matches one of the <head> forms.  I would imagine that these can safely be ignored.  There are a further 872 that have a score of 1, and we might want to have a quick glance through these to check they can be ignored, but I suspect most will be fine.  The higher the score, the greater the likelihood that the ‘aform’ is not the form that should be used for dating purposes and that one of the <varlist> forms should be used instead.  These would need to be checked and potentially updated.

The other spreadsheet contains rows where a <varlist> form has a lower score than the ‘aform’ – i.e. one of the <varlist> forms is closer to one of the <head> forms than the ‘aform’ is.  These are the ones that are more likely to have a date that needs to be updated.  The ‘Var forms’ column lists each var form and its corresponding score.  It is likely that the var form with the lowest score is the one whose date we would need to pick out.

In terms of what the editors could do with the spreadsheets: my plan was that we’d add an extra column – maybe called ‘update’ – to note whether a row needs to be updated, which would be left blank for rows that they think look ok as they are and contain a ‘Y’ for rows that need to be updated.  For such rows they could manually update the XML column to add in the necessary date attributes.  Then I could process the spreadsheet in order to replace the quotation XML for any attestations that need to be updated.
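
Processing the marked-up spreadsheet afterwards should be straightforward.  As a sketch, assuming it is exported with ‘update’, ‘attestationId’ and ‘xml’ columns (these column names are hypothetical), something like this would pull out the replacements to apply:

// Sketch: pick out the quotation XML replacements from rows marked 'Y'.
// Column / property names are hypothetical.
function quotationsToReplace(rows) {
  return rows
    .filter(row => row.update === 'Y')
    .map(row => ({ attestationId: row.attestationId, newXml: row.xml }));
}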

For the ‘vform-check’ spreadsheet I could update my script to extract the dates for the lowest-scoring form and attempt to automatically add in the required XML attributes for further manual checking, but I think this task will require quite a lot of manual checking from the outset, so it may be best to just manually edit the spreadsheet here too.

Week Beginning 27th September 2021

I had two Zoom calls on Monday this week.  The first was with the Burns people to discuss the launch of the website for the ‘letters and poems’ part of ‘Editing Burns’, to complement the existing ‘Prose and song’ website (https://burnsc21.glasgow.ac.uk/).  The new website will launch in January with some video content and blogs, plus I will be working on a content management system for managing the network of Burns’ letter correspondents, which I will put together some time in November, assuming the team can send me some sample data by then.  This system will eventually power the ‘Burns letter writing trail’ interactive maps that I’ll create for the new site sometime next year.

My second Zoom call was for the Books and Borrowing project to discuss adding data from a new source to the database.  The call gave us an opportunity to discuss the issues with the data that I’d highlighted last week.  It was good to catch up with the team again and to discuss the issues with the researcher who had originally prepared the spreadsheet containing the data.  We managed to address all of the issues and the researcher is going to spend a bit of time adapting the spreadsheet before sending it to me to be batch uploaded into our system.

I spent some further time this week investigating the issue of some of the citation dates in the Anglo-Norman Dictionary being wrong, as discussed last week.  The issue affects some 4309 entries where at least one citation features the relevant form only in a variant text.  This means that the citation date should not be the date of the manuscript in the citation, but the date of the manuscript containing the variant text.  Unfortunately this situation was never flagged in the XML, and there was never any means of flagging it.  The variant date should only ever be used when the form of the word in the main manuscript is not directly related to the entry in question but the form in the variant text is.  The problem is that it cannot be automatically ascertained when the form in the main manuscript is the relevant one and when the form in the variant text is, as there is so much variation in forms.

For example, in the entry https://anglo-norman.net/entry/bochet_1 there is a form ‘buchez’ in a citation and then two variant texts for this where the forms are ‘huchez’ and ‘buistez’.  None of these forms are listed in the entry’s XML as variants, so it’s not possible for a script to automatically deduce which is the correct date to use (the closest listed form is ‘buchet’).  In this case the main citation form and its corresponding date should be used.  Whereas in the entry https://anglo-norman.net/entry/babeder the main citation form is ‘gabez’ while the variant text has ‘babedez’, and so this is the form and corresponding date that needs to be used.  It would be difficult for a script to automatically deduce this.  In this case a Levenshtein test (which measures how many characters need to be changed to turn one string into another) could work, but the results would still need to be manually checked.

The editor wanted me to focus on those entries where the date issue affects the earliest date for an entry, as these are the most important: the issue results in an incorrect date being displayed for the entry in the header and in the browse feature.  I wrote a script that finds all entries that feature ‘<varlist’ somewhere in the XML (the previously exported 4309 entries).  It then goes through all attestations (in all sense, subsense and locution sense and subsense sections) to pick out the one with the earliest date, exactly as the code for publishing an entry does.  It then checks the quotation XML for the attestation with the earliest date for the presence of ‘<varlist’ and, if it finds this, it outputs information for the entry, consisting of the slug, the earliest date as recorded in the database, the earliest date of the attestation as found by the script, the ID of the attestation and the XML of the quotation.  The script has identified 1549 entries that have a varlist in the earliest citation, all of which will need to be edited.
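
As a rough illustration of what the script does (the real one works directly with the XML, but here I’m assuming the attestations have already been extracted into simple objects with a year and their quotation XML – the structure is hypothetical):

// Sketch: flag entries whose earliest attestation contains a <varlist>.
// Assumes entries like { slug, attestations: [{ id, year, quotationXml }] }.
function entriesNeedingChecked(entries) {
  const output = [];
  entries.forEach(entry => {
    // pick the attestation with the earliest date, as the publication code does
    const earliest = entry.attestations.reduce((a, b) => (b.year < a.year ? b : a));
    // flag the entry if that earliest quotation contains a varlist
    if (earliest.quotationXml.indexOf('<varlist') !== -1) {
      output.push({
        slug: entry.slug,
        earliestYear: earliest.year,
        attestationId: earliest.id,
        quotationXml: earliest.quotationXml
      });
    }
  });
  return output;
}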

However, every citation has a date associated with it and this is used in the advanced search, where users have the option to limit their search to years based on the citation date.  Only updating citations that affect the entry’s earliest date won’t fix this, as there will still be many citations with varlists that haven’t been updated and will therefore still use the wrong date in the search.  Plus any future reordering of citations would require all citations with varlists to be updated to get entries in the correct order.  Fixing the earliest citations with varlists based on the output of my script will only fix the earliest date as used in the header of the entry and in the ‘browse’ feature, but I guess that’s a start.

Also this week I sorted out some access issues for the RNSN site, submitted the request for a new top-level ‘ac.uk’ domain for the STAR project and spent some time discussing the possibilities for managing access to videos of the conference sessions for the Iona place-names project.  I also updated the page about the Scots Dictionary for Schools app on the DSL website (https://dsl.ac.uk/our-publications/scots-dictionary-for-schools-app/) after it won the award for ‘Scots project of the year’.

I also spent a bit of time this week learning about the statistical package R (https://www.r-project.org/).  I downloaded and installed the package and the R Studio GUI and spent some time going through a number of tutorials and examples in the hope that this might help with the ‘Speak for Yersel’ project.

For a few years now I’ve been meaning to investigate using a spider / radar chart for the Historical Thesaurus, but I never found the time.  I unexpectedly found myself with some free time this week due to ‘Speak for Yersel’ not needing anything from me yet, so I thought I’d do some investigation.  I found a nice-looking d3.js template for spider / radar charts here: http://bl.ocks.org/nbremer/21746a9668ffdf6d8242 and set about reworking it with some HT data.

My idea was to use the chart to visualise the distribution of words in one or more HT categories across different parts of speech in order to quickly ascertain the relative distribution and frequency of words.  I wanted to get an overall picture of the makeup of the categories initially, but to then break this down into different time periods to understand how categories changed over time.
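
The d3.js template takes its data (as far as I can tell) as an array of series – one per category – each containing an object per axis.  Here’s a sketch of how HT category data could be arranged for it, with placeholder category names and counts rather than real figures:

// Sketch of the data structure for the radar / spider chart: one array per
// category, one { axis, value } object per part of speech.  Names and counts
// are placeholders, not real HT data.
var periods = {
  'OE': [
    [ // Category A
      { axis: 'Noun', value: 7 },
      { axis: 'Adjective', value: 2 },
      { axis: 'Verb', value: 4 },
      { axis: 'Adverb', value: 3 }
    ],
    [ // Category B
      { axis: 'Noun', value: 6 },
      { axis: 'Adjective', value: 5 },
      { axis: 'Verb', value: 4 },
      { axis: 'Adverb', value: 3 }
    ]
  ]
  // ME, EModE and ModE would follow the same structure
};
// The period buttons would simply redraw the chart with periods[chosenPeriod].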

As an initial test I chose the categories 02.04.13 Love and 02.04.14 Hatred, and in this first version I looked only at the specific contents of the categories – no subcategories and no child categories.  I manually extracted counts of the words across the various parts of speech, split them up into words that were active in four broad time periods: OE (up to 1149), ME (1150-1449), EModE (1450-1799) and ModE (1800 onwards), and plotted them on the spider / radar chart, as you can see in this screenshot:

You can quickly move through the different time periods plus the overall picture using the buttons above the visualisation, and I think the visualisation does a pretty good job of giving you a quick and easy-to-understand impression of how the two categories compare and evolve over time, allowing you to see, for example, how the numbers of nouns and adverbs for love and hate are pretty similar in OE:

but by ModE the number of nouns for Love has dropped dramatically, as has the number of adverbs for Hate:

We are of course dealing with small numbers of words here, but even so it’s much easier to use the visualisation to compare different categories and parts of speech than it is to use the HT’s browse interface.  Plus if such a visualisation was set up to incorporate all words in child categories and / or subcategories it could give a very useful overview of the makeup of different sections of the HT and how they develop over time.

There are some potential pitfalls to this visualisation approach, however.  The scale currently changes based on the largest word count in the chosen period, meaning that unless you’re paying attention you might get the wrong impression of the number of words.  I could fix the scale to the largest count across all periods, but that would then make it harder to make out details in periods that have far fewer words.  Also, I suspect most categories are going to have many more nouns than other parts of speech, and a large spike of nouns can make it harder to see what’s going on with the other axes.  Another thing to note is that the order of the axes is fairly arbitrary but can have a major impact on how someone may interpret the visualisation.  If you look at the OE chart the ‘Hate’ area looks massive compared to the ‘Love’ area, but this is purely because there is only one ‘Love’ adjective compared to 5 for ‘Hate’.  If the adverb axis had come after the noun one instead, the shapes of ‘Love’ and ‘Hate’ would have been more similar.  You don’t necessarily appreciate at first glance that ‘Love’ and ‘Hate’ have very similar numbers of nouns in OE, which is concerning.  However, I think the visualisations have potential for the HT and I’ve emailed the other HT people to see what they think.


Week Beginning 20th September 2021

This was a four-day week for me as I’d taken Friday off.  I went into my office at the University on Tuesday to have my Performance and Development Review with my line-manager Marc Alexander.  It was the first time I’d been at the University since before the summer and it felt really different to the last time – much busier and more back to normal, with lots of people in the building and a real bustle to the West End.  My PDR session was very positive and it was great to actually meet a colleague in person again – the first time I’d done so since the first lockdown began.  I spent the rest of the day trying to get my office PC up to date after months of inaction.  One of the STELLA apps (the Grammar one) had stopped working on iOS devices, seemingly because it was still a 32-bit app, and I wanted to generate a new version of it.  This meant upgrading MacOS on my dual-boot PC, which I hadn’t used for years and was very out of date.  I’m still not actually sure whether the Mac I’ve got will support a version of MacOS that will allow me to engage in app development, as I need to incrementally upgrade the MacOS version, which takes quite some time, and by the end of the day there were still further updates required.  I’ll need to continue with this another time.

I spent quite a bit of the remainder of the week working on the new ‘Speak for Yersel’ project.  We had a team meeting on Monday and a follow-up meeting on Wednesday with one of the researchers involved in the Manchester Voices project (https://www.manchestervoices.org/) who very helpfully showed us some of the data collection apps they use and some of the maps that they generate.  It gave us a lot to think about, which was great.  I spent some further time looking through other online map examples, such as the New York Times dialect quiz (https://www.nytimes.com/interactive/2014/upshot/dialect-quiz-map.html) and researching how we might generate the maps we’d like to see.  It’s going to take quite a bit more research to figure out how all of this is going to work.

Also this week I spoke to the Iona place-names people about how their conference in December might be moved online, fixed a permissions issue with the Imprints of New Modernist Editing website and discussed the domain name for the STAR project with Eleanor Lawson.  I also had a chat with Luca Guariento about the restrictions we have on using technologies on the servers in the College of Arts and how this might be addressed.

I also received a spreadsheet of borrowing records covering five registers for the Books and Borrowing project and went through it to figure out how the data might be integrated with our system.  The biggest issue is figuring out which page each record is on.  In the B&B system each borrowing record must ‘belong’ to a page, which in turn ‘belongs’ to a register.  If a borrowing record has no page it can’t exist in the system.  In this new data only three registers have a ‘Page No.’ column, and not every record in these registers has a value in this column.  We’ll need to figure out what can be done about this because, as I say, having a page is mandatory in the B&B system.  We could use the ‘photo’ column, as this is present across all registers and every row.  However, I noticed that there are multiple photos per page – e.g. for SL137144, page 2 has two photos (4538 and 4539) – so photo IDs don’t have a 1:1 relationship with pages.  If we can think of a way to address the page issue then I should be able to import the data.
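
As a first step towards the import, a sketch like the following could at least flag the problem rows – the column name follows the ‘Page No.’ column mentioned above, but the structure of the row objects is otherwise hypothetical:

// Sketch: flag spreadsheet rows that currently have no page value, since every
// borrowing record must belong to a page in the B&B system.  Row structure is hypothetical.
function rowsWithoutPages(rows) {
  return rows.filter(row => !row['Page No.']);
}
// These rows would need a page assigned (or derived, perhaps from the photo
// column, bearing in mind photos don't map 1:1 to pages) before import.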

Finally, I continued to work on the Anglo-Norman Dictionary project, fixing some issues relating to yoghs in the entries and researching a potentially large issue relating to the extraction of earliest citation dates.  Apparently there are a number of cases where the date that should be used for a citation is not the date as coded in the date section of the citation’s XML, but instead a date taken from a manuscript containing a variant form within the citation.  The problem is there is no flag to state when this situation occurs; instead it applies whenever the form of the word in the main citation differs markedly from the entry’s forms but the form in the variant text is similar.  It seems unlikely that an automated script would be able to ascertain when to use the variant date, as there is just so much variation between the forms.  This will need some further investigation, which I hope to be able to do next week.