Week Beginning 24th January 2022

I had a very busy week this week, working on several different projects.  For the Books and Borrowing project I participated in the team Zoom call on Monday to discuss the upcoming development of the front-end and API for the project, which will include many different search and browse facilities, graphs and visualisations.  I followed this up with a lengthy email to the PI and Co-I where I listed some previous work I’ve done and discussed some visualisation libraries we could use.  In the coming weeks I’ll need to work with them to write a requirements document for the front-end.  I also downloaded images from Orkney library, uploaded all of them to the server and generated the necessary register and page records.  One register with 7 pages already existed in the system and I ensured that page images were associated with these and that the remaining pages of the register fitted in with the existing ones.  I also processed the Wigtown data that Gerry McKeever had been working on, splitting the data associated with one register into two distinct registers, uploading page images and generating the necessary page records.  This was a pretty complicated process, and I’ll need to complete the work on it next week, as there are several borrowing records listed as separate rows when in fact they are merely another volume of the same book borrowed at the same time.  These records will need to be amalgamated.

For the Speak For Yersel project I had a meeting with the PI and RA on Monday to discuss updates to the interface I’ve been working on, new data for the ‘click’ exercise and a new type of exercise that will precede the ‘click’ exercise and will involve users listening to sound clips then dragging and dropping them onto areas of a map to see whether they can guess where the speaker is from.  I spent some time later in the week making all of the required changes to the interface and the grammar exercise, including updating the style used for the interactive map and using different marker colours.

I also continued to work on the speech database for the Speech Star project based on feedback I received about the first version I completed last week.  I added in some new introductory text and changed the order of the filter options.  I also made the filter option section hidden by default as it takes up quite a lot of space, especially on narrow screens.  There’s now a button to show / hide the filters, with the section sliding down or up.  If a filter option is selected the section remains visible by default.  I also changed the colour of the filter option section to a grey with a subtle gradient (it gets lighter towards the right) and added a similar gradient to the header, just to see how it looks.

The biggest update was to the filter options, which I overhauled so that instead of a drop-down list where one option in each filter type can be selected, there are now checkboxes for each filter option, allowing multiple items of any type to be selected.  This was a fairly large change to implement, as the way selected options are passed to the script and the way the database is queried needed to be completely changed.  When an option is selected the page immediately reloads to display the results of the selection and this can also change the contents of the other filter option boxes – e.g. selecting ‘alveolar’ limits the options in the ‘sound’ section.  I also removed the ‘All’ option and left all checkboxes unselected by default.  This is how filters on clothes shopping sites tend to work – ‘all’ is the default and a limit is only applied when an option is ticked.
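As a rough illustration of the sort of change involved (this is a hedged sketch rather than the project’s actual code, and the table and field names are invented), the ticked checkbox values can be gathered into an SQL ‘IN’ clause per filter type, with an unticked group simply applying no limit:

```php
<?php
// Sketch only: $pdo is assumed to be an existing PDO connection and the
// 'videos' table and its columns are hypothetical stand-ins.
$allowedFilters = ['accent', 'sex', 'age_range', 'sound', 'articulation', 'position'];

$where  = [];
$params = [];
foreach ($allowedFilters as $filter) {
    // Each checkbox group arrives as an array, e.g. ?sound[]=t&sound[]=d
    if (!empty($_GET[$filter]) && is_array($_GET[$filter])) {
        $placeholders = implode(',', array_fill(0, count($_GET[$filter]), '?'));
        $where[]      = "$filter IN ($placeholders)";
        $params       = array_merge($params, array_values($_GET[$filter]));
    }
}

$sql = 'SELECT * FROM videos';
if ($where) {
    // A group with nothing ticked adds no clause, so 'all' remains the default
    $sql .= ' WHERE ' . implode(' AND ', $where);
}
$stmt = $pdo->prepare($sql);
$stmt->execute($params);
$results = $stmt->fetchAll(PDO::FETCH_ASSOC);
```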

I also changed the ‘accent’ labels as requested, changed the ‘By Prompt’ header to ‘By Word’ and updated the order of items in the ‘position’ filter.  I also fixed an issue where ‘cheap’ and ‘choose’ were appearing in a column instead of the real data.  Finally, I made the overlay that appears when a video is clicked on darker so it’s more obvious that you can’t click on the buttons.  I did investigate whether it was possible to have the popup open while other page elements were still accessible but this is not something that the Bootstrap interface framework that I’m using supports, at least not without a lot of hacking about with its source code.  I don’t think it’s worth pursuing this as the popup will cover much of the screen on tablets / phones anyway, and when I add in the option to view multiple videos the popup will be even larger.

Also this week I made some minor tweaks to the Burns mini-project I was working on last week and had a chat with the DSL people about a few items, such as the data import process that we will be going through again in the next month or so and some of the outstanding tasks that I still need to tackle with the DSL’s interface.

I also did some work for the AND this week, investigating a weird timeout error that cropped up on the new server and discussing how best to tackle a major update to the AND’s data.  The team have finished working on a major overhaul of the letter S and this is now ready to go live.  We have decided that I will ask for a test instance of the AND to be set up so I can work with the new data, testing out how the DMS runs on the new server and how it will cope with such a large update.

The editor, Geert, had also spotted an issue with the textbase search, which didn’t seem to include one of the texts (Fabliaux) he was searching for.  I investigated the issue and it looked like the script that extracted words from pages may have silently failed in some cases.  There are 12,633 page records in the textbase, each of which has a word count.  When the word count is greater than zero my script processes the contents of the page to generate the data for searching.  However, there appear to be 1889 pages in the system that have a word count of zero, including all of Fabliaux.  Further investigation revealed that my scripts expect the XML to be structured with the main content in a <body> tag.  This cuts out all of the front matter and back matter from the searches, which is what we’d agreed should happen and thankfully accounts for many of the supposedly ‘blank’ pages listed above as they’re not the actual body of the text.

However, Fabliaux doesn’t include the <body> tag in the standard way.  In fact, the XML file consists of multiple individual texts, each of which has a separate <body> tag.  As my script didn’t find a <body> in the expected place, no content was processed.  I ran a script to check the other texts and the following have a similar issue: gaunt1372 (710 pages) and polsongs (111 pages), in addition to the 37 pages of Fabliaux.  Having identified these, I updated my script that generates search words and re-ran it for these texts, fixing the issue.
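For anyone curious about the fix, a minimal sketch of the approach (using SimpleXML, with an invented filename – not the actual AND code) is to gather every <body> element in the file rather than assuming a single one:

```php
<?php
// Sketch only: gather words from every <body> element in a textbase XML file,
// so texts like Fabliaux with several <body> tags are no longer skipped.
$xml = simplexml_load_file('fabliaux.xml');

// xpath('//body') finds each <body> wherever it sits, while still excluding
// front and back matter that lies outside any <body> element.
$bodies = $xml->xpath('//body');

$words = [];
foreach ($bodies as $body) {
    $text  = strip_tags($body->asXML());
    $words = array_merge($words, preg_split('/\s+/', trim($text), -1, PREG_SPLIT_NO_EMPTY));
}
echo count($words) . " words extracted\n";
```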

Also this week I attended a Zoom-based seminar on ‘Digitally Exhibiting Textual Heritage’ that was being run by Information Studies.  This featured four speakers from archives, libraries and museums discussing how digital versions of texts can be exhibited, both in galleries and online.  Some really interesting projects were discussed, both past and present.  These included the BL’s ‘Turning the Pages’ system (http://www.bl.uk/turning-the-pages/) and some really cool transparent LCD display cases (https://crystal-display.com/transparent-displays-and-showcases/) that allow images to be projected onto clear glass while objects behind the panel remain visible.  3D representations of gallery spaces were discussed (e.g. https://www.lib.cam.ac.uk/ghostwords), as were ‘long form narrative scrolls’ such as https://www.nytimes.com/projects/2012/snow-fall/index.html#/?part=tunnel-creek, http://www.wolseymanuscripts.ac.uk/ and https://stories.durham.ac.uk/journeys-prologue/.  There is a tool that can be used to create these here: https://shorthand.com/.  It was a very interesting session!

Week Beginning 17th January 2022

I divided my time mostly between three projects this week:  Speech Star, Speak For Yersel and a Burns mini-project for Kirsteen McCue.  For Speech Star I set up the project’s website, based on our mockup number 9 (which still needs work), and completed an initial version of the speech database.  As with the Dynamic Dialects accent chart (https://www.dynamicdialects.ac.uk/accent-chart/), there are limiting options and any combination of these can be selected.  The page refreshes after each selection is made and the contents of the other drop-down lists vary depending on the option that is selected.  As requested, there are 6 limiting options (accent, sex, age range, sound, articulation and position).

I created two ‘views’ of the data that are available in different tabs on the page.  The first is ‘By Accent’ which lists all data by region.  Within each region there is a table for each speaker with columns for the word that’s spoken and its corresponding sound, articulation and position.  Users can press on a column heading to order the table by that column.  Pressing again reverses the order.  Note that this only affects the current table and not those of other speakers.  Users can also press on the button in the ‘Word’ column to open a popup containing the video, which automatically plays.  Pressing any part of the browser window outside of the popup closes the popup and stops the video, as does pressing on the ‘X’ icon in the top-right of the popup.

The ‘By Prompt’ tab presents exactly the same data, but arranged by the word that’s spoken rather than by accent.  This allows you to quickly access the videos for all speakers if you’re interested in hearing a particular sound.  Note that the limit options apply equally to both tabs and are ‘remembered’ if you switch from one tab to the other.

The main reason I created the two-tab layout is to give users the bi-directional access to video clips that the Dynamic Dialects Accent Chart offers without ending up with a table that is far too long for most screens, especially mobile screens.  One thing I haven’t included yet is the option to view multiple video clips side by side.  I remember this was discussed as a possibility some time ago but I need to discuss this further with the rest of the team to understand how they would like it to function.  Below is a screenshot of the database, but note that the interface is still just a mockup and all elements such as the logo, fonts and colours will likely change before the site launches:

For the Speak For Yersel project I also created an initial project website using our latest mockup template and I migrated both sample activities over to the new site.  At the moment the ‘Home’ and ‘About’ pages just have some sample blocks of text I’ve taken from SCOSYA.  The ‘Activities’ page provides links to the ‘grammar’ and ‘click’ exercises which mostly work in the same way as in the old mockups with a couple of differences that took some time to implement.

Firstly, the ‘grammar’ exercise now features actual interactive maps throughout the various stages.  These are the sample maps I created previously that feature large numbers of randomly positioned markers and local authority boundaries.  I also added a ‘fullscreen’ option to the bottom-right of each map (the same as SCOSYA) to give people the option of viewing a larger version of the map.  Here’s an example of how the grammar exercise now looks:

I also updated the ‘click’ exercise so that it uses the new page design.  Behind the scenes I also amalgamated the individual JavaScript and CSS files I’d been using for the different exercise mockups into single files that share functions and elements.  Here’s an example of how the ‘click’ exercise looks now:

The Burns mini-project for Kirsteen involved manuscript scores for around 100 pieces of music in PDF format and metadata about these pieces as a spreadsheet.  I needed to make a webpage featuring a tabular interface with links to the PDFs.  This involved writing a script to rename the PDFs as they had filenames containing characters that are problematic for web servers, such as colons, apostrophes and ampersands.  I then wrote another script to process the Excel spreadsheet in order to generate the required HTML table.  I uploaded the PDFs to WordPress and created a page for the table.  I also needed to add in a little bit of JavaScript to handle the reordering of the table columns.  I have given the URL for the page to Kirsteen for feedback before we make the feature live.
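The renaming step was straightforward; something along these lines (a hedged sketch with a made-up directory path, not the exact script) covers it:

```php
<?php
// Sketch only: replace characters that cause trouble in URLs (colons,
// apostrophes, ampersands and the like) with hyphens before upload.
$dir = '/path/to/burns-pdfs';
foreach (glob("$dir/*.pdf") as $path) {
    $name  = basename($path);
    $clean = preg_replace('/[^A-Za-z0-9._-]+/', '-', $name);
    $clean = preg_replace('/-+/', '-', $clean); // collapse runs of hyphens
    if ($clean !== $name) {
        rename($path, "$dir/$clean");
    }
}
```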

Also this week I gave some further advice to the students who are migrating the IJOSTS journal, fixed an issue with some data in the Old English Thesaurus for Jane Roberts and responded to an enquiry about the English Language Twitter account.

Week Beginning 10th January 2022

I continued to work on the Books and Borrowing project for a lot of this week, completing some of the tasks I began last week and working on some others.  We ran out of server space for digitised page images last week, and although I freed up some space by deleting a bunch of images that were no longer required we still have a lot of images to come.  The team estimates that a further 11,575 images will be required.  If the images we receive for these pages are comparable to the ones from the NLS, which average around 1.5Mb each, then 30Gb should give us plenty of space.  However, after checking through the images we’ve received from other digitisation units it turns out that the NLS images are a bit of an outlier in terms of file size and generally 8-10Mb is more usual.  If we use this as an estimate then we would maybe require 120Gb-130Gb of additional space.  I did some experiments with resizing and changing the image quality of one of the larger images, managing to bring an 8.4Mb image down to 2.4Mb while still retaining its legibility.  If we apply this approach to the tens of thousands of larger images we have then this would result in a considerable saving of storage.  However, Stirling’s IT people very kindly offered to give us a further 150Gb of space for the images, so this resampling process shouldn’t be needed for now at least.
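For reference, the resampling experiment was nothing more sophisticated than scaling the image down and re-saving it at a lower JPEG quality – roughly as in this GD-based sketch (the filenames and the exact width and quality values are illustrative):

```php
<?php
// Sketch only: scale a large page scan down and re-save it at a lower
// JPEG quality to cut the file size while keeping the text legible.
$src = imagecreatefromjpeg('register-page-large.jpg');

// Scale to a maximum width of 2000px; the height follows the aspect ratio
$resized = imagescale($src, 2000);

// A quality of around 75 is usually a reasonable size/legibility trade-off
imagejpeg($resized, 'register-page-resized.jpg', 75);

imagedestroy($src);
imagedestroy($resized);
```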

Another task for the project this week was to write a script to renumber the folio numbers for the 14 volumes from the Advocates Library that I noticed had irregular numbering.  Each of the 14 volumes had different issues with their handwritten numbering, so I had to tailor my script to each volume in turn, and once the process was complete the folio numbers used to identify page images in the CMS (and eventually in the front-end) entirely matched the handwritten numbers for each volume.

My next task for the project was to import the records for several volumes from the Royal High School of Edinburgh but I ran into a bit of an issue.  I had previously been intending to extract the ‘item’ column and create a book holding record and a single book item record for each distinct entry in the column.  This would then be associated with all borrowing records in RHS that also feature this exact ‘item’.  However, this is going to result in a lot of duplicate holding records due to the contents of the ‘item’ column including information about different volumes of a book and/or sometimes using different spellings.

For example, in SL137142 the book ‘Banier’s Mythology’ appears four times as follows (assuming ‘Banier’ and ‘Bannier’ are the same):

  1. Banier’s Mythology v. 1, 2
  2. Banier’s Mythology v. 1, 2
  3. Bannier’s Myth 4 vols
  4. Bannier’s Myth. Vol 3 & 4

My script would create one holding and item record for ‘Banier’s Mythology v. 1, 2’ and associate it with the first two borrowing records, but the 3rd and 4th items above would end up generating two additional holding / item records, which would then be associated with the 3rd and 4th borrowing records.

No script I can write (at least not without a huge amount of work) would be able to figure out that all four of these books are actually the same, or that there are actually 4 volumes for the one book, each requiring its own book item record, and that volumes 1 & 2 need to be associated with borrowing records 1&2 while all 4 volumes need to be associated with borrowing record 3 and volumes 3&4 need to be associated with borrowing record 4.  I did wonder whether I might be able to automatically extract volume data from the ‘item’ column but there is just too much variation.

We’re going to have to tackle the normalisation of book holding names and the generation of all required book items for volumes at some point and this either needs to be done prior to ingest via the spreadsheets or after ingest via the CMS.

My feeling is that it might be simpler to do it via the spreadsheets before I import the data.  If we were to do this then the ‘Item’ column would become the ‘original title’ and we’d need two further columns, one for the ‘standardised title’ and one listing the volumes, consisting of the number of each volume separated by commas.  With the above examples we would end up with the following (with a | representing a column division):

  1. Banier’s Mythology v. 1, 2 | Banier’s Mythology | 1,2
  2. Banier’s Mythology v. 1, 2 | Banier’s Mythology | 1,2
  3. Bannier’s Myth 4 vols | Banier’s Mythology | 1,2,3,4
  4. Bannier’s Myth. Vol 3 & 4 | Banier’s Mythology | 3,4

If each sheet of the spreadsheet is ordered alphabetically by the ‘item’ column it might not take too long to add in this information.  The additional fields could also be omitted where the ‘item’ column has no volumes or different spellings.  E.g. ‘Hederici Lexicon’ may be fine as it is.  If the ‘standardised title’ and ‘volumes’ columns are left blank in this case then when my script reaches the record it will know to use ‘Hederici Lexicon’ as both original and standardised titles and to generate one single unnumbered book item record for it.  We agreed that normalising the data prior to ingest would be the best approach and I will therefore wait until I receive updated data before I proceed further with this.
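To give a sense of how the import script would then handle the extra columns, here is a hypothetical sketch (the column names and the exact return structure are invented for illustration):

```php
<?php
// Sketch only: turn one spreadsheet row into a standardised title plus a
// list of volume numbers for generating book item records.
function processRow(array $row): array {
    $original = trim($row['item']);
    // Fall back to the original title when no standardised title is supplied
    $standardised = trim($row['standardised_title'] ?? '') ?: $original;

    // '1,2,3,4' becomes [1, 2, 3, 4]; an empty cell means one unnumbered item
    $volumes = array_filter(array_map('trim', explode(',', $row['volumes'] ?? '')));

    return [
        'original_title'     => $original,
        'standardised_title' => $standardised,
        'volumes'            => $volumes ? array_map('intval', $volumes) : [null],
    ];
}

// e.g. processRow(['item' => 'Bannier’s Myth. Vol 3 & 4',
//                  'standardised_title' => 'Banier’s Mythology',
//                  'volumes' => '3,4']);
```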

Also this week I generated a new version of a spreadsheet containing the records for one register for Gerry McKeever, who wanted borrowers, book items and book holding details to be included in addition to the main borrowing record.  I also made a pretty major update to the CMS to enable books and borrower listings for a library to be filtered by year of borrowing in addition to filtering by register.  Users can limit the data by either register or year (not both).  They need to ensure the register drop-down is empty for the year filter to work, otherwise the selected register will be used as the filter.  In the year box on either the ‘books’ or ‘borrowers’ tab they can enter either a single year (e.g. 1774) or a range (e.g. 1770-1779).  Then when ‘Go’ is pressed the data displayed is limited to the year or years entered.  This also includes the figures in the ‘borrowing records’ and ‘Total borrowed items’ columns.  Also, the borrowing records listed when a related pop-up is opened will only feature those in the selected years.
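Handling the year box boils down to distinguishing a single year from a range – something like the following sketch (the column name is invented and this is not the CMS’s actual code):

```php
<?php
// Sketch only: turn the contents of the year box into a WHERE clause,
// accepting either a single year (1774) or a range (1770-1779).
function yearClause(string $input, array &$params): string {
    $input = trim($input);
    if (preg_match('/^(\d{4})\s*-\s*(\d{4})$/', $input, $m)) {
        $params[] = (int)$m[1];
        $params[] = (int)$m[2];
        return 'YEAR(borrowed_date) BETWEEN ? AND ?';
    }
    if (preg_match('/^\d{4}$/', $input)) {
        $params[] = (int)$input;
        return 'YEAR(borrowed_date) = ?';
    }
    return '1=1'; // no valid year entered, so no limit is applied
}
```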

I also worked with Raymond in Arts IT Support and Geert, the editor of the Anglo-Norman Dictionary, to complete the process of migrating the AND website to the new server.  The website (https://anglo-norman.net/) is now hosted on the new server and is considerably faster than it was previously.  We also took the opportunity to launch the Anglo-Norman Textbase, which I had developed extensively a few months ago.  Searching and browsing can be found here: https://anglo-norman.net/textbase/ and this marks the final major item in my overhaul of the AND resource.

My last major task of the week was to start work on a database of ultrasound video files for the Speech Star project.  I received a spreadsheet of metadata and the video files from Eleanor this week and began processing everything.  I wrote a script to export the metadata into a three-table related database (speakers, prompts and individual videos of speakers saying the prompts) and began work on the front-end through which this database and the associated video files will be accessed.  I’ll be continuing with this next week.

In addition to the above I also gave some advice to the students who are migrating the IJOSTS journal over to WordPress, had a chat with the DSL people about when we’ll make the switch to the new API and data, set up a WordPress site for Joanna Kopaczyk for the International Conference on Middle English, upgraded all of the WordPress sites I manage to the latest version of WordPress, made a few tweaks to the 17th Century Symposium website for Roslyn Potter, spoke to Kate Simpson in Information Studies about speaking to her Digital Humanities students about what I do and arranged for server space to be set up for the Speak For Yersel and Speech Star project websites.  I also helped launch the new Burns website: https://burnsc21-letters-poems.glasgow.ac.uk/ and updated the existing Burns website to link into it via new top-level tabs.  So a pretty busy week!

Week Beginning 29th November 2021

I participated in the UCU strike action from Wednesday to Friday this week, so it was a two-day working week for me.  During this time I gave some help to the students who are migrating the International Journal of Scottish Theatre and Screen and talked to Gerry Carruthers about another project he’s hoping to put together.  I also passed on information about the DNS update to the DSL’s IT people, added a link to the DSL’s new YouTube site to the footer of the DSL site and dealt with a query regarding accessing the DSL’s Google Analytics data.  I also spoke with Luca about arranging a meeting with him and his line manager to discuss digital humanities across the college and updated the listings for several Android apps that I created a few years ago that had been taken down due to their information being out of date.  As central IT services now manages the University Android account I hadn’t received notifications that this was going to take place.  Hopefully the updates have done the trick now.

Other than this I made some further updates to the Anglo-Norman Dictionary’s locution search that I created last week.  This included changing the ordering to list results by the word that was searched for rather than by headword, changing the way the search works so that a wildcard search such as ‘te*’ now matches the start of any word in the locution phrase rather than just the first word, and fixing a number of bugs that had been spotted.
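The any-word wildcard matching is conceptually simple: the pattern needs to match either at the start of the locution phrase or immediately after a space.  A hedged sketch (the table and column names are invented):

```php
<?php
// Sketch only: match a wildcard such as 'te*' against the start of any word
// in a locution phrase rather than just the first word.
$term = 'te*';
$like = str_replace('*', '%', $term); // 'te*' becomes 'te%'

$stmt = $pdo->prepare(
    'SELECT * FROM locutions
     WHERE phrase LIKE :startOfPhrase
        OR phrase LIKE :startOfWord');
$stmt->execute([
    ':startOfPhrase' => $like,
    ':startOfWord'   => '% ' . $like,
]);
$results = $stmt->fetchAll(PDO::FETCH_ASSOC);
```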

I spent the rest of my available time starting to work on an interactive version of the radar diagram for the Historical Thesaurus.  I’d made a static version of this a couple of months ago which looks at the words in an HT category by part of speech and visualises how the numbers of words in each POS change over time.  What I needed to do was find a way to allow users to select their own categories to visualise.  We had decided to use the broader Thematic Categories for the feature rather than regular HT categories, so my first task was to create a Thematic Category browser from ‘AA The World’ to ‘BK Leisure’.  It took a bit of time to rework the existing HT category browser to work with thematic categories, and also to then enable the selection of multiple categories by pressing on the category name.  Selected categories appear to the right of the browser, and I added in an option to remove a selected category if required.  With this in place I began work on the code to actually grab and process the data for the selected categories.  This finds all lexemes and their associated dates for each HT category in each of the selected thematic categories.  For now the data is just returned and I’m still in the middle of processing the dates to work out which period each word needs to appear in.  I’ll hopefully find some time to continue with this next week.  Here’s a screenshot of the browser:

Week Beginning 27th September 2021

I had two Zoom calls on Monday this week.  The first was with the Burns people to discuss the launch of the website for the ‘letters and poems’ part of ‘Editing Burns’, to complement the existing ‘Prose and song’ website (https://burnsc21.glasgow.ac.uk/).  The new website will launch in January with some video content and blogs, plus I will be working on a content management system for managing the network of Burns’ letter correspondents, which I will put together some time in November, assuming the team can send me some sample data by then.  This system will eventually power the ‘Burns letter writing trail’ interactive maps that I’ll create for the new site sometime next year.

My second Zoom call was for the Books and Borrowing project to discuss adding data from a new source to the database.  The call gave us an opportunity to discuss the issues with the data that I’d highlighted last week.  It was good to catch up with the team again and to discuss the issues with the researcher who had originally prepared the spreadsheet containing the data.  We managed to address all of the issues and the researcher is going to spend a bit of time adapting the spreadsheet before sending it to me to be batch uploaded into our system.

I spent some further time this week investigating the issue of some of the citation dates in the Anglo-Norman Dictionary being wrong, as discussed last week.  The issue affects some 4309 entries where at least one citation features the form only in a variant text.  This means that the citation date should not be the date of the manuscript in the citation, but the date when the variant of the manuscript was published.  Unfortunately this situation was never flagged in the XML, and there was never any means of flagging it.  The variant date should only ever be used when the form of the word in the main manuscript is not directly related to the entry in question but the form in the variant text is.  The problem is that it cannot be automatically ascertained when the form in the main manuscript is the relevant one and when the form in the variant text is, as there is so much variation in forms.

For example, in the entry https://anglo-norman.net/entry/bochet_1 there is a form ‘buchez’ in a citation and then two variant texts for this where the form is ‘huchez’ and ‘buistez’.  None of these forms are listed in the entry’s XML as variants so it’s not possible for a script to automatically deduce which is the correct date to use (the closest is ‘buchet’).  In this case the main citation form and its corresponding date should be used.  Whereas in the entry https://anglo-norman.net/entry/babeder the main citation form is ‘gabez’ while the variant text has ‘babedez’ and so this is the form and corresponding date that needs to be used.  It would be difficult for a script to automatically deduce this.  In this case a Levenshtein test (which tests how many letters need to be changed to turn one string into another) could work, but the results would still need to be manually checked.
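PHP has a built-in levenshtein() function, so the check itself would be trivial – it’s the manual verification of the results that takes the time.  A quick illustration using the forms above:

```php
<?php
// Rank the citation forms by edit distance from the entry form; the smallest
// distance is the likeliest match, but it would still need a human check.
$entryForm     = 'buchet';
$citationForms = ['buchez', 'huchez', 'buistez'];

$distances = [];
foreach ($citationForms as $form) {
    $distances[$form] = levenshtein($entryForm, $form);
}
asort($distances);
print_r($distances); // 'buchez' (distance 1) comes out as the closest form
```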

The editor wanted me to focus on those entries where the date issue affects the earliest date for an entry, as these are the most important: the issue results in an incorrect date being displayed for the entry in the header and the browse feature.  I wrote a script that finds all entries that feature ‘<varlist’ somewhere in the XML (the previously exported 4309 entries).  It then goes through all attestations (in all sense, subsense and locution sense and subsense sections) to pick out the one with the earliest date, exactly as the code for publishing an entry does.  It then checks the quotation XML for the attestation with the earliest date for the presence of ‘<varlist’ and if it finds this it outputs information for the entry, consisting of the slug, the earliest date as recorded in the database, the earliest date of the attestation as found by the script, the ID of the attestation and then the XML of the quotation.  The script has identified 1549 entries that have a varlist in the earliest citation, all of which will need to be edited.
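The core of the script is just a string check on the quotation XML of whichever attestation comes out as earliest – roughly as below (a simplified sketch; earliestAttestation() is a hypothetical stand-in for the real logic that walks senses, subsenses and locutions):

```php
<?php
// Sketch only: list entries whose earliest attestation contains a <varlist>,
// outputting the fields mentioned above as tab-separated values.
foreach ($entries as $entry) {
    $earliest = earliestAttestation($entry); // hypothetical helper
    if (strpos($earliest['quotation_xml'], '<varlist') !== false) {
        echo implode("\t", [
            $entry['slug'],
            $entry['earliest_date'],
            $earliest['date'],
            $earliest['id'],
            $earliest['quotation_xml'],
        ]) . "\n";
    }
}
```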

However, every citation has a date associated with it and this is used in the advanced search where users have the option to limit their search to years based on the citation date.  Only updating citations that affect the entry’s earliest date won’t fix this, as there will still be many citations with varlists that haven’t been updated and will still therefore use the wrong date in the search.  Plus any future reordering of citations would require all citations with varlists to be updated to get entries in the correct order.  Fixing the earliest citations with varlists in entries based on the output of my script will fix the earliest date as used in the header of the entry and the ‘browse’ feature only, but I guess that’s a start.

Also this week I sorted out some access issues for the RNSN site, submitted the request for a new top-level ‘ac.uk’ domain for the STAR project and spent some time discussing the possibilities for managing access to videos of the conference sessions for the Iona place-names project.  I also updated the page about the Scots Dictionary for Schools app on the DSL website (https://dsl.ac.uk/our-publications/scots-dictionary-for-schools-app/) after it won the award for ‘Scots project of the year’.

I also spent a bit of time this week learning about the statistical package R (https://www.r-project.org/).  I downloaded and installed the package and the R Studio GUI and spent some time going through a number of tutorials and examples in the hope that this might help with the ‘Speak for Yersel’ project.

For a few years now I’ve been meaning to investigate using a spider / radar chart for the Historical Thesaurus, but I never found the time.  I unexpectedly found myself with some free time this week due to ‘Speak for Yersel’ not needing anything from me yet so I thought I’d do some investigation.  I found a nice looking d3.js template for spider / radar charts here: http://bl.ocks.org/nbremer/21746a9668ffdf6d8242  and set about reworking it with some HT data.

My idea was to use the chart to visualise the distribution of words in one or more HT categories across different parts of speech in order to quickly ascertain the relative distribution and frequency of words.  I wanted to get an overall picture of the makeup of the categories initially, but to then break this down into different time periods to understand how categories changed over time.

As an initial test I chose the categories 02.04.13 Love and 02.04.14 Hatred, and in this initial version I looked only at the specific contents of the categories – no subcategories and no child categories.  I manually extracted counts of the words across the various parts of speech and then manually split them up into words that were active in four broad time periods: OE (up to 1149), ME (1150-1449), EModE (1450-1799) and ModE (1800 onwards) and then plotted them on the spider / radar chart, as you can see in this screenshot:

You can quickly move through the different time periods plus the overall picture using the buttons above the visualisation, and I think the visualisation does a pretty good job of giving you a quick and easy to understand impression of how the two categories compare and evolve over time, allowing you to see, for example, how the number of nouns and adverbs for love and hate are pretty similar in OE:

but by ModE the number of nouns for Love have dropped dramatically, as have the number of adverbs for Hate:

We are of course dealing with small numbers of words here, but even so it’s much easier to use the visualisation to compare different categories and parts of speech than it is to use the HT’s browse interface.  Plus if such a visualisation was set up to incorporate all words in child categories and / or subcategories it could give a very useful overview of the makeup of different sections of the HT and how they develop over time.
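For what it’s worth, the time-period split I used is easy to express in code.  Here’s a simplified sketch (the real HT date data is richer than a simple first/last year pair):

```php
<?php
// Sketch only: decide which of the four broad periods a word is counted in,
// based on its first and last recorded years of use.
function periodsFor(int $firstYear, ?int $lastYear): array {
    $lastYear = $lastYear ?? 9999; // a null last year means the word is still current
    $periods  = [
        'OE'    => [0,    1149],
        'ME'    => [1150, 1449],
        'EModE' => [1450, 1799],
        'ModE'  => [1800, 9999],
    ];
    $active = [];
    foreach ($periods as $label => [$start, $end]) {
        // The word counts towards a period if its active dates overlap it
        if ($firstYear <= $end && $lastYear >= $start) {
            $active[] = $label;
        }
    }
    return $active;
}

// e.g. periodsFor(1400, 1650) returns ['ME', 'EModE']
```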

There are some potential pitfalls to this visualisation approach, however.  The scale used currently changes based on the largest word count in the chosen period, meaning unless you’re paying attention you might get the wrong impression of the number of words.  I could change it so that the scale is always fixed at the largest value, but that would then make it harder to make out details in periods that have far fewer words.  Also, I suspect most categories are going to have many more nouns than other parts of speech, and a large spike of nouns can make it harder to see what’s going on with the other axes.  Another thing to note is that the order of the axes is fairly arbitrary but can have a major impact on how someone may interpret the visualisation.  If you look at the OE chart the ‘Hate’ area looks massive compared to the ‘Love’ area, but this is purely because there is only one ‘Love’ adjective compared to 5 for ‘Hate’.  If the adverb axis had come after the noun one instead, the shapes of ‘Love’ and ‘Hate’ would have been more similar.  You don’t necessarily appreciate at first glance that ‘Love’ and ‘Hate’ have very similar numbers of nouns in OE, which is concerning.  However, I think the visualisations have potential for the HT and I’ve emailed the other HT people to see what they think.


Week Beginning 30th August 2021

This week I completed work on the proximity search of the Anglo-Norman textbase.  Thankfully the performance issues I’d feared might crop up haven’t occurred at all.  The proximity search allows you to search for term 1 up to 10 words to the left or right of term 2 using ‘after’ or ‘before’.  If you select ‘after or before’ then (as you might expect) the search looks 10 words in each direction.  This ties in nicely with the KWIC display, which displays 10 words either side of your term.  As mentioned last week, unless you search for exact terms (surrounded by double quotes) you’ll reach an intermediary page that lists all possible matching forms for terms 1 and 2.  Select one of each and you can press the ‘Continue’ button to perform the actual search.  What this does is find all occurrences of term 2 (term 2 is the fixed anchor point; it’s term 1 that can be variable in position), then for each one it checks the necessary words before or after (or before and after) the term for the presence of term 1.  When generating the search words I generated and stored the position at which each word appears on the page, which made it relatively easy to pinpoint nearby words.  What is trickier is dealing with words near the beginning or the end of a page, as in such cases the next or previous page must also be looked at.  I hadn’t previously generated a total count of the number of words on a page, which was needed to ascertain whether a word was close to the end of the page, so I ran a script that generated and stored the word count for each page.  The search now seems to be working as it should for words near the beginning and end of a page.
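At its heart the proximity check is a positional lookup against the stored word positions.  The following is a much simplified sketch (hypothetical table and column names, and it leaves out the page-boundary handling described above):

```php
<?php
// Sketch only: term 2 is the anchor; look up to $distance word positions
// before and/or after it on the same page for an occurrence of term 1.
function matchesNearby(PDO $pdo, int $pageId, int $anchorPos, string $term1,
                       int $distance, string $direction): bool {
    if ($direction === 'before') {
        $from = $anchorPos - $distance; $to = $anchorPos - 1;
    } elseif ($direction === 'after') {
        $from = $anchorPos + 1; $to = $anchorPos + $distance;
    } else { // 'before or after'
        $from = $anchorPos - $distance; $to = $anchorPos + $distance;
    }
    $stmt = $pdo->prepare(
        'SELECT COUNT(*) FROM textbase_words
         WHERE page_id = ? AND word = ?
           AND position BETWEEN ? AND ? AND position != ?');
    $stmt->execute([$pageId, $term1, $from, $to, $anchorPos]);
    return $stmt->fetchColumn() > 0;
}
```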

The results page is displayed in the same way as the regular search, complete with KWIC and sorting options.  Both terms 1 and 2 are bold, and if you sort the results the relevant numbered word left or right of term 2 is highlighted, as with the regular search.  When you click through to the actual text all occurrences of both term 1 and term 2 are highlighted (not just those in close proximity), but the page centres on the part of the text that meets the criteria, so hopefully this isn’t a problem – it is quite useful to see other occurrences of the terms after all.  There are still some tweaks I need to make to the search based on feedback I received during the week, and I’ll look at these next week, but on the whole the search facility (and the textbase facility in general) is just about ready to launch, which is great as it’s the last big publicly facing feature of the AND that I needed to develop.

Also this week I spent some time working on the Books and Borrowing project.  I created a new user account for someone who will be working for the project and I also received the digitised images for another library register, this time from the NLS.  I downloaded these and then uploaded them to the server, associating the images with the page records that were already in the system.  The process was a little more complicated and time consuming than I’d anticipated as the register has several blank pages in it that are not in our records but have been digitised.  Therefore the number of page images didn’t match up with the number of pages, plus page images were getting associated with the wrong pages.  I had to manually look through the page images and delete the blanks, but I was still off by one image.  I then had to manually check through the contents of the images to compare them with the transcribed text to see where the missing image should have gone.  Thankfully I managed to track it down and reinstate it (it had one very faint record on it, which I hadn’t noticed when viewing and deleting blank thumbnails).  With that in place all images and page records aligned and I could make the associations in the database.  I also sent Gerry McKeever the zipped up images (several gigabytes) for a couple of the St Andrews registers as he prefers to have the complete set when working on the transcriptions.

I had a meeting with Gerry Carruthers and Pauline McKay this week to discuss further developments of the ‘phase 2’ Burns website, which they are hoping to launch in the new year, and also to discuss the hosting of the Scottish theatre studies journal that Gerry is sorting out.

I spent the rest of the week working on mockups for the two websites for the STAR speech and language therapy project.  Firstly there’s the academic site, which is going to sit alongside Seeing Speech and Dynamic Dialects, and as such it should have the same interface as these sites.  Therefore I’ve made a site that is pretty much identical in terms of the overall theme.  I added in a new ‘site tab’ for the site that sits at the top of the page and have added in the temporary logo as a site logo and favicon (the latter may need a dark background to make it stand out).  I created menu items for all of the items in Eleanor Lawson’s original mockup image.  These all work, leading to empty pages for now, and I added the star logo to the ‘Star in-clinic’ menu item as in the mockup too.  In the footer I made a couple of tweaks to the layout – the logos are all centre aligned and have a white border.  I added in the logo for Strathclyde and have only included the ESRC logo, but can add others in if required.  The actual content of the homepage is identical to Seeing Speech for now – I haven’t changed any images or text.

For the clinic website I’ve taken Eleanor’s mockup as a starting point again and have so far made two variations.  I will probably work on at least one more different version (with multiple variations) next week.  I haven’t added in the ‘site tabs’ to either version as I didn’t want to clutter things up, and I’m imagining that there will be a link somewhere to the STAR academic site for those that want it, and from there people would be able to find Seeing Speech and Dynamic Dialects.  The first version of the mockup has a top-level menu bar (we will need such a menu listing the pages the site features otherwise people may get confused) and the main body of the page is blue, as in the mockup.  I used the same logo, and the font for the header is this Google font: https://fonts.google.com/?query=rampart+one&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  Other headers on the page use this font: https://fonts.google.com/specimen/Annie+Use+Your+Telescope?query=annie&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  I added in a thick dashed border under the header.  The intro text is just some text I’ve taken from one of the Seeing Speech pages, and the images are still currently just the ones in the mockup.  Hovering over an image causes the same dashed border to appear.  The footer is a kind of pink colour, which is supposed to suggest those blue and pink rubbers you used to get in schools.

The second version uses the ‘rampart one’ font just for ‘STAR’ in the header, with the other font used for the rest of the text.  The menu bar is moved to underneath the header and the dashed line is gone.  The main body of the page is white rather than continuing the blue of the header, and ‘rampart one’ is used for the in-page headers.  The images now have rounded edges, as do the text blocks in the images.  Hovering over an image brings up a red border, the same shade as used in the active menu item.  The pink footer has been replaced with the blue from the navbar.  Both versions are ‘responsive’ and work on all screen sizes.

I’ll be continuing to work on the mockups next week.

Week Beginning 22nd February 2021

I had a couple of Zoom meetings this week.  The first, on Monday, was with the Historical Thesaurus team and members of the Oxford English Dictionary’s team to discuss how our two datasets will be aligned and updated in future.  It was an interesting meeting, but there’s still a lot of uncertainty regarding how the datasets can be tracked and connected as future updates are made, at least some of which will probably only become apparent when we get new data to integrate.

My second Zoom meeting was on Tuesday with the Place-Names of Iona project to discuss how we will be working with the QGIS package that team members will be using to access some of the archaeological data and Lidar maps, and also to discuss the issue of 10 digit grid references and the potential change from the old OSGB-36 means of generating latitude and longitude from grid references to the new WGS84 method.  It was a productive meeting and we decided that we would switch over to WGS84 and I would update the CMS to incorporate the new library for generating latitude and longitude from grid references.

I spent some time later in the week implementing this change, meaning that when a member of the project team adds or edits a place-name and supplies a grid reference the latitude and longitude generated use the new system.  As I mentioned a couple of weeks ago, the new library (see  http://www.movable-type.co.uk/scripts/latlong-os-gridref.html) allows 6, 8 or 10 digit grid references to be used and is JavaScript based, meaning as soon as the user enters the grid reference the latitude and longitude are generated.  I updated my scripts so that these values immediately appear in the relevant boxes in the form, and also integrated the Google Maps service that generates altitude data from the latitude and longitude, populating the altitude box in the form and also displaying a Google Map showing the exact location that the entered grid reference has produced if further tweaks are required.  I’m pretty happy with how the new system is working out.

Also this week I continued to work on the Books and Borrowing project, generating image tilesets for the scans of several volumes of ledgers from Edinburgh University Library and writing scripts to generate pages in the Content Management System, creating ‘next’ and ‘previous’ links as required and associating the relevant images.  I also had an email correspondence about some of the querying methods we will develop for the data, such as collocation information.

I also gave some feedback on a data management plan for a project I’m involved with, had a chat with Wendy Anderson about a possible future project she’s trying to set up and spent some time making updates to the underlying data of the Interactive Map of Burns Suppers that launched last month.  I didn’t have the time to do a huge amount of work on the Anglo-Norman Dictionary this week, but I still managed to migrate some of the project’s old blog posts to our new site over the course of the week.

Finally, I made some updates to the bibliography system for the Dictionary of the Scots Language, updating the new system so it works in a similar manner to the live site.  I added ‘Author’ and ‘Title’ to the drop-down items when searching for both to help differentiate them and a search for an item when the user ignores the drop-down options and manually submits the search now works as it does in the live site.  I also fixed the issue with selecting ‘Montgomerie, Norah & William’ resulting in a 404 error.  This was caused by the ampersand.  There were some issues with other non-alphanumeric characters that I’ve fixed too, including slashes and apostrophes.
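The underlying problem is simply that these characters have special meanings in URLs, so the values need encoding on the way out and decoding on the way back in – along these lines (a generic illustration rather than the DSL site’s actual routing):

```php
<?php
// Sketch only: encode an author name before placing it in a URL, and decode
// it again when handling the request.
$author = 'Montgomerie, Norah & William';
$url    = 'https://example.org/bibliography/' . rawurlencode($author);
// => https://example.org/bibliography/Montgomerie%2C%20Norah%20%26%20William

$decoded = rawurldecode('Montgomerie%2C%20Norah%20%26%20William');
// => Montgomerie, Norah & William
```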

Week Beginning 1st February 2021

I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.

The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects.  For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet).  I also added commentaries to more than 260 entries, which took some time to test.  I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML.  Without doing this it was possible to add the tags in, but this would give errors in the editing software.  I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required.  I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system.  It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.

For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data.  These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed.  I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.

For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS.  The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR.  The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.

As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2,4,6,8 or 10 digits.  Information about the library, including text boxes where you can enter an NGR and see the results can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary looking formulae.

The library is written in JavaScript, which runs in the client’s browser, whereas the previous library was written in PHP, which runs on the server.  This means I needed to change the way the CMS works.  Previously you’d enter an NGR and the PHP library would generate the latitude and longitude when the form was submitted to the server; now the latitude and longitude need to be generated in the browser as soon as the NGR is entered into the textbox, and two further textboxes for latitude and longitude will appear in the form and be automatically populated with the results.


This does mean the person filling out the form can see the generated latitude and longitude and also tweak them if required before submitting the form, which is potentially useful.  I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further.  I also still need to work on the format of the latitude and longitude, as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848), with ‘W’ and ‘S’ figures being negatives.
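The compass-point conversion itself should be simple enough; a hedged sketch of the sort of normalisation needed (whether it ends up in the browser code or server-side before storage) would be:

```php
<?php
// Sketch only: turn a value such as '6.420848° W' into a signed decimal,
// with 'S' and 'W' becoming negative.
function toSignedDecimal(string $value): float {
    if (preg_match('/([\d.]+)\s*°?\s*([NSEW])/u', $value, $m)) {
        $decimal = (float)$m[1];
        return in_array($m[2], ['S', 'W']) ? -$decimal : $decimal;
    }
    return (float)$value; // already a plain decimal
}

// toSignedDecimal('6.420848° W') returns -6.420848
```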

However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team.  The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014.  Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section.  Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems.  The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.

The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system.  I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system.  This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22W/@56.3258889,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22W/@56.325744,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848

So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’.  As I say, this will need further discussion before I implement any changes.

Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.

Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.


Week Beginning 18th January 2021

I worked on many different projects this week, with most of my time being split between the Dictionary of the Scots Language, the Anglo-Norman Dictionary, the Books and Borrowing project and the Scots Language Policy project.  For the DSL I began investigating adding the bibliographical data to the new API and developing bibliographical search facilities.  Ann Ferguson had sent me spreadsheets containing the current bibliographical data for DOST and SND and I migrated this data into a database and began to think about how the data needs to be processed in order to be used on the website.  At the moment links to bibliographies from SND entries are not appearing in the new version of the API, while DOST bibliographical links do appear but don’t lead anywhere.  Fixing the latter should be fairly straightforward but the former looks to be a bit trickier.

For SND, on the live site using the original V1 API, it looks like the bibliographical links are stored in a database table and these are then injected into the XML entries whenever an entry is displayed.  A column in the table contains the order the citation appears in the entry and this is how the system knows which bibliographical ID to assign to which link in the entry.  This raises some questions about what happens when an entry is edited.  If the order of the citations in the XML is changed, or a new citation is added, then all of the links to the bibliographies will be out of sync.  Plus, unless the database table is edited no new bibliographical links will ever display.  It is possible that the data in the bibliographical links table is already out of date, and we are going to need to try and find a way to add these bibliographical links into the actual XML entries rather than retaining the old system of storing them separately and injecting them each time the entry is requested.  I emailed Ann for further discussion about these points.  Also this week I made a few updates to the live DSL website, changing the logos that are used and making ‘Dictionary’ in the title plural.

For the AND this week I added in the missing academic articles that Geert had managed to track down and then began focusing on updating the source texts and working with the commentaries for the R data.  The commentaries were sent to me in two Word files, and although we had hoped to be able to work out a mechanism for automatically extracting these and adding them to their corresponding entries it looks like this will be very difficult to achieve with any accuracy.  I concluded that I could split the entries up in Geert’s document based on the ‘**’ characters between commentaries and possibly split Heather’s up based on blank lines.  I could possibly retain the formatting (bold, italic, superscript text etc) and convert this to HTML, although even this would be tricky, time consuming and error-prone.  The commentaries include links to other entries in bold, and I would possibly be able to automatically add in links to other entries based on entries appearing in bold in the commentaries, but again this would be highly error-prone as bold text is used for things other than entries, and sometimes the entry number follows a hash while at other times it’s superscript.  It would also be difficult to automatically ascertain which entry a commentary belongs to as there is some inconsistency here too – e.g. the commentary for ‘remuement’ is listed as ‘[remuement]??’ and there are other occasions where the entry doesn’t appear on its own on a line – e.g. ‘Retaillement xref with recelement’ and ‘Reverdure—Geert says to omit’.  Then there are commentaries that are all crossed out, e.g. ‘resteot’.  We decided that attempting to automatically process the commentaries would not be feasible and instead the editors would add them to the entry XML files manually, adding the tags for bold, italic, superscript and other formatting as required.  Geert added commentaries to two entries to see how this would work and it worked very well.

For the source texts, we had originally discussed the editors editing these via a spreadsheet that I’d generated from the online data last year, but I decided it would be better if I just start work on the new online Dictionary Management System (DMS) and create the means of adding, listing and editing the source texts as the first thing that can be managed via the new DMS.  This seemed preferable to establishing a new, temporary workflow that may take some time to set up and may end up not being used for very long.  I therefore created the login and initial pages for the DMS (by repurposing earlier content management systems I’d created).  I then set up database tables for holding the new source text data, which includes multiple potential items for each source and a range of new fields that the original source text data does not contain.  With this in place I created the DMS pages for browsing the source texts and deleting them, and I’m midway through writing the scripts for editing existing and adding new source texts.  I aim to have this finished next week.

For the Books and Borrowing project I continued to make refinements to the CMS, namely reducing the number of books and borrowers listed per page from 500 to 200 to speed up page loads, adding in the day of the week that books were borrowed and returned, based on the date information already in the system, removing tab characters from edition titles as these were causing some issues for the system, replacing the editor’s notes rich text box with a plain text area to save space on the edit page and adding a new field to the borrowing record that allows the editor to note when certain items appear for display only and should otherwise be overlooked, for example when generating stats.  This is to be used for duplicate lines and lines that are crossed out.  I also had a look through the new sample data from Craigston that was sent to us this week.
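As a small aside, deriving the day of the week from the borrowing dates already in the system is essentially a one-liner in PHP (assuming the dates are held in a standard Y-m-d form):

```php
<?php
// Sketch only: get the textual day of the week for a stored borrowing date.
$borrowedDate = '1774-03-14';
$dayOfWeek    = (new DateTime($borrowedDate))->format('l'); // e.g. 'Monday'
```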

For the Scots Language Policy project I set up the project’s website, including the user interface, fonts, plugins, initial page structure, site graphics, logos etc.  Also this week I fixed an issue with song downloads on the Burns website: the plugin that controls the song downloads is very old and had broken, so I needed to install a newer version and upgrade the song data for the downloads to work again.  I also continued my email conversation with Rachel Fletcher about a project she’s putting together and created a user account to allow Simon Taylor to access the Ayr Placenames CMS.

Week Beginning 11th January 2021

This was my first full week back of the year, although it was also the first week of a return to homeschooling, which made working a little trickier than usual.  I also had a dentist’s appointment on Tuesday and lost some time to that, as my dentist is near the University rather than where I live.  However, despite these challenges I was able to achieve quite a lot this week.  I had two Zoom calls: the first, on Monday, was to discuss a new ESRC grant that Jane Stuart-Smith is putting together with colleagues at Strathclyde, while the second, on Wednesday, was with a partner in Joanna Kopaczyk’s new RSC funded project about Scots Language Policy to discuss the project’s website and the survey they’re going to put out.  I also made a few tweaks to the DSL website, replied to Kirsteen McCue about the AHRC proposal she’s currently putting together, replied to a query regarding the technologies behind the Scots Syntax Atlas, made a few further updates to the Burns Supper map and replied to a query from Rachel Fletcher in English Language about lemmatising Old English.

Other than these various tasks I split my time between the Anglo-Norman Dictionary and the Books and Borrowing projects.  For the former I completed adding explanatory notes to all of the ‘Introducing the AND’ pages.  This was a very time-consuming task, as there were probably about 150 explanatory notes in total to add in, each appearing in a Bootstrap dialog box, and each requiring me to copy the note from the old website, add in any required HTML formatting, find and check all of the links to AND entries on the old site and add these in as required.  It was pretty tedious to do, but it feels great to get it done, as the notes were previously just giving 404 errors on the new live site, and I don’t like having such things on a site I’m responsible for.  I also migrated the academic articles from the old site to the new one (https://anglo-norman.net/articles/), which also required some manual formatting of the content.  There are five other articles that I haven’t managed to migrate yet, as they are full of character encoding errors on the old site.  Geert is looking for copies of these articles that actually work and I’ll add them in once he’s able to get them to me.  I also began migrating the blog posts to the new site.  Currently the blog is hosted on Blogspot and there are 55 entries, but we’d like these to be an internal part of the new site.  Migrating these is going to take some time, as it means copying the text (which thankfully retains formatting) and then manually saving and embedding any images in the posts.  I’m just going to do a few of these a week until they’re all done, and so far I’ve migrated seven.  I also needed to look into how the blog page works in the WordPress theme I created for the AND, as to start with the page was just listing the full text of every post rather than giving summaries and links through to the full text of each.  After some investigation I figured out that in my theme there is a script called ‘home.php’ that is responsible for displaying all of the blog posts on the ‘blog’ page.  It in turn calls another template called ‘content-blog.php’, which was previously set to display the full content of each post.  Instead I set it to display the title as a link through to the full post, the date and then an excerpt of the post, which can be generated through a handy WordPress function called ‘the_excerpt()’.
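
The change to ‘content-blog.php’ essentially boils down to swapping the full-content template tag for WordPress’s excerpt handling.  The snippet below is a simplified sketch rather than the theme’s actual markup, but it shows the shape of it; it is intended to run inside the post loop that ‘home.php’ sets up.

    <?php
    // Simplified 'content-blog.php' sketch: a linked title, the date and an
    // automatically generated excerpt instead of the full post content.
    ?>
    <article id="post-<?php the_ID(); ?>">
        <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
        <p class="post-date"><?php echo get_the_date(); ?></p>
        <?php the_excerpt(); // WordPress generates a short summary of the post ?>
    </article>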

For the Books and Borrowing project I made some improvements and fixes to the Content Management System.  I’d been meaning to enhance the CMS for some time, but due to commitments to other projects I hadn’t had the time to delve into it.  It felt good to find the time to return to the project this week.

I updated the ‘Books’ and ‘Borrowers’ tabs when viewing a library in the CMS, adding in pagination to speed up the loading of the pages.  Pages are now split into 500-record blocks and you can navigate between them using the links above and below the tables.  For some reason the loading is still a bit slow on the Stirling server, whereas it was fine on the Glasgow server I was using for test purposes.  I’m not entirely sure why, as I’d copied the database over too – presumably the Stirling server is slower.  However, it is still a massive improvement on how the pages loaded previously.
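
The pagination itself is nothing fancy: the page number comes in via the query string and the database query fetches one 500-record block at a time.  A rough sketch is below, with hypothetical table and column names and assuming an existing PDO connection ($db) and the current library’s ID ($libraryId).

    <?php
    // Fetch one block of records for the current page via LIMIT / OFFSET.
    // $db is an existing PDO connection; $libraryId is the current library's ID.
    $perPage = 500;
    $page = isset($_GET['page']) ? max(1, (int)$_GET['page']) : 1;
    $offset = ($page - 1) * $perPage;

    $stmt = $db->prepare("SELECT * FROM books WHERE library_id = :lib
                          ORDER BY title LIMIT :limit OFFSET :offset");
    $stmt->bindValue(':lib', $libraryId, PDO::PARAM_INT);
    $stmt->bindValue(':limit', $perPage, PDO::PARAM_INT);
    $stmt->bindValue(':offset', $offset, PDO::PARAM_INT);
    $stmt->execute();
    $books = $stmt->fetchAll(PDO::FETCH_ASSOC);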

I also changed the way tables scroll horizontally.  Previously if a table was wider than the page a scrollbar appeared above and below the table, but this was rather awkward to use if you were looking at the middle of the table (you had to scroll up or down to the beginning or end of the table, then use the horizontal scrollbar to move the table along a bit, then navigate back to the section of the page you were interested in).  Now the scrollbar just appears at the bottom of the browser window and can always be accessed no matter where in the table you are.

I also removed the editorial notes from tables by default to reduce clutter, and added a button near the top of each page for showing / hiding them.  In addition, I added an option to the ‘Books’ and ‘Borrowers’ pages within a library to limit the displayed records to those found in a specific ledger, along with a further option to display records that are not currently associated with any ledger.

I then deleted the ‘original borrowed date’ and ‘original returned date’ fields from the St Andrews data, as these were no longer required, removing both the fields themselves and all of the data they contained.

It had been noted that the book part numbers were not being listed numerically.  As part numbers can contain text as well as numbers (e.g. ‘Vol. II’), this field in the database needed to be set as text rather than an integer.  Unfortunately the database doesn’t order numbers correctly when they are stored in a non-numerical field – sorted alphabetically, all the ones come first (1, 10, 11), then all the twos (2, 20, 22) and so on.  However, I managed to find a way to ensure that the numbers are ordered correctly.
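
I didn’t note down the exact query change here, but for reference one widely used MySQL approach is to cast the text column to an integer for the primary sort and use the raw text as a tie-breaker.  This isn’t necessarily the precise fix applied, the table and column names below are hypothetical, and $db is assumed to be an existing PDO connection.

    <?php
    // Order a text column numerically: CAST gives 1, 2, 10, 20, 22 rather than
    // 1, 10, 11, 2, 20, 22.  Purely textual values such as 'Vol. II' cast to 0
    // and so group together at the start of the results.
    $stmt = $db->query("SELECT id, part_number FROM book_parts
                        ORDER BY CAST(part_number AS UNSIGNED), part_number");
    $parts = $stmt->fetchAll(PDO::FETCH_ASSOC);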

I also fixed the ‘Add another Edition/Work to this holding’ button, which was not working.  This was caused by the Stirling server running a different version of PHP that doesn’t allow functions to have variable numbers of arguments.  The autocomplete was also not working at edition level; when I investigated I found the issue was caused by tab characters appearing in edition titles, so I updated my script to ensure these characters are stripped out before the data is formatted as JSON.
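
In outline the tab-stripping part of the fix looks something like the sketch below, assuming an $editions array pulled from the database; the real script’s structure and field names will differ.

    <?php
    // Strip tabs (and stray line breaks) from edition titles before the data
    // is output as JSON for the autocomplete.  $editions is assumed to be an
    // array of rows fetched from the database.
    $clean = array();
    foreach ($editions as $edition) {
        $clean[] = array(
            'id'    => $edition['id'],
            'title' => preg_replace('/[\t\r\n]+/', ' ', $edition['title'])
        );
    }
    header('Content-Type: application/json');
    echo json_encode($clean);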

There may be further tweaks to be made – I’ll need to hear back from the rest of the team before I know more, but for now I’m up to date with the project.  Next week I intend to get back into some of the larger and trickier outstanding AND tasks (of which there are, alas, many) and to begin working towards adding the DSL bibliography data into the new version of the API.