Week Beginning 8th October 2018

I continued to work on the HT / OED data alignment for a lot of this week. I updated the matching scripts I had previously created so that all matches based on last lexeme were removed and instead replaced by a ‘6 matches or more and 80% of words in total match’ check. This was a lot more effective than purely comparing the last word in each category and helped match up a lot more categories. I also created a QA script to check the manual matches that were made during our first phase of matching. There are 1407 manual matches in the system. The script also listed all the words in each potentially matched category to make it easier to tell where any difficulties were. I also updated the ‘pattern matching’ script I’d created last week to list all words and include the ‘6 matches and 80%’ check, and changed the layout so that separate groupings now appear in different tables rather than being all mixed up in one table. It took quite a long time to sort this out, but it’s going to be much more useful for manual checking.
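For reference, the core of the new check boils down to something like the following (a minimal JavaScript sketch; the real script runs against the database tables, and the function and variable names here are just for illustration, as is the choice of the larger category as the denominator):

```javascript
// Rough sketch of the '6 matches or more and 80% of words in total match' check.
// Assumes each category's lexemes are available as an array of strings.
function isLikelyMatch(htWords, oedWords) {
  const htSet = new Set(htWords.map(w => w.toLowerCase()));
  const matched = oedWords.filter(w => htSet.has(w.toLowerCase())).length;
  const total = Math.max(htWords.length, oedWords.length);
  // Flag the pair when at least 6 lexemes coincide and those matches
  // account for at least 80% of the words in the (larger) category.
  return matched >= 6 && matched / total >= 0.8;
}

// Example: 7 of 8 words match, so this pair would be flagged for review.
isLikelyMatch(
  ['dog', 'hound', 'cur', 'pup', 'whelp', 'mongrel', 'tyke', 'mutt'],
  ['dog', 'hound', 'cur', 'pup', 'whelp', 'mongrel', 'tyke']
); // true
```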

I then moved on to writing a new ‘sibling matching’ script.  This script goes through all unmatched OED categories (this includes all that appear in other scripts such as the pattern matching one) and retrieves all sibling categories of the same POS.  E.g. if the category is ‘01.01.01|03 (n)’ then the script brings back all HT noun subcats of ’01.01.01’ that are ‘level 1’ subcats and compares their headings.  It then looks to see if there is a sibling category that has the same heading – i.e. looking for when a category has been renumbered within the same level of the thesaurus.  This has uncovered several hundred such potential matches, which will hopefully be very helpful. I also then created a further script that compares non-noun headings to noun headings at the same level, as it looked like a number of times the OED kept the noun heading for other parts of speech while the HT renamed them.  This identified a further 65 possible matches, which isn’t too bad.
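The heart of the sibling check is just a heading comparison across categories that share a parent, level and part of speech, along these lines (a hedged sketch only; the property names are assumptions rather than the real database fields):

```javascript
// Sketch of the sibling-matching idea: for an unmatched OED category, find HT
// categories under the same parent, at the same subcat level and with the same
// POS, whose heading is identical but whose subcat number differs (i.e. the
// category looks to have been renumbered within the same level).
function findSiblingMatches(oedCat, htCats) {
  return htCats.filter(ht =>
    ht.parentNumber === oedCat.parentNumber && // e.g. '01.01.01'
    ht.subLevel === oedCat.subLevel &&         // e.g. 'level 1' subcats only
    ht.pos === oedCat.pos &&
    ht.subNumber !== oedCat.subNumber &&       // a different slot, e.g. '03' vs '04'
    ht.heading.trim().toLowerCase() === oedCat.heading.trim().toLowerCase()
  );
}
```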

I met with Marc and Fraser on Wednesday to discuss the recent updates I’d made, after which I managed to tick off 2614 matched categories, taking our total of unmatched OED categories that have a part of speech and are not empty down to 10,854.  I then made a start on a new script that looks at pattern matching for category contents (i.e. words), but I didn’t have enough time to make a huge amount of progress with this.

I also fixed an issue with the HT’s Google Analytics not working properly. It looks like the code stopped working around the time we shifted domains in the summer, but it was a bit of a weird one as everything was apparently set up correctly – the ‘Send test traffic’ option in the JS tracking code section successfully navigated through to the site and the tracking code itself was correct, but nothing was getting through. However, I replaced the existing GA JavaScript that we had on our page with a new snippet from the JS section of Google Analytics and this seems to have done the trick.
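For anyone hitting the same problem: the replacement was just the stock analytics.js loader snippet from the GA admin screens, which looks something like this (with a placeholder in place of our actual tracking ID):

```javascript
// Standard analytics.js loader snippet (the tracking ID below is a placeholder).
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXXX-X', 'auto'); // placeholder property ID
ga('send', 'pageview');
```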

However, we also have some calls to GA in our JavaScript so that loading parts of the tree, changing parts of speech, selecting subcats etc. are treated as ‘page hits’ and reported to GA. None of these worked after I changed the code. I followed some guidelines here:

https://developers.google.com/analytics/devguides/collection/analyticsjs/sending-hits

to try and get things working, but the callbacks were never being initiated – i.e. data wasn’t getting through to Google. Thankfully Stack Overflow had an answer that worked (after trying several that didn’t):

https://stackoverflow.com/a/40761709

I’ve updated this so that pageviews rather than events are sent and now everything seems to be working again.
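In practice the change amounts to sending each interaction as a virtual pageview with its own path, roughly like this (a simplified sketch rather than the actual site code; the wrapper name and paths are made up):

```javascript
// Report an in-page interaction (e.g. selecting a subcat) as a virtual pageview.
function trackVirtualPage(path, title) {
  if (typeof ga === 'function') {
    ga('set', 'page', path);                  // set the virtual page path
    ga('send', 'pageview', { title: title }); // send it as a pageview hit
  }
}

// e.g. when a part of speech is selected in the tree:
trackVirtualPage('/category/12345/aj', 'Category 12345 (adjective)');
```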

I spent a bit more time this week working on the Bilingual Thesaurus project, focussing on getting the front end for the thesaurus working. I’ve reworked the code for the HT’s browse facility to work with the project’s data. This required quite a lot of work as structurally the datasets are quite different – the HT relies on its ‘tier’ numbers for parent / child / sibling category relationships, and also has different categories for parts of speech and nested subcategories. The BTH data is much simpler (which is great) as it just has parent and child categories, with things like part of speech handled at word level. This meant I had to strip a lot of stuff out of the code and rework things. I’m also taking the opportunity to move to a new interface library (Bootstrap) so had to rework the page layout to take this into consideration too.

I’ve managed to get an initial version of the browse facility working now, which works in much the same way as the main HT site: clicking on a heading allows you to view its words and clicking on a ‘plus’ sign allows you to view the child categories. As with the HT you can link directly to a category too. I do still need to work on the formatting of the category contents, though. Currently words are just listed all together, with their type (AN or ME) listed first, then the word, then the POS in brackets, then dates (if available). I haven’t included data about languages of source or citation yet, or URLs. I’m also going to try to get the timeline visualisations working. I’ll probably split the AN and ME words into separate tabs, and maybe split the list up by POS too. I’m also wondering whether the full category hierarchy should be represented above the selected category (in the right pane), as unlike the HT there’s no category number to show your position in the thesaurus. Also, as a lot of the categories are empty I’m thinking of making the ones with words in them bold in the tree, or even possibly adding a count of words in brackets after the category heading. I’ve also updated the project’s homepage to include the ‘sample category’ feature, allowing you to press the ‘reload’ icon to load a new random category.
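As a rough illustration of the current listing format described above, each word is rendered along these lines (a hedged sketch only; the field names are assumptions rather than the real BTH schema):

```javascript
// Format a BTH word entry: type (AN or ME) first, then the word, the POS in
// brackets, then dates if available. Field names here are illustrative only.
function formatWord(entry) {
  // e.g. entry = { type: 'ME', word: 'bakere', pos: 'n', dates: 'c1300–1450' }
  let html = '<span class="word-type">' + entry.type + '</span> ' +
             entry.word + ' (' + entry.pos + ')';
  if (entry.dates) {
    html += ' ' + entry.dates;
  }
  return html;
}
```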

I also made some further tweaks to the Seeing Speech data (fixing some of the video titles and descriptions) and had a chat with Thomas Clancy about his Iona proposal, which is starting to come together again after several months of inactivity.  Also for the ‘Books and Borrowing’ proposal I replied to a request for more information on the technical side of things from Stirling’s IT Services, who will be hosting the website and database for the project.  I met with Luca this week as well, to discuss how best to grab XML data via an AJAX query and process it using client-side JavaScript.  This is something I had already tackled as part of the ‘New Modernist Editing’ project, so was able to give Luca some (hopefully) useful advice.  I also continued an email conversation with Bryony Randall and Ronan Crowley about the workshop we’re running on digital editions later in the month.
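The approach I talked Luca through boils down to fetching the XML with an AJAX request and handing it to DOMParser, along these lines (a generic sketch; the URL and element names are placeholders rather than anything from his project):

```javascript
// Grab an XML file via AJAX and process it entirely client-side.
fetch('/data/edition.xml')
  .then(response => response.text())
  .then(text => {
    const xml = new DOMParser().parseFromString(text, 'application/xml');
    // Pull out whatever elements are needed, e.g. every <p> element.
    xml.querySelectorAll('p').forEach(p => {
      console.log(p.textContent);
    });
  })
  .catch(err => console.error('Could not load XML:', err));
```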

On Friday I spent most of the day working on the RNSN project, adding direct links to the ‘nation’ introductions to the main navigation menu and creating new ‘storymap’ stories based on PowerPoint presentations that had been sent to me. This is actually quite a time-consuming process as it involves grabbing images from the PPT, reformatting them, uploading them to WordPress, linking to them from the Storymap pages, creating Zoomified versions of the image or images that will be used as the ‘map’ for the story, extracting audio files from the PPT and uploading them, grabbing all of the text and formatting it for display, and other such tasks. However, despite being a long process the end result is definitely worth it as the storymaps work very nicely. I managed to get two such stories completed today, and now I’ve re-familiarised myself with the process it should be quicker when the next set gets sent to me.

I’m going to be on holiday next week so there won’t be another report from me until the week after that.