Week Beginning 19th November 2018

This week I mainly worked on three projects: the Historical Thesaurus, the Bilingual Thesaurus and the Romantic National Song Network.  For the HT I continued with the ongoing and seemingly never-ending task of joining up the HT and OED datasets.  Marc, Fraser and I had a meeting last Friday and I began to work through the action points from this meeting on Monday.  By Wednesday I had ticked off most of the items, which I’ll summarise here.

Whilst developing the Bilingual Thesaurus I’d noticed that search term highlighting on the HT site wasn’t working for quick searches, only advanced searches for words, so I investigated and fixed this.  I then updated the lexeme pattern matching / date matching script to incorporate the stoplist we’d created during last week’s meeting (words or characters that should be removed when comparing lexemes, such as ‘to’ and ‘the’).  This worked well and has bumped matches up to better colour levels, but has resulted in some words getting matched multiple times: when ‘to’, ‘of’ etc. are removed, the resulting form can appear more than once.  For example, in one category the OED has ‘bless’ twice (presumably an error?) and the HT has ‘bless’ and ‘bless to’.  With ‘to’ removed there then appear to be more matches than there should be.  However, this is not an issue when dates are also taken into consideration.  I also updated the script so that categories where there are 3 matches and at least 66% of words match have been promoted from orange to yellow.
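As a rough illustration of the stoplist idea, here’s a minimal TypeScript sketch (the stoplist entries and function name are purely illustrative – the real script runs against the database and handles more cases):

```typescript
// Hypothetical stoplist entries for illustration only.
const stoplist = ['to', 'the', 'of'];

// Strip stoplist words from a lexeme before comparison.
function normalise(lexeme: string): string {
  return lexeme
    .toLowerCase()
    .split(/\s+/)
    .filter(word => !stoplist.includes(word))
    .join(' ');
}

// 'bless' and 'bless to' both normalise to 'bless', which is why a form
// can now match more than once unless first dates are also compared.
console.log(normalise('bless to')); // "bless"
console.log(normalise('bless'));    // "bless"
```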

When looking at the outputs at the meeting Marc wondered why certain matches (e.g. 120202 ‘relating to doctrine or study’ / ‘pertaining to doctrine/study’ and 88114 ‘other spec.’ / ‘other specific’) hadn’t been ticked off and wondered whether category heading pattern matching had worked properly.  After some investigation I’d say it has worked properly – the reason these haven’t been ticked off is that they contain too few words to meet the criteria for ticking off.

Another script we looked at during our meeting was the sibling matching script, which looks for matches at the same hierarchical level and part of speech, but with different numbers.  I completely overhauled the script to bring it into line with the other scripts (including recent updates such as the stoplist for lexeme matching and the new yellow criteria).  There are currently 19 green, 17 lime green and 25 yellow matches that could be ticked off.  I also ticked off the empty category matches listed on the ‘thing heard’ script (so long as they have a match) and for the ‘Noun Matching’ script I ticked off the few matches that there were.  Most were empty categories and there were fewer than 15 in total.
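The core sibling test is simple enough; here’s a sketch with hypothetical category shapes (the real script also applies the word and date checks described above):

```typescript
// Hypothetical category shape for illustration.
interface Cat {
  number: string; // e.g. '01.02.06.01'
  pos: string;    // part of speech
}

// Siblings share the parent number and part of speech
// but differ in their final number.
function isSibling(oed: Cat, ht: Cat): boolean {
  const parent = (n: string) => n.split('.').slice(0, -1).join('.');
  return oed.pos === ht.pos
    && parent(oed.number) === parent(ht.number)
    && oed.number !== ht.number;
}
```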

Another script I worked on was the ‘monosemous’ script, which looks for monosemous forms in unmatched categories and tries to identify HT categories that also contain these forms.  We weren’t sure at the meeting whether this script identified words that were fully monosemous in the entire dataset, or those that were monosemous in the unmatched categories.  It turned out it was the former, so I updated the script to only look through the unchecked data, which has identified further monosemous forms.  This has helped to more accurately identify matched categories.  I also created a QA script that checks the full categories that have potentially been matched by the monosemous script.
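In essence the updated check counts each form across the unchecked categories only and keeps those that appear exactly once – a sketch, assuming hypothetical data shapes:

```typescript
interface Category {
  id: number;
  words: string[];
}

// A form counts as monosemous here if it appears in exactly one
// unmatched category; returns form -> the id of that single category.
function monosemousForms(unmatched: Category[]): Map<string, number> {
  const seen = new Map<string, number[]>();
  for (const cat of unmatched) {
    for (const word of cat.words) {
      const ids = seen.get(word) ?? [];
      ids.push(cat.id);
      seen.set(word, ids);
    }
  }
  const result = new Map<string, number>();
  seen.forEach((ids, word) => {
    if (ids.length === 1) result.set(word, ids[0]);
  });
  return result;
}
```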

I also worked on the date fingerprinting script.  This gets all of the start dates associated with lexemes in a category, plus a count of the number of times each date appears, and uses these to try and find matches in the HT data.  I updated this script to incorporate the stoplist and the ‘3 matches and 66% match’ yellow rule, and ticked off lots of matches that this script identified.  I ticked off all green (1556), lime green (22) and yellow (123) matches.
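The fingerprint itself is just a multiset of first dates; here’s a sketch of the comparison, with made-up names:

```typescript
// A fingerprint maps each start date to how often it occurs in a category.
type Fingerprint = Map<number, number>;

function fingerprint(startDates: number[]): Fingerprint {
  const fp: Fingerprint = new Map();
  for (const d of startDates) fp.set(d, (fp.get(d) ?? 0) + 1);
  return fp;
}

// Two categories share a fingerprint if every date occurs
// the same number of times in both.
function sameFingerprint(a: Fingerprint, b: Fingerprint): boolean {
  if (a.size !== b.size) return false;
  for (const [date, count] of a) {
    if (b.get(date) !== count) return false;
  }
  return true;
}

// e.g. [1200, 1200, 1382] and [1382, 1200, 1200] produce the same fingerprint.
```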

Out of curiosity, I wrote a script that looked at our previous attempt at matching the categories, which Fraser and I worked on last year and earlier this year.  The script looks at categories that were matched during this ‘v1’ process but have yet to be matched during our current ‘v2’ process.  For each of these the script performs the usual checks based on content: comparing words and first dates and colour coding based on the number of matches (this includes the stoplist and new yellow criteria mentioned earlier).  There are 7148 OED categories that are currently unmatched but were matched in v1.  Almost 4000 of these are empty categories.  There are 1283 ‘purple’ matches, which means (generally) something is wrong with the match.  But there are 421 in the green, lime green and yellow sections, which is about 12% of the remaining unmatched OED categories that have words.  It might also be possible to spot some patterns to explain why they were matched during v1 but have yet to be matched in v2.  For example, 2711 ‘moving water’ has 01.02.06.01.02 while its HT counterpart has 01.02.06.01.01.02 – the HT number has one extra hierarchical level inserted.  There are possibly patterns in the 1504 orange matches that could be exploited too.
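That ‘extra level’ pattern would be straightforward to test for if it turns out to be common – a sketch (a hypothetical function, not yet part of any script):

```typescript
// True if removing one level from the HT number recovers the OED number,
// i.e. the HT hierarchy has a single extra level inserted.
function hasInsertedLevel(oedNum: string, htNum: string): boolean {
  const ht = htNum.split('.');
  if (ht.length !== oedNum.split('.').length + 1) return false;
  return ht.some((_, i) =>
    ht.filter((_, j) => j !== i).join('.') === oedNum
  );
}

console.log(hasInsertedLevel('01.02.06.01.02', '01.02.06.01.01.02')); // true
```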

Finally, I updated the stats page to include information about main and subcats.  Here are the current unmatched figures:

Unmatched (with POS): 8629

Unmatched (with POS and not empty): 3414

Unmatched Main Categories (with POS): 5036

Unmatched Main Categories (with POS and not empty): 1661

Unmatched Subcategories (with POS): 3573

Unmatched Subcategories (with POS and not empty): 1753

So we are getting there!

For the Bilingual Thesaurus I completed an initial version of the website this week.  I have replaced the original colour scheme with a ‘red, white and blue’ colour scheme as suggested by Louise.  This might be changed again, but for now here is an example of how the resource looks:

The ‘quick’ and ‘advanced’ searches are also now complete, using the ‘search words’ mentioned in a previous post, and ignoring accents on characters.  As with the HT, by default the quick search matches category headings and headwords exactly, so ‘ale’ will return results as there is a category ‘ale’ and also a word ‘ale’, but ‘bread’ won’t match anything because there are no words or categories with this exact text.  You need to use an asterisk wildcard to find text within word or category text: ‘bread*’ would find all items starting with ‘bread’, ‘*bread’ would find all items ending in ‘bread’ and ‘*bread*’ would find all items with ‘bread’ occurring anywhere.
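Behind the scenes this sort of wildcard handling boils down to translating asterisks into SQL ‘LIKE’ wildcards; here’s a sketch of the idea (the site’s actual query-building code may differ):

```typescript
// Convert a user's search term into a SQL LIKE pattern:
// no asterisk means an exact match, '*' becomes '%'.
function toLikePattern(term: string): string {
  // Escape LIKE metacharacters in the user's own text first...
  const escaped = term.replace(/[%_]/g, ch => '\\' + ch);
  // ...then turn each asterisk into the LIKE wildcard.
  return escaped.replace(/\*/g, '%');
}

console.log(toLikePattern('bread'));   // "bread"   (exact match only)
console.log(toLikePattern('bread*'));  // "bread%"  (starts with 'bread')
console.log(toLikePattern('*bread*')); // "%bread%" (contains 'bread')
```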

The ‘advanced search’ lets you search for any combination of headword, category, part of speech, section, dates and languages of origin and citation.  Note that if you specify a range of years in the date search it brings back any word that was ‘active’ in your chosen period.  E.g. a search for ‘1330-1360’ will bring back ‘Edifier’ with a date of 1100-1350 because it was still in use during this period.
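This ‘active in period’ behaviour is a standard interval overlap test; a minimal sketch with made-up field names:

```typescript
// A word is 'active' in the searched period if the two
// date ranges overlap at any point.
function activeInPeriod(
  wordStart: number, wordEnd: number,
  searchStart: number, searchEnd: number
): boolean {
  return wordStart <= searchEnd && wordEnd >= searchStart;
}

console.log(activeInPeriod(1100, 1350, 1330, 1360)); // true: 'Edifier' was still in use
```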

As with the HT, different search boxes are joined with ‘AND’ – e.g. if you tick ‘verb’ and select ‘Anglo Norman’ as the section then only words that are verbs AND Anglo Norman will be returned.  Where a search type allows multiple options to be selected (i.e. part of speech and languages of origin and citation), the selected options within a list are joined by ‘OR’.  E.g. if you select ‘noun’ and ‘verb’ and select ‘Dutch’, ‘Flemish’ and ‘Italian’ as languages of origin this will find all words that are either nouns OR verbs AND have a language of origin of Dutch OR Flemish OR Italian.
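In query terms this AND-between-fields / OR-within-a-field logic amounts to one ‘IN’ clause per field, joined by ‘AND’; here’s a sketch with invented column names (the real site’s code will differ):

```typescript
// Build a parameterised WHERE clause: OR within a multi-select field
// (via IN), AND between different fields.
function buildWhere(pos: string[], langs: string[]): { sql: string; params: string[] } {
  const clauses: string[] = [];
  const params: string[] = [];
  if (pos.length > 0) {
    clauses.push(`pos IN (${pos.map(() => '?').join(', ')})`);
    params.push(...pos);
  }
  if (langs.length > 0) {
    clauses.push(`language_of_origin IN (${langs.map(() => '?').join(', ')})`);
    params.push(...langs);
  }
  return { sql: clauses.join(' AND '), params };
}

const q = buildWhere(['noun', 'verb'], ['Dutch', 'Flemish', 'Italian']);
// q.sql: "pos IN (?, ?) AND language_of_origin IN (?, ?, ?)"
```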

Search results display the full hierarchy leading to the category, the category name, plus the headword, section, POS and dates (if the result is a ‘word’ result rather than a ‘category’ result).  Clicking through to the category highlights the word. I also added in a ‘cite’ option to the category page and updated the ‘About’ page to add a sentence about the current website.  The footer still needs some work (e.g. maybe including logos for the University of Westminster and Leverhulme) and there’s a ‘terms of use’ page linked to from the homepage that currently doesn’t have any content, but other than that I think most of my work here is done.

For the Romantic National Song Network I continued to create timelines and ‘storymaps’ based on PowerPoint presentations that had been sent to me.  This is proving to be a very time-intensive process, as it involves extracting images, audio files and text from the presentations, formatting the text as HTML, reworking the images (resizing, sometimes joining multiple images together to form one image, changing colour levels), saving the images and uploading them to the WordPress site, uploading the audio files, adding in the HTML5 audio tags to get the audio files to play, and creating the individual pages for each timeline entry / storymap entry.  It took the best part of an afternoon to create one timeline for the project, which involved over 30 images, about 10 audio files and more than 20 PowerPoint slides.  Still, the end result works really well, so I think it’s worth putting the effort in.

In addition to these projects I met with a PhD student, Ewa Wanat, who wanted help in creating an app.  I spent about a day attempting to make a proof of concept for the app, but unfortunately the tools I work with are just not very well suited to the app she wants to create.  The app would be interactive and highly dependent on logging user interactions as accurately as possible.  I looked into using the d3.js library to create the sort of interface she wanted (a circle that rotates with smaller circles attached to it, which the user should tap on when a certain point in the rotation is reached), but although this worked, the ‘tap’ detection was not accurate enough.  In fact, on touchscreens more often than not a ‘tap’ wasn’t even being registered.  D3.js just isn’t made to deal with time-sensitive user interaction on animated elements and I have no experience with any libraries that are designed for this, so unfortunately it looks like I won’t be able to help out with this project.  Also, Ewa wanted the app to be launched in January and I’m just far too busy with other projects to be able to do the required work in this sort of timescale.
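For what it’s worth, here’s a stripped-down TypeScript sketch of the sort of thing I was attempting – a marker rotating around a circle via a d3 timer, with a listener that logs how far from the target point each tap lands.  The rotation period, sizes and target position are made-up values rather than those of the actual experiment:

```typescript
import * as d3 from 'd3';

const svg = d3.select('body').append('svg')
  .attr('width', 400)
  .attr('height', 400);

const group = svg.append('g')
  .attr('transform', 'translate(200,200)');

// The large circle the marker travels around.
group.append('circle')
  .attr('r', 120)
  .attr('fill', 'none')
  .attr('stroke', '#333');

// The smaller circle the user is meant to tap at the right moment.
const marker = group.append('circle')
  .attr('r', 12)
  .attr('cy', -120)
  .attr('fill', 'steelblue');

const period = 4000; // ms per full rotation (an assumed value)
let angle = 0;

// Rotate the marker around the group's origin on every animation tick.
d3.timer(elapsed => {
  angle = ((elapsed % period) / period) * 360;
  marker.attr('transform', `rotate(${angle})`);
});

// The problem in practice: by the time a touch event fires the animation
// has moved on, so the measured angle is systematically late - and on
// touchscreens many taps were never registered at all.
marker.on('touchstart click', (event: Event) => {
  event.preventDefault();
  const offset = Math.min(angle, 360 - angle); // angular distance from the target at 0 degrees
  console.log(`tap at ${angle.toFixed(1)} degrees, offset ${offset.toFixed(1)}`);
});
```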

Also this week I helped extract some data about the Seeing Speech and Dynamic Dialects videos for Eleanor Lawson, responded to queries from Meg MacDonald and Jennifer Nimmo about technical work on proposals they are involved with, responded to a request for advice from David Wilson about online surveys, and dealt with another request from Rachel Macdonald about the use of Docker on the SPADE server.  I think that’s just about everything to report.