I participated in the UCU strike action from Monday to Wednesday this week, making it a two-day week for me. I’d heard earlier in the week that the paper I’d submitted about the redevelopment of the Anglo-Norman Dictionary had been accepted for DH2022 in Tokyo, which was great. However, the organisers have decided to make the conference online-only, which is disappointing, although probably for the best given the current geopolitical uncertainty. I didn’t want to participate in an online-only event running on Tokyo time (nine hours ahead of the UK), so I’ve asked to withdraw my paper.
On Thursday I had a meeting with the Speak For Yersel project to discuss the content that the team have prepared and what I’ll need to work on next. I also spent a bit of time looking into creating a geographical word cloud that would fit word-cloud output inside a geoJSON polygon shape. I found one possible solution here: https://npm.io/package/maptowordcloud but I haven’t managed to make it work yet.
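I haven’t dug into how maptowordcloud works internally, but the core idea behind any polygon-constrained word cloud is a point-in-polygon test: candidate word positions are only accepted if they fall inside the geoJSON shape. A minimal ray-casting sketch of that test (the triangular region here is made up for illustration, not real geoJSON data):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a ray going right
    from (x, y) crosses. An odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A made-up triangular "region" (one ring of a hypothetical geoJSON polygon)
region = [(0, 0), (10, 0), (5, 8)]
print(point_in_polygon(5, 2, region))  # point near the centre: True
print(point_in_polygon(9, 7, region))  # point outside the triangle: False
```

A real implementation would repeatedly propose positions for each word (scaled by frequency) and keep only those whose bounding box passes this test for the polygon in question.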
I also received a new set of videos for the Speech Star project, relating to the extIPA consonants, and I began looking into how to present these. This was complicated by the fact that the extIPA symbols are not standard Unicode characters. I did a bit of research into how they could be presented, and found this site http://www.wazu.jp/gallery/Test_IPA.html#ExtIPAChart but there the marks appear to the right of the main symbol rather than directly above or below. I contacted Eleanor to see if she had any other ideas and she got back to me with some alternatives which I’ll need to look into next week.
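One reason marks can end up beside rather than above or below a symbol is the difference between Unicode’s spacing modifier letters and its combining diacritics: only the latter are designed to stack on the preceding base character, and even then the font has to support the combination. A quick Python illustration of the distinction (the two marks here are just examples from those blocks, not the actual extIPA diacritics in question):

```python
import unicodedata

base = "n"
ring_below = "\u0325"     # COMBINING RING BELOW: attaches under the base
modifier_ring = "\u02F3"  # MODIFIER LETTER LOW RING: a spacing character

stacked = base + ring_below          # one glyph, ring underneath (font permitting)
side_by_side = base + modifier_ring  # two glyphs, ring sits to the right

# combining() is non-zero for characters that attach to the preceding base;
# 220 is the canonical combining class for marks attached below
print(unicodedata.combining(ring_below))     # 220
print(unicodedata.combining(modifier_ring))  # 0
```

So one thing to check with any candidate symbol set is whether it uses true combining characters, and whether the chosen web font actually positions them correctly.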
I spent a bit of time working for the DSL this week too, looking into a question about Google Analytics from Pauline Graham (and finding this very handy suite of free courses on how to interpret Google Analytics here https://analytics.google.com/analytics/academy/).

The DSL people had also wanted me to look into creating a Levenshtein distance option, whereby words that are spelled similarly to an entered term are given as suggestions, in a similar way to this page: http://chrisgilmour.co.uk/scots/levensht.php?search=drech. I created a test script that allows you to enter a term and view the SND headwords that have a Levenshtein distance of two or less from your term, with any headwords with a distance of one highlighted in bold. However, Levenshtein is a bit of a blunt tool, and as it stands I’m not sure the results of the script are all that promising. My test term ‘drech’ brings back 84 matches, including things like ‘french’, which is unfortunately only two letters different from ‘drech’. I’m fairly certain my script is using the same algorithm as the site linked above; it’s just that we have a lot more possible matches. However, this is just a simple Levenshtein test – we could also add in further tests to limit (or expand) the output, such as a rule that substitutes vowels in certain positions, as in the ‘a’ becomes ‘ai’ example Rhona suggested at our meeting last week. Or we could limit the output to words beginning with the same letter.
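The distance computation itself is the standard dynamic-programming edit distance, which counts the minimum number of single-character insertions, deletions and substitutions needed to turn one word into another. A sketch of the matching step in Python (the headword list here is an invented sample, not the real SND data):

```python
def levenshtein(a, b):
    """Wagner-Fischer edit distance, keeping only the previous row
    of the dynamic-programming table to save memory."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Invented sample headwords; the real script checks the full SND headword list
headwords = ["dreich", "drech", "french", "dreck", "drouth"]
term = "drech"
# Keep anything within distance 2; distance-1 hits would be bolded in the output
matches = sorted((levenshtein(term, w), w) for w in headwords
                 if levenshtein(term, w) <= 2)
print(matches)
```

This makes the ‘french’ problem easy to see: ‘drech’ to ‘french’ is just one substitution (d to f) plus one insertion (n), so it sits at distance two alongside genuinely related spellings. The extra rules mentioned above, such as the ‘a’ becomes ‘ai’ substitution or a same-first-letter filter, could be layered on before or after this distance check.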
Also this week I had a chat with the Historical Thesaurus people, arranging a meeting for next week and exporting a recent version of the database for them to use offline. I also tweaked a couple of entries for the AND and spent an hour or so upgrading all of the WordPress sites I manage to the latest WordPress version.