Week Beginning 8th August 2022

I should have been back at work on Monday this week, after having a lovely holiday last week.  Unfortunately I began feeling unwell over the weekend and ended up off sick on Monday and Tuesday.  I had a fever and a sore throat and needed to sleep most of the time, but it wasn’t Covid as I tested negative.  Thankfully I began feeling more normal again on Tuesday and by Wednesday I was well enough to work again.

I spent the majority of the rest of the week working on the Speak For Yersel project.  On Wednesday I moved the ‘She sounds really clever’ activities to the ‘maps’ page, as we’d decided that these ‘activities’ really just involved looking at the survey outputs and so fitted better on the ‘maps’ page.  I also updated some of the text on the ‘about’ and ‘home’ pages and updated the maps to change the gender labels, expanding ‘F’ and ‘M’ and replacing ‘NB’ with ‘other’ as this is a broader option that better aligns with the choices offered during sign-up.  I also added an option to show and hide the map filters that defaults to ‘hide’ but remembers the user’s selection when other map options are chosen.  I added titles to the maps on the ‘Maps’ page and made some other tweaks to the terminology used in the maps.

On Wednesday we had a meeting to discuss the outstanding tasks still left for me to tackle.  This was a very useful meeting and we managed to make some good decisions about how some of the larger outstanding areas will work.  We also managed to get confirmation from Rhona Alcorn of the DSL that we will be able to embed the old web-based version of the Schools Dictionary app for use with some of our questions, which is really great news.

One of the outstanding tasks was to investigate how the map-based quizzes could have their answer options and the correct answer dynamically generated.  This was never part of the original plan for the project, but it became clear that having static answers to questions (e.g. where do people use ‘ginger’ for ‘soft drink’) wasn’t going to work very well when the data users are looking at is dynamically generated and potentially changing all the time – we would be second-guessing the outputs of the project rather than letting the data guide the answers.  As dynamically generating answers wasn’t part of the plan and would be pretty complicated to develop, this had been left as a ‘would be nice if there’s time’ task, but at our call it was decided that this should now become a priority.  I therefore spent most of Thursday investigating this issue and came up with two potential methods.

The first method looks at each region individually to compare the number of responses for each answer option in the region.  It counts the number of responses for each answer option and then generates a percentage of the total number of responses in the area.  So for example:

North East (Aberdeen)

Mother: 12 (8%)

Maw: 4 (3%)

Mam: 73 (48%)

Mammy: 3 (2%)

Mum: 61 (40%)

So of the 153 current responses in Aberdeen, 73 (48%) were ‘Mam’.  The method then compares the percentages for the particular answer option across all regions and picks out the highest.  The advantage of this approach is that working with percentages alleviates any differences caused by one region having many more respondents than another.  If we look purely at counts then a region with a large number of respondents (as with Aberdeen at the moment) will end up with an unfair advantage, even for answer options that are not chosen the most.  E.g. ‘Mother’ has 12 responses, which is currently by far the most in any region, but taken as a percentage it’s roughly in line with other areas.

But there are downsides.  Any region where the option has been chosen but the total number of responses is low will end up with a large percentage.  For example, both Inverness and Dumfries & Galloway currently only have two respondents, but in each case one of these was for ‘Mam’, meaning that at 50% each they overtake Aberdeen and would be considered the ‘correct’ answer.  If we were to use this method then I would have to put something in place to disregard small samples.  Another downside is that as far as users are concerned they are simply evaluating dots on a map, so perhaps we shouldn’t be trying to address the bias of some areas having more respondents than others, because users themselves won’t be accounting for this.
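To make this more concrete, here is a rough sketch of how method 1 might look in code.  The data structure, function name and the minimum-sample threshold are purely illustrative assumptions rather than the actual implementation:

```typescript
// Sketch of method 1: for the target answer option, find the region where
// that option accounts for the largest share of the region's own responses.
// The Response shape and the minimum-sample threshold are illustrative only.
interface Response {
  region: string;
  answer: string;
}

function correctRegionByShare(
  responses: Response[],
  targetAnswer: string,
  minResponsesPerRegion = 10 // disregard regions with tiny samples
): string | null {
  const totals = new Map<string, number>();  // all responses per region
  const matches = new Map<string, number>(); // target-answer responses per region

  for (const r of responses) {
    totals.set(r.region, (totals.get(r.region) ?? 0) + 1);
    if (r.answer === targetAnswer) {
      matches.set(r.region, (matches.get(r.region) ?? 0) + 1);
    }
  }

  let best: string | null = null;
  let bestShare = -1;
  for (const [region, total] of totals) {
    if (total < minResponsesPerRegion) continue; // e.g. skip the two-respondent regions
    const share = (matches.get(region) ?? 0) / total;
    if (share > bestShare) {
      bestShare = share;
      best = region;
    }
  }
  return best; // with the current data: North East (Aberdeen) at roughly 48% for 'Mam'
}
```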

This then led me to develop method 2, which only looks at the answer option in question (e.g. ‘Mam’) rather than the answer option within the context of the other answer options.  This method counts the number of responses for the answer option in each region and turns each count into a percentage of the total number of responses for that option across Scotland.  So for ‘Mam’ the counts and percentages are as follows:

Ayrshire: 1 (1%)

Fife: 2 (2%)

Glasgow: 2 (2%)

North East (Aberdeen): 73 (84%)

Stirling and Falkirk: 2 (2%)

Lothian (Edinburgh): 1 (1%)

Tayside and Angus (Dundee): 4 (5%)

Dumfries and Galloway: 1 (1%)

Highlands (Inverness): 1 (1%)

Across Scotland there are currently a total of 87 responses where ‘Mam’ was chosen and 73 of these (84%) were in Aberdeen.  As I say, this simple solution probably mirrors how a user will analyse the map – they will see lots of dots in Aberdeen and select this option.  However, it completely ignores the context of the chosen answer.  For example, if we get a massive rush of users from Glasgow (say 2000) and 100 of these choose ‘Mam’ then Glasgow ends up being the correct answer (beating Aberdeen’s 73), even though as a proportion of all chosen answers in Glasgow 100 is only 5% (the other 1900 people will have chosen other options), meaning it would be a pretty unpopular choice compared to the 48% who chose ‘Mam’ over other options in Aberdeen as mentioned near the start.  But perhaps this is a nuance that users won’t consider anyway.
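Here is a similarly rough sketch of method 2, again using made-up names and a simplified data shape rather than the real code:

```typescript
// Sketch of method 2: ignore the other answer options and simply ask which
// region accounts for the largest share of all responses for the target
// option nationally. The inline data shape is an illustrative assumption.
function correctRegionByNationalShare(
  responses: { region: string; answer: string }[],
  targetAnswer: string
): string | null {
  const counts = new Map<string, number>();
  let total = 0;

  for (const r of responses) {
    if (r.answer !== targetAnswer) continue;
    counts.set(r.region, (counts.get(r.region) ?? 0) + 1);
    total++;
  }
  if (total === 0) return null;

  let best: string | null = null;
  let bestCount = -1;
  for (const [region, count] of counts) {
    if (count > bestCount) {
      bestCount = count;
      best = region;
    }
  }
  // With the current data: 73 of the 87 'Mam' responses (84%) are in Aberdeen.
  return best;
}
```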

This latter issue became more apparent when I looked at the output for the use of ‘rocket’ to mean ‘stupid’.  The simple count method has Aberdeen with 45% of the total number of ‘rocket’ responses, but if you look at the ‘rocket’ choices in Aberdeen in context you see that only 3% of respondents in this region selected this option.

There are other issues we will need to consider too.  Some questions currently have multiple regions linked in the answers (e.g. lexical quiz question 4 ‘stour’ has answers ‘Edinburgh and Glasgow’, ‘Shetland and Orkney’ etc.) and we need to decide whether we still want this structure.  This is going to be tricky to get working dynamically as the script would have to join the two regions with the most responses together to form the ‘correct’ answer, and there’s no guarantee that these areas would be geographically next to each other.  We should perhaps reframe the question: we could have multiple buttons that are ‘correct’ and ask something like ‘stour is used for dust in several parts of Scotland.  Can you pick one?’  Or I guess we could ask the user to pick two.

We also need to decide how to handle the ‘heard throughout Scotland’ questions (e.g. lexical question 6 ‘is greetin’ heard throughout most of Scoatland’).  We need to define what we mean by ‘most of Scotland’ in a way that can be understood programmatically, but thinking about it, we probably also need to better define what we mean by this for users too.  If you don’t know where most of the population of Scotland is situated and purely looked at the distribution of ‘greetin’ on the map you might conclude that it’s not used throughout Scotland at all, but only in the central belt and up the East coast.  But returning to how an algorithm could work out the correct answer for this question: we need to set thresholds for whether an option is used throughout most of Scotland or not.  Should the algorithm only look at certain regions?  Should it count the responses in each region and consider the option in use in the region if (for example) 50% or more respondents chose it?  The algorithm could then count the number of regions that meet this threshold compared to the total number of regions, and if (for example) 8 out of our 14 regions surpass the threshold the answer could be deemed ‘true’.  The problem is that humans can look at a map and quickly estimate an answer, but an algorithm needs more rigid boundaries.
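As a starting point for discussion, here is a hedged sketch of how such a check might work.  The 50% per-region threshold and the 8-out-of-14 region requirement are just the example figures mentioned above, not agreed values:

```typescript
// Sketch of a 'heard throughout most of Scotland' check: an option counts as
// 'in use' in a region if at least half of that region's respondents chose it,
// and the answer is 'true' if enough regions pass. All values are examples.
function heardThroughoutScotland(
  responses: { region: string; answer: string }[],
  targetAnswer: string,
  allRegions: string[],
  perRegionThreshold = 0.5, // share of a region's respondents needed to count as 'in use'
  minRegions = 8            // how many of the 14 regions must pass
): boolean {
  let regionsInUse = 0;
  for (const region of allRegions) {
    const inRegion = responses.filter(r => r.region === region);
    if (inRegion.length === 0) continue; // no data yet for this region
    const share =
      inRegion.filter(r => r.answer === targetAnswer).length / inRegion.length;
    if (share >= perRegionThreshold) regionsInUse++;
  }
  return regionsInUse >= minRegions;
}
```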

Also, question 15 of the ‘give your word’ quiz asks about the ‘central belt’ but we need to define what regions make this up.  Is it just Glasgow and Lothian (Edinburgh), for example?  We also might need to clarify this for users too.  The ‘I would never say that’ quiz has several questions where one possible answer is ‘All over Scotland’.  If we’re dynamically ascertaining the correct answer then we can’t guarantee that this answer will be one that comes up.  Also, ‘All over Scotland’ may in fact be the correct answer for questions where we haven’t considered it to be an answer.  What should we do about this?  Two possibilities: firstly, the code for ascertaining the correct answer (for all of the map-based quizzes) could also have a threshold that, when reached, would mean the correct answer is ‘All over Scotland’, and this option would then be included in the question.  This could use the same logic as the ‘heard throughout Scotland’ yes/no questions that I mentioned above.  Secondly, we could reframe the questions that currently have an ‘All over Scotland’ answer option to be the same as the ‘heard throughout Scotland’ yes/no questions found in the lexical quiz, and not bother trying to work out whether an ‘all over Scotland’ option needs to be added to any of the other questions.

I also realised that we may end up with a situation where more than one region has a similar number of markers, meaning the system will still easily be able to ascertain which is correct, but users might struggle.  Do we need to consider this eventuality?  I could for example add in a check to see whether any other regions have a similar score to the ‘correct’ one and ensure any that are too close never get picked as the randomly generated ‘wrong’ answer options.  Linked to this: we need to consider whether it is acceptable that the ‘wrong’ answer options will always be randomly generated. The options will be different each time a user loads the quiz question and if they are entirely random this means the question may sometimes be very easy and other times very hard.  Do I need to update the algorithm to add some sort of weighting to how the ‘wrong’ options are chosen?  This will need further discussion with the team next week.
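One possible approach, purely as a sketch: when picking the random ‘wrong’ options, exclude any region whose score sits within some margin of the correct region’s score.  The margin value and the scores-by-region input here are assumptions, not decisions that have been made:

```typescript
// Sketch of picking random 'wrong' answer options while excluding regions
// whose score is too close to the correct region's score.
function pickWrongOptions(
  scoresByRegion: Map<string, number>, // e.g. share of responses per region
  correctRegion: string,
  numWrong = 3,
  margin = 0.1 // minimum gap from the correct region's score
): string[] {
  const correctScore = scoresByRegion.get(correctRegion) ?? 0;
  const candidates = [...scoresByRegion.keys()].filter(region =>
    region !== correctRegion &&
    Math.abs((scoresByRegion.get(region) ?? 0) - correctScore) > margin
  );
  // Fisher-Yates shuffle, then take the first few candidates as the 'wrong' options.
  for (let i = candidates.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [candidates[i], candidates[j]] = [candidates[j], candidates[i]];
  }
  return candidates.slice(0, numWrong);
}
```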

I decided to move onto some of the other outstanding tasks and to leave the dynamically generated map answers issue until Jennifer and Mary are back next week.  I managed to complete the majority of minor updates to the site that were still outstanding during this time, such as updating introductory and explanatory text for the surveys, quizzes and activities, removing or rearranging questions, rewording answers, reinstating the dictionary based questions and tweaking the colour and justification of some of the site text.

This leaves several big issues to tackle before the end of the month, including dynamically generating answers for quiz questions, developing the output for the ‘click’ activity and developing the interactive activities for ‘I would never say that’.  It’s going to be a busy few weeks.

Also this week I continued to process the data for the Books and Borrowing project.  This included uploading images for one more Advocates Library register from the NLS, which involved generating pages, associating images and fixing the page numbering to align with the handwritten numbers.  I also received images for a second register for Haddington library from the NLS, and I needed some help with this as we already have existing pages for this register in the CMS but the number of images received didn’t match.  Thankfully the RA Kit Baston was able to look over the images and figure out what needed to be done, which included inserting new pages in the CMS, after which I wrote a script to associate images with records.  I also added two missing pages to the register for Dumfries Presbytery and added in a missing image for Westerkirk library.

Finally, I tweaked the XSLT for the Dictionaries of the Scots Language bibliographies to ensure the style guide reference linked to the most recent version.

Week Beginning 25th July 2022

I was on holiday for most of the previous two weeks, working two days during this period.  I’ll also be on holiday again next week, so I’ve had quite a busy time getting things done.  Whilst I was away I dealt with some queries from Joanna Kopaczyk about the Future of Scots website.  I also had to investigate a request to fill in timesheets for my work on the Speak For Yersel project, as apparently I’d been assigned to the project as ‘Directly incurred’ when I should have been ‘Directly allocated’.  Hopefully we’ll be able to get me reclassified, but this is still in progress.  I also fixed a couple of issues with the facility to export data for publication for the Berwickshire place-name project for Carole Hough, and fixed an issue with an entry in the DSL, which was appearing in the wrong place in the dictionary.  It turned out that the wrong ‘url’ tag had been added to the entry’s XML several years ago and the entry had been wrongly positioned ever since.  I fixed the XML and this sorted things.  I also responded to a query from Geert of the Anglo-Norman Dictionary about Aberystwyth’s new VPN and whether this would affect his access to the AND.  I also investigated an issue Simon Taylor was having when logging into a couple of our place-names systems.

On the Monday I returned to work I launched two new resources for different projects.  For the Books and Borrowing project I published the Chambers Library Map (https://borrowing.stir.ac.uk/chambers-library-map/) and reorganised the site menu to make space for the new page link.  The resource has been very well received and I’m pretty pleased with how it’s turned out.  For the Seeing Speech project I launched the new Gaelic Tongues resource (https://www.seeingspeech.ac.uk/gaelic-tongues/) which has received a lot of press coverage, which is great for all involved.

I spent the rest of the week dividing my time primarily between three projects: Speak For Yersel, Books and Borrowing and Speech Star.  For Books and Borrowing I continued processing the backlog of library register image files that had built up.  There were about 15 registers that needed to be processed, and each needed to be handled in a different way.  This included nine registers from Advocates Library that had been digitised by the NLS, for which I needed to batch process the images to rename them, delete blank pages, create page records in the CMS and then tweak the automatically generated folio numbers to account for discrepancies in the handwritten page numbers in the images.  I also processed a register for the Royal High School, which involved renaming the images so they match up with image numbers already assigned to page records in the CMS, inserting new page records and updating the ‘next’ and ‘previous’ links for pages for which new images had been uncovered, and generating new page records for many tens of new pages that follow on from the ones already created in the CMS.  I also uploaded new images for the Craigston register and created a further register for Aberdeen, including all page records and associated image URLs.  I still have some further RHS registers to do and a few from St Andrews, but these will need to wait until I’m back from my holiday.

For Speech Star I downloaded a ZIP containing 500 new ultrasound MP4 videos.  I then had to process them to generate ‘poster’ images for each video (these are images that get displayed before the user chooses to play the video).  I then had to replace the existing normalised speech database with data from a new spreadsheet that included these new videos plus updates to some of the existing data.  This included adding a few new fields and changing the way the age filter works, as much of the new data is for child speakers who have specific ages in months and years, and these all need to be added to a new ‘under 18’ age group.
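For reference, poster generation can be done by grabbing a single frame from each video.  Below is a minimal sketch of the sort of script that could do this, assuming Node.js with ffmpeg available on the path; the folder name and the one-second offset are illustrative rather than the exact process used:

```typescript
import { execFileSync } from "child_process";
import { readdirSync } from "fs";
import { join, basename, extname } from "path";

// Generate a .jpg poster for every .mp4 in a folder by extracting a single frame.
function generatePosters(videoDir: string): void {
  for (const file of readdirSync(videoDir)) {
    if (extname(file).toLowerCase() !== ".mp4") continue;
    const poster = join(videoDir, basename(file, extname(file)) + ".jpg");
    // Seek one second in and write out a single frame as the poster image.
    execFileSync("ffmpeg", ["-y", "-ss", "1", "-i", join(videoDir, file), "-frames:v", "1", poster]);
  }
}

generatePosters("./ultrasound-videos"); // hypothetical folder name
```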

For Speak For Yersel I had an awful lot to do.  I started with a further large-scale restructuring of the website following feedback from the rest of the team.  This included changing the site menu order, adding in new final pages to the end of surveys and quizzes and changing the text of buttons that appear when displaying the final question.

I then developed the map filter options for age and education for all of the main maps.  This was a major overhaul of the maps.  I removed the slide up / slide down of the map area when an option is selected as this was a bit long and distracting.  Now the map area just updates (although there is a bit of a flicker as the data gets replaced).  The filter options unfortunately make the options section rather big, which is going to be an issue on a small screen.  On my mobile phone the options section takes up 100% of the width and 80% of the height of the map area unless I press the ‘full screen’ button.  However, I figured out a way to ensure that the filter options section scrolls if the content extends beyond the bottom of the map.

I also realised that if you’re in full screen mode and you select a filter option the map exits full screen as the map section of the page reloads.  This is very annoying, but I may not be able to fix it as it would mean completely changing how the maps are loaded.  This is because such filters and options were never intended to be included in the maps and the system was never developed to allow for this.  I’ve had to somewhat shoehorn in the filter options and it’s not how I would have done things had I known from the beginning that these options were required.  However, the filters work and I’m sure they will be useful.  I’ve added in filters for age, education and gender, as you can see in the following screenshot:

I also updated the ‘Give your word’ activity that asks to identify younger and older speakers to use the new filters too.  The map defaults to showing ‘all’ and the user then needs to choose an age.  I’m still not sure how useful this activity will be as the total number of dots for each speaker group varies considerably, which can easily give the impression that more of one age group use a form compared to another age group purely because one age group has more dots overall.  The questions don’t actually ask anything about geographical distribution so having the map doesn’t really serve much purpose when it comes to answering the question.  I can’t help but think that just presenting people with percentages would work better, or some other sort of visualisation like a bar graph or something.

I then moved on to working on the quiz for ‘she sounds really clever’ and so far I have completed both the first part of the quiz (questions about ratings in general) and the second part (questions about listeners from a specific region and their ratings of speakers from regions).  It’s taken a lot of brain-power to get this working as I decided to make the system work out the correct answer and to present it as an option alongside randomly selected wrong answers.  This has been pretty tricky to implement (especially as depending on the question the ‘correct’ answer is either the highest or the lowest) but will make the quiz much more flexible – as the data changes so will the quiz.
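To give a flavour of the logic, here is a simplified sketch of how the ‘correct’ answer might be worked out from the ratings data; the field names are assumptions, and the fallback to 50 mirrors the default described below for regions with no ratings:

```typescript
// Sketch of working out the 'correct' answer for the ratings quiz: average the
// slider scores per speaker region, then pick the highest or lowest average
// depending on the question. The Rating shape is an illustrative assumption.
interface Rating {
  speakerRegion: string;
  score: number; // slider value, 0-100
}

// Average the scores for one speaker region, falling back to 50 where no data exists.
function averageRating(ratings: Rating[], region: string): number {
  const scores = ratings
    .filter(r => r.speakerRegion === region)
    .map(r => r.score);
  return scores.length > 0
    ? scores.reduce((a, b) => a + b, 0) / scores.length
    : 50;
}

// Pick the region with the highest (or lowest) average rating, depending on the question.
function correctRatingRegion(
  ratings: Rating[],
  allRegions: string[],
  pickHighest: boolean
): string {
  return allRegions.reduce((best, region) => {
    const current = averageRating(ratings, region);
    const bestSoFar = averageRating(ratings, best);
    return (pickHighest ? current > bestSoFar : current < bestSoFar) ? region : best;
  });
}
```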

Part one of the quiz page itself is pretty simple.  There is the usual section on the left with the question and the possible answers.  On the right is a section containing a box to select a speaker and the rating sliders (read-only).  When you select a speaker the sliders animate to their appropriate positions.  I decided not to include the map or the audio file as these didn’t really seem necessary for answering the questions; they would clutter up the screen, and people can access them via the maps page anyway (well, once I move things from the ‘activities’ section).  Note that the user’s answers are stored in the database (the region selected and whether this was the correct answer at the time).  Part two of the quiz features speaker/listener true/false questions and this also automatically works out the correct answer (currently based on the 50% threshold).  Note that where there is no data for a listener rating a speaker from a region the rating defaults to 50.  We should ensure that we have at least one rating for a listener in each region before we let people answer these questions.  Here is a screenshot of part one of the quiz in action, with randomly selected ‘wrong’ answers and a dynamically generated ‘right’ answer:

I also wrote a little script to identify duplicate lexemes in categories in the Historical Thesaurus as it turns out there are some occasions where a lexeme appears more than once (with different dates) and this shouldn’t happen.  These will need to be investigated and the correct dates will need to be established.

I will be on holiday again next week so there won’t be another post until the week after I’m back.