Week Beginning 22nd August 2022

I continued to spend a lot of my time working on the Speak For Yersel project this week.  We had a team meeting on Monday at which we discussed the outstanding tasks, particularly how I was going to tackle making the quiz answers dynamic.  Previously the quiz question answers were static, which will not work well as the maps the users will reference in order to answer a question are dynamic, meaning the correct answer may evolve over time.  I had proposed a couple of methods we could use to ensure that the answers are dynamically generated based on the currently available data and we finalised our approach today.

Although I’d already made quite a bit of progress with my previous test scripts, there was still a lot to do to actually update the site.  I needed to update the structure of the database, the script that outputs the data for use in the site, the scripts that handle the display of questions and the evaluation of answers, and the scripts that store a user’s selected answers.

Changes to the database allow dynamic quiz questions to be stored (non-dynamic questions have fixed ‘answer options’ but dynamic ones don’t).  Changes also allow a quiz question to store a reference to the relevant answer option of the survey question it is based on (e.g. that the quiz is about the ‘mother’ map and specifically about the use of ‘mam’).  I made significant updates to the script that outputs data for use in the site, integrating the functions from my earlier test script that calculated the correct answer.  I updated these functions to change the logic somewhat: they now only use ‘method 1’ as mentioned in an earlier post.  This method also now has a built-in check to filter out regions that have the highest percentage of usage but only a limited amount of data.  Currently this is set to a minimum of 10 answers for the option in question (e.g. ‘mam’) rather than the total number of answers in a region.  Regions are ordered by their percentage usage (highest first) and the script iterates down through the regions and picks as ‘correct’ the first one that has at least 10 answers.  I’ve also added in a contingency for cases where none of the regions have at least 10 answers (currently the case for the ‘rocket’ question).  In such cases the region marked as ‘correct’ will be the one with the highest raw count of answers for the answer option rather than the highest percentage.
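To make the logic concrete, here’s a minimal sketch of that selection process, assuming each region arrives as an object with a count and percentage for the option in question (the names are illustrative rather than the production code):

```javascript
// Illustrative sketch of picking the 'correct' region for a dynamic question.
// Assumes each region object holds the number of clicks for the answer option
// (e.g. 'mam') and the percentage of the region's responses this represents.
const MIN_ANSWERS = 10; // minimum clicks for the option before a region can win

function pickCorrectRegion(regions) {
  // Order regions by percentage usage, highest first
  const sorted = [...regions].sort((a, b) => b.percentage - a.percentage);
  // Pick the first region with at least MIN_ANSWERS clicks for the option
  const withEnoughData = sorted.find(r => r.count >= MIN_ANSWERS);
  if (withEnoughData) return withEnoughData;
  // Contingency (e.g. the 'rocket' question): highest raw count instead
  return sorted.reduce((best, r) => (r.count > best.count ? r : best));
}
```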

With the ‘correct’ region picked out, the script then picks out all other regions where the usage percentage is at least 10% lower than the correct percentage.  This is to ensure that there isn’t an ‘incorrect’ answer that is too similar to the ‘correct’ one.  If this results in fewer than three regions (as regions are only returned if they have clicks for the answer option) then the system goes through the remaining regions and adds these in with a zero percentage.  These ‘incorrect’ regions are then shuffled and three are picked out at random.  The ‘correct’ answer is then added to these three and the options are shuffled again to ensure the ‘correct’ option is randomly positioned.  The dynamically generated output is then plugged into the output script that the website uses.
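The distractor selection might then look something like the following sketch (again illustrative; I’ve read ‘at least 10% lower’ as percentage points here):

```javascript
// Illustrative sketch of picking the three 'incorrect' regions.
function pickAnswerOptions(regions, allRegionNames, correct) {
  // Regions whose percentage is at least 10 points below the correct one
  const incorrect = regions.filter(r =>
    r.name !== correct.name && r.percentage <= correct.percentage - 10);
  // Pad with zero-percentage regions if fewer than three qualify
  const seen = new Set([correct.name, ...incorrect.map(r => r.name)]);
  for (const name of allRegionNames) {
    if (incorrect.length >= 3) break;
    if (!seen.has(name)) incorrect.push({ name, count: 0, percentage: 0 });
  }
  // Shuffle, take three, add the correct answer and shuffle again
  const options = shuffle(incorrect).slice(0, 3).concat([correct]);
  return shuffle(options);
}

// Fisher-Yates shuffle
function shuffle(arr) {
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}
```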

I then updated the front-end to work with this new data.  This also required me to create a new database table to hold the user’s answers, storing the region the user presses on and whether their selection was correct, along with the question ID and the person ID.  Non-dynamic answers store the ID of the ‘answer option’ that the user selected, but these dynamic questions don’t have static ‘answer options’ so the structure needed to be different.

I then implemented the dynamic answers for the ‘most of Scotland’ questions.  For these questions the script needs to evaluate whether a form is used throughout Scotland or not.  The algorithm gets all of the answer options for the survey question (e.g. ‘crying’ and ‘greetin’) and for each region works out the percentage of responses for each option.  The team had previously suggested a fixed percentage threshold of 60%, but I reckoned it might be better for the threshold to change depending on how many answer options there are.  Currently I’ve set the threshold to be 100 divided by the number of options.  So where there are two options the threshold is 50%.  Where there are four options (e.g. the ‘wean’ question) the threshold is 25% (i.e. if 25% or more of the answers in a region are for ‘wean’ it is classed as present in the region).  Where there are three options (e.g. ‘clap’) the threshold is 33%.  Where there are 5 options (e.g. ‘clarty’) the threshold is 20%.

The algorithm counts the number of regions that meet the threshold, and if the number is 8 or more then the term is considered to be found throughout Scotland and ‘Yes’ is the correct answer.  If not then ‘No’ is the correct answer.  I also had to update the way answers are stored in the database so these yes/no answers can be saved (as they have no associated region like the other questions).
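As a sketch, the whole ‘most of Scotland’ evaluation boils down to something like this (the input shapes are illustrative):

```javascript
// Sketch of the 'most of Scotland' check: the per-region threshold is 100
// divided by the number of answer options, and 'Yes' is correct if 8 or more
// regions meet it.
const REGIONS_REQUIRED = 8;

function usedThroughoutScotland(regionTotals, optionCounts, numOptions) {
  const threshold = 100 / numOptions; // 2 options => 50%, 4 => 25%, etc.
  let regionsMeeting = 0;
  for (const region of Object.keys(regionTotals)) {
    const pct = 100 * (optionCounts[region] || 0) / regionTotals[region];
    if (pct >= threshold) regionsMeeting++;
  }
  return regionsMeeting >= REGIONS_REQUIRED; // true => 'Yes' is the answer
}
```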

I then moved onto tackling the non-standard (in terms of structure) questions to ensure they are dynamically generated as well.  These were rather tricky to do as they each had to be handled differently as they were asking different things of the data (e.g. a question like ‘What are you likely to call the evening meal if you live in Tayside and Angus (Dundee) and didn’t go to Uni?’).  I also made the ‘Sounds about right’ quiz dynamic.

I then moved onto tackling the ‘I would never say that’ quiz, which has been somewhat tricky to get working as the structure of the survey questions and answers is very different.  Quizzes for the other surveys involved looking at a specific answer option but for this survey the answer options are different rating levels that each need to be processed and handled differently.

For this quiz the system returns, for each region, the number of times each rating level has been selected and works out the percentages for each.  It then adds the ‘I’ve never heard this’ and ‘people elsewhere say this’ percentages together as a ‘no’ percentage, and the ‘people around me say this’ and ‘I’d say this myself’ percentages together as a ‘yes’ percentage.  Currently there is no weighting, but we may want to consider this (e.g. ‘I’d say this myself’ would be worth more than ‘people around me say this’).
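A sketch of the aggregation, with the rating keys invented for illustration:

```javascript
// Collapses the four rating levels into 'no' and 'yes' percentages for a
// region. Unweighted for now; weighting could be added here later.
function yesNoPercentages(counts) {
  // counts, e.g. { neverHeard: 3, elsewhere: 2, aroundMe: 10, myself: 5 }
  const total = counts.neverHeard + counts.elsewhere + counts.aroundMe + counts.myself;
  const no = counts.neverHeard + counts.elsewhere;
  const yes = counts.aroundMe + counts.myself;
  return {
    total,
    noPct: total ? (100 * no) / total : 0,
    yesPct: total ? (100 * yes) / total : 0,
  };
}
```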

With these ratings stored, the script handles question types differently.  For the ‘select a region’ type of question the system works in a similar way to the other quizzes: it sorts the regions by ‘yes’ percentage, biggest first, then iterates through the regions and picks as the correct answer the first it comes to where the total number of responses for the region is the same as or greater than the minimum allowed (currently set to 10).  Note that this is different to the other quizzes, where this check for 10 is made against the specific answer option rather than the number of responses in the region as a whole.

If no region passes the above check then the region with the highest ‘yes’ percentage is chosen as the correct answer, without a minimum allowed check.  The system then picks out all other regions with data where the ‘yes’ percentage is at least 10% lower than the correct answer, adds in regions with no data if fewer than three have data, shuffles the regions and picks out three.  These are then added to the ‘correct’ region and the answers are shuffled again.
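In sketch form, the only real difference from the earlier quizzes is which number the minimum check is applied to:

```javascript
// Sketch of the 'select a region' answer for the rating quiz. The minimum-
// responses check is against the region's total responses, not one option.
const MIN_REGION_RESPONSES = 10;

function pickCorrectRatingRegion(regions) {
  // regions: [{ name, total, yesPct }, ...], yesPct from yesNoPercentages()
  const sorted = [...regions].sort((a, b) => b.yesPct - a.yesPct);
  const correct = sorted.find(r => r.total >= MIN_REGION_RESPONSES);
  // Fallback: highest 'yes' percentage regardless of sample size
  return correct || sorted[0];
}
```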

I changed the questions that had an ‘all over Scotland’ answer option so that these are now ‘yes/no’ questions, e.g. ‘Is “Are you wanting to come with me?” heard throughout most of Scotland?’.  For these questions the system uses 8 regions as the threshold, as with the other quizzes.  However, the percentage threshold for ‘yes’ is fixed.  I’ve currently set this to 60% (i.e. at least 60% of all answers in a region are either ‘people around me say this’ or ‘I’d say this myself’).  There is currently no minimum number of responses for this question type, so a region with one single answer that’s ‘people around me say this’ will have a 100% ‘yes’ and the region will be included.  This is also the case for the ‘most of Scotland’ questions in the other quizzes, so we may need to tweak this.

As we’re using percentages rather than exact number of dots the questions can sometimes be a bit tricky.  For example the first question currently has Glasgow as the correct answer because all but two of the markers in this region are ‘people around me say this’ or ‘I’d say this myself’.  But if you turn off the other two categories and just look at the number of dots you might surmise that the North East is the correct answer as there are more dots there, even though proportionally fewer of them are the high ratings.  I don’t know if we can make it clearer that we’re asking which region has proportionally more higher ratings without confusing people further, though.

I also spent some time this week working on the Books and Borrowing project.  I had to make a few tweaks to the Chambers map of borrowers so that it works better on smaller screens.  I ensured that both the ‘Map options’ section on the left and the ‘map legend’ on the right are given a fixed height that is shorter than the map, with the areas becoming scrollable, as I’d noticed that on short screens both these areas could end up longer than the map, leaving their lower parts inaccessible.  I’ve also added a ‘show/hide’ button to the map legend, enabling people to hide the area if it obscures their view of the map.

I also sent on some renamed library register files from St Andrews to Gerry for him to align with existing pages in the CMS, replaced some of the page images for the Dumfries register and renamed and uploaded images for a further St Andrews register that already existed in the CMS, ensuring the images became associated with the existing pages.

I started to work on the images for another St Andrews register that already exists in the system, but for this one each image is a double page spread so I need to merge two pages into one in the CMS.  The script needs to find all odd numbered pages and move the records on these to the preceding even numbered page, at the same time regenerating the ‘page order’ for each record so they follow on from the existing records.  Then the even page needs its folio number updated to add in the odd number (e.g. folio number 2 becomes ‘2-3’).  Then I need to delete the odd page record, and after all that is done I need to regenerate the ‘next’ and ‘previous’ page links for all pages.  I completed everything except the final task, but I really need to test the script out on a version of the database running on my local PC first, as if anything goes wrong data could very easily be lost.  I’ll need to tackle this next week as I ran out of time this week.
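Roughly speaking the merge is the following transformation, sketched here over an in-memory array of pages rather than the live CMS database (which is exactly why I want to test it locally first):

```javascript
// Sketch of the double-page merge. Assumes 'pages' is sorted by folio number
// and each page holds its borrowing records.
function mergeSpreads(pages) {
  const merged = [];
  for (let i = 0; i < pages.length; i++) {
    const page = pages[i];
    const next = pages[i + 1];
    // Move records from the following odd page onto this even page
    if (page.folio % 2 === 0 && next && next.folio === page.folio + 1) {
      page.records = page.records.concat(next.records);
      page.folio = `${page.folio}-${next.folio}`; // e.g. 2 becomes '2-3'
      i++; // the odd page record is dropped (i.e. deleted)
    }
    // Regenerate the record ordering on the merged page so the moved
    // records follow on from the existing ones
    page.records.forEach((rec, idx) => { rec.pageOrder = idx + 1; });
    merged.push(page);
  }
  return merged; // 'next'/'previous' links are regenerated from this afterwards
}
```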

I also participated in our six-monthly formal review meeting for the Dictionaries of the Scots Language, where we discussed our achievements in the past six months and our plans for the next.  I also made some tweaks to the DSL website, such as splitting the ‘Abbreviations and symbols’ buttons into two separate links, updating the text found on a couple of the old maps pages and considering future changes to the bibliography XSLT to allow links in the ‘oral sources’.

Finally this week I made a start on the Burns manuscript database for Craig Lamont.  I wrote a script that extracts the data from Craig’s spreadsheet and imports it into an online database; we will be able to rerun this whenever I’m given a new version of the spreadsheet.  I then created an initial version of a front-end for the database within the layout for the Burns Correspondence and Poetry site.  Currently the front-end only displays the data in one table, with columns for type, date, content, physical properties, additional notes and locations.  The latter contains the location name, shelfmark (if applicable) and condition (if applicable) for all locations associated with a record, each on a separate line with the location name in bold.  Currently it’s possible to order the table by clicking on a column heading; clicking a second time reverses the order.  I haven’t had a chance to create any search or filter options yet but I’m intending to continue with this next week.
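For reference, the click-to-sort behaviour is nothing more exotic than the following sketch (the table id is illustrative, not the production markup):

```javascript
// Sketch of sortable table columns: clicking a heading sorts by that column,
// clicking it again reverses the order.
const table = document.querySelector('#burns-table'); // illustrative id
let lastCol = null;
let ascending = true;

table.querySelectorAll('th').forEach((th, col) => {
  th.addEventListener('click', () => {
    ascending = col === lastCol ? !ascending : true;
    lastCol = col;
    const tbody = table.querySelector('tbody');
    const rows = [...tbody.querySelectorAll('tr')];
    rows.sort((a, b) => {
      const x = a.cells[col].textContent;
      const y = b.cells[col].textContent;
      return ascending ? x.localeCompare(y) : y.localeCompare(x);
    });
    rows.forEach(r => tbody.appendChild(r)); // re-append in sorted order
  });
});
```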

Week Beginning 15th August 2022

I spent the majority of the week continuing to work on the Speak For Yersel resource, working through a lengthy document of outstanding tasks that need to be completed before the site is launched in September.  First up was the output for the ‘Where do you think the speaker is from?’ click activity.  The page features some explanatory text and a drop-down through which you can select a speaker.  When a speaker is selected the user is presented with the option to play the audio file and can view the transcript.

I decided to make the transcript chunks visible with a green background that’s slightly different from the colour of the main area.  I thought it would be useful for people to be able to tell which of the ‘bigger’ words was part of which section, as it may well be that the word that caused a user to ‘click’ a section is not the word that we’ve picked for the section.  For example, in the Glasgow transcript ‘water’ is the chosen word for one section but I personally clicked this section because ‘hands’ was pronounced ‘honds’.  Another reason to make the chunks visible is because I’ve managed to set up the transcript to highlight the appropriate section as the audio plays.  Currently the section that is playing is highlighted in white and this really helps to get your eye in whilst listening to the audio.
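The highlighting itself is driven by the audio element’s ‘timeupdate’ event; a minimal sketch, assuming each chunk carries (illustrative) data-start and data-end attributes in seconds:

```javascript
// Highlight the transcript chunk that corresponds to the current playback time.
const audio = document.querySelector('#speaker-audio'); // illustrative id
const chunks = document.querySelectorAll('.transcript-chunk');

audio.addEventListener('timeupdate', () => {
  const t = audio.currentTime;
  chunks.forEach(chunk => {
    const start = parseFloat(chunk.dataset.start);
    const end = parseFloat(chunk.dataset.end);
    chunk.classList.toggle('playing', t >= start && t < end);
  });
});
```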

In terms of resizing the ‘bigger’ words, I chose the following as a starting point:  Less than 5% of the clicks: word is bold but not bigger (default font size for the transcript area is currently 16pt); 5-9%: 20pt;  10-14%: 25pt; 15-19%: 30pt; 20-29%: 35pt; 30-49%: 40pt; 50-74%: 45pt; 75% or more: 50pt.
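Expressed as code, the mapping is simply:

```javascript
// Maps a section's share of clicks to a font size in points.
function wordSize(pct) {
  if (pct >= 75) return 50;
  if (pct >= 50) return 45;
  if (pct >= 30) return 40;
  if (pct >= 20) return 35;
  if (pct >= 15) return 30;
  if (pct >= 10) return 25;
  if (pct >= 5) return 20;
  return 16; // bold, but no bigger than the default 16pt
}
```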

I’ve also given the ‘bigger’ word a tooltip that displays the percentage of clicks responsible for its size, as I thought this might be useful for people to see.  We will need to change the text, though.  Currently it says something like ‘15% of respondents clicked in this section’ but it’s actually ‘15% of all clicks for this transcript were made in this section’, which is a different thing, and I’m not sure how best to phrase it.  Where there is a pop-up for a word it appears in blue and the pop-up text contains the text that the team has specified.  Where the pop-up word is also the ‘bigger’ word (usually but not always the case) the percentage text also appears in the popup, below the text.  Here’s a screenshot of how the feature currently looks:

I then moved onto the ‘I would never say that’ activities.  This is a two-part activity, with the first part involving the user dragging and dropping sentences into either a ‘used in Scots’ or ‘isn’t used in Scots’ column and then checking their answers.  The second part has the user translating a Scots sentence into Standard English by dragging and dropping possible words into a sentence area.  My first task was to format the data used for the activity, which involved creating a suitable data structure in JSON and then migrating all of the data into this structure from a Word document.  With this in place I then began to create the front-end.  I’d created similar drag and drop features before (including for another section of the current resource) and therefore used the same technologies:  The jQuery UI drag and drop library (https://jqueryui.com/draggable/).  This allowed me to set up two areas where buttons could be dropped and then create a list of buttons that could be dragged.  I then had to work on the logic for evaluating the user’s answers.  This involved keeping a tally of the number of buttons that had been dropped into one or other of the boxes (which also had to take into consideration that the user can drop a button back in the original list) and when every button has been placed in a column a ‘check answers’ button appears.  On pressing, the code then fixes the draggable buttons in place and compares the user’s answers with the correct answers, adding a ‘tick’ or ‘cross’ to each button and giving an overall score in the middle.  There are multiple stages to this activity so I also had to work on the logic for loading a new set of sentences with their own introductory text, or moving onto part two of the activity if required.  Below is a screenshot of part 1 with some of the buttons dragged and dropped:
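For anyone curious, the tallying logic amounts to something like this sketch (element ids and classes are illustrative, not the production markup):

```javascript
// Part one: draggable sentences, two droppable columns, and a tally that
// reveals the 'check answers' button once every sentence has been placed.
let placed = 0;
const total = $('#sentence-list .sentence').length;

$('.sentence').draggable({ revert: 'invalid' });

$('#used-in-scots, #not-used-in-scots, #sentence-list').droppable({
  accept: '.sentence',
  drop: function (event, ui) {
    const fromList = ui.draggable.parent().is('#sentence-list');
    const toList = this.id === 'sentence-list';
    if (fromList && !toList) placed++;       // list -> column
    else if (!fromList && toList) placed--;  // column -> back to the list
    $(this).append(ui.draggable.css({ top: 0, left: 0 }));
    $('#check-answers').toggle(placed === total);
  }
});
```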

Part two of the activity involves creating a sentence by choosing words to add.  The original plan was to have the user click on a word to add it to the sentence, or click on the word in the sentence to remove it if required.  I figured that using a drag and drop method, enabling the user to move words around the sentence after they have dropped them, would be more flexible and would fit in better with the other activities in the site.  I was just going to use the same drag and drop library that I’d used for part one, but then I spotted a further jQuery UI interaction called ‘sortable’ that allows for connected lists (https://jqueryui.com/sortable/#connect-lists).  This allows items within a list to be sorted, but also for items to be dragged and dropped from one list to another.  This sounded like the ideal solution, so I set about investigating its usage.

It took some time to style the activity to ensure that empty lists were still given space on the screen, and to ensure the word button layout worked properly, but after that the ‘sentence builder’ feature worked very well – the user could move words between the sentence area and the ‘list of possible words’ area and rearrange their order as required.  I set up the code to ensure a ‘check answers’ button appeared when at least one word had been added to the sentence (disappearing again if the user removes all words).  When the ‘check answers’ button is pressed the code grabs the content of the buttons in the sentence area in the order the buttons have been added and creates a sentence from the text.  It then compares this to one of the correct sentences (of which there may be more than one).  If the answer is correct a ‘tick’ is added after the sentence and if it’s wrong a ‘cross’ is added.  If there are multiple correct answers the other correct possibilities are displayed, and if the answer was wrong all correct answers are displayed.  Then it’s on to the next sentence, or the final evaluation.  Here’s a screenshot of part 2 with some words dragged and dropped:
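A sketch of the sortable set-up and the answer check (the ids and sample answers are illustrative):

```javascript
// Part two: two connected sortable lists, so words can be reordered within
// the sentence or moved between the sentence and the word bank.
const correctSentences = ['I was not crying']; // illustrative correct answers

$('#sentence-area, #word-bank').sortable({
  connectWith: '#sentence-area, #word-bank',
  update: function () {
    // Only show 'check answers' when the sentence contains at least one word
    $('#check-answers').toggle($('#sentence-area li').length > 0);
  }
});

$('#check-answers').on('click', function () {
  // Build the sentence from the word buttons in their current order
  const attempt = $('#sentence-area li')
    .map(function () { return $(this).text(); })
    .get()
    .join(' ');
  const isCorrect = correctSentences.includes(attempt);
  $('#result').text(isCorrect ? '✓' : '✗'); // illustrative feedback element
});
```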

Whilst working on part two it became clear that the ‘sortable’ solution worked better than the draggable method I’d used for part one.  This is because the ‘sortable’ solution uses HTML lists and ‘snaps’ each item into place, whereas the previous method just leaves the draggable item wherever the user drops it (so long as it’s in the confines of the droppable box), which means things can look a bit messy.  I therefore revisited part one and replaced the method.  This took a bit of time to implement as I had to rework a lot of the logic, but I think it was worth it.

Also this week I spent a bit of time working for the Dictionaries of the Scots Language.  I had a conversation with Pauline Graham about the workflow for updates to the online data.  I also investigated a couple of issues with entries for Ann Fergusson.  One entry (sleesh) wasn’t loading as there were spaces in the entry’s ‘slug’.  Spaces in URLs can cause issues and this is what was happening with this entry.  I updated the URL information in the database so that ‘sleesh_n1 and v’ has been changed to ‘sleesh_n1_and_v’ and this has fixed the issue.  I also updated the XML in the online system so the first URL is now <url>sleesh_n1_and_v</url>.  I checked the online database and thankfully no other entries have a space in their ‘slug’ so this issue doesn’t affect anything else.  The second issue related to an entry that doesn’t appear in the online database.  It was not in the data I was sent and wasn’t present in several previous versions of the data, so in this case something must have happened prior to the data getting sent to me.  I also had a conversation about the appearance of yogh characters in the site.

I also did a bit more work for the Books and Borrowing project this week.  I added two further library registers from the NLS to our system.  This means there should now only be one further register to come from the NLS, which is quite a relief as each register takes some time to process.  I also finally got round to processing the four registers for St Andrews, which had been on my ‘to do’ list since late July.  It was very tricky to rename the images into a format that we can use on the server, because the lack of leading zeros in the numbers meant a script to batch process the images loaded them in the wrong order.  This was made worse because, rather than just being numbered sequentially, the image filenames were further split into ‘parts’.  For example, the images beginning ‘UYLY 207 11 Receipt book part 11’ were being processed before images beginning ‘UYLY 207 11 Receipt book part 2’, as programming languages ordering strings consider 11, 12 etc. to come before 2.  The same thing was happening within each ‘part’, e.g. ‘UYLY207 15 part 43_11.jpg’ was coming before ‘UYLY207 15 part 43_2.jpg’.  It took most of the morning to sort this out, but I was then able to upload the images to the server, create the two new registers (207-11 and 207-15), generate pages and associate images.
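The fix was essentially to zero-pad the numbers before sorting; a Node sketch of the idea, assuming filenames follow the ‘part N_M’ pattern mentioned above:

```javascript
// Zero-pads the 'part' and image numbers so that plain string ordering
// matches numeric ordering (e.g. part 2 sorts before part 11).
const fs = require('fs');

for (const name of fs.readdirSync('images')) {
  // e.g. 'UYLY207 15 part 43_2.jpg' -> 'UYLY207 15 part 043_0002.jpg'
  const fixed = name.replace(/part (\d+)_(\d+)/, (m, part, img) =>
    `part ${part.padStart(3, '0')}_${img.padStart(4, '0')}`);
  if (fixed !== name) fs.renameSync(`images/${name}`, `images/${fixed}`);
}
```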

However, the other two registers already exist in the CMS as page records with associated borrowing records.  Each image of the register is an open spread showing two borrowing pages and we had previously decided that I should run a script to merge pages in the CMS and then associate the merged record with one of the page images.  However, I’m afraid this is going to need some manual intervention.  Looking at the images for 206-1 and comparing them to the existing page records for this register, it’s clear that there are many blank pages in the two-page spreads that have not been replicated in the CMS.  For example, page 164 in the CMS is for ‘Profr Spens’.  The corresponding image (in my renamed images) is ‘UYLY206-1_00000084.jpg’.  The data is on the right-hand page and the left-hand page is blank.  But in the CMS the preceding page is for ‘Prof. Brown’, which is on the left-hand page of the preceding image.  If I attempted to automatically merge these two page records into one this would therefore result in an error.

I’m afraid what I need is for someone who is familiar with the data to look through the images and the pages and create a spreadsheet noting which pages correspond to which image.  Where multiple pages correspond to one image I can then merge the records.  So, for example: pages 159 (ID 1087) and 160 (ID 1088) are found on image UYLY206-1_00000082.jpg.  Page 161 (ID 1089) corresponds to UYLY206-1_00000083.jpg.  The next page in the CMS is 164 (ID 1090) and this corresponds to UYLY206-1_00000084.jpg.  So a spreadsheet could have two columns:

Page ID                 Image
1087                    UYLY206-1_00000082.jpg
1088                    UYLY206-1_00000082.jpg
1089                    UYLY206-1_00000083.jpg
1090                    UYLY206-1_00000084.jpg

Also, the page numbers in the CMS don’t tally with the handwritten page numbers in the images (e.g. the page record 1089 mentioned above has page 161 but the image has page number 162 written on it).  And actually, the page numbers would need to include two pages, e.g. 162-163.  Ideally whoever is going to manually create the spreadsheet could add new page numbers as a further column and I could then fix these when I process the spreadsheet too.  This task is still very much in progress.

Also for the project this week I created a ‘full screen’ version of the Chambers map that will be pulled into an iframe on the Edinburgh University Library website when they create an online exhibition based on our resource.

Finally this week I helped out Sofia from the Iona Place-names project who as luck would have it was also wanting help with embedding a map in an iframe.  As I’d already done some investigation about this very issue for the Chambers map I was able to easily set this up for Sofia.

 

Week Beginning 8th August 2022

I should have been back at work on Monday this week, after having a lovely holiday last week.  Unfortunately I began feeling unwell over the weekend and ended up off sick on Monday and Tuesday.  I had a fever and a sore throat and needed to sleep most of the time, but it wasn’t Covid as I tested negative.  Thankfully I began feeling more normal again on Tuesday and by Wednesday I was well enough to work again.

I spent the majority of the rest of the week working on the Speak For Yersel project.  On Wednesday I moved the ‘She sounds really clever’ activities to the ‘maps’ page, as we’d decided that these ‘activities’ really just involved looking at the survey outputs and so fitted better on the ‘maps’ page.  I also updated some of the text on the ‘about’ and ‘home’ pages and updated the maps to change the gender labels, expanding ‘F’ and ‘M’ and replacing ‘NB’ with ‘other’ as this is a broader option that better aligns with the choices offered during sign-up.  I also added an option to show and hide the map filters that defaults to ‘hide’ but remembers the user’s selection when other map options are chosen.  I added titles to the maps on the ‘Maps’ page and made some other tweaks to the terminology used in the maps.

On Wednesday we had a meeting to discuss the outstanding tasks still left for me to tackle.  This was a very useful meeting and we managed to make some good decisions about how some of the larger outstanding areas will work.  We also managed to get confirmation from Rhona Alcorn of the DSL that we will be able to embed the old web-based version of the Schools Dictionary app for use with some of our questions, which is really great news.

One of the outstanding tasks was to investigate how the map-based quizzes could have their answer options and the correct answer dynamically generated.  This was never part of the original plan for the project, but it became clear that having static answers to questions (e.g. where do people use ‘ginger’ for ‘soft drink’) wasn’t going to work very well when the data users are looking at is dynamically generated and potentially changing all the time – we would be second guessing the outputs of the project rather than letting the data guide the answers.  As dynamically generating answers wasn’t part of the plan and would be pretty complicated to develop this has been left as a ‘would be nice if there’s time’ task, but at our call it was decided that this should now become a priority.  I therefore spent most of Thursday investigating this issue and came up with two potential methods.

The first method looks at each region individually to compare the number of responses for each answer option in the region.  It counts the number of responses for each answer option and then generates a percentage of the total number of responses in the area.  So for example:

North East (Aberdeen)
Mother: 12 (8%)
Maw: 4 (3%)
Mam: 73 (48%)
Mammy: 3 (2%)
Mum: 61 (40%)

So of the 153 current responses in Aberdeen, 73 (48%) were ‘Mam’.  The method then compares the percentages for the particular answer option across all regions to pick out the highest percentage.  The advantage of this approach is that by looking at percentages any differences caused by there being many more respondents in one region over another are alleviated.  If we look purely at counts then a region with a large number of respondents (as with Aberdeen at the moment) will end up with an unfair advantage, even for answer options that are not chosen the most.  E.g. ‘Mother’ has 12 responses, which is currently by far the most in any region, but taken as a percentage it’s roughly in line with other areas.

But there are downsides.  Any region where the option has been chosen but the total number of responses is low will end up with a large percentage.  For example, both Inverness and Dumfries & Galloway currently only have two respondents, but in each case one of these was for ‘Mam’, meaning they pass Aberdeen and would be considered the ‘correct’ answer with 50% each.  If we were to use this method then I would have to put something in place to disregard small samples.  Another downside is that as far as users are concerned they are simply evaluating dots on a map, so perhaps we shouldn’t be trying to address the bias of some areas having more respondents than others because users themselves won’t be addressing this.

This then led me to develop method 2, which only looks at the answer option in question (e.g. ‘Mam’) rather than the answer option within the context of the other answer options.  This method takes a count of the number of responses for the answer option in each region and turns each count into a percentage of the total number of answers for the option across the whole of Scotland.  So for ‘Mam’ the counts and percentages are as follows:

Ayrshire: 1 (1%)
Fife: 2 (2%)
Glasgow: 2 (2%)
North East (Aberdeen): 73 (84%)
Stirling and Falkirk: 2 (2%)
Lothian (Edinburgh): 1 (1%)
Tayside and Angus (Dundee): 4 (5%)
Dumfries and Galloway: 1 (1%)
Highlands (Inverness): 1 (1%)

Across Scotland there are currently a total of 87 responses where ‘Mam’ was chosen and 73 of these (84%) were in Aberdeen.  As I say, this simple solution probably mirrors how a user will analyse the map – they will see lots of dots in Aberdeen and select this option.  However, it completely ignores the context of the chosen answer.  For example, if we get a massive rush of users from Glasgow (say 2000) and 100 of these choose ‘Mam’ then Glasgow ends up being the correct answer (beating Aberdeen’s 73), even though as a proportion of all chosen answers in Glasgow 100 is only 5% (the other 1900 people will have chosen other options), meaning it would be a pretty unpopular choice compared to the 48% who chose ‘Mam’ over other options in Aberdeen as mentioned near the start.  But perhaps this is a nuance that users won’t consider anyway.
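Side by side, the two methods are just two different denominators; a sketch using the ‘Mam’ figures above:

```javascript
// counts: responses for the option per region; totals: all responses per region.
function method1(counts, totals) {
  // Share of the region's own responses ('Mam' is 48% of Aberdeen's answers)
  const out = {};
  for (const region in counts) out[region] = (100 * counts[region]) / totals[region];
  return out;
}

function method2(counts) {
  // Share of all responses for the option across Scotland (Aberdeen has 84%
  // of all 'Mam' responses)
  const scotlandTotal = Object.values(counts).reduce((a, b) => a + b, 0);
  const out = {};
  for (const region in counts) out[region] = (100 * counts[region]) / scotlandTotal;
  return out;
}
```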

This latter issue became more apparent when I looked at the output for the use of ‘rocket’ to mean ‘stupid’.  The simple count method has Aberdeen with 45% of the total number of ‘rocket’ responses, but if you look at the ‘rocket’ choices in Aberdeen in context you see that only 3% of respondents in this region selected this option.

There are other issues we will need to consider too.  Some questions currently have multiple regions linked in the answers (e.g. lexical quiz question 4 ‘stour’ has answers ‘Edinburgh and Glasgow’, ‘Shetland and Orkney’ etc.)  We need to decide whether we still want this structure.  This is going to be tricky to get working dynamically as the script would have to join two regions with the most responses together to form the ‘correct’ answer and there’s no guarantee that these areas would be geographically next to each other.  We should perhaps reframe the question; we could have multiple buttons that are ‘correct’ and ask something like ‘stour is used for dust in several parts of Scotland.  Can you pick one?’  Or I guess we could ask the user to pick two.

We also need to decide how to handle the ‘heard throughout Scotland’ questions (e.g. lexical question 6: is ‘greetin’ heard throughout most of Scotland?).  We need to define what we mean by ‘most of Scotland’ in a way that can be understood programmatically, but thinking about it, we probably also need to better define what we mean by this for users too.  If you don’t know where most of the population of Scotland is situated and purely looked at the distribution of ‘greetin’ on the map you might conclude that it’s not used throughout Scotland at all, but only in the central belt and up the East coast.  But returning to how an algorithm could work out the correct answer for this question: we need to set thresholds for whether an option is used throughout most of Scotland or not.  Should the algorithm only look at certain regions?  Should it count the responses in each region and consider the option in use in the region if (for example) 50% or more respondents chose it?  The algorithm could then count the number of regions that meet this threshold compared to the total number of regions, and if (for example) 8 out of our 14 regions surpass the threshold the answer could be deemed ‘true’.  The problem is humans can look at a map and quickly estimate an answer but an algorithm needs more rigid boundaries.

Also, question 15 of the ‘give your word’ quiz asks about the ‘central belt’ but we need to define which regions make this up.  Is it just Glasgow and Lothian (Edinburgh), for example?  We might need to clarify this for users too.  The ‘I would never say that’ quiz has several questions where one possible answer is ‘All over Scotland’.  If we’re dynamically ascertaining the correct answer then we can’t guarantee that this answer will be one that comes up.  Also, ‘All over Scotland’ may in fact be the correct answer for questions that we haven’t considered this to be an answer for.  What should we do about this?  Two possibilities: firstly, the code for ascertaining the correct answer (for all of the map-based quizzes) could also have a threshold that, when reached, would mean the correct answer is ‘All over Scotland’, and this option would then be included in the question.  This could use the same logic as the ‘heard throughout Scotland’ yes/no questions that I mentioned above.  Secondly, we could reframe the questions that currently have an ‘All over Scotland’ answer option to be the same as the ‘heard throughout Scotland’ yes/no questions found in the lexical quiz, and not bother trying to work out whether an ‘All over Scotland’ option needs to be added to any of the other questions.

I also realised that we may end up with a situation where more than one region has a similar number of markers, meaning the system will still easily be able to ascertain which is correct, but users might struggle.  Do we need to consider this eventuality?  I could for example add in a check to see whether any other regions have a similar score to the ‘correct’ one and ensure any that are too close never get picked as the randomly generated ‘wrong’ answer options.  Linked to this: we need to consider whether it is acceptable that the ‘wrong’ answer options will always be randomly generated. The options will be different each time a user loads the quiz question and if they are entirely random this means the question may sometimes be very easy and other times very hard.  Do I need to update the algorithm to add some sort of weighting to how the ‘wrong’ options are chosen?  This will need further discussion with the team next week.

I decided to move onto some of the other outstanding tasks and to leave the dynamically generated map answers issue until Jennifer and Mary are back next week.  I managed to complete the majority of minor updates to the site that were still outstanding during this time, such as updating introductory and explanatory text for the surveys, quizzes and activities, removing or rearranging questions, rewording answers, reinstating the dictionary based questions and tweaking the colour and justification of some of the site text.

This leaves several big issues to tackle before the end of the month, including dynamically generating answers for quiz questions, developing the output for the ‘click’ activity and developing the interactive activities for ‘I would never say that’.  It’s going to be a busy few weeks.

Also this week I continued to process the data for the Books and Borrowing project.  This included uploading images for one more Advocates library register from the NLS, including generating pages, associating images and fixing the page numbering to align with the handwritten numbers.  I also received images for a second register for Haddington library from the NLS, and I needed some help with this as we already have existing pages for this register in the CMS, but the number of images received didn’t match.  Thankfully the RA Kit Baston was able to look over the images and figure out what needed to be done, which included inserting new pages in the CMS and then me writing a script to associate images with records.  I also added two missing pages to the register for Dumfries Presbytery and added in a missing image for Westerkirk library.

Finally, I tweaked the XSLT for the Dictionaries of the Scots Language bibliographies to ensure the style guide reference linked to the most recent version.

Week Beginning 25th July 2022

I was on holiday for most of the previous two weeks, working two days during this period.  I’ll also be on holiday again next week, so I’ve had quite a busy time getting things done.  Whilst I was away I dealt with some queries from Joanna Kopaczyk about the Future of Scots website.  I also had to investigate a request to fill in timesheets for my work on the Speak For Yersel project, as apparently I’d been assigned to the project as ‘Directly incurred’ when I should have been ‘Directly allocated’.  Hopefully we’ll be able to get me reclassified but this is still in-progress.  I also fixed a couple of issues with the facility to export data for publication for the Berwickshire place-name project for Carole Hough, and fixed an issue with an entry in the DSL, which was appearing in the wrong place in the dictionary.  It turned out that the wrong ‘url’ tag had been added to the entry’s XML several years ago and since then the entry was wrongly positioned.  I fixed the XML and this sorted things.  I also responded to a query from Geert of the Anglo-Norman Dictionary about Aberystwyth’s new VPN and whether this would affect his access to the AND.  I also investigated an issue Simon Taylor was having when logging into a couple of our place-names systems.

On the Monday I returned to work I launched two new resources for different projects.  For the Books and Borrowing project I published the Chambers Library Map (https://borrowing.stir.ac.uk/chambers-library-map/) and reorganised the site menu to make space for the new page link.  The resource has been very well received and I’m pretty pleased with how it’s turned out.  For the Seeing Speech project I launched the new Gaelic Tongues resource (https://www.seeingspeech.ac.uk/gaelic-tongues/) which has received a lot of press coverage, which is great for all involved.

I spent the rest of the week dividing my time primarily between three projects:  Speak For Yersel, Books and Borrowing and Speech Star.  For Books and Borrowing I continued processing the backlog of library register image files that has built up.  There were about 15 registers that needed to be processed, and each needed to be handled in a different way.  This included nine registers from Advocates Library that had been digitised by the NLS, for which I needed to batch process the images to rename them, delete blank pages, create page records in the CMS and then tweak the automatically generated folio numbers to account for discrepancies in the handwritten page number in the images.  I also processed a register for the Royal High School, which involved renaming the images so they match up with image numbers already assigned to page records in the CMS, inserting new page records and updating the ‘next’ and ‘previous’ links for pages for which new images had been uncovered and generating new page records for many tens of new pages that follow on from the ones that have already been created in the CMS.  I also uploaded new images for the Craigston register and created a new register including all page records and associated image URLs for a further register for Aberdeen.  I still have some further RHS registers to do and a few from St Andrews, but these will need to wait until I’m back from my holiday.

For Speech Star I downloaded a ZIP containing 500 new ultrasound MP4 videos.  I then had to process them to generate ‘poster’ images for each video (these are images that get displayed before the user chooses to play the video).  I then had to replace the existing normalised speech database with data from a new spreadsheet that included these new videos plus updates to some of the existing data.  This included adding a few new fields and changing the way the age filter works, as much of the new data is for child speakers who have specific ages in months and years, and these all need to be added to a new ‘under 18’ age group.
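Generating the posters is a job for ffmpeg; a Node sketch of the batch (the paths and the one-second timestamp are illustrative):

```javascript
// Grab a single frame from each MP4 to use as its 'poster' image.
const { execFileSync } = require('child_process');
const fs = require('fs');

for (const file of fs.readdirSync('videos').filter(f => f.endsWith('.mp4'))) {
  const poster = `posters/${file.replace(/\.mp4$/, '.jpg')}`;
  execFileSync('ffmpeg', ['-i', `videos/${file}`, '-ss', '1', '-frames:v', '1', poster]);
}
```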

For Speak For Yersel I had an awful lot to do.  I started with a further large-scale restructuring of the website following feedback from the rest of the team.  This included changing the site menu order, adding in new final pages to the end of surveys and quizzes and changing the text of buttons that appear when displaying the final question.

I then developed the map filter options for age and education for all of the main maps.  This was a major overhaul of the maps.  I removed the slide up / slide down of the map area when an option is selected as this was a bit long and distracting.  Now the map area just updates (although there is a bit of a flicker as the data gets replaced).  The filter options unfortunately make the options section rather big, which is going to be an issue on a small screen.  On my mobile phone the options section takes up 100% of the width and 80% of the height of the map area unless I press the ‘full screen’ button.  However, I figured out a way to ensure that the filter options section scrolls if the content extends beyond the bottom of the map.

I also realised that if you’re in full screen mode and you select a filter option the map exits full screen as the map section of the page reloads.  This is very annoying, but I may not be able to fix it as it would mean completely changing how the maps are loaded.  This is because such filters and options were never intended to be included in the maps and the system was never developed to allow for this.  I’ve had to somewhat shoehorn in the filter options and it’s not how I would have done things had I known from the beginning that these options were required.  However, the filters work and I’m sure they will be useful.  I’ve added in filters for age, education and gender, as you can see in the following screenshot:

I also updated the ‘Give your word’ activity that asks to identify younger and older speakers to use the new filters too.  The map defaults to showing ‘all’ and the user then needs to choose an age.  I’m still not sure how useful this activity will be as the total number of dots for each speaker group varies considerably, which can easily give the impression that more of one age group use a form compared to another age group purely because one age group has more dots overall.  The questions don’t actually ask anything about geographical distribution so having the map doesn’t really serve much purpose when it comes to answering the question.  I can’t help but think that just presenting people with percentages would work better, or some other sort of visualisation like a bar graph or something.

I then moved on to working on the quiz for ‘she sounds really clever’ and so far I have completed both the first part of the quiz (questions about ratings in general) and the second part (questions about listeners from a specific region and their ratings of speakers from regions).  It’s taken a lot of brain-power to get this working as I decided to make the system work out the correct answer and to present it as an option alongside randomly selected wrong answers.  This has been pretty tricky to implement (especially as depending on the question the ‘correct’ answer is either the highest or the lowest) but will make the quiz much more flexible – as the data changes so will the quiz.
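The core of it is small once the mean ratings are assembled; a sketch (the names are illustrative):

```javascript
// Picks the 'correct' region for a ratings question: depending on the
// question this is the region with the highest or the lowest mean rating.
function pickRatingsAnswer(regionMeans, wantHighest) {
  const sorted = Object.entries(regionMeans).sort((a, b) =>
    wantHighest ? b[1] - a[1] : a[1] - b[1]);
  return sorted[0][0]; // region name; wrong options are then picked at random
}
```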

Part one of the quiz page itself is pretty simple.  There is the usual section on the left with the question and the possible answers.  On the right is a section containing a box to select a speaker and the rating sliders (readonly).  When you select a speaker the sliders animate to their appropriate location.  I decided to not include the map or the audio file as these didn’t really seem necessary for answering the questions, they would clutter up the screen and people can access them via the maps page anyway (well, once I move things from the ‘activities’ section).  Note that the user’s answers are stored in the database (the region selected and whether this was the correct answer at the time).  Part two of the quiz features speaker/listener true/false questions and this also automatically works out the correct answer (currently based on the 50% threshold).  Note that where there is no data for a listener rating a speaker from a region the rating defaults to 50.  We should ensure that we have at least one rating for a listener in each region before we let people answer these questions.  Here is a screenshot of part one of the quiz in action, with randomly selected ‘wrong’ answers and a dynamically outputted ‘right’ answer:

I also wrote a little script to identify duplicate lexemes in categories in the Historical Thesaurus as it turns out there are some occasions where a lexeme appears more than once (with different dates) and this shouldn’t happen.  These will need to be investigated and the correct dates will need to be established.

I will be on holiday again next week so there won’t be another post until the week after I’m back.