Week Beginning 8th August 2022

I should have been back at work on Monday this week, after having a lovely holiday last week.  Unfortunately I began feeling unwell over the weekend and ended up off sick on Monday and Tuesday.  I had a fever and a sore throat and needed to sleep most of the time, but it wasn’t Covid as I tested negative.  Thankfully I began feeling more normal again on Tuesday and by Wednesday I was well enough to work again.

I spent the majority of the rest of the week working on the Speak For Yersel project.  On Wednesday I moved the ‘She sounds really clever’ activities to the ‘maps’ page, as we’d decided that these ‘activities’ really just involved looking at the survey outputs and so fitted better on the ‘maps’ page.  I also updated some of the text on the ‘about’ and ‘home’ pages and updated the maps to change the gender labels, expanding ‘F’ and ‘M’ and replacing ‘NB’ with ‘other’, as this is a broader option that better aligns with the choices offered during sign-up.  I also added an option to show and hide the map filters that defaults to ‘hide’ but remembers the user’s selection when other map options are chosen.  I added titles to the maps on the ‘Maps’ page and made some other tweaks to the terminology used in the maps.

On Wednesday we had a meeting to discuss the outstanding tasks still left for me to tackle.  This was a very useful meeting and we managed to make some good decisions about how some of the larger outstanding areas will work.  We also managed to get confirmation from Rhona Alcorn of the DSL that we will be able to embed the old web-based version of the Schools Dictionary app for use with some of our questions, which is really great news.

One of the outstanding tasks was to investigate how the map-based quizzes could have their answer options and the correct answer dynamically generated.  This was never part of the original plan for the project, but it became clear that having static answers to questions (e.g. where do people use ‘ginger’ for ‘soft drink’) wasn’t going to work very well when the data users are looking at is dynamically generated and potentially changing all the time – we would be second guessing the outputs of the project rather than letting the data guide the answers.  As dynamically generating answers wasn’t part of the plan and would be pretty complicated to develop, this had been left as a ‘would be nice if there’s time’ task, but at our call it was decided that it should now become a priority.  I therefore spent most of Thursday investigating this issue and came up with two potential methods.

The first method looks at each region individually to compare the number of responses for each answer option in the region.  It counts the number of responses for each answer option and then generates a percentage of the total number of responses in the area.  So for example:

North East (Aberdeen)

Mother: 12 (8%)

Maw: 4 (3%)

Mam: 73 (48%)

Mammy: 3 (2%)

Mum: 61 (40%)

So of the 153 current responses in Aberdeen, 73 (48%) were ‘Mam’.  The method then compares the percentages for the particular answer option across all regions to pick out the highest percentage.  The advantage of this approach is that by looking at percentages any differences caused by there being many more respondents in one region over another are alleviated.  If we look purely at counts then a region with a large number of respondents (as with Aberdeen at the moment) will end up with an unfair advantage, even for answer options that are not chosen the most.  E.g. ‘Mother’ has 12 responses, which is currently by far the most in any region, but taken as a percentage it’s roughly in line with other areas.

But there are downsides.  Any region where the option has been chosen but the total number of responses is low will end up with a large percentage.  For example, both Inverness and Dumfries & Galloway currently only have two respondents, but in each case one of these was for ‘Mam’, meaning they pass Aberdeen and would be considered the ‘correct’ answer with 50% each.  If we were to use this method then I would have to put something in place to disregard small samples.  Another downside is that as far as users are concerned they are simply evaluating dots on a map, so perhaps we shouldn’t be trying to address the bias of some areas having more respondents than others because users themselves won’t be addressing this.
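Setting those downsides aside for a moment, here is a rough sketch of method 1 in code, assuming a hypothetical ‘regions’ array where each entry has a name and a tally of responses per answer option (the structure and names are purely illustrative, not the project’s actual code):

function method1CorrectRegion(regions, option, minResponses) {
    var best = null;
    regions.forEach(function (region) {
        // Total responses of any kind in this region
        var total = Object.values(region.counts).reduce(function (a, b) { return a + b; }, 0);
        // Disregard small samples so a region with two respondents can't win with 50%
        if (total < minResponses) { return; }
        var pct = (region.counts[option] || 0) / total * 100;
        if (best === null || pct > best.pct) {
            best = { name: region.name, pct: pct };
        }
    });
    return best;
}

// e.g. method1CorrectRegion(regions, 'Mam', 10) would currently return North East (Aberdeen) at roughly 48%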

This then led me to develop method 2, which only looks at the answer option in question (e.g. ‘Mam’) rather than the answer option within the context of other answer options.  This method takes a count of the number of responses for the answer option in each region and for the number generates a percentage of the total number of answers for the option across Scotland.  So for ‘Mam’ the counts and percentages are as follows:

Ayrshire: 1 (1%)

Fife: 2 (2%)

Glasgow: 2 (2%)

North East (Aberdeen): 73 (84%)

Stirling and Falkirk: 2 (2%)

Lothian (Edinburgh): 1 (1%)

Tayside and Angus (Dundee): 4 (5%)

Dumfries and Galloway: 1 (1%)

Highlands (Inverness): 1 (1%)

Across Scotland there are currently a total of 87 responses where ‘Mam’ was chosen and 73 of these (84%) were in Aberdeen.  As I say, this simple solution probably mirrors how a user will analyse the map – they will see lots of dots in Aberdeen and select this option.  However, it completely ignores the context of the chosen answer.  For example, if we get a massive rush of users from Glasgow (say 2000) and 100 of these choose ‘Mam’ then Glasgow ends up being the correct answer (beating Aberdeen’s 73), even though as a proportion of all chosen answers in Glasgow 100 is only 5% (the other 1900 people will have chosen other options), meaning it would be a pretty unpopular choice compared to the 48% who chose ‘Mam’ over other options in Aberdeen as mentioned near the start.  But perhaps this is a nuance that users won’t consider anyway.

This latter issue became more apparent when I looked at the output for the use of ‘rocket’ to mean ‘stupid’.  The simple count method has Aberdeen with 45% of the total number of ‘rocket’ responses, but if you look at the ‘rocket’ choices in Aberdeen in context you see that only 3% of respondents in this region selected this option.
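In the same hypothetical terms as the earlier sketch, method 2 would look something like the following: count the option per region and express each region’s count as a share of the option’s total across Scotland.

function method2CorrectRegion(regions, option) {
    // Total number of times this option was chosen anywhere in Scotland
    var scotlandTotal = 0;
    regions.forEach(function (region) {
        scotlandTotal += region.counts[option] || 0;
    });
    var best = null;
    regions.forEach(function (region) {
        var count = region.counts[option] || 0;
        var pct = scotlandTotal === 0 ? 0 : count / scotlandTotal * 100;
        // The region with the most dots for this option wins, regardless of how popular the option is there
        if (best === null || count > best.count) {
            best = { name: region.name, count: count, pct: pct };
        }
    });
    return best;
}

// e.g. method2CorrectRegion(regions, 'Mam') would currently return North East (Aberdeen) with 73 responses (84%)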

There are other issues we will need to consider too.  Some questions currently have multiple regions linked in the answers (e.g. lexical quiz question 4 ‘stour’ has answers ‘Edinburgh and Glasgow’, ‘Shetland and Orkney’ etc.) and we need to decide whether we still want this structure.  This is going to be tricky to get working dynamically, as the script would have to join the two regions with the most responses together to form the ‘correct’ answer and there’s no guarantee that these areas would be geographically next to each other.  We should perhaps reframe the question; we could have multiple buttons that are ‘correct’ and ask something like ‘stour is used for dust in several parts of Scotland.  Can you pick one?’  Or I guess we could ask the user to pick two.

We also need to decide how to handle the ‘heard throughout Scotland’ questions (e.g. lexical question 6 ‘is greetin’ heard throughout most of Scoatland’).  We need to define what we mean by ‘most of Scotland’ in a way that can be understood programmatically, but thinking about it, we probably also need to better define what we mean by this for users too.  If you didn’t know where most of the population of Scotland is situated and purely looked at the distribution of ‘greetin’ on the map you might conclude that it’s not used throughout Scotland at all, but only in the central belt and up the East coast.  But returning to how an algorithm could work out the correct answer for this question: we need to set thresholds for whether an option is used throughout most of Scotland or not.  Should the algorithm only look at certain regions?  Should it count the responses in each region and consider the option in use in a region if (for example) 50% or more respondents chose it?  The algorithm could then count the number of regions that meet this threshold compared to the total number of regions, and if (for example) 8 out of our 14 regions surpass the threshold the answer could be deemed ‘true’.  The problem is that humans can look at a map and quickly estimate an answer, but an algorithm needs more rigid boundaries.
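As a sketch of how such a check could be made programmatic, using the example thresholds from the paragraph above (50% usage within a region, 8 of our 14 regions), bearing in mind these figures are only suggestions at this stage:

function usedThroughoutScotland(regions, option, regionThresholdPct, minRegions) {
    var regionsUsing = regions.filter(function (region) {
        var total = Object.values(region.counts).reduce(function (a, b) { return a + b; }, 0);
        if (total === 0) { return false; }
        // A region 'uses' the option if enough of its respondents chose it
        return ((region.counts[option] || 0) / total * 100) >= regionThresholdPct;
    }).length;
    return regionsUsing >= minRegions;
}

// e.g. usedThroughoutScotland(regions, 'greetin', 50, 8) would give the yes/no answer for the quiz question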

Also, question 15 of the ‘give your word’ quiz asks about the ‘central belt’ but we need to define what regions make this up.  Is it just Glasgow and Lothian (Edinburgh), for example?  We might need to clarify this for users too.  The ‘I would never say that’ quiz has several questions where one possible answer is ‘All over Scotland’.  If we’re dynamically ascertaining the correct answer then we can’t guarantee that this answer will be one that comes up.  Also, ‘All over Scotland’ may in fact be the correct answer for questions where we haven’t considered this to be an answer.  What should we do about this?  Two possibilities: firstly, the code for ascertaining the correct answer (for all of the map-based quizzes) could also have a threshold that, when reached, would mean the correct answer is ‘All over Scotland’ and this option would then be included in the question.  This could use the same logic as the ‘heard throughout Scotland’ yes/no questions that I mentioned above.  Secondly, we could reframe the questions that currently have an ‘All over Scotland’ answer option to be the same as the ‘heard throughout Scotland’ yes/no questions found in the lexical quiz and not bother trying to work out whether an ‘all over Scotland’ option needs to be added to any of the other questions.

I also realised that we may end up with a situation where more than one region has a similar number of markers, meaning the system will still easily be able to ascertain which is correct, but users might struggle.  Do we need to consider this eventuality?  I could for example add in a check to see whether any other regions have a similar score to the ‘correct’ one and ensure any that are too close never get picked as the randomly generated ‘wrong’ answer options.  Linked to this: we need to consider whether it is acceptable that the ‘wrong’ answer options will always be randomly generated. The options will be different each time a user loads the quiz question and if they are entirely random this means the question may sometimes be very easy and other times very hard.  Do I need to update the algorithm to add some sort of weighting to how the ‘wrong’ options are chosen?  This will need further discussion with the team next week.
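One possible way of handling the near-tie issue, sketched here with made-up names and structure: score each region however the chosen method dictates, exclude anything within a set margin of the winner, then shuffle what remains and take the required number of ‘wrong’ options.

function pickWrongOptions(scoredRegions, correctRegion, margin, howMany) {
    // Keep only regions that are clearly wrong: not the correct one, and not within 'margin' of its score
    var candidates = scoredRegions.filter(function (r) {
        return r.name !== correctRegion.name && (correctRegion.score - r.score) > margin;
    });
    // Fisher-Yates shuffle so the distractors vary each time the question loads
    for (var i = candidates.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = candidates[i];
        candidates[i] = candidates[j];
        candidates[j] = tmp;
    }
    return candidates.slice(0, howMany);
}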

I decided to move onto some of the other outstanding tasks and to leave the dynamically generated map answers issue until Jennifer and Mary are back next week.  I managed to complete the majority of minor updates to the site that were still outstanding during this time, such as updating introductory and explanatory text for the surveys, quizzes and activities, removing or rearranging questions, rewording answers, reinstating the dictionary based questions and tweaking the colour and justification of some of the site text.

This leaves several big issues to tackle before the end of the month, including dynamically generating answers for quiz questions, developing the output for the ‘click’ activity and developing the interactive activities for ‘I would never say that’.  It’s going to be a busy few weeks.

Also this week I continued to process the data for the Books and Borrowing project.  This included uploading images for one more Advocates library register from the NLS, including generating pages, associating images and fixing the page numbering to align with the handwritten numbers.  I also received images for a second register for Haddington library from the NLS, and I needed some help with this as we already have existing pages for this register in the CMS, but the number of images received didn’t match.  Thankfully the RA Kit Baston was able to look over the images and figure out what needed to be done, which included inserting new pages in the CMS and then me writing a script to associate images with records.  I also added two missing pages to the register for Dumfries Presbytery and added in a missing image for Westerkirk library.

Finally, I tweaked the XSLT for the Dictionaries of the Scots Language bibliographies to ensure the style guide reference linked to the most recent version.

Week Beginning 25th July 2022

I was on holiday for most of the previous two weeks, working two days during this period.  I’ll also be on holiday again next week, so I’ve had quite a busy time getting things done.  Whilst I was away I dealt with some queries from Joanna Kopaczyk about the Future of Scots website.  I also had to investigate a request to fill in timesheets for my work on the Speak For Yersel project, as apparently I’d been assigned to the project as ‘Directly incurred’ when I should have been ‘Directly allocated’.  Hopefully we’ll be able to get me reclassified, but this is still in progress.  I also fixed a couple of issues with the facility to export data for publication for the Berwickshire place-name project for Carole Hough, and fixed an issue with an entry in the DSL, which was appearing in the wrong place in the dictionary.  It turned out that the wrong ‘url’ tag had been added to the entry’s XML several years ago and the entry had been wrongly positioned ever since.  I fixed the XML and this sorted things.  I also responded to a query from Geert of the Anglo-Norman Dictionary about Aberystwyth’s new VPN and whether this would affect his access to the AND, and investigated an issue Simon Taylor was having when logging into a couple of our place-names systems.

On the Monday I returned to work I launched two new resources for different projects.  For the Books and Borrowing project I published the Chambers Library Map (https://borrowing.stir.ac.uk/chambers-library-map/) and reorganised the site menu to make space for the new page link.  The resource has been very well received and I’m pretty pleased with how it’s turned out.  For the Seeing Speech project I launched the new Gaelic Tongues resource (https://www.seeingspeech.ac.uk/gaelic-tongues/) which has received a lot of press coverage, which is great for all involved.

I spent the rest of the week dividing my time primarily between three projects:  Speak For Yersel, Books and Borrowing and Speech Star.  For Books and Borrowing I continued processing the backlog of library register image files that has built up.  There were about 15 registers that needed to be processed, and each needed to be handled in a different way.  This included nine registers from Advocates Library that had been digitised by the NLS, for which I needed to batch process the images to rename them, delete blank pages, create page records in the CMS and then tweak the automatically generated folio numbers to account for discrepancies in the handwritten page number in the images.  I also processed a register for the Royal High School, which involved renaming the images so they match up with image numbers already assigned to page records in the CMS, inserting new page records and updating the ‘next’ and ‘previous’ links for pages for which new images had been uncovered and generating new page records for many tens of new pages that follow on from the ones that have already been created in the CMS.  I also uploaded new images for the Craigston register and created a new register including all page records and associated image URLs for a further register for Aberdeen.  I still have some further RHS registers to do and a few from St Andrews, but these will need to wait until I’m back from my holiday.

For Speech Star I downloaded a ZIP containing 500 new ultrasound MP4 videos.  I then had to process them to generate ‘poster’ images for each video (these are images that get displayed before the user chooses to play the video).  I then had to replace the existing normalised speech database with data from a new spreadsheet that included these new videos plus updates to some of the existing data.  This included adding a few new fields and changing the way the age filter works, as much of the new data is for child speakers who have specific ages in months and years, and these all need to be added to a new ‘under 18’ age group.

For Speak For Yersel I had an awful lot to do.  I started with a further large-scale restructuring of the website following feedback from the rest of the team.  This included changing the site menu order, adding in new final pages to the end of surveys and quizzes and changing the text of buttons that appear when displaying the final question.

I then developed the map filter options for age and education for all of the main maps.  This was a major overhaul of the maps.  I removed the slide up / slide down of the map area when an option is selected as this was a bit long and distracting.  Now the map area just updates (although there is a bit of a flicker as the data gets replaced).  The filter options unfortunately make the options section rather big, which is going to be an issue on a small screen.  On my mobile phone the options section takes up 100% of the width and 80% of the height of the map area unless I press the ‘full screen’ button.  However, I figured out a way to ensure that the filter options section scrolls if the content extends beyond the bottom of the map.

I also realised that if you’re in full screen mode and you select a filter option the map exits full screen as the map section of the page reloads.  This is very annoying, but I may not be able to fix it as it would mean completely changing how the maps are loaded.  This is because such filters and options were never intended to be included in the maps and the system was never developed to allow for this.  I’ve had to somewhat shoehorn in the filter options and it’s not how I would have done things had I known from the beginning that these options were required.  However, the filters work and I’m sure they will be useful.  I’ve added in filters for age, education and gender, as you can see in the following screenshot:

I also updated the ‘Give your word’ activity that asks to identify younger and older speakers to use the new filters too.  The map defaults to showing ‘all’ and the user then needs to choose an age.  I’m still not sure how useful this activity will be as the total number of dots for each speaker group varies considerably, which can easily give the impression that more of one age group use a form compared to another age group purely because one age group has more dots overall.  The questions don’t actually ask anything about geographical distribution so having the map doesn’t really serve much purpose when it comes to answering the question.  I can’t help but think that just presenting people with percentages would work better, or some other sort of visualisation like a bar graph or something.

I then moved on to working on the quiz for ‘she sounds really clever’ and so far I have completed both the first part of the quiz (questions about ratings in general) and the second part (questions about listeners from a specific region and their ratings of speakers from regions).  It’s taken a lot of brain-power to get this working as I decided to make the system work out the correct answer and to present it as an option alongside randomly selected wrong answers.  This has been pretty tricky to implement (especially as depending on the question the ‘correct’ answer is either the highest or the lowest) but will make the quiz much more flexible – as the data changes so will the quiz.
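Roughly speaking, the answer generation comes down to a sort: take some aggregate of the ratings per region (a simple average is used here purely for illustration), order them, and take either the top or the bottom entry depending on how the question is phrased.  A bare-bones sketch with invented field names, not the actual code:

function ratingsQuizAnswer(regionAverages, wantHighest) {
    // Sort a copy from highest to lowest average rating
    var sorted = regionAverages.slice().sort(function (a, b) { return b.average - a.average; });
    return wantHighest ? sorted[0] : sorted[sorted.length - 1];
}

// e.g. ratingsQuizAnswer([{ region: 'Glasgow', average: 62 }, { region: 'Lothian (Edinburgh)', average: 71 }], true)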

Part one of the quiz page itself is pretty simple.  There is the usual section on the left with the question and the possible answers.  On the right is a section containing a box to select a speaker and the rating sliders (readonly).  When you select a speaker the sliders animate to their appropriate location.  I decided to not include the map or the audio file as these didn’t really seem necessary for answering the questions, they would clutter up the screen and people can access them via the maps page anyway (well, once I move things from the ‘activities’ section).  Note that the user’s answers are stored in the database (the region selected and whether this was the correct answer at the time).  Part two of the quiz features speaker/listener true/false questions and this also automatically works out the correct answer (currently based on the 50% threshold).  Note that where there is no data for a listener rating a speaker from a region the rating defaults to 50.  We should ensure that we have at least one rating for a listener in each region before we let people answer these questions.  Here is a screenshot of part one of the quiz in action, with randomly selected ‘wrong’ answers and a dynamically outputted ‘right’ answer:

I also wrote a little script to identify duplicate lexemes in categories in the Historical Thesaurus as it turns out there are some occasions where a lexeme appears more than once (with different dates) and this shouldn’t happen.  These will need to be investigated and the correct dates will need to be established.

I will be on holiday again next week so there won’t be another post until the week after I’m back.

 

Week Beginning 4th July 2022

I had a lovely week’s holiday last week and returned to work for one week only before I head off for a further two weeks.  I spent most of my time this week working on the Speak For Yersel project implementing a huge array of changes that the team wanted to make following the periods of testing in schools a couple of weeks ago.  There were also some new sections of the resource to work on as well.

By Tuesday I had completed the restructuring of the site as detailed in the ‘Roadmap’ document, meaning the survey and quizzes have been separated, as have the ‘activities’ and ‘explore maps’.  This has required quite a lot of restructuring of the code, but I think all is working as it should.  I also updated the homepage text.  One thing I wasn’t sure about is what should happen when the user reaches the end of the survey.  Previously this led into the quiz, but for now I’ve created a page that provides links to the quiz, the ‘more activities’ and the ‘explore maps’ options for the survey in question.

The quizzes should work as they did before, but they now have their own progress bar.  Currently at the end of the quiz the only link offered is to explore the maps, but we should perhaps change this.  The ‘more activities’ work slightly differently to how these were laid out in the roadmap.  Previously a user selected an activity then it loaded an index page with links to the activities and the maps.  As the maps are now separated this index page was pretty pointless, so instead when you select an activity it launches straight into it.  The only one that still has an index page is the ‘Clever’ one as this has multiple options.  However, thinking about this activity:  it’s really just an ‘explore’ like the ‘explore maps’ rather than an actual interactive activity per se, so we should perhaps move this to the ‘explore’ page.

I also made all of the changes to the ‘sounds about right’ survey including replacing sound files and adding / removing questions.  I ended up adding a new ‘question order’ field to the database and questions are now ordered using this, as previously the order was just set by the auto-incrementing database ID which meant inserting a new question to appear midway through the survey was very tricky.  Hopefully this change of ordering hasn’t had any knock-on effects elsewhere.

I then made all of the changes to two other activities:  the ‘lexical’ one and the ‘grammatical’ one.  These included quite a lot of tweaks to questions, question options, question orders and the number of answers that could be selected for questions.  With all of this in place I moved onto the ‘Where do you think this speaker is from’ sections.  The ‘survey’ now only consists of the click map and when you press the ‘Check Answers’ button some text appears under the buttons with links through to where the user can go next.

For the ‘more activities’ section the main click activity is now located here.  It took quite a while to get this to work, as moving sections introduced some conflicts in the code that were a bit tricky to identify.  I replaced the explanatory text and I also added in the limit to the number of presses.  I’ve added a section to the right of the buttons that displays the number of presses the user has left.  Once there are no presses left the ‘Press’ button gets disabled.  I still think people are going to reach the 5 click limit too soon and will get annoyed when they realise they can’t add further clicks and they can’t reset the exercise to give it another go.  After you’ve listened to the four speakers a page is displayed saying you’ve completed the activity and giving links to other parts.  Below is a screenshot of the new ‘click’ activity with the limit in place (and also the new site menu):
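Behind the scenes the press limit is just a simple counter; here’s a rough sketch with invented element ids rather than the actual code:

var pressesLeft = 5;
document.getElementById('press-button').addEventListener('click', function () {
    if (pressesLeft <= 0) { return; }
    pressesLeft--;
    // Update the counter shown to the right of the buttons
    document.getElementById('presses-left').textContent = pressesLeft + ' presses left';
    if (pressesLeft === 0) {
        // No presses remaining, so disable the button entirely
        this.disabled = true;
    }
});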

 

The ‘Quiz’ has taken quite some time to implement but is now fully operational.  I had to do a lot of work behind the scenes to get the percentages figured out and to get the quiz to automatically work out which answer should be the correct one, but it all works now.  The map displays the ‘Play’ icons as I figured people would want to be able to hear the clips as well as just see the percentages.  Beside each clip icon the percentage of respondents who correctly identified the location of the speaker is displayed.  The markers are placed at the ‘correct’ points on the map, as shown when you view the correct locations in the survey activities.  Question 1 asks you to identify the most recognised, question 2 the least recognised.  Quiz answers are logged in the database so we’ll be able to track answers.  Here’s a screenshot of the quiz:

I also added the percentage map to the ‘explore maps’ page, and I gave people the option of focussing on the answers submitted from specific regions.  An ‘All regions’ map displays the same data as the quiz map, but the user can then choose (for example) Glasgow and view the percentages of speakers that respondents from the Glasgow area correctly identified, thus allowing them to compare how well people in each area managed to identify the speakers.  I decided to add a count of the number of people that have responded too.

The ‘explore maps’ for ‘guess the region’ has a familiar layout – buttons on the left that when pressed on load a map on the right.  The buttons correspond to the region of people who completed the ‘guess the region’ survey.  The first option shows the answers of all respondents from all regions.  This is exactly the same as the map in the quiz, except I’ve also displayed the number of respondents above the map.  Two things to be aware of:

Firstly, a respondent can complete the quiz as many times as they want, so each respondent may have multiple datasets.  Secondly, the click map (both quiz and ‘explore maps’) currently includes people from outside of Scotland as well as people who selected an area when registering.  There are currently 18 respondents and 3 of these are outside of Scotland.

When you click on a specific region button in the left-hand column the results of respondents from that specific region only are displayed on the map.  The number of respondents is also listed above the map.  Most of the regions currently have no respondents, meaning an empty map is displayed and a note above the map explains why.  Ayrshire has one respondent.  Glasgow has two.  Note that the reason there are such varied percentages in Glasgow from just two respondents (rather than just 100%, 50% and 0%) is because one or more of the respondents has completed the quiz more than once.  Lothian has two respondents.  North East has 10.  Here’s how the maps look:

On Friday I began to work on the ‘click transcription’ visualisations, which will display how many times users have clicked in each of the sections of the transcriptions they listen to in the ‘click’ activity.  I only managed to get as far as writing the queries and scripts to generate the data, rather than any actual visualisation of the data.  When looking at the aggregated data for the four speakers I discovered that the distribution of clicks across sections was a bit more uniform than I thought it might be.  We might need to consider how we’re going to work out the thresholds for the different sizes.  I was going to base it purely on the number of clicks, but I realised that this would not work as the more responses we get the more clicks there will be.  Instead I decided to use percentages of the total number of clicks for a speaker.  E.g. for speaker 4 there are currently a total of 65 clicks so the percentages for each section would be:

 

11% Have you seen the TikTok vids with the illusions?
6% They’re brilliant!
9% I just watched the glass one.
17% The guy’s got this big glass full of water in his hands.
8% He then puts it down,
8% takes out one of those big knives
6% and slices right through it.
6% I sometimes get so fed up with Tiktok
8% – really does my head in –
8% but I’m not joking,
14% I want to see more and more of this guy.

 

(which adds up to 101% with rounding).  But what should the thresholds be?  E.g. 0-6% = regular, 7-10% = bigger, 11-15% = even bigger, 16%+ = biggest?  I’ll need input from the team about this.  I’m not a statistician, but there may be better approaches, such as using standard deviation.
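If we do stick with fixed bands, the mapping itself is trivial; a sketch using the example thresholds above (which, again, are only suggestions at this point):

function sectionSizeClass(pct) {
    if (pct >= 16) { return 'biggest'; }
    if (pct >= 11) { return 'even-bigger'; }
    if (pct >= 7) { return 'bigger'; }
    return 'regular';
}

// e.g. sectionSizeClass(17) returns 'biggest' and sectionSizeClass(6) returns 'regular'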

I still have quite a lot of work to do for the project, namely:  Completing the ‘where do you think the speaker is from’ as detailed above; implementing the ‘she sounds really clever’ updates; adding in filter options to the map (age ranges and education levels); investigating dynamically working out the correct answers to map-based quizzes.

In addition to my Speak For Yersel work I participated in an interview with the AHRC about the role of technicians in research projects.  I’d participated in a focus group a few weeks ago and this was a one-on-one follow-up video call to discuss in greater detail some of the points I’d raised in the focus group.  It was a good opportunity to discuss my role and some of the issues I’ve encountered over the years.

I also installed some new themes for the OHOS project website and fixed an issue with the Anglo-Norman Dictionary website, as the editor had noticed that cognate references were not always working.  After some investigation I realised that this was happening when the references for a cognate dictionary included empty tags as well as completed tags.  I had to significantly change how this section of the entry is generated in the XSLT from the XML, which took some time to implement and test.  All seems to be working, though.

I also did some work for the Books and Borrowing project.  Whilst I’d been on holiday I’d been sent page images for a further ten library registers and I needed to process these.  This can be something of a time-consuming process as each set of images needs to be processed in a different way, such as renaming images, removing unnecessary images at the start and end, uploading the images to the server, generating the page images for each register and then bringing the automatically generated page numbers into line with any handwritten page numbers on the images, which may not always be sequentially numbered.  I processed two registers for the Advocates library from the NLS and three registers from Aberdeen library.  I looked into processing the images for a register from the High School of Edinburgh, but I had some questions about the images and didn’t hear back from the researcher before the end of the week, so I needed to leave these.  The remaining registers were from St Andrews and I had further questions about these, as the images are double-page spreads but existing page records in the CMS treat each page separately.  As the researcher dealing with St Andrews was on holiday I’ll need to wait until I’m back to deal with these too.

Also this week I completed the two mandatory Moodle courses about computer security and GDPR, which took a bit longer than I thought they might.

Week Beginning 20th June 2022

I completed an initial version of the Chambers Library map for the Books and Borrowing project this week.  It took quite a lot of time and effort to implement the subscription period range slider.  Searching for a range when the data also has a range of dates rather than a single date means we needed to make a decision about what data gets returned and what doesn’t.  This is because the two ranges (the one chosen as a filter by the user and the one denoting the start and end periods of subscription for each borrower) can overlap in many different ways.  For example, the period chosen by the user is 05 1828 to 06 1829.  Which of the following borrowers should therefore be returned?

  1. Borrower’s range is 06 1828 to 02 1829: the range is fully within the selected period so should definitely be included.
  2. Borrower’s range is 01 1828 to 07 1828: the range extends beyond the selected period at the start and ends within the selected period.  Presumably should be included.
  3. Borrower’s range is 01 1828 to 09 1829: the range extends beyond the selected period in both directions.  Presumably should be included.
  4. Borrower’s range is 05 1829 to 09 1829: the range begins during the selected period and ends beyond the selected period.  Presumably should be included.
  5. Borrower’s range is 01 1828 to 04 1828: the range is entirely before the selected period.  Should not be included.
  6. Borrower’s range is 07 1829 to 10 1829: the range is entirely after the selected period.  Should not be included.

Basically if there is any overlap between the selected period and the borrower’s subscription period the borrower will be returned.  But this means most borrowers will be returned a lot of the time.  It’s a very different sort of filter to one that purely focuses on a single date – e.g. filtering the data to only those borrowers whose subscription periods *begin* between 05 1828 and 06 1829.

Based on the above assumptions I began to write the logic that would decide which borrowers to include when the range slider is altered.  It was further complicated by having to deal with months as well as years.  Here’s the logic in full if you fancy getting a headache:

if(((mapData[i].sYear>startYear || (mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth || mapData[i].sYear>startYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)) || (((mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth) || mapData[i].sYear>startYear) && ((mapData[i].sYear==endYear && mapData[i].sMonth <=endMonth) || mapData[i].sYear<endYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)))
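The same ‘any overlap’ rule can also be written far more compactly by converting each year and month pair into a single month count, since two ranges overlap whenever each one starts on or before the point where the other ends.  Here’s a sketch using the same field names as above (not the code that’s actually in place):

function toMonths(year, month) {
    return year * 12 + month;
}

function overlapsPeriod(borrower, startYear, startMonth, endYear, endMonth) {
    var bStart = toMonths(borrower.sYear, borrower.sMonth);
    var bEnd = toMonths(borrower.eYear, borrower.eMonth);
    var fStart = toMonths(startYear, startMonth);
    var fEnd = toMonths(endYear, endMonth);
    // Overlap exists if the borrower's period starts before the filter ends and vice versa
    return bStart <= fEnd && fStart <= bEnd;
}

// e.g. overlapsPeriod(mapData[i], startYear, startMonth, endYear, endMonth)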

I also added the subscription period to the popups.  The only downside to the range slider is that the occupation marker colours change depending on how many occupations are present during a period, so you can’t always tell an occupation by its colour. I might see if I can fix the colours in place, but it might not be possible.

I also noticed that the jQuery UI sliders weren’t working very well on touchscreens so installed the jQuery TouchPunch library to fix that (https://github.com/furf/jquery-ui-touch-punch).  I also made the library marker bigger and gave it a white border to more easily differentiate it from the borrower markers.

I then moved onto incorporating page images in the resource too.  Where a borrower has borrowing records, the relevant pages where these records are found now appear as thumbnails in the borrower popup.  These are generated by the IIIF server based on dimensions passed to it, which is much nicer than having to generate and store thumbnails directly.  I also updated the popup to make it wider when required to give more space for the thumbnails.  Here’s a screenshot of the new thumbnails in action:
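The thumbnail requests themselves are just IIIF Image API URLs with a size constraint; here’s a sketch of how one might be put together, with an invented server address and identifier:

function iiifThumbnailUrl(baseUrl, identifier, maxWidth, maxHeight) {
    // The '!w,h' size parameter asks the IIIF server for the largest image that fits within the given box
    return baseUrl + '/' + identifier + '/full/!' + maxWidth + ',' + maxHeight + '/0/default.jpg';
}

// e.g. iiifThumbnailUrl('https://example.org/iiif', 'register-page-42', 200, 200)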

Clicking on a thumbnail opens a further popup containing a zoomable / pannable image of the page.  This proved to be rather tricky to implement.  Initially I was going to open a popup in the page (outside of the map container) using a jQuery UI Dialog.  However, I realised that this wouldn’t work when the map was being viewed in full-screen mode, as nothing beyond the map container is visible in such circumstances.  I then considered opening the image in the borrower popup but this wasn’t really big enough.  I then wondered about extending the ‘Map options’ section and replacing the contents of this with the image, but this then caused issues for the contents of the ‘Map options’ section, which didn’t reinitialise properly when the contents were reinstated.  I then found a plugin for the Leaflet mapping library that provides a popup within the map interface (https://github.com/w8r/Leaflet.Modal) and decided to use this.  However, it’s all a little complex as the popup then has to include another mapping library called OpenLayers that enables the zooming and panning of the page image, all within the framework of the overall interactive map.  It is all working and I think it works pretty well, although I guess the map interface is a little cluttered, what with the ‘Map Options’ section, the map legend, the borrower popup and then the page image popup as well.  Here’s a screenshot with the page image open:

All that’s left to do now is add in the introductory text once Alex has prepared it and then make the map live.  We might need to rearrange the site’s menu to add in a link to the Chambers Map as it’s already a bit cluttered.

Also for the project I downloaded images for two further library registers for St Andrews that had previously been missed.  However, there are already records for the registers and pages in the CMS so we’re going to have to figure out a way to work out which image corresponds to which page in the CMS.  One register has a different number of pages in the CMS compared to the image files so we need to work out how to align the start and end and if there are any gaps or issues in the middle.  The other register is more complicated because the images are double pages whereas it looks like the page records in the CMS are for individual pages.  I’m not sure how best to handle this.  I could either try and batch process the images to chop them up or batch process the page records to join them together.  I’ll need to discuss this further with Gerry, who is dealing with the data for St Andrews.

Also this week I prepared for and gave a talk to a group of students from Michigan State University who were learning about digital humanities.  I talked to them for about an hour about a number of projects, such as the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/), the digital edition I’d created for New Modernist Editing (https://nme-digital-ode.glasgow.ac.uk/), the Historical Thesaurus (https://ht.ac.uk/), Books and Borrowing (https://borrowing.stir.ac.uk/) and TheGlasgowStory (https://theglasgowstory.com/).  It went pretty well and it was nice to be able to talk about some of the projects I’ve been involved with for a change.

I also made some further tweaks to the Gentle Shepherd Performances page, which is now ready to launch, and helped Geert out with a few changes to the WordPress pages of the Anglo-Norman Dictionary.  I also made a few tweaks to the WordPress pages of the DSL website and finally managed to get a hotel room booked for the DHC conference in Sheffield in September.  I also made a couple of changes to the new Gaelic Tongues section of the Seeing Speech website and had a discussion with Eleanor about the filters for Speech Star.  Fraser had been in touch about 500 Historical Thesaurus categories that had been newly matched to OED categories, so I created a little script to add these connections to the online database.

I also had a Zoom call with the Speak For Yersel team.  They had been testing out the resource at secondary schools in the North East and have come away with lots of suggested changes to the content and structure of the resource.  We discussed all of these and agreed that I would work on implementing the changes the week after next.

Next week I’m going to be on holiday, which I have to say I’m quite looking forward to.

Week Beginning 13th June 2022

I worked on several different projects this week.  For the Books and Borrowing project I processed and imported a further register for the Advocates library that had been digitised by the NLS.  I also continued with the interactive map of Chambers library borrowers, although I couldn’t spend as much time on this as I’d hoped, as my access to Stirling University’s VPN had stopped working and without VPN access I can’t connect to the database and the project server.  It took a while to resolve the issue as access needs to be approved by some manager or other, but once it was sorted I got to work on some updates.

One thing I’d noticed last week was that when zooming and panning the historical map layer was throwing out hundreds of 403 Forbidden errors to the browser console.  This was not having any impact on the user experience, but was still a bit messy and I wanted to get to the bottom of the issue.  I had a very helpful (as always) chat with Chris Fleet at NLS Maps, who provided the historical map layer and he reckoned it was because the historical map only covers a certain area and moving beyond this was still sending requests for map tiles that didn’t exist.  Thankfully an option exists in Leaflet that allows you to set the boundaries for a map layer (https://leafletjs.com/reference.html#latlngbounds) and I updated the code to do just that, which seems to have stopped the errors.
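The fix itself is a one-liner in the tile layer options; a minimal sketch with a made-up tile URL and boundary coordinates:

var historicalLayer = L.tileLayer('https://example.org/tiles/{z}/{x}/{y}.png', {
    // Tiles are only requested inside these bounds, so panning beyond the historical map no longer triggers 403s
    bounds: L.latLngBounds([55.90, -3.25], [55.99, -3.10]),
    maxZoom: 18
});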

I then returned to the occupations categorisation, which was including far too many options.  I therefore streamlined the occupations, displaying the top-level occupation only.  I think this works a lot better (although I need to change the icon colour for ‘unknown’).  Full occupation information is still available for each borrower via the popup.

I also had to change the range slider for opacity as standard HTML range sliders don’t allow for double-ended ranges.  We require a double-ended range for the subscription period and I didn’t want to have two range sliders that looked different on one page.  I therefore switched to a range slider offered by the jQuery UI interface library (https://jqueryui.com/slider/#range).  The opacity slider still works as before, it just looks a little different.  Actually, it works better than before, as the opacity now changes as you slide rather than only updating after you mouse-up.

I then began to implement the subscription period slider.  This does not yet update the data.  It’s been pretty tricky to implement this.  The range needs to be dynamically generated based on the earliest and latest dates in the data, and dates are both year and month, which need to be converted into plain integers for the slider and then reinterpreted as years and months when the user updates the end positions.  I think I’ve got this working as it should, though.  When you update the ends of the slider the text above that lists the months and years updates to reflect this.  The next step will be to actually filter the data based on the chosen period.  Here’s a screenshot of the map featuring data categorised by the new streamlined occupations and the new sliders displayed:
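Behind that screenshot the slider wiring looks roughly like the following (element ids invented, jQuery and jQuery UI assumed, and this is a simplified sketch rather than the actual code): each year and month is flattened to a month index for the slider, then converted back to a month and year label when the handles move.

var monthNames = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];

function indexToLabel(minYear, index) {
    var year = minYear + Math.floor(index / 12);
    return monthNames[index % 12] + ' ' + year;
}

// In reality minYear and maxIndex are worked out from the earliest and latest dates in the data
var minYear = 1828, maxIndex = 23;

$('#subscription-slider').slider({
    range: true,
    min: 0,
    max: maxIndex,
    values: [0, maxIndex],
    slide: function (event, ui) {
        // Update the text above the slider as the handles are dragged
        $('#subscription-period-text').text(indexToLabel(minYear, ui.values[0]) + ' to ' + indexToLabel(minYear, ui.values[1]));
    }
});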

For the Speak For Yersel project I made a number of tweaks to the resource, which Jennifer and Mary are piloting with school children in the North East this week.  I added in a new grammatical question and seven grammatical quiz questions.  I tweaked the homepage text and updated the structure of questions 27-29 of the ‘sound about right’ activity.  I ensured that ‘Dumfries’ always appears as ‘Dumfries and Galloway’ in the ‘clever’ activity and follow-on and updated the ‘clever’ activity to remove the stereotype questions.  These were the ones where users had to rate the speakers from a region without first listening to any audio clips and Jennifer reckoned these were taking too long to complete.  I also updated the ‘clever’ follow-on to hide the stereotype options and switched the order of the listener and speaker options in the other follow-on activity for this type.

For the Speech Star project I replaced the data for the child speech error database with a new, expanded dataset and added in ‘Speaker Code’ as a filter option.  I also replicated the child speech and normalised speech databases from the clinical website we’re creating on the more academic teaching site we’re creating and also pulled in the IPA chart from Seeing Speech into this resource too.  Here’s a screenshot of how the child speech error database looks with the new ‘speaker code’ filter with ‘vowel disorder’ selected:

I also responded to Craig Lamont in Scottish Literature with some further feedback on the structure of his Burns Manuscript Database spreadsheet, which is now shaping up nicely.  Craig had also sent me an updated spreadsheet with data for the Ramsay Gentle Shepherd performances project.  I’d set this up (interactive map, timeline and filterable tabular data) a few weeks ago, migrating it to the University’s T4 website management system.  All had worked then, but when I logged into T4 and previewed the page I’d previously created I discovered it no longer worked.  The page hadn’t been updated since the end of May and I had no idea what had gone wrong.  I can only assume that the linked content (i.e. the links to the JavaScript files) had somehow become unlinked.  I decided, therefore, that it would be easier to just host the JavaScript files on another server I have direct access to rather than having to shoehorn it all into T4.  I made an updated version with the new dataset and this is working well.

I also made a couple of tweaks to the DSL this week, installing the TablePress plugin for the ancillary pages and creating a further alternative logo for the DSL’s Facebook posts.  I also returned to doing some work for the Anglo-Norman Dictionary, offering some advice to the editor Geert about incorporating publications and overhauling how cross references are displayed in the Dictionary Management System.

I updated the ‘View Entry’ page in the DMS.  Previously it only included cross references FROM the entry you’re looking at TO any other entries.  I.e. it only displayed content when the entry was of type ‘xref’ rather than ‘main’.  Now in addition to this there’s a further section listing all cross references TO the entry you’re looking at from any entry of type ‘xref’ that links to it.

In addition there is a button allowing you to view all entries that include a cross reference to the current entry anywhere in their XML – i.e. where an <xref> tag that features the current entry’s slug is found at any level in any other main entry’s XML.  This code is hugely memory intensive to run, as basically all 27,464 main entries need to be pulled into the script, with the full XML contents of each checked for matching xrefs.  For this reason the page doesn’t run the code each time the ‘view entry’ page is loaded but instead only runs when you actively press the button.  It takes a few seconds for the script to process, but after it does the cross references are listed in the same manner as the ‘pure’ xrefs in the preceding sections.

Finally, I participated in a Zoom-based focus group for the AHRC about the role of technicians in research projects this week.  It was great to take part, to share my views on my role and to hear from other people with similar roles at other organisations.

Week Beginning 6th June 2022

I’d taken Monday off this week to have an extra-long weekend following the jubilee holidays on Thursday and Friday last week.  On Tuesday I returned to work, to another meeting for Speak For Yersel and a list of further tweaks to the site, including many changes to three of the five activities and a new set of colours for the map marker icons, which make the markers much easier to differentiate.

I spent most of the week working on the Books and Borrowing project.  We’d been sent a new library register from the NLS and I spent a bit of time downloading the 700 or so images, processing them and uploading them into our system.  As usual, page numbers go a bit weird.  Page 632 is written as 634 and then after page 669 comes not 670 but 700!  I ran my script to bring the page numbers in the system into line with the oddities of the written numbers.  On Friday I downloaded a further library register which I’ll need to process next week.

My main focus for the project was the Chambers Library interactive map sub-site.  The map features the John Ainslie 1804 map from the NLS, and currently it uses the same modern map as I’ve used elsewhere in the front-end for consistency, although this may change.  The map defaults to having a ‘Map options’ pane open on the left, and you can open and close this using the button above it.  I also added a ‘Full screen’ button beneath the zoom buttons in the bottom right.  I also added this to the other maps in the front-end too. Borrower markers have a ‘person’ icon and the library itself has the ‘open book’ icon as found on other maps.

By default the data is categorised by borrower gender, with somewhat stereotypical (but possibly helpful) blue and pink colours differentiating the two.  There is one borrower with an ‘unknown’ gender and this is set to green.  The map legend in the top right allows you to turn on and off specific data groups.  The screenshot below shows this categorisation:

The next categorisation option is occupation, and this has some problems.  The first is there are almost 30 different occupations, meaning the legend is awfully long and so many different marker colours are needed that some of them are difficult to differentiate.  Secondly, most occupations only have a handful of people.  Thirdly, some people have multiple occupations, and if so these are treated as one long occupation, so we have both ‘Independent Means > Gentleman’ and then ‘Independent Means > Gentleman, Politics/Office Holders > MP (Britain)’.  It would be tricky to separate these out as the marker would then need to belong to two sets with two colours, plus what happens if you hide one set?  I wonder if we should just use the top-level categorisation for the groupings instead?  This would result in 12 groupings plus ‘unknown’, meaning the legend would be both shorter and narrower.  Below is a screenshot of the occupation categorisation as it currently stands:

The next categorisation is subscription type, which I don’t think needs any explanation.  I then decided to add in a further categorisation for number of borrowings, which wasn’t originally discussed but as I used the page I found myself looking for an option to see who borrowed the most, or didn’t borrow anything.  I added the following groupings, but these may change: 0, 1-10, 11-20, 21-50, 51-70, 70+ and have used a sequential colour scale (darker = more borrowings).  We might want to tweak this, though, as some of the colours are a bit too similar.  I haven’t added in the filter to select subscription period yet, but will look into this next week.

At the bottom of the map options is a facility to change the opacity of the historical map so you can see the modern street layout.  This is handy for example for figuring out why there is a cluster of markers in a field where ‘Ainslie Place’ was presumably built after the historical map was produced.

I decided to not include the marker clustering option in this map for now as clustering would make it more difficult to analyse the categorisation as markers from multiple groupings would end up clustered together and lose their individual colours until the cluster is split.  Marker hover-overs display the borrower name and the pop-ups contain information about the borrower.  I still need to add in the borrowing period data, and also figure out how best to link out to information about the borrowings or page images.  The Chambers Library pin displays the same information as found in the ‘libraries’ page you’ve previously seen.

Also this week I responded to a couple of queries from the DSL people about Google Analytics and the icon that gets used for the site when posting on Facebook.  Facebook was picking out the University of Glasgow logo rather than the DSL one, which wasn’t ideal.  Apparently there’s a ‘meta’ tag that you need to add to the site header in order for Facebook to pick up the correct logo, as discussed here: https://stackoverflow.com/questions/7836753/how-to-customize-the-icon-displayed-on-facebook-when-posting-a-url-onto-wall

I also created a new user for the Ayr place-names project and dealt with a couple of minor issues with the CMS that Simon Taylor had encountered.  I also investigated a certificate error with the ohos.ac.uk website and responded to a query about QR codes from fellow developer David Wilson.  Also, Craig Lamont in Scottish Literature got in touch about a spreadsheet listing Burns manuscripts that he’s been working on with a view to turning it into a searchable online resource, and I gave him some feedback about the structure of the spreadsheet.

Finally, I did a bit of work for the Historical Thesaurus, working on a further script to match up HT and OED categories based on suggestions by researcher Beth Beattie.  I found a script I’d produced back in 2018 that ran pattern matching on headings and I adapted this to only look at subcats within 02.02 and 02.03, picking out all unmatched OED subcats from these (there are 627) and then finding all unmatched HT categories where our ‘t’ numbers match the OED path.  Previously the script used the HT oedmaincat column to link up OED and HT but this no longer matches (e.g. HT ‘smarten up’ has ‘t’ nums 02.02.16.02 which matches OED 02.02.16.02 ‘to smarten up’ whereas HT ‘oedmaincat’ is ’02.04.05.02’).

The script lists the various pattern matches at the top of the page and the output is displayed in a table that can be copied and pasted into Excel.  Of the 627 OED subcats there are 528 that match an HT category.  However, some of them potentially match multiple HT categories.  These appear in red while one to one matches appear in green.  Some of these multiple matches are due to Levenshtein matches (e.g. ‘sadism’ and ‘sadist’) but most are due to there being multiple subcats at different levels with the exact same heading.  These can be manually tweaked in Excel and then I could run the updated spreadsheet through a script to insert the connections.  We also had an HT team meeting this week that I attended.

Week Beginning 30th May 2022

It was a three-day week as Thursday and Friday were bank holidays for the Queen’s Platinum Jubilee.  I spent most of the available time working on the Books and Borrowers project.  I had a chat with RA Alex Deans about the data for the Chambers Library sub-project that we’re hoping to launch in July.  Although this data is already in the system it needs additional latitude and longitude data so we can position borrowers on an interactive map.  We decided to add this data and some other data using the ‘additional fields’ system in the CMS and Alex is hopefully going to get this done by next week.

I’d made a start on the API for the project last week, and this week I completed the endpoint that displays all of the data that will be needed for the ‘Browse Libraries’ page, which can be accessed as JSON or CSV data.  This includes counts of registers, borrowing records, books and borrowers plus a breakdown of the number of borrowings per year at each library that will be used for the stacked column chart.  The systems reside on servers at Stirling University, and their setup has the database on a different server to the code.  This means there is an overhead when sending queries to the database as each one needs to be sent as an HTTP request rather than dealt with locally.  This has led me to be a bit more efficient when constructing queries.  For example, rather than running individual ‘count’ queries for each library after running an initial query to retrieve all library details I’ve instead used subqueries as part of the initial query so all the data including the counts gets processed and returned by the database via one HTTP request.

With the data retrieval aspects of the ‘browse libraries’ page completed I then moved on to developing the page itself.  It has an introductory section (with placeholder text for now) then a map showing the locations of the libraries.  Any libraries that currently have lat/lng data appear on this map.  The markers are clustered when zoomed out, with the number referring to the number of libraries in the cluster.  I selected a map design that I thought fitted in with the site, but this might change, and I used an open book icon for the library map marker on a red background (to match the site’s header text colour) and again this may change.  You can hover over a marker to see the library name and press on a marker to open a popup containing a link to the library, the library name and alternative names, location, foundation date, type and statistics about registers, books, borrowers and records.
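Here’s a rough sketch of how the map is put together, assuming Leaflet and the Leaflet.markercluster plugin are loaded; the element ID, tile layer and data shape are all illustrative rather than the project’s actual code:

const map = L.map('library-map').setView([56.5, -4.0], 6);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// Markers are clustered when zoomed out; the cluster number is the number
// of libraries in the cluster.
const clusters = L.markerClusterGroup();
libraries.forEach(function (lib) {
  if (lib.lat === null || lib.lng === null) return; // only libraries with coordinates appear
  const marker = L.marker([lib.lat, lib.lng]);
  marker.bindTooltip(lib.name); // library name on hover
  marker.bindPopup('<a href="' + lib.url + '">' + lib.name + '</a><br>Registers: ' +
    lib.register_count + ', Borrowing records: ' + lib.borrowing_count);
  clusters.addLayer(marker);
});
map.addLayer(clusters);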

Beneath the map is a tabular view of the data.  This is the exact same data as is found on the map.  Library names are buttons leading to the library’s page.  You can change the order of the table by pressing on a heading (e.g. to see which library has the most books).  Pressing a second time reverses the order.  Below is a screenshot showing the map and the table, with the table ordered by number of borrowing records:

Beneath the table is a stacked column chart showing borrowings at the libraries over time that I created using the extremely useful HighCharts JavaScript library (see https://www.highcharts.com/demo).  At the moment the borrowing records start somewhere between 1700 and 1710 and end somewhere between 1890 and 1899.  Actually, there are some borrowing records beyond even this, but these are presumably mistakes (e.g. one had a year of ‘179’ or something like that).  As generating a graph with a bar for each year would result in about 200 bars I decided this wasn’t feasible and instead grouped borrowings into decades.  This sort of works, although we still have many decades at the start and end that only have a few records, and we may limit the decades we focus on.  We’re also visualising the data from 18 libraries in the chart, which is a lot, and the legend (where you can hover over a name to highlight the data in the bars) takes up a lot of space under the chart.  However, you can open the menu to view the chart full screen, which makes it more legible.  You can also view the year data in a table by selecting the ‘data table’ option.  Below is a screenshot of the bar chart:

There are a couple of things I could do to make this more legible if required.  Firstly, we could use a stacked bar chart instead (https://www.highcharts.com/demo/bar-stacked).  The years would then be on the y-axis and we could have a very long chart with all of the years in place rather than aggregating to decades.  This would make it more difficult to view the legend and the x-axis tick marks, as you would need to scroll down to see them.  Secondly, we could stick with the decade view but then give the user the option of selecting a decade to view a new chart featuring the individual years in that decade.  This would make it harder for users to get the big picture all at once, although I guess the decade view would give that.
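For reference, the chart configuration is broadly along these lines (a simplified sketch assuming the HighCharts library is loaded, with placeholder decade labels and data rather than the real figures):

Highcharts.chart('borrowings-chart', {
  chart: { type: 'column' },
  title: { text: 'Borrowings per decade' },
  xAxis: { categories: ['1700s', '1710s', '1720s' /* ... one entry per decade */] },
  yAxis: { min: 0, title: { text: 'Borrowing records' } },
  plotOptions: { column: { stacking: 'normal' } }, // stacks the libraries within each column
  series: [
    { name: 'Library A', data: [12, 45, 60 /* ... */] },
    { name: 'Library B', data: [3, 20, 35 /* ... */] }
    // one series per library, 18 in total
  ]
});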

Also this week I checked up on the Speak For Yersel website, as we had sent the URL out to people with an interest in the Scots language at the end of last week.  When I checked on Wednesday we’d had 168 registered users.  These users had submitted 8,110 answers for the main questions plus 85 for the ‘drag onto map’ exercise and 85 for the transcript.  606 of those main answers are from people who have chosen ‘outside Scotland’.  I also realised that I’d set the markers to be smaller if there were more than 100 answers on a map, but these smaller markers were hard to see, so I’ve updated things to make them the same size no matter how many answers there are.

My other main task for the week was to finalise the transfer of the Uist Saints website.  We managed to get the domain name ownership transferred over to Glasgow and paid the subscription fee for the next nine years and the version of the site hosted at Glasgow can now be found here: https://uistsaints.co.uk/


Week Beginning 23rd May 2022

I’d completed all of the outstanding tasks for ‘Speak For Yersel’ last week so this week I turned my attention to several other projects.  For the Books and Borrowing project I wrote a script to strip out duplicate author records from the data and reassign any books associated with the duplicates to the genuine author records.  The script iterated through each author in the ‘duplicates’ spreadsheet, found all rows where the ‘AID’ did not match the ‘AID to keep’ column, reassigned any book author records from the former to the latter and then deleted the author record.  The script deleted 310 duplicate authors and reassigned 735 books to other authors, making the data in the content management system a lot cleaner.
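The logic of the script is roughly as follows (a sketch only: ‘rows’ stands for the parsed duplicates spreadsheet, runQuery() is a stand-in for the database layer and the table names are invented):

for (const row of rows) {
  if (row.AID === row.AIDtoKeep) continue; // this row is the genuine author record
  // point any book-author links at the record we're keeping...
  runQuery('UPDATE book_authors SET author_id = ? WHERE author_id = ?', [row.AIDtoKeep, row.AID]);
  // ...then delete the duplicate author
  runQuery('DELETE FROM authors WHERE author_id = ?', [row.AID]);
}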

I then migrated the Uist Saints website to a server at Glasgow and got everything working at a temporary URL.  All looked fine to me, although there was an issue with the homepage that needed investigating.  This issue was present on the live site too, resulting in the page content cutting off and displaying a lot of blank space and no footer, with lots of errors being displayed in the console.  I did some investigation into the errors and discovered that these were being caused by some JavaScript embedded in the homepage that had been treated like HTML by WordPress, which had added HTML line breaks (<br>) wherever there was a line break in the code, thereby breaking the JavaScript.  I updated the page to strip out all of the <br> tags and it now loads without any errors in the console, but whatever the JavaScript is supposed to be doing still isn’t working and there’s still a huge expanse of empty space and then no footer.
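Just to illustrate what the clean-up involved (the variable names are illustrative and the actual fix was made directly in the page editor), it amounts to stripping the inserted tags back out:

// Replace each WordPress-inserted <br> with a real line break so the script parses again.
const repairedScript = brokenScript.replace(/<br\s*\/?>/gi, '\n');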

The JavaScript appears to be attempting to display a map using the Leaflet mapping library, but using some sort of WordPress plugin to do so.  There are over 3,000 lines of JavaScript code in the page, which is really crazy.  Every single marker on the map (e.g. “Cladh Choinnich (burial ground and site of chapel)” at [57.157715,-7.301283]) has its own script comprising around 70 lines of code.  Sofia, the project RA, looked at the page and decided to try deleting the blocks of JavaScript, and this then seemed to solve the problem, which was great, as I was thinking I’d need to create a new map after somehow extracting all of the data.

I then moved on to the Ramsay ‘Gentle Shepherd’ data, and this week tackled the issue of importing the code I’d written into the University website’s T4 content management system.  I created a ‘one file’ version of the page that has everything incorporated in one single file – all the scripts, the data and the styles.  I was hoping I’d then be able to just upload this to T4 but I ran into a problem:

I selected the ‘Standard plain’ content type as I did for the Enlightenment map I created in T4 many years ago, but the ‘content’ box can only accept a maximum of 80,000 characters.  My ‘one file’ approach is around 404,000 characters so I can’t upload it.  I then wondered about using separate files, as I had done with the Enlightenment map, but the JSON data for the performances on its own is over 227,000 characters.  This data needs to be a single thing and can’t be split up into smaller chunks (at least not without then having to stitch the data back together in the JavaScript before it can be used every time someone loads the page, which would have an impact on the speed of the page).

I noticed that the Enlightenment map has a further content type called ‘_blank’ that isn’t available to me in the section where the performance data is to go.  This type allows up to 150,000 characters, but unfortunately this is still not big enough, and the Leaflet JavaScript library, which I also need to upload, is 141,000 characters so currently can’t be uploaded either.  I then looked into uploading the JSON data as a media file and I managed to upload it, but apparently media files only become active in the system when they are linked to from a T4 page using T4’s method of linking to a file.  The JSON file would only ever be loaded in via an AJAX call from the JavaScript code so would never work.  However, I did realise that I could upload the JavaScript file with the JSON data stored directly within it as a media file and then link to this (and also the Leaflet JavaScript file and the CSS files) from the T4 HTML file.  However, this wouldn’t work when using regular HTML tags to link to scripts and CSS files as T4 only activates media files when linked to using its own special way of inserting links.

A helpful guy called Rick in the Web Team suggested using the ‘standard’ content type and T4’s way of linking to files to get things working, and this did sort of work, but while the ‘standard’ content type allows you to manually edit the HTML, T4 then processes any HTML you enter, which included stripping out a lot of tags my code needed and overwriting other HTML tags, which was very frustrating.

However, I was able to view the source for the embedded media files in this template and then copy this into my ‘standard plain’ section, and this seems to have worked.  There were other issues, though, such as T4 applying its CSS styles AFTER any locally created styles, meaning a lot of my custom styles were being overwritten.  I managed to find a way around this and the section of the page is now working if you preview it in T4.

Unfortunately, to get this to work the JSON data needed to be embedded in the JavaScript file rather than loaded in as a separate file.  This is going to make it more difficult for non-technical people to edit the data directly in T4.  In order to do so someone would need to: download the ‘gspCode’ file from the Media Library (which T4 unhelpfully converts into a .txt file); rename the file to remove the .txt extension (so it ends in .js instead); find the data array in the file and make the changes to it; validate the data in the handy JSON validator https://jsonlint.com/; and then save the JS file and upload it as a replacement for the item in the Media Library.
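To make that a little more concrete, the uploaded file is structured roughly like this (the variable name and fields are illustrative placeholders rather than the real data):

// The performance data sits in a plain array near the top of the file; editing
// the data means editing this array and re-validating it as JSON before re-uploading.
const performances = [
  { "date": "1729-01-01", "venue": "Example venue", "location": "Edinburgh", "adaptor": "" }
  // ... one object per performance
];
// ...the rest of the file builds the table, filters and map from this array.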

With all of this out of the way I was hoping to begin work on the API and front-end for the Books and Borrowing project, and I did manage to make a start on this.  However, many further tweaks and updates came through from Jennifer Smith for the Speak For Yersel system, which we’re intending to send out to selected people next week, and I ended up spending most of the rest of the week on this project instead.  This included several Zoom calls and implementing countless minor tweaks to the website content, including homepage text, quiz questions and answer options, help text, summary text, replacing images, changing styles and other such things.  I also updated the maps to set their height dynamically based on the height of the browser window, ensuring that the map and the button beneath it are visible without scrolling (but also including a minimum height so the map never gets too small).  I also made the maps wider and the question area narrower, as there was previously quite a lot of wasted space when there was a 50/50 split between the two.
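The dynamic map sizing is a simple calculation along these lines (a sketch only; the element ID, the space reserved for the button and the minimum height are all illustrative values):

function resizeMap() {
  // fill the viewport, leaving room for the button beneath the map
  const available = window.innerHeight - 120;
  // never let the map get smaller than 400px, however small the window is
  document.getElementById('map').style.height = Math.max(available, 400) + 'px';
}
window.addEventListener('resize', resizeMap);
resizeMap();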

I also fixed a bug with the slider-based questions that was only affecting Safari and that prevented the ‘next’ button from activating.  This was because the code that listened for the slider changing was triggered when a slider was clicked on, but for it to work in Safari the event needed to be ‘change’ rather than ‘click’.  I also added in the new dictionary-based question type and added in the questions, although we then took these out again for now as we’d promised the DSL that the embedded school dictionary would only be used by the school children in our pilot.  I also added a question about whether the user has been to university to the registration page and then cleared out all of the sample data and users that we’d created during our testing before actual users begin using the resource next week.
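The gist of the Safari fix is just swapping the event that’s listened for (the selectors here are illustrative):

document.querySelectorAll('.question-slider').forEach(function (slider) {
  // 'change' fires reliably in Safari once the slider value is set, whereas 'click' did not
  slider.addEventListener('change', function () {
    document.querySelector('.next-button').disabled = false;
  });
});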

Week Beginning 16th May 2022

This week I finished off all of the outstanding work for the Speak For Yersel project.  The other members of the team (Jennifer and Mary) are both on holiday so I finished off all of the tasks I had on my ‘to do’ list, although there will certainly be more to do once they are both back at work again.  The tasks I completed were a mixture of small tweaks and larger implementations.  I made tweaks to the ‘About’ page text and changed the intro text to the ‘more give your word’ exercise.  I then updated the age maps for this exercise, which proved to be pretty tricky and time-consuming to implement as I needed to pull apart a lot of the existing code.  Previously these maps showed ‘60+’ and ‘under 19’ data for a question, with different colour markers for each age group showing those who would say a term (e.g. ‘Scunnered’) and grey markers for each age group showing those who didn’t say the term.  We have completely changed the approach now.  The maps now default to showing ‘under 19’ data only, with different colours for each different term.  There is now an option in the map legend to switch to viewing the ‘60+’ data instead.  I added in the text ‘press to view’ to try and make it clearer that you can change the map.  Here’s a screenshot:

I also updated the ‘give your word’ follow-on questions so that they are now rated in a new final page that works the same way as the main quiz.  In the main ‘give your word’ exercise I updated the quiz intro text and I ensured that the ‘darker dots’ explanatory text has now been removed for all maps.  I tweaked a few questions to change their text or the number of answers that are selectable, and I changed the ‘sounds about right’ follow-on ‘rule’ text and made all of the ‘rule’ words lower case.  I also made it so that when the user presses ‘check answers’ for this exercise a score is displayed to the right and the user is able to proceed directly to the next section without having to correct their answers.  They can still correct their answers if they want.

I then made some changes to the ‘She sounds really clever’ follow-on.  The index for this is now split into two sections, one for ‘stereotype’ data and one for ‘rating speaker’ data and you can view the speaker and speaker/listener results for both types of data.  I added in the option of having different explanatory text for each of the four perception pages (or maybe just two – one for stereotype data, one for speaker ratings) and when viewing the speaker rating data the speaker sound clips now appear beneath the map.  When viewing the speaker rating data the titles above the sliders are slightly different.  Currently when selecting the ‘speaker’ view the title is “This speaker from X sounds…” as opposed to “People from X sound…”.  When selecting the ‘speaker/listener’ view the title is “People from Y think this speaker from X sounds…” as opposed to “People from Y think people from X sound…”.  I also added a ‘back’ button to these perception follow-on pages so it’s easier to choose a different page.  Finally, I added some missing HTML <title> tags to pages (e.g. ‘Register’ and ‘Privacy’) and fixed a bug whereby the ‘explore more’ map sound clips weren’t working.

With my ‘Speak For Yersel’ tasks out of the way I could spend some time looking at other projects that I’d put on hold for a while.  A while back Eleanor Lawson contacted me about adding a new section to the Seeing Speech website where Gaelic speaker videos and data will be accessible, and I completed a first version this week.  I replicated the Speech Star layout rather than the /r/ & /l/ page layout as it seemed more suitable: the latter only really works for a limited number of records while the former works well with lots more (there are about 150 Gaelic records).  What this means is the data has a tabular layout and filter options.  As with Speech Star you can apply multiple filters and you can order the table by a column by clicking on its header (clicking a second time reverses the order).  I’ve also included the option to open multiple videos in the same window.  I haven’t included the playback speed options as the videos already include the clip at different speeds.  Here’s a screenshot of how the feature looks:

On Thursday I had a Zoom call with Laura Rattray and Ailsa Boyd to discuss a new digital edition project they are in the process of planning.  We had a really great meeting and their project has a lot of potential.  I’ve offered to give technical advice and write any technical aspects of the proposal as and when required, and their plan is to submit the proposal in the autumn.

My final major task for the week was to continue to work on the Ramsay ‘Gentle Shepherd’ data.  I overhauled the filter options that I implemented last week so they now work in a less confusing way when multiple types are selected.  I’ve also imported the updated spreadsheet, taking the opportunity to trim whitespace to cut down on strange duplicates in the filter options.  There are some typos that will still need to be fixed in the spreadsheet, though (e.g. we have ‘Glagsgow’ and ‘Glagsow’), plus some dates still need to be fixed.

I then created an interactive map for the project and have incorporated the data for which there are latitude and longitude values.  As with the Edinburgh Gazetteer map of reform societies (https://edinburghgazetteer.glasgow.ac.uk/map-of-reform-societies/) the number of performances at a venue is displayed in the map marker.  Hover over a marker to see info about the venue.  Click on it to open a list of performances.  Note that when zoomed out it can be difficult to make out individual markers, but we can’t really use clustering as on the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/) because this would get confusing: we’d have clustered numbers representing the number of markers in a cluster and then individual markers with a number representing the number of performances.  I guess we could remove the number of performances from the marker and just have this in the tooltip and / or popup, but it is quite useful to see all the numbers on the map.  Here’s a screenshot of how the map currently looks:

I still need to migrate all of this to the University’s T4 system, which I aim to tackle next week.
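For anyone curious, the numbered venue markers described above are produced along these lines (a sketch only, using Leaflet’s divIcon; the class names and data shape are illustrative):

venues.forEach(function (venue) {
  // bake the performance count into the marker itself
  const icon = L.divIcon({
    className: 'venue-marker',
    html: '<span>' + venue.performances.length + '</span>'
  });
  L.marker([venue.lat, venue.lng], { icon: icon })
    .bindTooltip(venue.name) // venue info on hover
    .bindPopup(venue.performances.map(function (p) {
      return p.date + ': ' + p.title;
    }).join('<br>')) // list of performances on click
    .addTo(map);
});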

Also this week I had discussions about migrating an externally hosted project website to Glasgow for Thomas Clancy.  I received a copy of the files and database for the website and have checked over things and all is looking good.  I also submitted a request for a temporary domain and I should be able to get a version of the site up and running next week.  I also regenerated a list of possible duplicate authors in the Books and Borrowing system after the team had carried out some work to remove duplicates.  I will be able to use the spreadsheet I have now to amalgamate duplicate authors, a task which I will tackle next week.

Week Beginning 9th May 2022

I spent most of the week continuing with the Speak For Yersel website, which is now nearing completion.  A lot of my time was spent tweaking things that were already in place, and we had a Zoom call on Wednesday to discuss various matters too.  I updated the ‘explore more’ age maps so they now include markers for young and old who didn’t select ‘scunnered’, meaning people can get an idea of the totals.  I also changed the labels slightly and the new data types have been given two shades of grey and smaller markers, so the data is there but doesn’t catch the eye as much as the data for the selected term.  I’ve updated the lexical ‘explore more’ maps so they now actually have labels and the ‘darker dots’ text (which didn’t make much sense for many maps) has been removed.  Kinship terms now allow for two answers rather than one, which took some time to implement in order to differentiate this question type from the existing ‘up to 3 terms’ option.  I also updated some of the pictures that are used and added in an ‘other’ option to some questions.  I also updated the ‘Sounds about right’ quiz maps so that they display different legends that match the question words rather than the original questionnaire options.  I needed to add in some manual overrides to the scripts that generate the data for use in the site for this to work.

I also added in proper text to the homepage and ‘about’ page.  The former included a series of quotes above some paragraphs of text and I wrote a little script that highlighted each quote in turn, which looked rather nice.  This then led onto the idea of having the quotes positioned on a map on the homepage instead, with different quotes in different places around Scotland.  I therefore created an animated GIF based on some static map images that Mary had created and this looks pretty good.

I then spent some time researching geographical word clouds, which we had been hoping to incorporate into the site.  After much Googling it would appear that there is no existing solution that does what we want, i.e. take a geographical area and use this as the boundaries for a word cloud, featuring different coloured words arranged at various angles and sizes to cover the area.  One potential solution that I was pinning my hopes on was this one: https://github.com/JohnHenryEden/MapToWordCloud which promisingly states “Turn GeoJson polygon data into wordcloud picture of similar shape.”  I managed to get the demo code to run, but I can’t get it to actually display a word cloud, even though the specifications for one are in the code.  I’ve tried investigating the code but I can’t figure out what’s going wrong.  No errors are thrown and there’s very little documentation.  All that happens is a map with a polygon area is displayed – no word cloud.

The word cloud aspects of the above are based on another package here: https://npm.io/package/wordcloud and this package allows you to specify a shape to use as an outline for the cloud, and one of the examples shows words taking up the shape of Taiwan: https://wordcloud2-js.timdream.org/#taiwan  However, this is a static image, not an interactive map – you can’t zoom into it or pan around it.  One possible solution may be to create images of our regions, generate static word cloud images as with the above and then stitch the images together to form a single static map of Scotland.  This would be a static image, though, and not comparable to the interactive maps we use elsewhere in the website.  Programmatically stitching the individual region images together might also be quite tricky.  I guess another option would be to just allow users to select an individual region and view the static word cloud (dynamically generated based on the data available when the user selects to view it) for the selected region, rather than joining them all together.

I also looked at some further options that Mary had tracked down.  The word cloud on a Leaflet map (http://hourann.com/2014/js-devs-dont-get-lost/leaflet-wordcloud.html?sydney) only uses a circle for the boundaries of the word cloud.  All of the code is written around the use of a circle (e.g. using diameters to work out placement) so couldn’t really be adapted to work with a complex polygon.  We could work out a central point for each region and have a circular word cloud positioned at that point, but we wouldn’t be able to make the words fill the entire region.  The second of Mary’s links (https://www.jasondavies.com/wordcloud/), as far as I can tell, is just a standard word cloud generator with no geographical options.  The third option (https://github.com/peterschretlen/leaflet-wordcloud) has no demo or screenshot or much information about it and I’m afraid I can’t get it to work.

The final option (https://dagjomar.github.io/Leaflet.ParallaxMarker/) is pretty cool but it’s not really a word cloud as such.  Instead it’s a bunch of labels set to specific lat/lng points and given different levels, which set their size and behaviour on scroll.  We could use this to set the highest rated words to the largest level, with lower rated words at lower levels, and position each randomly in a region, but it’s not really a word cloud and it would be likely that words would spill over into neighbouring regions.

Based on the limited options that appear to be out there, I think creating a working, interactive map-based word cloud would be a research project in itself and would take far more time than we have available.

Later on in the week Mary sent me the spreadsheet she’d been working on to list settlements found in postcode areas and to link these areas to the larger geographical regions we use.  This is exactly what we needed to fill in the missing piece in our system and I wrote a script that successfully imported the data.  For our 411 areas we now have 957 postcode records and 1,638 settlement records.  After that I needed to make some major updates to the system.  Previously a person was associated with an area (e.g. ‘Aberdeen Southwest’), but I needed to update this so that a person is associated with a specific settlement (e.g. ‘Ferryhill, Aberdeen’), which is then connected to the area and from the area to one of our 14 regions (e.g. ‘North East (Aberdeen)’).

I updated the system to make these changes and updated the ‘register’ form, which now features an autocomplete for the location – start typing a place and all matches appear.  Behind the scenes the location is saved and connected up to areas and regions, meaning we can now start generating real data, rather than a person being assigned a random area.  The perception follow-on now connects the respondent up with the larger region when selecting ‘listener is from’, although for now some of this data is not working.
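The location autocomplete could be wired up along these lines (a sketch using jQuery UI’s autocomplete widget, which is one way of doing it; the endpoint URL and field names are assumptions):

$('#location').autocomplete({
  minLength: 2,
  source: 'api/settlements.php', // returns matching settlements as JSON
  select: function (event, ui) {
    // store the settlement ID so the server can link it to its area and region
    $('#location-id').val(ui.item.id);
  }
});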

I then needed to further update the registration page to add in an ‘outside Scotland’ option so people who did not grow up in Scotland can use the site.  Adding in this option actually broke much of the site: registration requires an area with a GeoJSON shape associated with the selected location, otherwise it fails, and the submission of answers requires this shape in order to generate a random marker point, which also failed when the shape wasn’t present.  I updated the scripts to fix these issues, meaning an answer submitted by an ‘outside’ person has a zero for both latitude and longitude, but then I also needed to update the script that gets the map data to ensure that none of these ‘outside’ answers were returned in any of the data used in the site (both for maps and for non-map visualisations such as the sliders).  So, much has changed and hopefully I haven’t broken anything whilst implementing these changes.  It does mean that ‘outside’ people can now be included and we can export and use their data in future, even though it is not used in the current site.
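The filtering out of ‘outside’ answers boils down to something like this (a sketch; the field names are illustrative):

// Answers from 'outside Scotland' respondents are stored with zero latitude and
// longitude, so the map data scripts simply skip them.
const mapAnswers = allAnswers.filter(function (a) {
  return !(a.lat === 0 && a.lng === 0);
});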

Further tweaks I implemented this week included: changing the font sizes of some headings and buttons; renaming the ‘activities’ and ‘more’ pages as requested; adding ‘back’ buttons from all ‘activity’ and ‘more’ pages back to the index pages; adding an intro page to the click exercise as previously it just launched into the exercise whereas all others have an intro.  I also added summary pages to the end of the click and perception activities with links through to the ‘more’ pages and removed the temporary ‘skip to quiz’ option.  I also added progress bars to the click and perception activities.  Finally, I switched the location of the map legend from top right to top left as I realised when it was in the top right it was always obscuring Shetland whereas there’s nothing in the top left.  This has meant I’ve had to move the region label to the top right instead.

Also this week I continued to work on the Allan Ramsay ‘Gentle Shepherd’ performance data.  I added in faceted browsing to the tabular view, adding in a series of filter options for location, venue, adaptor and such things.  You can select any combination of filters (e.g. multiple locations and multiple years in combination).  When you select an item of one sort the limit options of the other sorts update to only display those relevant to the limited data.  However, the display of limiting options can get a bit confusing once multiple limiting types have been selected.  I will try and sort this out next week.  There are also multiple occurrences of items in the limiting options (e.g. two Glasgows) because the data has stray trailing spaces in some rows (‘Glasgow’ vs ‘Glasgow ’) and I’ll need to see about trimming these out next time I import the data.

Also this week I arranged for the old DSL server to be taken offline, as the new website has now been operating successfully for two weeks.  I also had a chat with Katie Halsey about timescales for the development of the Books and Borrowing front-end.  I also imported a new disordered paediatric speech dataset into the Speech Star website, which included around double the number of records, new video files and a new ‘speaker code’ column.  Finally, I participated in a Zoom call for the Scottish Place-Names database where we discussed the various place-names surveys that are in progress and the possibility of creating an overarching search across all systems.