Week Beginning 1st November 2021

I spent most of my time working on the Speak For Yersel project this week, including Zoom calls on Tuesday and Friday. Towards the start of the week I created a new version of the exercise I put together last week. This version uses a new sound file and transcript and new colours based on the SCOSYA palette. It also has different fonts and a larger margin on the left and right of the screen. I’ve also updated the way the exercise works to allow you to listen to the clip up to three times, with the ‘clicks’ on subsequent listens adding to rather than replacing the existing ones. I’ve had to add in a new ‘Finish’ button as information can no longer be processed automatically when the clip finishes. I’ve moved the ‘Play’ and ‘Finish’ buttons to a new line above the progress bar as the buttons weren’t working well on one line on a narrow screen. I’ve also replaced the icon used when logging a ‘click’ and changed the button text from ‘Log’ to ‘Press’. Here’s a screenshot of the mockup in action:

I then gave some thought to the maps, specifically what data we’ll be generating from the questions and how it might actually form a heatmap or a marker-based map. I haven’t seen any documents that go into this yet, and it’s something we need to decide upon if I’m going to start generating maps. I wrote a document detailing how data could be aggregated and sent it to the team for discussion. I’m going to include the full text here so I’ve got a record of it:

The information we will have about users is:

  1. Rough location based on the first part of their postcode (e.g. G12) from which we will ascertain a central latitude / longitude point
  2. Which one of the 12 geographical areas this point is in (e.g. Glasgow)

There will likely be many (tens, hundreds or more) users with the same geographical information (e.g. an entire school over several years).  If we’re plotting points on a map this means one point will need to represent the answers of all of these people.

We are not dealing with the same issues as the Manchester Voices heatmaps. Their heatmaps represent a single term, e.g. ‘Broad’, and each map represents a binary choice: for a given location the term is either there or it isn’t. What we are dealing with in our examples is multiple options.

For an ‘acceptability’ question such as ‘Gonnae you leave me alone’ we have four possible answers: ‘I’d say this myself’, ‘I wouldn’t say this, but people where I live do’, ‘I’ve heard some people say this (outside my area, on TV etc)’ and ‘I’ve never heard anyone say this’. If we could convert these into ratings (0–3, with ‘I’d say this myself’ being 3 and ‘I’ve never heard anyone say this’ being 0) then we could plot a heatmap with the data.

However, we are not dealing with data comparable to Manchester’s, where users draw areas and the intersections of these areas establish the pattern of the heatmap. What we have are distinct geographical areas (e.g. G12) with no overlap between them and possibly hundreds of respondents within each. We would need to aggregate the data for each area to get a single figure for it, but as we’re not dealing with a binary choice this is tricky. E.g. if it were like the Manchester study and we were looking for the presence of ‘broad’, and 10 of the 15 respondents at location Y had selected ‘broad’, then we could generate a percentage and say that 66% of respondents there used ‘broad’.

Instead, what we might have for our 15 respondents is: 8 said ‘I’d say this myself’ (53%), 4 said ‘I wouldn’t say this, but people where I live do’ (26%), 2 said ‘I’ve heard some people say this (outside my area, on TV etc)’ (13%) and 1 said ‘I’ve never heard anyone say this’ (7%). That gives us four different figures. How would we convert these into a single figure that could then be used?

If we assign a rating of 0–3 to the four options then we can multiply each percentage by its rating score and add the four results together to give one overall score out of a maximum of 300 (if 100% of respondents chose the highest rating of 3). In the example here the scores would be 53% × 3 = 159, 26% × 2 = 52, 13% × 1 = 13 and 7% × 0 = 0, giving a total score of 224 out of 300, or roughly 75% – a single figure for the location that can then be used to shade a marker or feed into a heatmap.
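To make the arithmetic concrete, here’s a rough sketch of that calculation (using just the example answers and counts from above; it works from the raw counts rather than the rounded percentages, so it comes out at around 75%):

```typescript
// Ratings assigned to the four 'acceptability' answers (3 = most local).
const ratings: Record<string, number> = {
  "I'd say this myself": 3,
  "I wouldn't say this, but people where I live do": 2,
  "I've heard some people say this (outside my area, on TV etc)": 1,
  "I've never heard anyone say this": 0,
};

// Example counts for one location: the 15 respondents described above.
const counts: Record<string, number> = {
  "I'd say this myself": 8,
  "I wouldn't say this, but people where I live do": 4,
  "I've heard some people say this (outside my area, on TV etc)": 2,
  "I've never heard anyone say this": 1,
};

// Weighted sum of counts divided by the maximum possible score
// (every respondent choosing the top rating of 3), as a percentage.
function acceptabilityScore(tally: Record<string, number>): number {
  const total = Object.values(tally).reduce((a, b) => a + b, 0);
  const weighted = Object.entries(tally)
    .reduce((sum, [answer, n]) => sum + n * ratings[answer], 0);
  return (weighted / (total * 3)) * 100;
}

console.log(acceptabilityScore(counts).toFixed(1)); // ≈ 75.6
```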

For the ‘Word Choice’ exercises (whether we allow a single word or multiple words to be selected) we need to aggregate and represent non-numeric data, and this is going to be trickier. For example, if person A selects ‘Daftie’ and ‘Bampot’ and person B selects ‘Daftie’, ‘Gowk’ and ‘Eejit’, and both people have the same postcode, how are these selections to be represented at the same geographical point on the map?

We could pick out the most popular word at each location and translate it into a percentage. E.g. at location Y, 10 people selected ‘Daftie’, 6 selected ‘Bampot’, 2 selected ‘Eejit’ and 1 selected ‘Gowk’ out of a total of 15 participants. We would then select ‘Daftie’ as the representative term, with 66% of participants selecting it. Across the map, wherever ‘Daftie’ is the representative term the marker is given a red colour, with darker shades representing higher percentages; where ‘Eejit’ is the representative term it could be given shades of blue, etc. We could include a popup or sidebar giving the actual data at each location, including the other words and their percentages, either in tabular form or visually (e.g. a pie chart). This approach would work as individual points, or could possibly work as a heatmap with multiple colours, although it would then be trickier to include a popup or sidebar. The overall approach would be similar to the NYT ice-hockey map:

Note, however, that for the map itself we would be ignoring everything other than the most commonly selected term at each location.
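As a quick sketch, picking out the representative term for a location could be as simple as the following, assuming we have a tally of selections and a participant count for each location (the structures here are hypothetical):

```typescript
interface RepresentativeTerm {
  term: string;
  percentage: number; // % of participants at the location who selected it
}

// 'selections' maps each word to the number of participants who chose it;
// 'participants' is the number of people who answered at this location.
// Selections can overlap if multiple words are allowed, so percentages are
// of participants rather than of total selections.
function representativeTerm(
  selections: Record<string, number>,
  participants: number
): RepresentativeTerm {
  const [term, count] = Object.entries(selections).sort((a, b) => b[1] - a[1])[0];
  return { term, percentage: (count / participants) * 100 };
}

// The example above: 15 participants at location Y.
console.log(representativeTerm({ Daftie: 10, Bampot: 6, Eejit: 2, Gowk: 1 }, 15));
// → { term: 'Daftie', percentage: 66.66… }
```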

Alternatively, we could have individual maps or map layers for each word as a way of representing all selected words rather than just the most popular one. We would still convert the selections into a percentage (e.g. out of 15 participants at location Y, 10 people selected ‘Daftie’, giving us a figure of 66%) and assign a colour and shade to each form (e.g. ‘Daftie’ is shades of red, with a darker shade meaning a higher percentage). You would then be able to switch from the map for one form to that of another to show how the distribution changes (e.g. the ‘Daftie’ map has darker shades in the North East, the ‘Eejit’ map has darker shades in the South West), or look at a series of small maps for each form side by side to compare them all at once. This approach would be comparable to the maps shown towards the end of the Manchester YouTube video for ‘Strong’, ‘Soft’ and ‘Broad’ (https://www.youtube.com/watch?v=ZosWTMPfqio):
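If we used Leaflet for this, its layer control would be one way of letting people flip between the per-word maps. A rough sketch, assuming we had already built a GeoJSON FeatureCollection per word with a percentage property on each point (all of the names here are hypothetical):

```typescript
import * as L from 'leaflet';

// One GeoJSON FeatureCollection per word, each point carrying the percentage
// of participants at that location who selected the word (hypothetical data).
declare const wordData: Record<string, GeoJSON.FeatureCollection>;

const map = L.map('map').setView([56.5, -4.0], 7); // base tile layer omitted

// Build a layer per word, shading circle markers by percentage: darker = higher.
const wordLayers: Record<string, L.Layer> = {};
for (const [word, data] of Object.entries(wordData)) {
  wordLayers[word] = L.geoJSON(data, {
    pointToLayer: (feature, latlng) =>
      L.circleMarker(latlng, {
        radius: 8,
        fillOpacity: 0.3 + 0.7 * ((feature.properties?.percentage ?? 0) / 100),
      }),
  });
}

// Show one word's map initially; the control (radio buttons) then switches
// between the 'Daftie' map, the 'Eejit' map and so on.
Object.values(wordLayers)[0]?.addTo(map);
L.control.layers(wordLayers).addTo(map);
```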

Another alternative would be to have clusters of markers at each location, with one marker per term. For example, if there are 6 possible terms then each location on the map would consist of a cluster of 6 markers, each of a different colour representing the term and each of a different shade representing the percentage of people who selected that term at the location. However, this approach risks getting very cluttered, especially when zoomed out, may present the user with too much information, and is in many ways similar to the visualisations we investigated and decided not to use for SCOSYA. For example:

Look at the marker for Arbroath in the example above: this could be used to show four terms, with the different sizes of each section showing the relative percentages of respondents who chose each.

A further thing to consider is whether we actually want to use heatmaps at all. A choropleth map might work better. From this page (https://towardsdatascience.com/all-about-heatmaps-bb7d97f099d7), here is an explanation:

“Choropleth maps are sometimes confused with heat maps. A choropleth map features different shading patterns within geographic boundaries to show the proportion of a variable of interest². In contrast, a heat map does not correspond to geographic boundaries. Choropleth maps visualize the variability of a variable across a geographic area or within a region. A heat map uses regions drawn according to the variable’s pattern, rather than the a priori geographic areas of choropleth maps¹. The Choropleth is aggregated into well-known geographic units, such as countries, states, provinces, and counties.”

An example of a choropleth map is:

We are going to be collecting the postcode district for every respondent and we could use this as the basis for our maps. GeoJSON-encoded data for postcode districts is available. For example, here are all of the districts within the ‘G’ postcode area: https://github.com/missinglink/uk-postcode-polygons/blob/master/geojson/G.geojson

Therefore we could generate choropleth maps comparable to the US one above based on these postcode districts (leaving districts with no respondents blank). But perhaps postcode districts are too small an area and we may not get sufficient coverage.
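As a rough sketch of how this might look in Leaflet, assuming we had already computed a 0–100 score per district (the colour scale, the score lookup and the GeoJSON property name are all assumptions, and the URL is simply the raw version of the file linked above):

```typescript
import * as L from 'leaflet';

// Aggregated score (0-100) per postcode district, e.g. from the acceptability
// calculation earlier (hypothetical data).
declare const districtScores: Record<string, number>;

const map = L.map('map').setView([55.86, -4.25], 9); // base tile layer omitted

// Simple sequential colour scale: darker red for higher scores.
function shade(score: number): string {
  const shades = ['#fee5d9', '#fcae91', '#fb6a4a', '#de2d26', '#a50f15'];
  return shades[Math.min(4, Math.floor(score / 20))];
}

// Load the 'G' postcode polygons and shade each district by its score,
// leaving districts with no respondents unfilled.
fetch('https://raw.githubusercontent.com/missinglink/uk-postcode-polygons/master/geojson/G.geojson')
  .then((res) => res.json())
  .then((geojson) => {
    L.geoJSON(geojson, {
      style: (feature) => {
        const district = feature?.properties?.name; // e.g. 'G12' (property name assumed)
        const score = districtScores[district];
        return score === undefined
          ? { fillOpacity: 0, color: '#999', weight: 1 }
          : { fillColor: shade(score), fillOpacity: 0.7, color: '#fff', weight: 1 };
      },
    }).addTo(map);
  });
```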

There is an interesting article about generating bivariate choropleth maps here:

https://www.joshuastevens.net/cartography/make-a-bivariate-choropleth-map/

These enable two datasets to be displayed on one map, for example the percentage of people selecting ‘Daftie’ split into 25% chunks AND the percentage of people selecting ‘Eejit’ similarly split into 25% chunks, like this (only it would be 4×4 not 3×3):
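Mechanically, the classification behind a bivariate map like this is just two independent binnings combined into a single class. A small illustrative sketch of a 4×4 version:

```typescript
// Combine two percentages (e.g. % selecting 'Daftie' and % selecting 'Eejit')
// into one of 16 bivariate classes, each variable binned into 25% chunks (0-3).
// The class index would then be mapped onto a 4x4 grid of colours.
function bivariateClass(pctA: number, pctB: number): number {
  const bin = (pct: number) => Math.min(3, Math.floor(pct / 25));
  return bin(pctA) * 4 + bin(pctB); // 0-15
}

console.log(bivariateClass(66, 13)); // bin 2 for 'Daftie', bin 0 for 'Eejit' → class 8
```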

However, there is a really good reply about why cramming a lot of different data into one map is a bad idea here: https://ux.stackexchange.com/questions/87941/maps-with-multiple-heat-maps-and-other-data and it’s well worth a read (despite calling a choropleth map a heat map).

After circulating the document we had a further meeting, and it turns out the team don’t want to aggregate the data as such – what they want is an individual marker for each respondent, but arranged randomly throughout the geographical area the respondent is from, to give a general idea of what respondents in an area are saying without giving away their exact locations. It’s an interesting approach and I’ll need to see whether I can find a way to randomly position markers so that they cover a GeoJSON polygon.
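My first thought is rejection sampling: pick random points within the polygon’s bounding box and keep only those that fall inside the polygon itself. A rough sketch (plain ray-casting against the outer ring, so it assumes simple polygons without holes; not a final implementation):

```typescript
type Ring = [number, number][]; // [lng, lat] pairs, as in GeoJSON coordinates

// Standard ray-casting test: is the point inside the polygon's outer ring?
function pointInRing(lng: number, lat: number, ring: Ring): boolean {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    const crosses =
      (yi > lat) !== (yj > lat) &&
      lng < ((xj - xi) * (lat - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Rejection sampling: random points in the bounding box, kept only if they
// land inside the polygon, until we have one point per respondent.
function randomPointsInPolygon(ring: Ring, count: number): [number, number][] {
  const lngs = ring.map((p) => p[0]);
  const lats = ring.map((p) => p[1]);
  const [minLng, maxLng] = [Math.min(...lngs), Math.max(...lngs)];
  const [minLat, maxLat] = [Math.min(...lats), Math.max(...lats)];
  const points: [number, number][] = [];
  while (points.length < count) {
    const lng = minLng + Math.random() * (maxLng - minLng);
    const lat = minLat + Math.random() * (maxLat - minLat);
    if (pointInRing(lng, lat, ring)) points.push([lng, lat]);
  }
  return points;
}

// Usage: one marker per respondent, scattered across their postcode district,
// e.g. randomPointsInPolygon(districtFeature.geometry.coordinates[0], respondents.length)
```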

Moving on to other projects, I also worked on the Books and Borrowers project, running a script to remove blank pages from all of the Advocates registers and discussing some issues with the Innerpeffray data and how we might deal with them. I also set up the initial infrastructure for the ‘Our Heritage, Our Stories’ project website for Marc Alexander and Lorna Hughes and dealt with some requests from the DSL’s IT people about updating the DNS record for the website. In addition, I had an email conversation with Gerry Carruthers about setting up a website for the archive of the International Journal of Scottish Theatre and Screen and made a few minor tweaks to the mockups for the STAR project.

Finally, I continued to work on the Anglo-Norman Dictionary, firstly sorting out an issue with Greek characters not displaying properly and secondly working on the redating of citations where a date from a varlist tag should be used as the citation date. I wrote a script that picked out the 465 entries marked in a spreadsheet as needing to be updated and processed them: first updating each entry’s XML to replace the citation with the updated one, then replacing the date fields for the citation, and finally regenerating the earliest date for the entry if the change in citation date affected it. The script seemed to run perfectly on my local PC, based on a number of entries I checked, so I ran it on the live database. All seemed to work fine, but it looks like the earliest dates for entries haven’t been updated as often as expected, so I’m going to have to do some further investigation next week.
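For reference, the final step should behave along these lines (a sketch of the intended logic only, using simplified structures rather than the actual entry XML and database fields):

```typescript
interface Citation {
  dateFrom: number; // the year now used to date the citation (post-redating)
}

interface Entry {
  earliestDate: number;
  citations: Citation[];
}

// After a citation's date has been replaced with its varlist date, the entry's
// earliest date should be recalculated from all of its citations, whether the
// redated citation now predates everything else or the old earliest citation
// has been pushed later.
function regenerateEarliestDate(entry: Entry): Entry {
  const earliest = Math.min(...entry.citations.map((c) => c.dateFrom));
  return { ...entry, earliestDate: earliest };
}
```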