Week Beginning 9th May 2022

I spent most of the week continuing with the Speak For Yersel website, which is now nearing completion.  A lot of my time was spent tweaking things that were already in place, and we had a Zoom call on Wednesday to discuss various matters too.  I updated the ‘explore more’ age maps so they now include markers for young and old respondents who didn’t select ‘scunnered’, giving people an idea of the totals.  I also changed the labels slightly, and the new data types have been given two shades of grey and smaller markers, so the data is there but doesn’t catch the eye as much as the data for the selected term.  The lexical ‘explore more’ maps now actually have labels, and the ‘darker dots’ text (which didn’t make much sense for many maps) has been removed.  Kinship terms now allow two answers rather than one, which took some time to implement in order to differentiate this question type from the existing ‘up to 3 terms’ option.  I also updated some of the pictures and added an ‘other’ option to some questions.  Finally, I updated the ‘Sounds about right’ quiz maps so that they display legends matching the question words rather than the original questionnaire options; this required adding some manual overrides to the scripts that generate the data used in the site.
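For illustration, here’s a minimal Leaflet sketch of that de-emphasis approach; the styles, field names and data shape are my own placeholders rather than the site’s actual code:

```javascript
// Assumes an initialised Leaflet `map` and an `answers` array.
// Answers for the selected term are plotted prominently; the two new
// data types (young/old who didn't pick the term) get muted greys and
// smaller markers so they don't catch the eye as much.
const selectedStyle = { radius: 8, fillColor: '#d73027', fillOpacity: 0.9, stroke: false };
const mutedStyles = {
  young: { radius: 4, fillColor: '#bbbbbb', fillOpacity: 0.6, stroke: false },
  old:   { radius: 4, fillColor: '#888888', fillOpacity: 0.6, stroke: false }
};

answers.forEach(function (a) {
  const style = a.selectedTerm ? selectedStyle : mutedStyles[a.ageGroup];
  L.circleMarker([a.lat, a.lng], style).addTo(map);
});
```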

I also added proper text to the homepage and ‘about’ page.  The former includes a series of quotes above some paragraphs of text, and I wrote a little script that highlights each quote in turn, which looked rather nice.  This then led on to the idea of positioning the quotes on a map on the homepage instead, with different quotes in different places around Scotland.  I therefore created an animated GIF based on some static map images that Mary had made, and this looks pretty good.
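The quote-cycling script is essentially just a timer toggling a CSS class; a rough sketch of the idea (the class names and interval are assumptions):

```javascript
// Highlight each homepage quote in turn by cycling a CSS class.
const quotes = document.querySelectorAll('.quote');
let current = 0;
setInterval(function () {
  quotes.forEach(function (q) { q.classList.remove('highlight'); });
  quotes[current].classList.add('highlight');
  current = (current + 1) % quotes.length; // wrap back to the first quote
}, 3000); // move on every three seconds
```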

I then spent some time researching geographical word clouds, which we had been hoping to incorporate into the site.  After much Googling it would appear that there is no existing solution that does what we want, i.e. take a geographical area and use it as the boundary for a word cloud, with differently coloured words arranged at various angles and sizes to fill the area.  One potential solution I was pinning my hopes on was this one: https://github.com/JohnHenryEden/MapToWordCloud which promisingly states “Turn GeoJson polygon data into wordcloud picture of similar shape.”  I managed to get the demo code to run, but I couldn’t get it to actually display a word cloud, even though the specifications for one are in the code.  I tried investigating the code but couldn’t figure out what was going wrong: no errors are thrown and there’s very little documentation.  All that happens is that a map with a polygon area is displayed – no word cloud.

The word cloud aspects of the above are based on another package: https://npm.io/package/wordcloud and this package allows you to specify a shape to use as an outline for the cloud, and one of the examples shows words taking up the shape of Taiwan: https://wordcloud2-js.timdream.org/#taiwan  However, the result is a static image rather than an interactive map – you can’t zoom into it or pan around it.  One possible solution may be to create images of our regions, generate static word cloud images as above and then stitch them together to form a single static map of Scotland.  This would still not be comparable to the interactive maps we use elsewhere in the website, and programmatically stitching the individual region images together might also be quite tricky.  Another option would be to let users select an individual region and view the static word cloud for it (dynamically generated from the data available at the time), rather than joining them all together.
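For reference, here’s roughly how the shaped cloud works in wordcloud2.js, going by its Taiwan demo: the region mask is drawn onto the canvas first and the library is told not to clear it, so words are only placed in the remaining empty space.  The mask file and word list below are placeholders:

```javascript
// Draw a mask (opaque everywhere EXCEPT inside the region), then let
// wordcloud2.js fill the unoccupied pixels with words.
const canvas = document.getElementById('cloud-canvas');
const ctx = canvas.getContext('2d');
const mask = new Image();
mask.src = 'region-mask.png'; // placeholder mask image
mask.onload = function () {
  ctx.drawImage(mask, 0, 0, canvas.width, canvas.height);
  WordCloud(canvas, {
    list: [['scunnered', 26], ['word2', 20], ['word3', 14]], // [term, weight]
    gridSize: 8,
    weightFactor: 2,
    clearCanvas: false // keep the mask so the cloud stays inside the shape
  });
};
```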

I also looked at some further options that Mary had tracked down.  The word cloud on a Leaflet map (http://hourann.com/2014/js-devs-dont-get-lost/leaflet-wordcloud.html?sydney) only uses a circle for the boundary of the word cloud.  All of the code is written around the use of a circle (e.g. using diameters to work out placement), so it couldn’t really be adapted to work with a complex polygon.  We could work out a central point for each region and position a circular word cloud at that point, but we wouldn’t be able to make the words fill the entire region.  The second of Mary’s links (https://www.jasondavies.com/wordcloud/) is, as far as I can tell, just a standard word cloud generator with no geographical options.  The third option (https://github.com/peterschretlen/leaflet-wordcloud) has no demo, screenshot or much information about it, and I’m afraid I can’t get it to work.
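If we did fall back on the centroid idea, finding each region’s central point is straightforward with Turf.js; a quick sketch (the GeoJSON property name and the pre-rendered cloud images are my assumptions):

```javascript
// Assumes a Leaflet `map` and a `regions` GeoJSON FeatureCollection.
// Anchor a pre-rendered circular word cloud image at each region's
// centroid; Turf returns coordinates as [lng, lat].
regions.features.forEach(function (feature) {
  const c = turf.centroid(feature).geometry.coordinates;
  L.marker([c[1], c[0]], {
    icon: L.divIcon({
      html: '<img src="clouds/' + feature.properties.id + '.png">',
      className: 'region-cloud', // styled elsewhere
      iconSize: [120, 120]
    })
  }).addTo(map);
});
```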

The final option (https://dagjomar.github.io/Leaflet.ParallaxMarker/) is pretty cool, but it’s not really a word cloud as such.  Instead it’s a set of labels placed at specific lat/lng points and given different levels that set their size and their behaviour on scroll.  We could use this to assign the highest-rated words the largest level, with lower-rated words at lower levels, and position each randomly within a region, but it still wouldn’t really be a word cloud, and words would be likely to spill over into neighbouring regions.

Based on the limited options that appear to be out there, I think creating a working, interactive map-based word cloud would be a research project in itself and would take far more time than we have available.

Later in the week Mary sent me the spreadsheet she’d been working on that lists the settlements found in each postcode area and links these areas to the larger geographical regions we use.  This is exactly what we needed to fill in the missing piece in our system, and I wrote a script that successfully imported the data.  For our 411 areas we now have 957 postcode records and 1638 settlement records.  After that I needed to make some major updates to the system.  Previously a person was associated with an area (e.g. ‘Aberdeen Southwest’), but I needed to change this so that a person is associated with a specific settlement (e.g. ‘Ferryhill, Aberdeen’), which is then connected to its area, and from the area to one of our 14 regions (e.g. ‘North East (Aberdeen)’).
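In other words, the lookup now runs through three levels.  A tiny sketch of the chain (the ids and field names are purely illustrative, not the real database schema):

```javascript
// Resolve a respondent's region via settlement -> area -> region.
const settlements = { 412: { name: 'Ferryhill, Aberdeen', areaId: 37 } };
const areas = { 37: { name: 'Aberdeen Southwest', regionId: 6 } };
const regions = { 6: { name: 'North East (Aberdeen)' } };

function resolveRegion(settlementId) {
  const area = areas[settlements[settlementId].areaId];
  return regions[area.regionId].name;
}

console.log(resolveRegion(412)); // "North East (Aberdeen)"
```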

I updated the system to make these changes and reworked the ‘register’ form, which now features an autocomplete for the location – start typing a place and all matches appear.  Behind the scenes the location is saved and connected up to its area and region, meaning we can now start generating real data rather than assigning each person a random area.  The perception follow-on now connects the respondent up with the larger region when selecting ‘listener is from’, although for now some of this data is not working.
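A minimal sketch of how such an autocomplete can work, here using a plain HTML datalist; the endpoint URL and response shape are my assumptions rather than the site’s actual implementation:

```javascript
// Fetch matching settlements as the user types and offer them as options.
// Assumes <input id="location" list="location-options"> plus a
// <datalist id="location-options"> in the page.
const input = document.getElementById('location');
const list = document.getElementById('location-options');
input.addEventListener('input', async function () {
  if (input.value.length < 2) return; // wait for a couple of characters
  const res = await fetch('/api/settlements?q=' + encodeURIComponent(input.value));
  const matches = await res.json(); // assumed shape: [{ id, name }, ...]
  list.innerHTML = '';
  matches.forEach(function (m) {
    const opt = document.createElement('option');
    opt.value = m.name;
    list.appendChild(opt);
  });
});
```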

I then needed to further update the registration page to add an ‘outside Scotland’ option so that people who did not grow up in Scotland can use the site.  Adding this option actually broke much of the site: registration requires an area with a GeoJSON shape associated with the selected location, otherwise it fails, and the submission of answers requires this shape in order to generate a random marker point, which also failed when the shape wasn’t present.  I updated the scripts to fix these issues, so an answer submitted by an ‘outside’ person now has a zero for both latitude and longitude, but I then also needed to update the script that gets the map data to ensure that none of these ‘outside’ answers are returned in any of the data used in the site (both for maps and for non-map visualisations such as the sliders).  So, much has changed, and hopefully I haven’t broken anything whilst implementing these changes.  It does mean that ‘outside’ people can now be included and we can export and use their data in future, even though it is not used in the current site.
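The exclusion rule itself is simple; a sketch of the idea (the real filtering happens in the data scripts, and the field names here are illustrative):

```javascript
// 'Outside Scotland' answers are stored with latitude and longitude of
// zero, so every map and visualisation query filters them out.
const visibleAnswers = allAnswers.filter(function (a) {
  return !(a.lat === 0 && a.lng === 0); // skip 'outside' respondents
});
```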

Further tweaks I implemented this week included: changing the font sizes of some headings and buttons; renaming the ‘activities’ and ‘more’ pages as requested; adding ‘back’ buttons from all ‘activity’ and ‘more’ pages to the index pages; and adding an intro page to the click exercise, which previously launched straight into the exercise whereas all the others have an intro.  I also added summary pages to the end of the click and perception activities with links through to the ‘more’ pages, removed the temporary ‘skip to quiz’ option, and added progress bars to the click and perception activities.  Finally, I moved the map legend from the top right to the top left, as I realised that in the top right it was always obscuring Shetland, whereas there’s nothing in the top left.  This meant moving the region label to the top right instead.
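For reference, repositioning a custom legend in Leaflet is just a matter of the control’s position option; a minimal sketch (the helper that builds the legend markup is hypothetical):

```javascript
// Place the legend in the top-left corner, clear of Shetland.
const legend = L.control({ position: 'topleft' });
legend.onAdd = function () {
  const div = L.DomUtil.create('div', 'map-legend');
  div.innerHTML = buildLegendHtml(); // hypothetical helper returning the legend markup
  return div;
};
legend.addTo(map);
```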

Also this week I continued to work on the Allan Ramsay ‘Gentle Shepherd’ performance data.  I added faceted browsing to the tabular view, with a series of filter options for location, venue, adaptor and so on.  You can select any combination of filters (e.g. multiple locations and multiple years in combination), and when you select an option of one type the options of the other types update to display only those relevant to the filtered data.  However, the display of the filter options can get a bit confusing once multiple filter types have been selected, and I will try to sort this out next week.  There are also duplicate items in the filter options (e.g. two Glasgows) because some rows have trailing spaces (‘Glasgow’ vs ‘Glasgow ’), and I’ll need to see about trimming these out next time I import the data.
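The duplicate facet values should disappear with a simple trim at import time; a sketch (the field names are illustrative):

```javascript
// Trim stray whitespace on import so 'Glasgow' and 'Glasgow ' collapse
// into a single facet value, then build a sorted, de-duplicated list.
rows.forEach(function (row) {
  row.location = row.location.trim();
  row.venue = row.venue.trim();
});
const locationOptions = [...new Set(rows.map(function (r) { return r.location; }))].sort();
```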

Also this week I arranged for the old DSL server to be taken offline, as the new website has now been operating successfully for two weeks, and I had a chat with Katie Halsey about timescales for the development of the Books and Borrowers front-end.  I also imported a new disordered paediatric speech dataset into the Speech Star website, which included around double the number of records, new video files and a new ‘speaker code’ column.  Finally, I participated in a Zoom call for the Scottish Place-Names database where we discussed the various place-name surveys that are in progress and the possibility of creating an overarching search across all the systems.