Week Beginning 22nd November 2021

I spent a bit of time this week writing an abstract for the DH2022 conference.  I wrote about how I rescued the data for the Anglo-Norman Dictionary in order to create the new AND website.  The DH abstracts are actually 750-1000 words long so it took a bit of time to write.  I have sent it on to Marc for feedback and I’ll need to run it by the AND editors before submission as well (if it’s worth submitting).  I still don’t know whether there would be sufficient funds for me to attend the event, plus the acceptance rate for papers is very low, so I’ll just need to see how this develops.

Also this week I participated in a Zoom call for the DSL about user feedback and redeveloping the DSL website.  It was a pretty lengthy call, but it was interesting to be a part of.  Marc mentioned a service called Hotjar (https://www.hotjar.com/) that allows you to track how people use your website (e.g. tracking their mouse movements) and this seemed like an interesting way of learning about how an interface works (or doesn't).  I also had a conversation with Rhona about the updates to the DSL DNS that need to be made to improve the security of their email systems.  Somewhat ironically, recent emails from their IT people had ended up in my spam folder and I hadn't realised they were asking me for further changes to be made, which unfortunately has caused a delay.

I spoke to Gerry Carruthers about another new project he's hoping to set up, and we'll no doubt be having a meeting about this in the coming weeks.  I also gave some advice to the students who are migrating the IJOSTS articles to WordPress and made some updates to the Iona Placenames website in preparation for their conference.

For the Anglo-Norman Dictionary I fixed an issue with one of the textbase texts that had duplicate notes on one of its pages, and then I worked on a new feature for the DMS that enables the editors to search the phrases contained in locutions in entries.  Editors can match locution phrases beginning with a term (e.g. ta*), ending with a term (e.g. *de), or, without a wildcard, containing the term anywhere in the phrase.  Other options found on the public site (e.g. single character wildcards and exact matches) are not included in this search.

The first time a search is performed the system needs to query all entries to retrieve only those that feature a locution.  These results are then stored in the session for use the next time a search is performed.  This means subsequent searches in a session should be quicker, and also means that if entries are updated between sessions to add or remove locutions, the changes will be taken into consideration.
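Purely to illustrate the three matching modes, here's a small JavaScript sketch of the logic (the phrase list is made up and the real DMS does this server-side):

// Sketch of the locution phrase matching rules described above.
// 'phrases' stands in for the cached list of locution phrases (made-up data).
function matchLocutions(term, phrases) {
  let test;
  if (term.endsWith('*')) {
    const stem = term.slice(0, -1);
    test = p => p.startsWith(stem);   // e.g. 'ta*' matches phrases beginning with 'ta'
  } else if (term.startsWith('*')) {
    const ending = term.slice(1);
    test = p => p.endsWith(ending);   // e.g. '*de' matches phrases ending in 'de'
  } else {
    test = p => p.includes(term);     // no wildcard: the term can appear anywhere
  }
  return phrases.filter(test);
}

console.log(matchLocutions('ta*', ['tant que', 'a tant', 'aver de grace']));  // ['tant que']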

Search results work in a similar way to the old DMS option: any matching locution phrases are listed, together with their translations if present (if there are multiple senses / subsenses for a locution then all translations are listed, separated by a '|' character).  Any cross references appear with an arrow followed by the slug of the cross-referenced entry.  There is also a link to the entry the locution is part of, which opens in a new tab on the live site.  A count of the total number of entries with locutions, the number of entries in which your search matched a phrase, and the total number of locutions is displayed above the results.

I spent the rest of the week working on the Speak For Yersel project.  We had a Zoom call on Monday to discuss the mockups I'd been working on last week and the user interface that Jennifer and Mary would like me to develop for the site (previous interfaces were just created for test purposes).  I spent the rest of my available time developing a further version of the grammar exercise with the new interface, which included logos, new fonts and colour schemes, sections appearing in a different order, and an overall progress bar for the full exercise rather than individual bars for the questionnaire and quiz sections.

I added the UoG and AHRC logos underneath the exercise area and added 'About' and 'Activities' menu items, with 'Activities' as the active item.  The active state of the menu wasn't mentioned in the document, but I gave it a bottom border and made the text green rather than blue (although the difference is not hugely noticeable); this styling is also used when hovering over a menu item.  I made the 'Let's go' button blue rather than green to make it consistent with the navigation buttons in subsequent stages.  When a new stage loads the page now scrolls to the top, as on mobile phones the content was changing but the visible section remained where it was, meaning the user had to manually scroll up.  I also retained the 'I would never say that!' header in the top-left corner of all stages rather than having 'Activities' there, so it's clearer which activity the user is currently working on.  For the map in the quiz questions I've added the 'Remember' text above the map rather than above the answer buttons, as this seemed more logical, and in the quiz the map pane scrolls up and back down when the next question loads to make it clearer that it has changed.  The quiz score and feedback text now scroll down one after the other, and on the final 'explore' page the clicked menu item now remains highlighted to make it clearer which map is being displayed.  Here's a screenshot of how the new interface looks:

Week Beginning 15th November 2021

I had an in-person meeting for the Historical Thesaurus on Tuesday this week – the first such meeting I’ve had since the first lockdown began.  It was a much more enjoyable experience than Zoom-based calls and we had some good discussions about the current state of the HT and where we will head next.  I’m going to continue to work on my radar chart visualisations when I have the time and we will hopefully manage to launch a version of the quiz before Christmas.  There has also been some further work on matching categories and we’ll be looking into this in the coming months.

We also discussed the Digital Humanities conference, which will be taking place in Tokyo next summer.  This is always a really useful conference for me to attend and I wondered about writing a paper about the redevelopment of the Anglo-Norman Dictionary.  I’m not sure at this point whether we would be able to afford to send me to the conference, and the deadline for paper submission is the end of this month.  I did start looking through these blog posts and I extracted all of the sections that relate to the redevelopment of the site.  It’s almost 35,000 words over 74 pages, which shows you how much effort has gone into the redevelopment process.

I also had a meeting with Gerry Carruthers and others about the setting up of an archive for the International Journal of Scottish Theatre and Screen.  I’d set up a WordPress site for this and explored how the volumes, issues and articles could be migrated over from PDFs.  We met with the two students who will now do the work.  I spent the morning before the meeting preparing an instruction document for the students to follow and at the meeting I talked through the processes contained in the document.  Hopefully it will be straightforward for the students to migrate the PDFs, although I suspect it may take them an article or two before they get into the swing of things.

Also this week I fixed an issue with the search results tabs in the left-hand panel of the entry page on the DSL website.  There's a tooltip on the 'Up to 1700' link, but on narrow screens the tooltip was ending up positioned over the link, and when you pressed on it the code was getting confused as to whether you'd pressed on the link or the tooltip.  I repositioned the tooltips so they now appear above the links, meaning they should no longer get in the way on narrow screens.  I also looked into an issue with the DSL's PayPal account, which wasn't working.  This turned out to be an issue on the PayPal side rather than with the links through from the DSL's site.

I also had to rerun the varlist date scripts for the AND as we’d noticed that some quotations had a structure that my script was not set up to deal with.  The expected structure is something like this:

<quotation>ou ses orribles pates paracrosçanz <varlist><ms_var id="V-43aaf04a" usevardate="true"><ms_form>par acros</ms_form><ms_wit>BN</ms_wit><ms_date post="1300" pre="1399">s.xiv<sup>in</sup></ms_date></ms_var></varlist> e par ateinanz e par encrés temptacions</quotation>

Here there is one varlist in the quotation, containing one or more ms_var tags.  But the entry 'purprestur' has multiple separate varlists in its quotation:

<quotation>Endreit de purprestures voloms qe les nusauntes <varlist><ms_var id="V-66946b02"><ms_form>nusantes porprestures</ms_form><ms_wit>W</ms_wit><ms_date>s.xiv</ms_date></ms_var></varlist> soint ostez a coustages de ceux qi lé averount fet <varlist><ms_var id="V-67f91f67"><ms_form>des provours</ms_form><ms_wit>A</ms_wit><ms_date>s.xiv</ms_date></ms_var><ms_var id="V-ea466d5e"><ms_form>des fesours</ms_form><ms_wit>W</ms_wit><ms_date>s.xiv</ms_date></ms_var><ms_var id="V-88b4b5c2" usevardate="true"><ms_form>dé purpresturs</ms_form><ms_wit>M</ms_wit><ms_date post="1300" pre="1310">s.xiv<sup>in</sup></ms_date></ms_var><ms_var id="V-769400cd"><ms_form>des purpernours</ms_form><ms_wit>C</ms_wit><ms_date>s.xiv<sup>1/3</sup></ms_date></ms_var></varlist> </quotation>

I wasn't aware that this was a possibility, so my script wasn't set up to catch such situations: it only looked at the first <varlist>, and as the <ms_var> that needs to be used for dating isn't contained in that first varlist, it got missed.  I therefore updated the script and ran both spreadsheets through it again.  I also updated the DMS so that quotations with multiple varlists can be processed.

Also this week I updated all of the WordPress sites I manage and helped set up the Our Heritage, Our Stories site, and had a further discussion with Sofia about the conference pages for the Iona place-names project.

I spent the rest of the week continuing to work on the mockups for the Speak For Yersel project, creating a further mockup of the grammar quiz that now features all of the required stages.  The 'word choice' type of question now has a slightly different layout, with the buttons closer together in a block, and after answering the second question there is now an 'Explore the answers' button under the map.  Pressing on this loads the summary maps for each question (these are not live maps yet), and underneath the maps is a button for starting the quiz.  There isn't enough space for a three-column layout for the quiz, so I've placed the quiz above the summary maps.  The progress bar also gets reinstated for the quiz and I've added the text 'Use the maps below to help you' to make it clearer what those buttons are for.  The 'Q1', 'Q2' IDs will probably need to be altered as they make it look like each map refers to a particular question in the quiz, which isn't the case.  It's possible to keep a map open between quiz questions, and when you press an answer button the ones you didn't press are greyed out.  If your choice is correct you get a tick; if not you get a cross and the correct answer gets a tick.  The script keeps track of which questions have been answered correctly in the background, and I haven't implemented a timer yet.  After answering all of the questions (there don't need to be six; the code will work with any number) you can finish the section, which displays your score and ranking.  Here is a screenshot of how the quiz currently looks:

Week Beginning 8th November 2021

I spent a bit of time this week working for the DSL.  I needed to act as the go-between for the DSL's new IT people who are updating their email system and the University's IT people who manage the DNS record on behalf of the DSL.  It took a few attempts before the required changes were successfully in place.  I also read through a document that had been prepared about automatically 'fixing' the DSL's dates to make them machine readable, and gave some feedback on the many different procedures that will need to be performed on the various date forms to produce the desired structure.

I also looked into an issue with cross references within citations that work in the live site but are not functioning in the new site or in the DSL’s editing system.  After some investigation it seems like it’s another case of the original API ‘fixing’ the XML in some way each time it’s processed in order for these links to work.  The XML for ‘put_v’ stored in the original API is as follows:

<cit><cref><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

There is a <ref> tag but no other information in this tag.  This is the same for the XML exported from DPS and used in the new DSL site (which additionally includes a bibliographic reference):

<cit><cref refid="bib013153"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

The XSLT for both the live and new sites doesn’t include anything to process a <ref> that doesn’t include any attributes so both the live and new sites shouldn’t be displaying a link through to ‘putting’.  But of course the live site does.  I had generated and stored the XML that the original API (which I did not develop) outputs whenever the live site asks for an entry.  When looking at this I found the following:

<cit><cref ref="db674"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref action="link" href="dost/putting">Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

You can see that the original API is injecting both a bibliographical cross-reference and the 'putting' reference.  The former we previously identified and sorted, but the latter unfortunately hasn't been, although references that are not in citations do seem to have been fixed.  I updated the XSLT on the new DSL site to process the <ref> so the link now works, however this is not an approach that can be relied upon, as all the XSLT is currently doing is taking the contents of the tag ('Putting') and making a link out of it.  If the 'slug' of the entry doesn't match the display form then the link is not going to work.  The original API includes a table containing cross references, but this doesn't differentiate ones in citations from regular ones, and as the 'putting_v' entry contains 83 references it's not going to be easy to pick out from this the ones that still need to be added.  This will need further discussion with the editors.
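To make the limitation concrete, here's a rough JavaScript sketch of what the updated XSLT is effectively doing with an attribute-less <ref> (the URL pattern is an assumption based on the 'dost/putting' href the old API injects):

// The updated XSLT can only build a link from the displayed text itself.
function linkFromRefContents(display) {
  return '/dost/' + display.toLowerCase();   // assumed URL pattern for illustration only
}

console.log(linkFromRefContents('Putting')); // '/dost/putting' works because the slug happens to match
// If an entry's slug differs from its display form, the generated link will point at the wrong place.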

Continuing on a dictionary theme, I also did some further work for the Anglo-Norman Dictionary.  Last week I processed entries where a varlist date needed to be used as the citation date, but we noticed that the earliest date for entries hadn’t been updated in many cases where it should have been.  This week I figured out what went wrong.  My script only updated the entry’s date if the new date from the varlist was earlier than the existing earliest date for the entry.  This is obviously not what we want as in the majority of cases the varlist date will be later and should replace the earlier date that is erroneous.  Thankfully it was easy to pick out all of the entries that have a ‘usevardate’ and I then reran a corrected version of the script that checks and replaces an entry’s earliest date.

The editor spotted a couple of entries that still hadn’t been updated after this process and I then had to investigate them.  One of them had an error in the edited markup that was preventing the update from being applied.  For the other I realised that my code to update the XML wasn’t looking at all senses, just the first in each entry.  My script was attempting to loop through all senses as follows:

// loops over attestations, but only those belonging to the FIRST sense of the entry
foreach ($xml->main_entry->sense->attestation as $a) {
    // process each attestation here
}

This unfortunately only loops through the attestations of the first sense.  What I needed to do was:

// loop over every sense, then over each sense's attestations
foreach ($xml->main_entry->sense as $s) {
    foreach ($s->attestation as $a) {
        // process each attestation here
    }
}

As the sense that needed updating for 'aspreté' was the last one, the XML wasn't getting changed.  This meant 'usevardate' wasn't present in the XML, and therefore my update to regenerate the earliest dates didn't catch this entry (despite all of the citation dates being successfully updated in the database for the entry).  I then fixed my script and regenerated all of the data again, including the entries that previously had XML errors.  I then ran a further spreadsheet of entries that needed updating through the fixed script, resulting in a further 257 citations having their dates updated.

Finally, I updated the Dictionary Management System so that ‘usevardate’ dates are taken into consideration when processing and publishing uploaded XML files.  If a ‘usevardate’ is found then this date is used for the attestation, which automatically affects the earliest date that is generated for the entry and also the dates used for attestations for search purposes.  I tried this out by downloading the XML for ‘admirable’, which features a ‘usevardate’.  I then edited the XML to remove the ‘usevardate’ before uploading and publishing this version.  As expected the dates for the attestation and the entry’s earliest date were affected by this change.  I then edited the XML to reinstate the ‘usevardate’ and uploaded and published this version, which took into consideration the ‘usevardate’ when generating the entry’s earliest date and attestation dates and returned the entry to the way it was before the test.
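As a rough illustration of the dating rule (a JavaScript sketch; the actual scripts are PHP and the field names here are invented):

// Sketch of deriving an entry's earliest date once 'usevardate' is honoured.
// Each attestation is assumed to have a citation date and, optionally, a varlist
// date flagged with usevardate (field names are illustrative).
function earliestDate(attestations) {
  const dates = attestations.map(a =>
    (a.useVarDate && a.varDate) ? a.varDate : a.citationDate
  );
  return Math.min(...dates);
}

// The varlist date replaces the erroneous citation date for the second attestation,
// so the entry's earliest date comes from the first one.
console.log(earliestDate([
  { citationDate: 1250 },
  { citationDate: 1150, varDate: 1300, useVarDate: true }
])); // 1250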

Also this week I set up a WordPress site that will be used for the archive of the International Journal of Scottish Theatre and Screen and migrated one of the issues to WordPress, which required me to do the following:

  1. Open the file in a PDF viewer for reference (e.g. Adobe Acrobat)
  2. Open the file in MS Word, which converts it into an editable format
  3. Create a WordPress page for the article with the article’s title as the page title and setting the page ‘parent’ as Volume 1
  4. Copy and paste the article contents from Word into WordPress
  5. Go through the article in WordPress, referencing the file in Acrobat, and manually fixing any issues that I spotted (e.g. fixing the display of headings, fixing some line breaks that were erroneously added). Footnotes proved to be particularly tricky as their layout was not handled very well by Word.  It’s possible that some footnotes are not quite right, especially with the ‘Trainspotting’ article that has more than 70(!) footnotes.
  6. Publish the WordPress page and update the ‘Volume 1’ page to add a link to it.

None of this was particularly difficult to do, but it was somewhat time-consuming.  There are a further 18 issues left to do (as far as I can tell), although some of these will take longer as they contain more articles, and some of these are more structurally complicated (e.g. including images).  Gerry Carruthers is getting a couple of students to do the rest and we have a meeting scheduled next week where I’ll talk through the process.

I also made some further tweaks to the WordPress site for the ‘Our Heritage, Our Stories’ site and dealt with renewing the domain for TheGlasgowStory.com site, which is now safe for a further nine years.  I also generated an Excel spreadsheet of the full lexical dataset from Mapping Metaphor for Wendy Anderson after she had a request for the data from some researchers in Germany.

I spent the rest of the week working for the Speak For Yersel project, continuing to generate mockups of the interactive exercises.  I completed an initial version of the overall structure for both the acceptability and word choice question types for the grammar exercise, so it will be possible to just 'plug in' any number of other questions that fit these templates.  What I haven't done yet is incorporate the maps, the post-questionnaire 'explore' or the final quiz, as these need more content.  Here's how things currently look:

I used another different font for the heading (Slackey), with the same one used for the 'Question x of y' text too.  I also used CSS gradients quite a bit in this version, as the team seemed quite keen on these.  There's a subtle diagonal gradient in the header and footer backgrounds, and a more obvious top-to-bottom one in the answer buttons.  I used different combinations of colours too.  I created a progress bar, which works, but with only two questions in the system it's not especially obvious what it does.  Rather than having people click an answer and then click a 'next' button to continue, clicking an answer automatically loads the next step: a panel with a 'map' (just a static image for now) appears, along with a 'next' button if there is a next question.  Clicking the 'next' button slides up the map panel, loads the next question in and advances the progress bar.  Users will be accessing this on many different screen sizes and I've tested it out on my Android phone and my iPad in both portrait and landscape orientations and all seems to work well.  However, the map panel will be displayed below rather than beside the questions on narrower screens.
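Here's a minimal, self-contained sketch of that answer / map panel / next button flow (all element IDs, questions and images are invented for illustration; the real mockup also handles the sliding animation and scrolling):

// Minimal sketch of the 'answer click reveals the map panel and next button' behaviour.
const questions = [
  { text: 'Question 1', map: 'map1.png' },
  { text: 'Question 2', map: 'map2.png' }
];
let current = 0;

function showQuestion(i) {
  document.getElementById('question').textContent = questions[i].text;
  document.getElementById('map-panel').style.display = 'none';
  document.getElementById('next-btn').style.display = 'none';
  document.getElementById('progress').style.width = (i / questions.length * 100) + '%';
}

document.querySelectorAll('.answer-btn').forEach(btn =>
  btn.addEventListener('click', () => {
    // answering reveals the (static) map and, if there is one, the 'next' button
    document.getElementById('map-img').src = questions[current].map;
    document.getElementById('map-panel').style.display = 'block';
    if (current + 1 < questions.length) {
      document.getElementById('next-btn').style.display = 'inline-block';
    }
  })
);

document.getElementById('next-btn').addEventListener('click', () => {
  current += 1;
  showQuestion(current);
});

showQuestion(0);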

I then began experimenting with randomly positioned markers in polygonal areas.  Initially I wanted to see whether this would be possible in ArcGIS, and a bit of Googling suggested it would be; see for example this post: http://gis.mtu.edu/?p=127 (it's 10 years old, so the instructions don't match how things work in the current version of ArcGIS, but it at least showed that it should be possible).  I loaded the desktop version of ArcGIS up via Glasgow Anywhere and, after some experimentation and a fair bit of exasperation, I managed to create a polygon shape and add 100 randomly placed marker points to it, which you can see here:

Something we will have to bear in mind is how such points will look when zoomed:

This is just 100 points over a pretty large geographical area.  We might end up with thousands of points, which might make this approach unusable.  Another issue is it took ArcGIS more than a minute to generate and process these 100 random points.  I don’t know how much of this is down to running the software via Glasgow Anywhere, but if we’re dealing with tens of polygons and hundreds or thousands of data points this is just not going to be feasible.

An issue of greater concern is that, as far as I can tell (after more than an hour of investigation), the 'create random points' option is not available via ArcGIS Online, which is the tool we would need to use to generate maps to share online (if we choose to use ArcGIS).  The online version seems to be really pared back in terms of functionality compared to the desktop version and I just couldn't see any way of incorporating the random points system.  However, I discovered a way of generating random points using Leaflet and another JavaScript-based geospatial library called turf.js (http://turfjs.org/).  The information about how to go about it is here:  https://gis.stackexchange.com/questions/163044/mapbox-how-to-generate-a-random-coordinate-inside-a-polygon

I created a test map using the SCOSYA area for Campbeltown and the SCOSYA base map.  As a solution I’d say it’s working pretty well – it’s very fast and seems to do what we want it to.  You can view an example of the script output here:

The script generates 100 randomly placed markers each time you load the page.  At zoomed-out levels the markers are too big, but I can make them smaller; this is just an initial test.  There is unfortunately going to be some clustering of markers as well, due to the nature of the random number generator, which may give people the wrong impression.  I could maybe update the code to reject markers that are too close to an existing one, but I'd need to look into that.  I'd say it's looking promising, anyway!
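For reference, here's roughly what the turf.js approach boils down to (a sketch based on the Stack Exchange answer linked above; it assumes Leaflet and turf.js are loaded and a div with the id 'map' exists, and the polygon coordinates are placeholders rather than the actual SCOSYA Campbeltown area):

// Generate randomly placed markers inside a GeoJSON polygon using turf.js and Leaflet.
const area = {
  type: 'Feature',
  geometry: {
    type: 'Polygon',
    coordinates: [[[-5.7, 55.3], [-5.4, 55.3], [-5.4, 55.5], [-5.7, 55.5], [-5.7, 55.3]]]
  }
};

const map = L.map('map').setView([55.4, -5.55], 10);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

// Generate candidate points within the polygon's bounding box and reject any that
// fall outside the polygon itself, until we have as many as we need.
function randomPointsInPolygon(polygon, howMany) {
  const points = [];
  const bbox = turf.bbox(polygon);
  while (points.length < howMany) {
    const candidate = turf.randomPoint(1, { bbox: bbox }).features[0];
    if (turf.booleanPointInPolygon(candidate, polygon)) {
      points.push(candidate);
    }
  }
  return points;
}

randomPointsInPolygon(area, 100).forEach(p => {
  const [lng, lat] = p.geometry.coordinates;  // GeoJSON stores [lng, lat]; Leaflet wants [lat, lng]
  L.circleMarker([lat, lng], { radius: 3 }).addTo(map);
});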

Week Beginning 1st November 2021

I spent most of my time working on the Speak For Yersel project this week, including Zoom calls on Tuesday and Friday.  Towards the start of the week I created a new version of the exercise I'd created last week.  This version uses a new sound file and transcript and new colours based on the SCOSYA palette.  It also has different fonts and a larger margin on the left and right of the screen.  I've also updated the way the exercise works to allow you to listen to the clip up to three times, with the 'clicks' on subsequent listens adding to rather than replacing the existing ones.  I've had to add in a new 'Finish' button as information can no longer be processed automatically when the clip finishes.  I've moved the 'Play' and 'Finish' buttons to a new line above the progress bar, as on a narrow screen the buttons on one line weren't working well.  I've also replaced the icon when logging a 'click' and added 'Press' instead of 'Log' as the button text.  Here's a screenshot of the mockup in action:

I then gave some thought to the maps, specifically what data we’ll be generating from the questions and how it might actually form a heatmap or a marker-based map.  I haven’t seen any documents yet that actually go into this and it’s something we need to decide upon if I’m going to start generating maps.  I wrote a document detailing how data could be aggregated and sent it to the team for discussion.  I’m going to include the full text here so I’ve got a record of it:

The information we will have about users is:

  1. Rough location based on the first part of their postcode (e.g. G12) from which we will ascertain a central latitude / longitude point
  2. Which one of the 12 geographical areas this point is in (e.g. Glasgow)

There will likely be many (tens, hundreds or more) users with the same geographical information (e.g. an entire school over several years).  If we’re plotting points on a map this means one point will need to represent the answers of all of these people.

We are not dealing with the same issues as the Manchester Voices heatmaps.  Their heatmaps represent one single term, e.g. ‘Broad’ and the maps represent a binary choice – for a location the term is either there or it isn’t.  What we are dealing with in our examples are multiple options.

For the ‘acceptability’ question such as ‘Gonnae you leave me alone’ we have four possible answers: ‘I’d say this myself’, ‘I wouldn’t say this, but people where I live do’, ‘I’ve heard some people say this (outside my area, on TV etc)’ and ‘I’ve never heard anyone say this’.  If we could convert these into ratings (0-3 with ‘I’d say this myself’ being 3 and ‘I’ve never heard anyone say this’ being 0) then we could plot a heatmap with the data.

However, we are not dealing with comparable data to Manchester, where users draw areas and the intersects of these areas establish the pattern of the heatmap.  What we have are distinct geographical areas (e.g. G12) with no overlap between these areas and possibly hundreds of respondents within each area.  We would need to aggregate the data for each area to get a single figure for it but as we’re not dealing with a binary choice this is tricky.  E.g. if it was like the Manchester study and we were looking for the presence of ‘broad’ and there were 15 respondents at location Y and 10 had selected ‘broad’ then we could generate the percentage and say that 66% of respondents here used ‘broad’.

Instead what we might have for our 15 respondents is 8 said ‘I’d say this myself’ (53%), 4 said ‘I wouldn’t say this, but people where I live do’ (26%), 2 said ‘I’ve heard some people say this (outside my area, on TV etc)’ (13%) and 1 said ‘I’ve never heard anyone say this’ (7%).  So four different figures.  How would we convert this into a single figure that could then be used?

If we assign a rating of 0-3 to the four options then we can multiply the percentages by the rating score and then add all four scores together to give one overall score out of a maximum score of 300 (if 100% of respondents chose the highest rating of 3).  In the example here the scores would be 53% x 3 = 159, 26% x 2 = 52, 13% x 1 = 13 and 7% x 0 = 0, giving a total score of 224 out of 300, or 75% – one single figure for the location that can then be used to give a shade to the marker or used in a heatmap.
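As a quick check of the arithmetic in code (note the paragraph above rounds each percentage before multiplying, which is why it arrives at 224; without the intermediate rounding the figure is around 227, but the resulting percentage is essentially the same):

// The rating aggregation described above, using the example counts.
const counts  = [8, 4, 2, 1];   // 'I'd say this myself' (3) ... 'I've never heard anyone say this' (0)
const ratings = [3, 2, 1, 0];
const total   = counts.reduce((a, b) => a + b, 0);          // 15 respondents

const score = counts.reduce((sum, count, i) =>
  sum + (count / total * 100) * ratings[i], 0);             // percentage of each option x its rating

console.log(score.toFixed(1));                              // 226.7 out of a maximum of 300
console.log(Math.round(score / 300 * 100) + '%');           // 76%, the single figure for the location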

For the ‘Word Choice’ exercises (whether we allow a single or multiple words to be selected) we need to aggregate and represent non-numeric data, and this is going to be trickier.  For example, if person A selects ‘Daftie’ and ‘Bampot’ and person B selects ‘Daftie’, ‘Gowk’ and ‘Eejit’ and both people have the same postcode then how are these selections to be represented at the same geographical point on the map?

We could pick out the most popular word at each location and translate it into a percentage.  E.g. at location Y 10 people selected ‘Daftie’, 6 selected ‘Bampot’, 2 selected ‘Eejit’ and 1 selected ‘Gowk’ out of a total of 15 participants.  We then select ‘Daftie’ as the representative term with 66% of participants selecting it.  Across the map wherever ‘Daftie’ is the representative term the marker is given a red colour, with darker shades representing higher percentages.  For areas where ‘Eejit’ is the representative term it could be given shades of blue etc.  We could include a popup or sidebar that gives the actual data, including other words and their percentages at each location, either tabular or visually (e.g. a pie chart).  This approach would work as individual points or could possibly work as a heatmap with multiple colours, although it would then be trickier to include a popup or sidebar.  The overall approach would be similar to the NYT ice-hockey map:
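A quick sketch of that aggregation for a single location (using the example counts above; the colour assignment is omitted):

// Pick the representative term and its percentage for one location.
const selections = { Daftie: 10, Bampot: 6, Eejit: 2, Gowk: 1 };
const participants = 15;

const [term, count] = Object.entries(selections)
  .sort((a, b) => b[1] - a[1])[0];                          // most frequently selected term

console.log(term + ': ' + Math.round(count / participants * 100) + '%');  // 'Daftie: 67%'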

Note, however, that for the map itself we would be ignoring everything other than the most commonly selected term at each location.

Alternatively, we could have individual maps or map layers for each word as a way of representing all selected words rather than just the top-rated one.  We would still convert the selections into a percentage (e.g. out of 15 participants at Location Y 10 people selected ‘Daftie’, giving us a figure of 66%) and assign a colour and shade to each form (e.g. ‘Daftie’ is shades of red with a darker shade meaning a higher percentage) but you’d be able to switch from the map for one form to that of another to show how the distribution changes (e.g. the ‘Daftie’ map has darker shades in the North East, the ‘Eejit’ map has darker shades in the South West), or look at a series of small maps for each form side by side to compare them all at once.  This approach would be comparable to the maps shown towards the end of the Manchester YouTube video for ‘Strong’, ‘Soft’ and ‘Broad’ (https://www.youtube.com/watch?v=ZosWTMPfqio):

Another alternative is we could have clusters of markers at each location, with one marker per term.  So for example if there are 6 possible terms each location on the map would consist of a cluster of 6 markers, each of a different colour representing the term, and each a different shade representing the percentage of people who selected the term at the location.  However, this approach would risk getting very cluttered, especially at zoomed out levels, and may present the user with too much information, and is in many ways similar to the visualisations we investigated and decided not to use for SCOSYA.  For example:

Look at the marker for Arbroath.  This could be used to show four terms, and the different sizes of each section would show the relative percentages of respondents who chose each.

A further thing to consider is whether we actually want to use heatmaps at all.  A choropleth map might work better.  Here is an explanation from https://towardsdatascience.com/all-about-heatmaps-bb7d97f099d7:

“Choropleth maps are sometimes confused with heat maps. A choropleth map features different shading patterns within geographic boundaries to show the proportion of a variable of interest². In contrast, a heat map does not correspond to geographic boundaries. Choropleth maps visualize the variability of a variable across a geographic area or within a region. A heat map uses regions drawn according to the variable’s pattern, rather than the a priori geographic areas of choropleth maps¹. The Choropleth is aggregated into well-known geographic units, such as countries, states, provinces, and counties.”

An example of a choropleth map is:

We are going to be collecting the postcode district for every respondent and we could use this as the basis for our maps.  GeoJSON-encoded data for postcode districts is available.  For example, here are all of the districts in the 'G' postcode area: https://github.com/missinglink/uk-postcode-polygons/blob/master/geojson/G.geojson

Therefore we could generate choropleth maps comparable to the US one above based on these postcode districts (leaving districts with no respondents blank).  But perhaps postcode districts cover too small an area and we may not get sufficient coverage.
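For what it's worth, a choropleth of this sort is fairly straightforward to put together in Leaflet once we have the postcode GeoJSON.  Here's a minimal sketch (assuming Leaflet is loaded, a div with the id 'map' exists, 'gDistricts' holds the GeoJSON for the G postcode districts, and the score lookup and colour bands are invented):

// Minimal Leaflet choropleth sketch using postcode-district polygons.
const scores = { G12: 75, G11: 62, G3: 48 };   // invented aggregated scores (0-100) per district

function shade(score) {
  // darker greens for higher scores; districts with no respondents stay unfilled
  if (score === undefined) return 'transparent';
  return score > 75 ? '#00441b' : score > 50 ? '#238b45' : score > 25 ? '#74c476' : '#c7e9c0';
}

const map = L.map('map').setView([55.86, -4.25], 9);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

L.geoJSON(gDistricts, {
  style: feature => ({
    fillColor: shade(scores[feature.properties.name]),  // the property holding 'G12' etc. depends on the GeoJSON
    fillOpacity: 0.7,
    color: '#555',
    weight: 1
  })
}).addTo(map);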

There is an interesting article about generating bivariate choropleth maps here:

https://www.joshuastevens.net/cartography/make-a-bivariate-choropleth-map/

These enable two datasets to be displayed on one map, for example the percentage of people selecting ‘Daftie’ split into 25% chunks AND the percentage of people selecting ‘Eejit’ similarly split into 25% chunks, like this (only it would be 4×4 not 3×3):

However, there is a really good reply about why cramming a lot of different data into one map is a bad idea here: https://ux.stackexchange.com/questions/87941/maps-with-multiple-heat-maps-and-other-data and it’s well worth a read (despite calling a choropleth map a heat map).

After circulating the document we had a further meeting and it turns out the team don't want to aggregate the data as such: what they want is individual markers for each respondent, arranged randomly throughout the geographical area the respondent is from, to give a general idea of what the respondents in an area are saying without giving their exact location.  It's an interesting approach and I'll need to see whether I can find a way to randomly position markers so that they cover a GeoJSON polygon.

Moving on to other projects, I also worked on the Books and Borrowing project, running a script to remove blank pages from all of the Advocates registers and discussing some issues with the Innerpeffray data and how we might deal with them.  I also set up the initial infrastructure for the 'Our Heritage, Our Stories' project website for Marc Alexander and Lorna Hughes and dealt with some requests from the DSL's IT people about updating the DNS record for the website.  I also had an email conversation with Gerry Carruthers about setting up a website for the archive of the International Journal of Scottish Theatre and Screen and made a few minor tweaks to the mockups for the STAR project.

Finally, I continued to work on the Anglo-Norman Dictionary, firstly sorting out an issue with Greek characters not displaying properly and secondly working on the redating of citations where a date from a varlist tag should be used as the citation date.  I wrote a script that picked out the 465 entries that had been marked as needing updated in a spreadsheet and processed them, firstly updating each entry’s XML to replace the citation with the updated one, then replacing the date fields for the citation and then finally regenerating the earliest date for an entry if the update in citation date has changed this.  The script seemed to run perfectly on my local PC, based on a number of entries I checked, therefore I ran the script on the live database.  All seemed to work fine, but it looks like the earliest dates for entries haven’t been updated as often as expected, so I’m going to have to do some further investigation next week.

Week Beginning 25th October 2021

I came down with some sort of stomach bug on Sunday and was off work with it on Monday and Tuesday.  Thankfully I was feeling well again by Wednesday and managed to cram quite a lot into the three remaining days of the week.  I spent about a day working on the Data Management Plan for the new Curious Travellers proposal, sending out a first draft on Wednesday afternoon and dealing with responses to the draft during the rest of the week.  I also had some discussions with the Dictionaries of the Scots Language’s IT people about updating the DNS record regarding emails, responded to a query about the technology behind the SCOTS corpus, updated the images used in the mockups of the STAR website and created the ‘attendees only’ page for the Iona Placenames conference and added some content to it.  I also had a conversation with one of the Books and Borrowing researchers about trimming out the blank pages from the recent page image upload, and I’ll need to write a script to implement this next week.

My main task of the week was to develop a test version of the ‘where is the speaker from?’ exercise for the Speak For Yersel project.  This exercise involves the user listening to an audio clip and pressing a button each time they hear something that identifies the speaker as being from a particular area.  In order to create this I needed to generate my own progress bar that tracks the recording as it’s played, implement ‘play’ and ‘pause’ buttons, implement a button that when pressed grabs the current point in the audio playback and places a marker in the progress bar, and implement a means of extrapolating the exact times of the button press to specific sections of the transcription of the audio file so we can ascertain which section contains the feature the user noted.

It took quite some planning and experimentation to get the various aspects of the feature working, but I managed to complete an initial version that I'm pretty pleased with.  It will still need a lot of work, but it demonstrates that we will be able to create such an exercise.  The interface design is not final; it's just there as a starting point, using the Bootstrap framework (https://getbootstrap.com), the colours from the SCOSYA logo and a couple of fonts from Google Fonts (https://fonts.google.com).  There is a big black bar with a sort of orange vertical line on the right.  Underneath this is the 'Play' button and what I've called the 'Log' button (but we probably want to think of something better).  I've used icons from Font Awesome (https://fontawesome.com/), including a speech bubble icon in the 'Log' button.

As discussed previously, when you press the ‘Play’ button the audio plays and the orange line starts moving across the black area.  The ‘Play’ button also turns into a ‘Pause’ button.  The greyed out ‘Log’ button becomes active when the audio is playing.  If you press the ‘Log’ button a speech bubble icon is added to the black area at the point where the orange ‘needle’ is.

For now the exact log times are outputted in the footer area.  Once the audio clip finishes the ‘Play’ button becomes a ‘Start again’ button.  Pressing on this clears the speech bubble icons and the footer and starts the audio from the beginning again.  The log is also processed.  Currently 1 second is taken off each click time to account for thinking and clicking.  I’ve extracted the data from the transcript of the audio and manually converted it into JSON data which is more easily processed by JavaScript.  Each ‘block’ consists of an ID, the transcribed content and the start and end times of the block in milliseconds.

For the time being for each click the script looks through the transcript data to find an entry where the click time is between the entry’s start and end times.  A tally of clicks for each transcript entry is then stored. This then gets outputted in the footer so you can see how things are getting worked out.  This is of course just test data – we’ll need smaller transcript areas for the real thing.  Currently nothing gets submitted to the server or stored – it’s all just processed in the browser.  I’ve tested the page out in several browsers in Windows, on my iPad and on my Android phone and the interface works perfectly well on mobile phone screens.  Below is a screenshot showing audio playback and four linguistic features ‘logged’:
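Since this is all handled client-side, here's a condensed JavaScript sketch of the click logging and transcript lookup (it assumes an audio element and buttons with the IDs used below; the transcript blocks are invented test data, with times in milliseconds):

// Condensed sketch of logging 'clicks' against transcript blocks.
const transcript = [
  { id: 1, content: 'first stretch of speech',  start: 0,    end: 4000 },
  { id: 2, content: 'second stretch of speech', start: 4000, end: 9000 }
];
const tally = {};                                      // clicks per transcript block
const audio = document.getElementById('clip');

document.getElementById('log-btn').addEventListener('click', () => {
  // knock a second off the click time to allow for thinking and clicking
  const clickTime = Math.max(0, audio.currentTime * 1000 - 1000);
  const block = transcript.find(b => clickTime >= b.start && clickTime < b.end);
  if (block) {
    tally[block.id] = (tally[block.id] || 0) + 1;
  }
});

document.getElementById('play-btn').addEventListener('click', () => audio.play());
audio.addEventListener('ended', () => console.log(tally));   // e.g. { 1: 1, 2: 3 }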

Also this week I had a conversation with the editor of the AND about updating the varlist dates, and I updated the DTD to allow the new 'usevardate' attribute to be used to identify occasions where a varlist date should be used as the earliest citation date.  We also became aware that a small number of entries in the online dictionary were referencing an old DTD on the wrong server, so I updated these.