
Month: May 2022
Week Beginning 23rd May 2022
I’d completed all of the outstanding tasks for ‘Speak For Yersel’ last week so this week I turned my attention to several other projects. For the Books and Borrowing project I wrote a script to strip out duplicate author records from the data and reassign any books associated with the duplicates to the genuine author records. The script iterated through each author in the ‘duplicates’ spreadsheet, found all rows where the ‘AID’ did not match the ‘AID to keep’ column, reassigned any book author records from the former to the latter and then deleted the author record. The script deleted 310 duplicate authors and reassigned 735 books to other authors, making the data in the content management system a lot cleaner.
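The real script ran against the project's content management system (most likely in PHP). Purely as an illustration of the logic described above, here is a minimal Node.js sketch; the table names, column names and connection details are assumptions rather than the project's actual schema.

```javascript
// Hypothetical sketch of the de-duplication logic; table and column names
// are assumptions, not the real Books and Borrowing schema.
const mysql = require('mysql2/promise');

async function mergeDuplicateAuthors(duplicateRows) {
  const conn = await mysql.createConnection({
    host: 'localhost', user: 'bnb', password: 'secret', database: 'borrowing'
  });
  for (const row of duplicateRows) {
    // Only process rows where the author ID differs from the 'AID to keep'.
    if (row.aid === row.aidToKeep) continue;
    // Reassign book-author records from the duplicate to the kept author...
    await conn.execute('UPDATE book_authors SET aid = ? WHERE aid = ?',
      [row.aidToKeep, row.aid]);
    // ...then delete the duplicate author record.
    await conn.execute('DELETE FROM authors WHERE aid = ?', [row.aid]);
  }
  await conn.end();
}
```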
I then migrated the Uist Saints website to a server at Glasgow and got everything working at a temporary URL. All looked fine to me, although there was an issue with the homepage that needed investigating. This issue was present on the live site too: the page content cut off, leaving a lot of blank space and no footer, with lots of errors displayed in the console. I investigated the errors and discovered that they were being caused by some JavaScript embedded in the homepage that had been treated like HTML by WordPress, which had added HTML line breaks (<br>) wherever there was a line break in the code, thereby breaking the JavaScript. I updated the page to strip out all of the <br> tags and it now loads without any errors in the console, but whatever the JavaScript is supposed to be doing still isn’t working and there’s still a huge expanse of empty space and then no footer.
The JavaScript appears to be attempting to display a map using the Leaflet mapping library, but via some sort of WordPress plugin. There are over 3,000 lines of JavaScript code in the page, which is really crazy. Every single marker on the map (e.g. “Cladh Choinnich (burial ground and site of chapel)” at [57.157715,-7.301283]) has its own script comprising around 70 lines of code. Sofia, the project RA, looked at the page and decided to try deleting the blocks of JavaScript, and this seemed to solve the problem, which was great, as I was thinking I’d need to create a new map after somehow extracting all of the data.
I then moved on to the Ramsay ‘Gentle Shepherd’ data, and this week tackled the issue of importing the code I’d written into the University website’s T4 content management system. I created a ‘one file’ version of the page that has everything incorporated in one single file – all the scripts, the data and the styles. I was hoping I’d then be able to just upload this to T4 but I ran into a problem:
I selected the ‘Standard plain’ content type as I did for the Enlightenment map I created in T4 many years ago, but the ‘content’ box can only accept a maximum of 80,000 characters. My ‘one file’ approach is around 404,000 characters so I can’t upload it. I then wondered about using separate files, as I had done with the Enlightenment map, but the JSON data for the performances on its own is over 227,000 characters. This data needs to be a single thing and can’t be split up into smaller chunks (at least not without then having to stitch the data back together in the JavaScript before it can be used every time someone loads the page which would have an impact on the speed of the page).
I noticed that the Enlightenment map has a further content type called ‘_blank’ that isn’t available to me in the section where the performance data is to go. This type allows up to 150,000 characters, but unfortunately this is still not big enough, and the Leaflet JavaScript library, which I also need to upload, is 141,000 characters so currently can’t be uploaded either. I then looked into uploading the JSON data as a media file and I managed to upload it, but apparently media files only become active in the system when they are linked to from a T4 page using T4’s method of linking to a file. The JSON file would only ever be loaded in via an AJAX call from the JavaScript code, so this would never work. I did realise, however, that I could upload the JavaScript file with the JSON data stored directly within it as a media file and then link to this (along with the Leaflet JavaScript file and the CSS files) from the T4 HTML file. This wouldn’t work when using regular HTML tags to link to scripts and CSS files, though, as T4 only activates media files when linked to using its own special way of inserting links.
A helpful guy called Rick in the Web Team suggested using the ‘standard’ content type and T4’s way of linking to files to get things working, and this did sort of work, but while the ‘standard’ content type allows you to manually edit the HTML, T4 then processes any HTML you enter, which included stripping out a lot of tags my code needed and overwriting other HTML tags, which was very frustrating.
However, I was able to view the source for the embedded media files in this template and then copy this into my ‘standard plain’ section, and this seems to have worked. There were other issues, though, such as T4 applying its CSS styles AFTER any locally created styles, meaning a lot of my custom styles were being overwritten. I managed to find a way around this and the section of the page is now working if you preview it in T4.
Unfortunately, to get this to work the JSON data needed to be embedded in the JavaScript file rather than loaded in as a separate file. This is going to make it more difficult for non-technical people to edit the data directly in T4. In order to do so someone would need to download the ‘gspCode’ file in the Media Library (which T4 unhelpfully converts into a .txt file), rename the file to remove the .txt extension (so it ends in .js instead), find the data array in the file, make the changes to it, validate it in the handy JSON validator https://jsonlint.com/, and then save the JS file and upload it as a replacement for the item in the Media Library.
With all of this out of the way I was hoping to begin work on the API and front-end for the Books and Borrowing project, and I did manage to make a start on this. However, many further tweaks and updates came through from Jennifer Smith for the Speak For Yersel system, which we’re intending to send out to selected people next week, and I ended up spending most of the rest of the week on this project instead. This included several Zoom calls and implementing countless minor tweaks to the website content, including homepage text, updating quiz questions and answer options, help text, summary text, replacing images, changing styles and other such things. I also updated the maps to set their height dynamically based on the height of the browser window, ensuring that the map and the button beneath it are visible without scrolling (but also including a minimum height so the map never gets too small). I also made the maps wider and the question area narrower, as there was previously quite a lot of wasted space when there was a 50/50 split between the two.
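A minimal sketch of the dynamic height logic, assuming the maps are Leaflet-based and using element IDs and a 250px minimum that stand in for whatever the real page uses:

```javascript
// Size the map from the browser window, leaving room for the question area
// and the button beneath the map; 'map' is assumed to be the existing
// Leaflet map instance, and the element IDs and minimum are assumptions.
function resizeMap() {
  const available = window.innerHeight
    - document.getElementById('question-area').offsetHeight
    - document.getElementById('next-button').offsetHeight;
  const height = Math.max(available, 250); // never let the map get too small
  document.getElementById('map').style.height = height + 'px';
  map.invalidateSize(); // tell Leaflet its container has been resized
}
window.addEventListener('resize', resizeMap);
resizeMap();
```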
I also fixed a Safari-only bug with the slider-based questions that prevented the ‘next’ button from activating. The code listened for a ‘click’ event on the slider, but for it to work in Safari the event needed to be ‘change’ instead. I also added in the new dictionary-based question type and added in the questions, although we then took these out again for now as we’d promised the DSL that the embedded school dictionary would only be used by the school children in our pilot. I also added a question about whether the user has been to university to the registration page and then cleared out all of the sample data and users that we’d created during our testing before actual users begin using the resource next week.
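In code terms the fix amounts to something like the following (the selector and button ID are assumptions):

```javascript
// Safari doesn't reliably fire 'click' when a range input is adjusted,
// so listen for 'change' instead to enable the 'next' button.
document.querySelectorAll('input[type="range"]').forEach(function (slider) {
  slider.addEventListener('change', function () {
    document.getElementById('next-button').disabled = false;
  });
});
```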
Week Beginning 16th May 2022
This week I finished off all of the outstanding work for the Speak For Yersel project. The other members of the team (Jennifer and Mary) are both on holiday, so I worked through all of the tasks I had on my ‘to do’ list, although there will certainly be more to do once they are both back at work again. The tasks I completed were a mixture of small tweaks and larger implementations. I made tweaks to the ‘About’ page text and changed the intro text for the ‘more’ section of the ‘give your word’ exercise. I then updated the age maps for this exercise, which proved to be pretty tricky and time-consuming to implement as I needed to pull apart a lot of the existing code. Previously these maps showed ‘60+’ and ‘under 19’ data for a question, with different colour markers for each age group showing those who would say a term (e.g. ‘Scunnered’) and grey markers for each age group showing those who didn’t say the term. We have completely changed the approach now. The maps now default to showing ‘under 19’ data only, with different colours for each different term. There is now an option in the map legend to switch to viewing the ‘60+’ data instead. I added in the text ‘press to view’ to try and make it clearer that you can change the map. Here’s a screenshot:
I also updated the ‘give your word’ follow-on questions so that they are now rated in a new final page that works the same way as the main quiz. In the main ‘give your word’ exercise I updated the quiz intro text and I ensured that the ‘darker dots’ explanatory text has now been removed for all maps. I tweaked a few questions to change their text or the number of answers that are selectable and I changed the ‘sounds about right’ follow-on ‘rule’ text and made all of the ‘rule’ words lower case. I also made it so that when the user presses ‘check answers’ for this exercise a score is displayed to the right and the user is able to proceed directly to the next section without having to correct their answers. They still can correct their answers if they want.
I then made some changes to the ‘She sounds really clever’ follow-on. The index for this is now split into two sections, one for ‘stereotype’ data and one for ‘rating speaker’ data and you can view the speaker and speaker/listener results for both types of data. I added in the option of having different explanatory text for each of the four perception pages (or maybe just two – one for stereotype data, one for speaker ratings) and when viewing the speaker rating data the speaker sound clips now appear beneath the map. When viewing the speaker rating data the titles above the sliders are slightly different. Currently when selecting the ‘speaker’ view the title is “This speaker from X sounds…” as opposed to “People from X sound…”. When selecting the ‘speaker/listener’ view the title is “People from Y think this speaker from X sounds…” as opposed to “People from Y think people from X sound…”. I also added a ‘back’ button to these perception follow-on pages so it’s easier to choose a different page. Finally, I added some missing HTML <title> tags to pages (e.g. ‘Register’ and ‘Privacy’) and fixed a bug whereby the ‘explore more’ map sound clips weren’t working.
With my ‘Speak For Yersel’ tasks out of the way I could spend some time looking at other projects that I’d put on hold for a while. A while back Eleanor Lawson contacted me about adding a new section to the Seeing Speech website where Gaelic speaker videos and data will be accessible, and I completed a first version this week. I replicated the Speech Star layout rather than the /r/ & /l/ page layout as it seemed more suitable: the latter only really works for a limited number of records while the former works well with lots more (there are about 150 Gaelic records). What this means is the data has a tabular layout and filter options. As with Speech Star you can apply multiple filters and you can order the table by a column by clicking on its header (clicking a second time reverses the order). I’ve also included the option to open multiple videos in the same window. I haven’t included the playback speed options as the videos already include the clip at different speeds. Here’s a screenshot of how the feature looks:
On Thursday I had a Zoom call with Laura Rattray and Ailsa Boyd to discuss a new digital edition project they are in the process of planning. We had a really great meeting and their project has a lot of potential. I’ve offered to give technical advice and write any technical aspects of the proposal as and when required, and their plan is to submit the proposal in the autumn.
My final major task for the week was to continue to work on the Ramsay ‘Gentle Shepherd’ data. I overhauled the filter options that I implemented last week so they now work in a less confusing way when multiple types are selected. I’ve also imported the updated spreadsheet, taking the opportunity to trim whitespace to cut down on strange duplicates in the filter options. There are still some typos that need to be fixed in the spreadsheet, though (e.g. we have ‘Glagsgow’ and ‘Glagsow’), plus some dates still need to be fixed.
I then created an interactive map for the project and have incorporated the data for which there are latitude and longitude values. As with the Edinburgh Gazetteer map of reform societies (https://edinburghgazetteer.glasgow.ac.uk/map-of-reform-societies/) the number of performances at a venue is displayed in the map marker. Hovering over a marker shows info about the venue, while clicking on it opens a list of performances. Note that when zoomed out it can be difficult to make out individual markers, but we can’t really use clustering as on the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/) because this would get confusing: we’d have clustered numbers representing the number of markers in a cluster and then individual markers with a number representing the number of performances. I guess we could remove the number of performances from the marker and just have this in the tooltip and / or popup, but it is quite useful to see all the numbers on the map. Here’s a screenshot of how the map currently looks:
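For reference, a rough sketch of the marker approach using Leaflet's divIcon; the variable names, CSS class and data fields here are assumptions:

```javascript
// Render the performance count inside each marker, with venue info in a
// tooltip and the performance list in a popup. Assumes 'map' is the Leaflet
// map and 'venues' the imported venue data; field names are assumptions.
venues.forEach(function (venue) {
  const icon = L.divIcon({
    className: 'performance-marker',
    html: '<span>' + venue.performances.length + '</span>'
  });
  L.marker([venue.lat, venue.lng], { icon: icon })
    .bindTooltip(venue.name)
    .bindPopup(venue.performances.map(function (p) {
      return p.year + ': ' + p.title;
    }).join('<br>'))
    .addTo(map);
});
```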
I still need to migrate all of this to the University’s T4 system, which I aim to tackle next week.
Also this week I had discussions about migrating an externally hosted project website to Glasgow for Thomas Clancy. I received a copy of the files and database for the website and have checked over things and all is looking good. I also submitted a request for a temporary domain and I should be able to get a version of the site up and running next week. I also regenerated a list of possible duplicate authors in the Books and Borrowing system after the team had carried out some work to remove duplicates. I will be able to use the spreadsheet I have now to amalgamate duplicate authors, a task which I will tackle next week.
Week Beginning 9th May 2022
I spent most of the week continuing with the Speak For Yersel website, which is now nearing completion. A lot of my time was spent tweaking things that were already in place, and we had a Zoom call on Wednesday to discuss various matters too. I updated the ‘explore more’ age maps so they now include markers for young and old who didn’t select ‘scunnered’, meaning people can get an idea of the totals. I also changed the labels slightly and the new data types have been given two shades of grey and smaller markers, so the data is there but doesn’t catch the eye as much as the data for the selected term. I’ve updated the lexical ‘explore more’ maps so they now actually have labels and the ‘darker dots’ text (which didn’t make much sense for many maps) has been removed. Kinship terms now allow for two answers rather than one, which took some time to implement in order to differentiate this question type from the existing ‘up to 3 terms’ option. I also updated some of the pictures that are used and added in an ‘other’ option to some questions. I also updated the ‘Sounds about right’ quiz maps so that they display different legends that match the question words rather than the original questionnaire options. I needed to add in some manual overrides to the scripts that generate the data for use in the site for this to work.
I also added in proper text to the homepage and ‘about’ page. The former included a series of quotes above some paragraphs of text and I wrote a little script that highlighted each quote in turn, which looked rather nice. This then led onto the idea of having the quotes positioned on a map on the homepage instead, with different quotes in different places around Scotland. I therefore created an animated GIF based on some static map images that Mary had created and this looks pretty good.
I then spent some time researching geographical word clouds, which we had been hoping to incorporate into the site. After much Googling it would appear that there is no existing solution that does what we want, i.e. take a geographical area and use this as the boundaries for a word cloud, featuring different coloured words arranged at various angles and sizes to cover the area. One potential solution that I was pinning my hopes on was this one: https://github.com/JohnHenryEden/MapToWordCloud which promisingly states “Turn GeoJson polygon data into wordcloud picture of similar shape.”. I managed to get the demo code to run, but I can’t get it to actually display a word cloud, even though the specifications for one are in the code. I’ve tried investigating the code but I can’t figure out what’s going wrong. No errors are thrown and there’s very little documentation. All that happens is a map with a polygon area is displayed – no word cloud.
The word cloud aspects of the above are based on another package here: https://npm.io/package/wordcloud and this package allows you to specify a shape to use as an outline for the cloud; one of the examples shows words taking up the shape of Taiwan: https://wordcloud2-js.timdream.org/#taiwan However, this is a static image rather than an interactive map – you can’t zoom into it or pan around it. One possible solution may be to create images of our regions, generate static word cloud images as with the above and then stitch the images together to form a single static map of Scotland. This would be a static image, though, and not comparable to the interactive maps we use elsewhere in the website. Programmatically stitching the individual region images together might also be quite tricky. I guess another option would be to just allow users to select an individual region and view the static word cloud (dynamically generated based on the data available when the user selects to view it) for the selected region, rather than joining them all together.
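For context, the basic wordcloud2.js call looks something like the sketch below; the region-shaped masking used in the Taiwan demo needs additional set-up that isn't shown here, and the word list is placeholder data rather than anything from the project:

```javascript
// Generate a static word cloud in a container when a region is selected.
// The word list below is purely placeholder data.
WordCloud(document.getElementById('region-cloud'), {
  list: [
    ['scunnered', 42],
    ['example-word-two', 17],
    ['example-word-three', 9]
  ],
  gridSize: 8,      // density of the placement grid
  weightFactor: 4,  // multiplier from weight to font size
  rotateRatio: 0.4  // proportion of words drawn at an angle
});
```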
I also looked at some further options that Mary had tracked down. The word cloud on a leaflet map (http://hourann.com/2014/js-devs-dont-get-lost/leaflet-wordcloud.html?sydney) only uses a circle for the boundaries of the word cloud. All of the code is written around the use of a circle (e.g. using diameters to work out placement) so couldn’t really be adapted to work with a complex polygon. We could work out a central point for each region and have a circular word cloud positioned at that point, but we wouldn’t be able to make the words fill the entire region. The second of Mary’s links (https://www.jasondavies.com/wordcloud/) as far as I can tell is just a standard word cloud generator with no geographical options. The third option (https://github.com/peterschretlen/leaflet-wordcloud) has no demo or screenshot or much information about it and I’m afraid I can’t get it to work.
The final option (https://dagjomar.github.io/Leaflet.ParallaxMarker/) is pretty cool but it’s not really a word cloud as such. Instead it’s a bunch of labels set to specific lat/lng points and given different levels which sets their size and behaviour on scroll. We could use this to set the highest rated words to the largest level with lower rated words at lower level and position each randomly in a region, but it’s not really a word cloud and it would be likely that words would spill over into neighbouring regions.
Based on the limited options that appear to be out there, I think creating a working, interactive map-based word cloud would be a research project in itself and would take far more time than we have available.
Later on in the week Mary sent me the spreadsheet she’d been working on to list settlements found in postcode areas and to link these areas to the larger geographical regions we use. This is exactly what we needed to fill in the missing piece in our system and I wrote a script that successfully imported the data. For our 411 areas we now have 957 postcode records and 1638 settlement records. After that I needed to make some major updates to the system. Currently a person is associated with an area (e.g. ‘Aberdeen Southwest’) but I need to update this so that a person is associated with a specific settlement (e.g. ‘Ferryhill, Aberdeen’), which is then connected to the area and from the area to one of our 14 regions (e.g. ‘North East (Aberdeen)’).
I updated the system to make these changes and updated the ‘register’ form, which now features an autocomplete for the location – start typing a place and all matches appear. Behind the scenes the location is saved and connected up to areas and regions, meaning we can now start generating real data, rather than a person being assigned a random area. The perception follow-on now connects the respondent up with the larger region when selecting ‘listener is from’, although for now some of this data is not working.
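A minimal sketch of the location autocomplete on the registration form; the endpoint URL, element IDs and response shape are all assumptions:

```javascript
// As the user types, fetch matching settlements and fill a <datalist>.
// Behind the scenes each settlement is linked to an area and region.
const input = document.getElementById('location');
const list = document.getElementById('location-options'); // a <datalist>
input.addEventListener('input', async function () {
  if (input.value.length < 2) return; // wait until there's enough to match
  const res = await fetch('/api/settlements?q=' + encodeURIComponent(input.value));
  const matches = await res.json(); // e.g. [{ name: 'Ferryhill, Aberdeen' }]
  list.innerHTML = matches.map(function (m) {
    return '<option value="' + m.name + '"></option>';
  }).join('');
});
```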
I then needed to further update the registration page to add in an ‘outside Scotland’ option so people who did not grow up in Scotland can use the site. Adding in this option actually broke much of the site: registration requires an area with a GeoJSON shape associated with the selected location, otherwise it fails, and the submission of answers requires this shape in order to generate a random marker point, which then failed when the shape wasn’t present. I updated the scripts to fix these issues, meaning an answer submitted by an ‘outside’ person has a zero for both latitude and longitude, but then I also needed to update the script that gets the map data to ensure that none of these ‘outside’ answers were returned in any of the data used in the site (both for maps and for non-map visualisations such as the sliders). So, much has changed and hopefully I haven’t broken anything whilst implementing these changes. It does mean that ‘outside’ people can now be included and we can export and use their data in future, even though it is not used in the current site.
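The real system does this server-side; the sketch below, using Turf.js purely as an illustration, shows the logic described above (the library choice and field names are my assumptions, not the project's):

```javascript
// An 'outside Scotland' respondent has no area shape, so their answers are
// stored with 0/0 coordinates and filtered out of all map and chart data.
const turf = require('@turf/turf');

function randomPointInArea(areaGeoJson) {
  if (!areaGeoJson) return { lat: 0, lng: 0 }; // the 'outside Scotland' case
  const [minX, minY, maxX, maxY] = turf.bbox(areaGeoJson);
  let point;
  do { // rejection-sample within the bounding box until we land in the shape
    point = [minX + Math.random() * (maxX - minX),
             minY + Math.random() * (maxY - minY)];
  } while (!turf.booleanPointInPolygon(point, areaGeoJson));
  return { lat: point[1], lng: point[0] };
}

// When building data for the maps and sliders, exclude the 'outside' answers.
function plottableAnswers(answers) {
  return answers.filter(a => !(a.lat === 0 && a.lng === 0));
}
```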
Further tweaks I implemented this week included: changing the font sizes of some headings and buttons; renaming the ‘activities’ and ‘more’ pages as requested; adding ‘back’ buttons from all ‘activity’ and ‘more’ pages back to the index pages; adding an intro page to the click exercise as previously it just launched into the exercise whereas all others have an intro. I also added summary pages to the end of the click and perception activities with links through to the ‘more’ pages and removed the temporary ‘skip to quiz’ option. I also added progress bars to the click and perception activities. Finally, I switched the location of the map legend from top right to top left as I realised when it was in the top right it was always obscuring Shetland whereas there’s nothing in the top left. This has meant I’ve had to move the region label to the top right instead.
Also this week I continued to work on the Allan Ramsay ‘Gentle Shepherd’ performance data. I added in faceted browsing to the tabular view, adding in a series of filter options for location, venue, adaptor and such things. You can select any combination of filters (e.g. multiple locations and multiple years in combination). When you select an item of one sort the limit options of other sorts update to only display those relevant to the limited data. However, the display of limiting options can get a bit confusing once multiple limiting types have been selected. I will try and sort this out next week. There are also multiple occurrences of items in the limiting options (e.g. two Glasgows) because the data has spaces in some rows (‘Glasgow’ vs ‘Glasgow ‘) and I’ll need to see about trimming these out next time I import the data.
Also this week I arranged for the old DSL server to be taken offline, as the new website has now been operating successfully for two weeks. I also had a chat with Katie Halsey about timescales for the development of the Books and Borrowing front-end. I then imported a new disordered paediatric speech dataset into the Speech Star website, which included around double the number of records, new video files and a new ‘speaker code’ column. Finally, I participated in a Zoom call for the Scottish Place-Names database where we discussed the various place-names surveys that are in progress and the possibility of creating an overarching search across all systems.
Week Beginning 2nd May 2022
Monday was the May Day holiday so it was a four-day week. I spent three of the available days working on the Speak For Yersel project. I completed work on the age-based questions for the lexical follow-on section. We wanted to split responses based on the age of the respondent, but I had a question about this: should the age filters be fixed or dynamic? We say 18 and younger / 60 and older, but we don’t register ages for users, we register dates of birth. I can therefore make the age filters fixed (i.e. birth >=2004 for 18, birth <=1962 for 60) or dynamic (e.g. birth >= currentyear-18 and birth <= currentyear-60). However, each of these approaches has issues. With the former, the boundaries will change with each passing year. With the latter, we end up losing data with each passing year (if someone is 18 when they submitted their data in 2022 then their data will be automatically excluded next year). I realised that there is a third way: when a person registers I log the exact time of registration, so I can ascertain their age at the point when they registered and this will never change. I decided to do this instead, although it does mean that the answers of someone who is 18 today will be lumped in with the answers of someone who is 18 in 10 years’ time, which might cause issues. However, we can always change how the age boundaries work at a later date. Below is a screenshot of one of the date questions (more data is obviously still needed):
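A small sketch of that third approach, computing the age once from the date of birth and the stored registration timestamp (the field formats shown here are assumptions):

```javascript
// Work out a respondent's age at the moment they registered, so the
// 'under 19' / '60 and over' groupings never shift as years pass.
function ageAtRegistration(dateOfBirth, registeredAt) {
  const dob = new Date(dateOfBirth);
  const reg = new Date(registeredAt);
  let age = reg.getFullYear() - dob.getFullYear();
  // Knock a year off if the birthday hadn't yet happened at registration.
  if (reg.getMonth() < dob.getMonth() ||
      (reg.getMonth() === dob.getMonth() && reg.getDate() < dob.getDate())) {
    age--;
  }
  return age;
}

// e.g. bucket the respondent once, at registration time
const age = ageAtRegistration('2004-03-15', '2022-05-02T10:30:00Z');
const group = age <= 18 ? 'young' : age >= 60 ? 'old' : 'other';
```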
Whilst working on this I realised there is another problem with this type of question: unless we have equal numbers of young and old respondents, isn’t it likely that the data visualised on the map will be misleading? Say we have 100 ‘older’ respondents but 1,000 ‘younger’ ones due to us targeting school children. If 50% of the older respondents say ‘scunnered’ then there will be 50 ‘older’ markers on the map. If 10% of the younger respondents say ‘scunnered’ then there will be 100 ‘younger’ markers on the map, meaning our answer ‘older’ (which is marked as ‘correct’) will look wrong even though statistically it is correct. I’m not sure how we can get around this unless we also plot the markers for each age group who don’t use the form, so as to let people see the total number of people in each group, perhaps using a smaller marker and / or a lighter shade for the people who didn’t say the form. I raised this issue with the team and this is the approach we will probably take.
I then moved onto the follow-on activities for the ‘Sounds about right’ section. This involved creating a ‘drag and drop’ feature where possible answers need to be dropped into boxes. The mockup suggested that the draggable boxes should disappear from the list of options when dropped elsewhere, but I’ve changed it so that the choices don’t disappear from the list; instead the action copies the contents to the dotted area when you drop your selection. The reason I’ve done it this way is that if the entire contents move over we could end up with someone dropping several options into one box, or if they drop an option into the wrong box they would then have to drag it from the wrong box into the right one before they can try another word in the same box, and it can all get very messy (e.g. if there are several words dropped into one box then do we consider this ‘correct’ if one of the words is the right one?). This way keeps things a lot simpler. However, it does mean the words the user has already successfully dropped still appear as selectable in the list, which might confuse people, so I could disable or remove an option once it’s been correctly placed. Below is a screenshot of the activity with one of the options dropped:
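A minimal sketch of the copy-on-drop behaviour described above; the real implementation may well use a library, and the class names and mark-up are assumptions:

```javascript
// Copy the dragged word's text into the drop box rather than moving the
// element, so the option remains available in the list. Assumes the word
// elements have draggable="true" in the HTML.
document.querySelectorAll('.draggable-word').forEach(function (word) {
  word.addEventListener('dragstart', function (e) {
    e.dataTransfer.setData('text/plain', word.textContent);
  });
});
document.querySelectorAll('.drop-box').forEach(function (box) {
  box.addEventListener('dragover', function (e) { e.preventDefault(); });
  box.addEventListener('drop', function (e) {
    e.preventDefault();
    box.textContent = e.dataTransfer.getData('text/plain'); // copy, not move
  });
});
```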
The next activity asks people to see whether rules apply to all words with the same sounds by selecting ‘yes’ or ‘no’ for each. I set it up so that the ‘check answers’ button only appears once the user has selected ‘yes’ or ‘no’ for all of the words, and on checking the answers a tick or a cross is added to the right of the ‘yes’ and ‘no’ options. The user must correct their answers and select ‘check answers’ again before the ‘Check answers’ button is replaced with a ‘Next’ button. See a screenshot below:
With these in place I then moved onto the ‘perception’ activity, which I’d started to look into last week. I completed stages 1 and 2 of this activity, allowing the user to rate how they think a person from a region sounds using the seven sliding scales as criteria, as you can see below:
And then rating actual sound clips of speakers from certain areas using the same seven criteria, as the screenshot below shows:
Finally, I created the ‘explore more’ option for the perception activity, which consists of two sections. The first allows the user to select a region and view the average rating given by all respondents for that region, plotted on ‘read only’ versions of the same sliding scales. The team had requested that the scales animate to their new positions when a new region is selected, and although it took me a little bit of time to implement this I got it working in the end and I think it works really well. The second option is very similar, only it allows the user to select both the speaker and the listener, so you can see (for example) how people from Glasgow rate people from Edinburgh. At the moment we don’t have information in the system that links up a user and the broader region, so for now this option is using sample data, but the actual system is fully operational. Below is a screenshot of the first ‘explore’ option:
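A rough sketch of the animation, easing each read-only scale from its current value to the newly selected region's average with requestAnimationFrame (the selector and the shape of the averages data are assumptions):

```javascript
// Ease a slider from its current value to a target value over ~400ms.
function animateSlider(slider, target, duration) {
  const start = parseFloat(slider.value);
  const startTime = performance.now();
  function step(now) {
    const t = Math.min((now - startTime) / (duration || 400), 1);
    slider.value = start + (target - start) * t;
    if (t < 1) requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
}

// When a region is chosen, move each of the seven scales to its average.
function showRegion(regionAverages) {
  const sliders = document.querySelectorAll('.perception-slider');
  regionAverages.forEach(function (avg, i) { animateSlider(sliders[i], avg); });
}
```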
I feel like I’ve made really good progress with the project this week, but there is still a lot more to implement and I’ll continue with this next week.
I spent Friday working on another project, generating some views of performance data relating to performances of The Gentle Shepherd by Allan Ramsay ahead of a project launch at the end of the month. I’d been given a spreadsheet of the data so my first step was to write a little script to extract the data, format it (e.g. extracting years from the dates) and save it as JSON, which I would then use to generate a timeline, a table view and a map-based view. On Friday I completed an initial version of the timeline view and the table view.
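My script may well have been written server-side; purely as an illustration, here is a Node.js sketch of the same preparation step, assuming the spreadsheet has been exported to CSV and that the column names shown here match it:

```javascript
// Read the exported spreadsheet, pull a four-digit year out of the date
// field and write the result out as JSON for the timeline, table and map.
const fs = require('fs');
const { parse } = require('csv-parse/sync');

const rows = parse(fs.readFileSync('gentle-shepherd.csv'), { columns: true });
const performances = rows.map(function (row) {
  const yearMatch = (row.Date || '').match(/\d{4}/);
  return {
    date: row.Date,
    year: yearMatch ? parseInt(yearMatch[0], 10) : null, // extracted year
    venue: row.Venue,
    location: row.Location
  };
});
fs.writeFileSync('performances.json', JSON.stringify(performances, null, 2));
```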
I made the timeline vertical rather than horizontal as there are so many years and so much data that a horizontal timeline would be very long, and these days most people use touchscreens and are more used to scrolling down a page than along a page. I added a ‘jump to year’ feature that lists all of the years as buttons. Pressing on one of these scrolls to the appropriate year. There are rather a lot of years so I’ve hidden them in a ‘Jump to Year’ section. It may be better to have a drop-down list of options instead and I’ll maybe change this. Each year has a header and a dividing line and a ‘top’ button that allows you to quickly scroll back to the top of the timeline. Each item in the timeline is listed in a fixed-width box, with multiple boxes per row depending on your screen width and the data available. Currently all fields are displayed, but this can be changed.
The table view displays all of the data in a table. You can click on a column heading to sort the data by that heading. Pressing a heading a second time reverses the order. I still need to add in the filter options to the table view and then work on the map view once I’m given the latitude and longitude data that is still needed for this view to work. I’ll continue with this next week.
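The sorting itself is straightforward; a compact sketch of the click-to-sort, click-again-to-reverse behaviour (the performances array, renderTable helper and field names are assumptions):

```javascript
// Sort by the clicked column; clicking the same column again reverses it.
// Assumes 'performances' is the array loaded from the JSON data.
let sortField = null;
let ascending = true;
function sortBy(field) {
  ascending = field === sortField ? !ascending : true;
  sortField = field;
  performances.sort(function (a, b) {
    if (a[field] === b[field]) return 0;
    return (a[field] > b[field] ? 1 : -1) * (ascending ? 1 : -1);
  });
  renderTable(performances); // re-draw the table body (assumed helper)
}
```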
Also this week I made a couple of minor tweaks to the DSL website and had some discussions with the DSL people about the SLD data and the fate of the old DSL website. I also updated some of the data for the Books and Borrowing project and had a chat with Thomas Clancy about hosting an external website that is in danger of disappearing.
Week Beginning 25th April 2022
We launched the new version of the DSL website on Tuesday this week, which involved switching the domain name to point at the new server where I’d been developing the new site. When we’ve done this previously (e.g. for the Anglo-Norman Dictionary) the switchover has been pretty speedy, but this time it took about 24 hours for the DNS updates to propagate, during which time the site was working for some people and not for others. This is because there is a single SSL certificate for the dsl.ac.uk domain, and as it was moved to the new server, the site on the old server (which was still being accessed by people whose ISPs had not updated their domain name servers) was displaying a certificate error. This was all a bit frustrating as the problem was out of our hands, but thankfully everything was working normally again by Wednesday.
I made a few final tweaks to the site this week too, including updating the text that is displayed when too many results are returned, updating the ‘cite this entry’ text, fixing a few broken links and fixing the directory permissions on the new site to allow file uploads. I also gave some advice about the layout of a page for a new Scots / Polish app that the DSL people are going to publish.
I spent almost all of the rest of the week working on the Speak For Yersel project, for which I still have an awful lot to do in a pretty short period of time, as we need to pilot the resource in schools during the week of the 13th of June and need to send it out to other people for testing and to populate it with initial data before then. We had a team meeting on Thursday to go through some of the outstanding tasks, which was helpful.
This week I worked on the maps quite a bit, making the markers smaller and giving them a white border to help them stand out. I updated the rating colours as suggested, although I think we might need to change some of the shades used for ratings and words, as after using the maps quite a bit I personally find it almost impossible to differentiate some of the shades, as you can see in the screenshot below. We have all the colours of the rainbow at our disposal, and while I can appreciate why shades are preferred from an aesthetic point of view, in terms of usability it seems a bit silly to me. I remember having this discussion with SCOSYA too. I think it is MUCH easier to read the maps when different colours are used, as with Our Dialects (e.g. https://www.ourdialects.uk/maps/bread/).
As you can also see from the above screenshot, I implemented the map legends as well, with only the options that have been chosen and have data appearing in the legend. Options appear with their text, a coloured spot so you can tell which option is which, and a checkbox that allows you to turn on / off a particular answer, which I think will be helpful in differentiating the data once the map fills up. For the ‘sound choice’ questions a ‘play’ button appears next to each option in the legend. I then ensured that the maps work for the quiz questions too: rather than showing a map of answers submitted for the quiz question the maps now display the data for the associated questionnaire (e.g. the ‘Gonnae you’ map). Maps are also now working for the ‘Explore more’ section too. I also added in the pop-up for ‘Attribution and Copyright’ (the link in the bottom right of the map).
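A rough sketch of how the legend checkboxes can drive the map, assuming each answer option's markers sit in their own Leaflet layer group (class names and the way options are keyed are assumptions):

```javascript
// optionLayers maps an answer option to a L.layerGroup of its markers,
// e.g. optionLayers['Gonnae you'] = L.layerGroup(markers).addTo(map);
// 'map' is assumed to be the existing Leaflet map instance.
const optionLayers = {};

document.querySelectorAll('.legend-option input[type="checkbox"]')
  .forEach(function (box) {
    box.addEventListener('change', function () {
      const layer = optionLayers[box.value];
      if (!layer) return;
      if (box.checked) {
        layer.addTo(map);       // show this answer's markers again
      } else {
        map.removeLayer(layer); // hide this answer's markers
      }
    });
  });
```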
I then added further quiz questions to the ‘Give your word’ exercise, but the final quiz question in the document I was referencing had a very different structure, with multiple specific answer options from several different questions on the same map. I spent about half a day making updates to the system to allow for such a question structure. I needed to update the database structure, the way data is pulled into the website, the way maps are generated, how quiz questions are displayed and how they are processed.
The multi-choice quiz works in a similar way to the multi-choice questionnaire in that you can select more than one answer. Unlike the questionnaire there is no limit to the number of options you can select. When at least one choice is selected a ‘check your answers’ button appears. The map displays all of the data for each of the listed words, even though these come from different questionnaires (this took some figuring out). There are 9 words here and we only have 8 shades so the ninth is currently appearing as red. The map legend lists the words alphabetically, which doesn’t match up with the quiz option order, but I can’t do anything about this (at least not without a lot of hacking about). You can turn off/on map layers to help see the coverage.
When you press on the ‘Check your answers’ button all quiz options are greyed out and your selection is compared to the correct answers. You get a tick for a correct one and a cross for an incorrect one. In addition, any options you didn’t select that are correct are given a tick (in the greyed out button) so you can see what was correct that you missed. If you selected all of the correct answers and didn’t select any incorrect answers then the overall question is marked as correct in the tally that gives your final score. If you missed any correct answers or selected any incorrect ones then this question is not counted as correct overall. Below is a screenshot showing how this type of question works:
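The scoring rule amounts to requiring the selected set of options to match the correct set exactly; a small sketch (field names are assumptions):

```javascript
// A multi-choice quiz question only counts as correct overall if every
// correct option was selected and no incorrect option was selected.
function markQuestion(selectedIds, correctIds) {
  const allCorrectChosen = correctIds.every(function (id) {
    return selectedIds.includes(id);
  });
  const noWrongChosen = selectedIds.every(function (id) {
    return correctIds.includes(id);
  });
  return allCorrectChosen && noWrongChosen;
}
```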
Unfortunately, when we met on Thursday it turned out that Jennifer and Mary were not wanting this question to be presented on one single map, but instead for each answer option to have its own map, meaning the time I spent developing the above was wasted. However, it does mean the question is much more simple, which is probably a good thing. We decided to split the question up into individual questions to make things more straightforward for users and to ensure that getting one of the options incorrect didn’t mean they were marked as getting the entire multi-part question wrong.
Also this week I began implementing the perception questionnaire with seven interactive sliders allowing the user to rate an accent. Styling the sliders was initially rather tricky but thankfully I found a handy resource that allows you to customise a slider and generates the CSS for you (https://www.cssportal.com/style-input-range/). Below is a screenshot of the perception activity as it currently stands:
I also replaced one of the sound recordings and fixed the perception activity layout on narrow screens (as previously on narrow screens the labels ended up positioned at the wrong ends of the slider). I added a ‘continue’ button under the perception activity that is greyed out and added a check to see whether the user has pressed on every slider. If they have then the ‘continue’ button text changes and is no longer greyed out. I also added area names to the top-left corner of the map when you hover over an area, so now no-one will confuse Orkney and Shetland!
We had also agreed to create a ‘more activities’ page and to have follow-on activities and the ‘explore more’ maps situated there. I created a new top-level menu item currently labelled ‘More’. If you click on this you find an index page similar to the ‘Activities’ page. Press on an option (only the first page options work so far) and you’re given the choice to either start the further activities (not functioning yet) or explore the maps. The latter is fully functional. In the regular activities page I then removed the ‘explore more stage’ so that now when you finish the quiz the button underneath your score leads you to the ‘More’ page for the exercise in question. Finally, I began working on the follow-on activities that display age-based maps, but I’ll discuss these in more detail next week.
I also spoke to Laura Rattray and Ailsa Boyd about a proposal they are putting together and arranged a Zoom meeting with them in a couple of weeks and spoke to Craig Lamont about the Ramsay project I’m hopefully going to be able to start working on next week.