Monday was the May Day holiday so it was a four-day week. I spent three of the available days working on the Speak For Yersel project. I completed work on the age-based questions for the lexical follow-on section. We wanted to split responses based on the age of the respondent, but I had a question about this: should the age filters be fixed or dynamic? We say 18 and younger / 60 and older, but we don’t register ages for users, we register dates of birth. I could therefore make the age filters fixed (i.e. birth >= 2004 for 18, birth <= 1962 for 60) or dynamic (e.g. birth >= currentyear - 18 and birth <= currentyear - 60). However, each of these approaches has issues. With the former, the ages the fixed boundaries represent shift with each passing year. With the latter we end up losing data with each passing year (if someone was 18 when they submitted their data in 2022 then their data will be automatically excluded next year). I realised that there is a third way: when a person registers I log the exact time of registration, so I can ascertain their age at the point when they registered and this will never change. I decided to do this instead, although it does mean that the answers of someone who is 18 today will be lumped in with the answers of someone who is 18 in 10 years’ time, which might cause issues. However, we can always change how the age boundaries work at a later date. Below is a screenshot of one of the age questions (more data is obviously still needed):
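The third approach could be sketched like this (a hypothetical sketch; the function and field names are my own, and it assumes we have a year of birth plus the logged registration timestamp):

```javascript
// Compute the respondent's age at the moment they registered, so the
// age-group buckets never drift as calendar years pass. Using only the
// year of birth makes this an approximation to within a year.
function ageAtRegistration(birthYear, registeredAt) {
  return registeredAt.getFullYear() - birthYear;
}

// Bucket used by the age-split maps: 18 and younger / 60 and older.
// Ages in between are not shown on these maps.
function ageGroup(birthYear, registeredAt) {
  const age = ageAtRegistration(birthYear, registeredAt);
  if (age <= 18) return 'younger';
  if (age >= 60) return 'older';
  return null;
}
```

Because the registration timestamp is fixed, a respondent's bucket never changes, whatever year the map is viewed in.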
Whilst working on this I realised there is another problem with this type of question: unless we have equal numbers of young and old respondents, isn’t it likely that the data visualised on the map will be misleading? Say we have 100 ‘older’ respondents but 1,000 ‘younger’ ones due to us targeting school children. If 50% of the older respondents say ‘scunnered’ then there will be 50 ‘older’ markers on the map. If 10% of the younger respondents say ‘scunnered’ then there will be 100 ‘younger’ markers on the map, meaning our answer ‘older’ (which is marked as ‘correct’) will look wrong even though statistically it is correct. I’m not sure how we can get around this, unless we also plot markers for the people in each age group who don’t use the form, so as to let people see the total number of people in each group, maybe using a smaller marker and / or a lighter shade for the people who didn’t say the form. I raised this issue with the team and this is the approach we will probably take.
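The fair comparison here is between rates rather than raw marker counts; a minimal sketch (function and field names are illustrative):

```javascript
// Share of a group that uses the form, guarding against empty groups.
function usageRate(said, total) {
  return total === 0 ? 0 : said / total;
}

// Which age group uses the form more, proportionally?
// groups: { older: { said, total }, younger: { said, total } }
function higherUsageGroup(groups) {
  const older = usageRate(groups.older.said, groups.older.total);
  const younger = usageRate(groups.younger.said, groups.younger.total);
  return older >= younger ? 'older' : 'younger';
}
```

With the 50/100 versus 100/1,000 example above, this correctly reports ‘older’ even though the map would show twice as many ‘younger’ markers.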
I then moved onto the follow-on activities for the ‘Sounds about right’ section. This involved creating a ‘drag and drop’ feature where possible answers need to be dropped into boxes. The mockup suggested that the draggable boxes should disappear from the list of options when dropped elsewhere, but I’ve changed it so that the choices don’t disappear from the list; instead the action copies the contents to the dotted area when you drop your selection. The reason I’ve done it this way is that if the entire contents moved over we could end up with someone dropping several options into one box, or if they drop an option into the wrong box they would then have to drag it out of that box before they could try another word there, and it can all get very messy (e.g. if there are several words dropped into one box then do we consider this ‘correct’ if one of the words is the right one?). This way keeps things a lot simpler. However, it does mean the words the user has already successfully dropped still appear as selectable in the list, which might confuse people, so I could disable or remove an option once it’s been correctly placed. Below is a screenshot of the activity with one of the options dropped:
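The copy-on-drop behaviour can be sketched as pure state logic (a minimal sketch; in the real activity this would be wired to the browser’s drag-and-drop events, and the names here are illustrative):

```javascript
// Each drop copies the word into the target box, overwriting any
// previous word there, so a box can never hold more than one answer
// and the option list itself is never mutated.
function dropWord(boxes, boxId, word) {
  return { ...boxes, [boxId]: word };
}

// A set of boxes is correct when every box holds its expected word.
function checkBoxes(boxes, answers) {
  return Object.keys(answers).every(id => boxes[id] === answers[id]);
}
```

Because dropping overwrites, a user who puts a word in the wrong box can simply drop another word there to replace it.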
The next activity asks people to see whether rules apply to all words with the same sounds by selecting ‘yes’ or ‘no’ for each. I set it up so that the ‘Check answers’ button only appears once the user has selected ‘yes’ or ‘no’ for all of the words, and on checking the answers a tick or a cross is added to the right of the ‘yes’ and ‘no’ options. The user must correct their answers and press ‘Check answers’ again before the button is replaced with a ‘Next’ button. See a screenshot below:
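The logic behind the button and the ticks and crosses can be sketched as follows (a hypothetical sketch; names are my own):

```javascript
// The 'Check answers' button only appears once every word has a
// yes/no selection (unanswered words are null).
function allAnswered(selections) {
  return selections.every(s => s === 'yes' || s === 'no');
}

// On checking, each word gets a tick or a cross next to its selection.
function markAnswers(selections, correct) {
  return selections.map((s, i) => (s === correct[i] ? 'tick' : 'cross'));
}
```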
With these in place I then moved onto the ‘perception’ activity, which I’d started to look into last week. I completed stages 1 and 2 of this activity, allowing the user to rate how they think a person from a region sounds using the seven sliding scales as criteria, as you can see below:
And then rating actual sound clips of speakers from certain areas using the same seven criteria, as the screenshot below shows:
Finally, I created the ‘explore more’ option for the perception activity, which consists of two sections. The first allows the user to select a region and view the average rating given by all respondents for that region, plotted on ‘read only’ versions of the same sliding scales. The team had requested that the scales animate to their new positions when a new region is selected, and although it took me a little bit of time to implement this I got it working in the end and I think it works really well. The second option is very similar, only it allows the user to select both the speaker and the listener, so you can see (for example) how people from Glasgow rate people from Edinburgh. At the moment we don’t have information in the system that links a user to a broader region, so for now this option uses sample data, but the actual system is fully operational. Below is a screenshot of the first ‘explore’ option:
I feel like I’ve made really good progress with the project this week, but there is still a lot more to implement and I’ll continue with this next week.
I spent Friday working on another project, generating some views of data relating to performances of The Gentle Shepherd by Allan Ramsay ahead of a project launch at the end of the month. I’d been given a spreadsheet of the data, so my first step was to write a little script to extract the data, format it (e.g. extracting years from the dates) and save it as JSON, which I would then use to generate a timeline, a table view and a map-based view. On Friday I completed an initial version of the timeline view and the table view.
I made the timeline vertical rather than horizontal as there are so many years and so much data that a horizontal timeline would be very long, and these days most people use touchscreens and are more used to scrolling down a page than along it. I added a ‘jump to year’ feature that lists all of the years as buttons; pressing one of these scrolls to the appropriate year. There are rather a lot of years so I’ve hidden the buttons in a collapsible ‘Jump to Year’ section. It may be better to have a drop-down list of options instead and I may change this. Each year has a header, a dividing line and a ‘top’ button that allows you to quickly scroll back to the top of the timeline. Each item in the timeline is listed in a fixed-width box, with multiple boxes per row depending on your screen width and the data available. Currently all fields are displayed, but this can be changed.
The table view displays all of the data in a table. You can click on a column heading to sort the data by that heading. Pressing a heading a second time reverses the order. I still need to add in the filter options to the table view and then work on the map view once I’m given the latitude and longitude data that is still needed for this view to work. I’ll continue with this next week.
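The column sort toggle can be sketched as follows (a minimal sketch; names are my own, and the real view renders the sorted rows back into the table):

```javascript
// Sort rows by a column; clicking the same heading again reverses
// the order, while clicking a new heading starts ascending.
function sortRows(rows, column, state) {
  const asc = state.column === column ? !state.asc : true;
  const sorted = [...rows].sort((a, b) =>
    a[column] < b[column] ? -1 : a[column] > b[column] ? 1 : 0);
  if (!asc) sorted.reverse();
  return { rows: sorted, state: { column, asc } };
}
```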
Also this week I made a couple of minor tweaks to the DSL website, had some discussions with the DSL people about the SLD data and the fate of the old DSL website, updated some of the data for the Books and Borrowing project and had a chat with Thomas Clancy about hosting an external website that is in danger of disappearing.
We launched the new version of the DSL website on Tuesday this week, which involved switching the domain name to point at the new server where I’d been developing the new site. When we’ve done this previously (e.g. for the Anglo-Norman Dictionary) the switchover has been pretty speedy, but this time it took about 24 hours for the DNS updates to propagate, during which time the site was working for some people and not for others. This is because there is a single SSL certificate for the dsl.ac.uk domain, and once it was moved to the new server, the site on the old server (which was still being accessed by people whose ISPs had not updated their domain name servers) was displaying a certificate error. This was all a bit frustrating as the problem was out of our hands, but thankfully everything was working normally again by Wednesday.
I made a few final tweaks to the site this week too, including updating the text that is displayed when too many results are returned, updating the ‘cite this entry’ text, fixing a few broken links and fixing the directory permissions on the new site to allow file uploads. I also gave some advice about the layout of a page for a new Scots / Polish app that the DSL people are going to publish.
I spent almost all of the rest of the week working on the Speak For Yersel project, for which I still have an awful lot to do in a pretty short period of time, as we need to pilot the resource in schools during the week of the 13th of June and need to send it out to other people for testing and populate it with initial data before then. We had a team meeting on Thursday to go through some of the outstanding tasks, which was helpful.
This week I worked on the maps quite a bit, making the markers smaller and giving them a white border to help them stand out a bit. I updated the rating colours as suggested, although I think we might need to change some of the shades used for ratings and words, as after using the maps quite a bit I personally find it almost impossible to differentiate some of the shades, as you can see in the screenshot below. We have all the colours of the rainbow at our disposal and while I can appreciate why shades are preferred from an aesthetic point of view, in terms of usability it seems a bit silly to me. I remember having this discussion with SCOSYA too. I think it is MUCH easier to read the maps when different colours are used, as with Our Dialects (e.g. https://www.ourdialects.uk/maps/bread/).
As you can also see from the above screenshot, I implemented the map legends as well, with only the options that have been chosen and have data appearing in the legend. Options appear with their text, a coloured spot so you can tell which option is which, and a checkbox that allows you to turn on / off a particular answer, which I think will be helpful in differentiating the data once the map fills up. For the ‘sound choice’ questions a ‘play’ button appears next to each option in the legend. I then ensured that the maps work for the quiz questions too: rather than showing a map of answers submitted for the quiz question the maps now display the data for the associated questionnaire (e.g. the ‘Gonnae you’ map). Maps are also now working for the ‘Explore more’ section too. I also added in the pop-up for ‘Attribution and Copyright’ (the link in the bottom right of the map).
I then added further quiz questions to the ‘Give your word’ exercise, but the final quiz question in the document I was referencing had a very different structure, with multiple specific answer options from several different questions on the same map. I spent about half a day making updates to the system to allow for such a question structure. I needed to update the database structure, the way data is pulled into the website, the way maps are generated, how quiz questions are displayed and how they are processed.
The multi-choice quiz works in a similar way to the multi-choice questionnaire in that you can select more than one answer. Unlike the questionnaire there is no limit to the number of options you can select. When at least one choice is selected a ‘check your answers’ button appears. The map displays all of the data for each of the listed words, even though these come from different questionnaires (this took some figuring out). There are 9 words here and we only have 8 shades so the ninth is currently appearing as red. The map legend lists the words alphabetically, which doesn’t match up with the quiz option order, but I can’t do anything about this (at least not without a lot of hacking about). You can turn off/on map layers to help see the coverage.
When you press on the ‘Check your answers’ button all quiz options are greyed out and your selection is compared to the correct answers. You get a tick for a correct one and a cross for an incorrect one. In addition, any options you didn’t select that are correct are given a tick (in the greyed out button) so you can see what was correct that you missed. If you selected all of the correct answers and didn’t select any incorrect answers then the overall question is marked as correct in the tally that gives your final score. If you missed any correct answers or selected any incorrect ones then this question is not counted as correct overall. Below is a screenshot showing how this type of question works:
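The marking rule described above amounts to set equality between the selected and the correct options; a sketch (function names are illustrative):

```javascript
// A multi-choice quiz question is only correct overall when the
// selected set exactly matches the correct set: no missed correct
// answers and no incorrect picks.
function markQuiz(selected, correct) {
  const sel = new Set(selected);
  const cor = new Set(correct);
  const missed = [...cor].filter(x => !sel.has(x)); // correct but unpicked
  const wrong = [...sel].filter(x => !cor.has(x));  // picked but incorrect
  return {
    correctOverall: missed.length === 0 && wrong.length === 0,
    missed,
    wrong
  };
}
```

The `missed` list is what drives the ticks shown in the greyed-out buttons the user didn’t select.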
Unfortunately, when we met on Thursday it turned out that Jennifer and Mary didn’t want this question presented on one single map, but instead wanted each answer option to have its own map, meaning the time I spent developing the above was wasted. However, it does mean the question is much simpler, which is probably a good thing. We decided to split the question up into individual questions to make things more straightforward for users and to ensure that getting one of the options incorrect didn’t mean they were marked as getting the entire multi-part question wrong.
Also this week I began implementing the perception questionnaire with seven interactive sliders allowing the user to rate an accent. Styling the sliders was initially rather tricky but thankfully I found a handy resource that allows you to customise a slider and generates the CSS for you (https://www.cssportal.com/style-input-range/). Below is a screenshot of the perception activity as it currently stands:
I also replaced one of the sound recordings and fixed the perception activity layout on narrow screens (as previously on narrow screens the labels ended up positioned at the wrong ends of the slider). I added a ‘continue’ button under the perception activity that is greyed out and added a check to see whether the user has pressed on every slider. If they have then the ‘continue’ button text changes and is no longer greyed out. I also added area names to the top-left corner of the map when you hover over an area, so now no-one will confuse Orkney and Shetland!
We had also agreed to create a ‘more activities’ page and to have follow-on activities and the ‘explore more’ maps situated there. I created a new top-level menu item currently labelled ‘More’. If you click on this you find an index page similar to the ‘Activities’ page. Press on an option (only the first page options work so far) and you’re given the choice to either start the further activities (not functioning yet) or explore the maps. The latter is fully functional. In the regular activities page I then removed the ‘explore more stage’ so that now when you finish the quiz the button underneath your score leads you to the ‘More’ page for the exercise in question. Finally, I began working on the follow-on activities that display age-based maps, but I’ll discuss these in more detail next week.
I also spoke to Laura Rattray and Ailsa Boyd about a proposal they are putting together and arranged a Zoom meeting with them in a couple of weeks and spoke to Craig Lamont about the Ramsay project I’m hopefully going to be able to start working on next week.
I divided most of my time between the Speak For Yersel project and the Dictionaries of the Scots Language this week. For Speak For Yersel I continued to work on the user management side of things. I implemented the registration form (apart from the ‘where you live’ bit, which still requires data) and all now works, uploading the user’s details to our database and saving them within the user’s browser using HTML5 Storage. I added in checks to ensure that a year of birth and gender must be supplied too.
I then updated all activities and quizzes so that the user’s answers are uploaded to our database, tracking the user throughout the site so we can tell which user has submitted what. For the ‘click map’ activity I also record the latitude and longitude of the user’s markers when they check their answers, although a user can check their answers multiple times, and each time the answers will be logged, even if the user has pressed on the ‘view correct locations’ first. Transcript sections and specific millisecond times are stored in our database for the main click activity now, and I’ve updated the interface for this so that the output is no longer displayed on screen.
With all of this in place I then began working on the maps, replacing the placeholder maps and their sample data with maps that use real data. Now when a user selects an option a random location within their chosen area is generated and stored along with their answer. As we still don’t have selectable area data at the point of registration, whenever you register with the site at the moment you are randomly assigned to one of our 411 areas, so by registering and answering some questions test data is then generated. My first two test users were assigned areas south of Inverness and around Dunoon.
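The random-location step could be sketched like this (a hypothetical sketch: the real areas are polygons, so the actual system would also need to check the point falls inside the area rather than just its bounding box; all names are mine):

```javascript
// Pick a pseudo-random point within an area's bounding box. The rng
// parameter is injectable so the sketch is testable.
function randomPointIn(bounds, rng = Math.random) {
  return {
    lat: bounds.south + rng() * (bounds.north - bounds.south),
    lng: bounds.west + rng() * (bounds.east - bounds.west)
  };
}
```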
With location data now being saved for answers I then updated all of the maps on the site to remove the sample data and display the real data. The quiz and ‘explore’ maps are not working properly yet but the general activity ones are. I replaced the geographical areas visible on the map with those as used in the click map, as requested, but have removed the colours we used on the click map as they were making the markers hard to see. Acceptability questions use the four rating colours as were used on the sample maps. Other questions use the ‘lexical’ colours (up to 8 different ones) as specified.
The markers were very small and difficult to spot when there are only a few of them, so I added a little check that alters their size depending on the number of returned markers. If there are fewer than 100 then each marker is size 6. If there are 100 or more then the size is 3. Previously all markers were size 2. I may update the marker size or put more granular size options in place in future. The answer submitted by the current user appears on the map when they view the map, which I think is nice. There is still a lot to do, though. I still need to implement a legend for the map so you can actually tell which coloured marker refers to what, and also provide links to the audio clips where applicable. I also still need to implement the quiz question and ‘explore’ maps as I mentioned. I’ll look into these issues next week.
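The size rule is simple enough to state directly, using the thresholds described above (function name is mine):

```javascript
// Markers grow when data is sparse so they remain visible:
// size 6 below 100 markers, size 3 from 100 upwards.
function markerSize(totalMarkers) {
  return totalMarkers < 100 ? 6 : 3;
}
```

More granular steps could later be added here without touching the map-drawing code.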
For the DSL I processed the latest data export from the DSL’s editing system and set up a new version of the API that uses it. The test DSL website now uses this API and is pretty much ready to go live next week. After that I spent some time tweaking the search facilities of the new site. Rhona had noticed that searches involving two single character wildcards (question marks) were returning unexpected results and I spent some time investigating this.
A second thing was causing further problems: A quick search by default performs an exact match search (surrounded by double quotes) if you ignore the dropdown suggestions and press the search button. But an exact match was set up to be just that – single wildcard characters were not being treated as wildcard characters, meaning a search for “sc??m” was looking for exactly that and finding nothing. I’ve fixed this now, allowing single character wildcards to appear within an exact search.
After fixing this we realised that the new site’s use of the asterisk wildcard didn’t match its use in the live site. Rhona expected a search such as ‘sc*m’ to work on the new site, returning all headwords beginning ‘sc’ and ending in ‘m’. However, in the new site the asterisk wildcard only matches the beginning or end of words, e.g. ‘wor*’ finds all words beginning with ‘wor’ and ‘*ord’ finds all words ending with ‘ord’. You can combine the two with a Boolean search, though: ‘sc* AND *m’ works in exactly the same way as ‘sc*m’.
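For illustration, the combined wildcard semantics (‘?’ matching exactly one character, ‘*’ matching any run of characters, including mid-word) can be expressed as a regular expression. This is just a sketch of the matching rules, not how the Solr-backed search implements them:

```javascript
// Translate a dictionary-style wildcard pattern into an anchored,
// case-insensitive regular expression.
function wildcardToRegExp(pattern) {
  // Escape regex metacharacters, but leave * and ? for conversion.
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  const body = escaped.replace(/\*/g, '.*').replace(/\?/g, '.');
  return new RegExp('^' + body + '$', 'i');
}
```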
However, I decided to enable the mid-wildcard search on the new site in addition to using Boolean AND, because it’s better to be consistent with the old site, plus I also discovered that the full text search in the new site does allow for mid-asterisk searches. I therefore spent a bit of time implementing the mid-asterisk search, both in the drop-down list of options in the quick search box as well as the main quick search and the advanced search headword search.
Rhona then spotted that a full-text mid-asterisk search was listing results alphabetically rather than by relevance. I looked into this and it seems to be a limitation with that sort of wildcard search in the Solr search engine. If you look here https://solr.apache.org/guide/8_7/the-standard-query-parser.html#differences-between-lucenes-classic-query-parser-and-solrs-standard-query-parser the penultimate bullet point says “Range queries (“[a TO z]”), prefix queries (“a*”), and wildcard queries (“a*b”) are constant-scoring (all matching documents get an equal score).”
I’m guessing the original API that powers the live site uses Lucene rather than Solr’s indexing system, but I don’t really know for certain. Also, while the live site’s ordering of mid-asterisk wildcard searches is definitely not alphabetical, it doesn’t really seem to be organising properly by relevance either. I’m afraid we might just have to live with alphabetical ordering for mid-asterisk search results, and I’ll alter the ‘Results are ordered’ statement in such cases to make it clearer that the ordering is alphabetical.
My final DSL tasks for the week were to make some tweaks to the XSLT that processes the layout of bibliographical entries. This involved fixing the size of author names, ensuring that multiple authors are handled correctly and adding in editors’ names for SND items. I also spotted a few layout issues that are still cropping up. The order of some elements is displayed incorrectly and some individual <bibl> items have multiple titles and the stylesheet isn’t expecting this so only displays the first ones. I think I may need to completely rewrite the stylesheet to fix these issues. As there were lots of rules for arranging the bibliography I wrote the stylesheet to pick out and display specific elements rather than straightforwardly going through the XML and transforming each XML tag into a corresponding HTML tag. This meant I could ensure (for example) authors always appear first and titles each get indented, but it is rather rigid – any content that isn’t structured as the stylesheet expects may get displayed in the wrong place or not at all (like the unexpected second titles). I’m afraid I’m not going to have time to rewrite the stylesheet before the launch of the new site next week and this update will need to be added to the list of things to do for a future release.
Also this week I fixed an issue with the Historical Thesaurus which involved shifting a category and its children one level up and helped sort out an issue with an email address for a project using a top-level ‘ac.uk’ domain. Next week I’ll hopefully launch the new version of the DSL on Tuesday and press on with the outstanding Speak For Yersel exercises.
I was back at work on Monday this week after a lovely week off last week. It was only a four-day week, however, as the week ended with the Good Friday holiday, and I’ll also be off next Monday. I had rather a lot to squeeze into the four working days. For the DSL I did some further troubleshooting for integrating Google Analytics with the DSL’s new https://macwordle.co.uk/ site. I also had discussions about the upcoming switchover to the new DSL website, which we scheduled in for the week after next, although later in the week it turned out that all of the data has already been finalised so I’ll begin processing it next week.
I participated in a meeting for the Historical Thesaurus on Tuesday, after which I investigated the server stats for the site, which needed fixing. I also enquired about setting up a domain URL for one of the ‘ac.uk’ sites we host, and it turned out to be something that IT Support could set up really quickly, which is good to know for future reference. I also had a chat with Craig Lamont about a database / timeline / map interface for some data for the Allan Ramsay project that he would like me to put together to coincide with a book launch at the end of May. Unfortunately they want this to be part of the University’s T4 website, which makes development somewhat tricky but not impossible. I had to spend some time familiarising myself with T4 again and arranging for access to the part of the system where the Ramsay content resides. Now I have this sorted I’ve agreed to look into developing this in early May. I also deleted a couple of unnecessary entries from the Anglo-Norman Dictionary after the editor requested their removal and created a new version of the requirements document for the front-end for the Books and Borrowing project following feedback from the project team on the previous version.
The rest of my week was spent on the Speak For Yersel project, for which I still have an awful lot to do and not much time to do it in. I had a meeting with the team on Monday to go over some recent developments, and following that I tracked down a few bugs in the existing code (e.g. a couple of ‘undefined’ buttons in the ‘explore’ maps). I then replaced all of the audio files in the ‘click’ exercise as the team had decided to use a standardised sentence spoken by many different regional speakers rather than having different speakers saying different things. As the speakers were not always from the same region as the previous audio clips I needed to change the ‘correct’ regions and also regenerate the MP3 files and transcript data.
The next step will be to populate the table holding specific locations within a postcode area once this data is available. After that I’ll be able to create the user information form and then I’ll need to update the activities so the selected options are actually saved. In the meantime I began to implement the user management system. A user icon now appears in the top right of every page, either with a green background and a tick if you’ve registered or a red background and a cross if you haven’t. I haven’t created the registration form yet, but have just included a button to register, and when you press this you’ll be registered and this will be remembered in your browser even if you close your browser or turn your device off. Press on the green tick user icon to view the details recorded about the registered person (none yet) and find an option to sign out if this isn’t you or you want to clear your details. If you’re not registered and you try to access the activities the page will redirect you to the registration form as we don’t want unregistered people completing the activities. I’ll continue with this next week, hopefully getting to the point where the choices a user makes are actually logged in the database. After that I’ll be able to generate maps with real data, which will be an important step.
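The persistence side of this could be sketched as follows, with the storage object injected so the sketch runs anywhere (in the site itself this would be the browser’s localStorage, which has the same setItem / getItem / removeItem interface; the key and field names are my own):

```javascript
// Save the registered user's details so they survive the browser
// being closed or the device being turned off.
function register(storage, details) {
  storage.setItem('sfy_user', JSON.stringify(details));
}

// Returns the stored details, or null if no one is registered
// (which is what drives the red-cross vs green-tick user icon).
function currentUser(storage) {
  const raw = storage.getItem('sfy_user');
  return raw ? JSON.parse(raw) : null;
}

// 'Sign out' simply clears the stored details.
function signOut(storage) {
  storage.removeItem('sfy_user');
}
```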
I was on strike last week, and I’m going to be on holiday next week, so I had a lot to try and cram into this week. This was made slightly harder when my son tested positive for Covid again on Tuesday evening. It’s his fourth time, and the last bout was only seven weeks ago. Thankfully he wasn’t especially ill, but he was off school from Wednesday onwards.
I worked on several different projects this week. For the Books and Borrowing project I updated the front-end requirements document based on my discussions with the PI and Co-I and sent it on for the rest of the team to give feedback on. I also uploaded a new batch of register images from St Andrews (more than 2,500 page images taking up about 50GB) and created all of the necessary register and page records. I did the same for a couple of smaller registers from Glasgow, and exported spreadsheets of authors, edition formats and edition languages for the team to edit.
For the Anglo-Norman Dictionary I fixed an issue with the advanced search for citations, where entries with multiple citations were having the same date and reference displayed for each snippet rather than the individual dates and references. I also updated the display of snippets in the search results so they appear in date order.
I also responded to an email from editor Heather Pagan about how language tags are used in the AND XML. There are 491 entries that have a language tag and I wrote a little script to list the distinct languages and a count of the number of times each appears. Here’s the output (bearing in mind that an entry may have multiple language tags):
[Latin] => 79
[M.E.] => 369
[Dutch] => 3
[Arabic] => 12
[Hebrew] => 20
[M.L.] => 4
[Greek] => 2
[A.F._and_M.E.] => 3
[Irish] => 2
[M.E._and_A.F.] => 8
[A-S.] => 3
[Gascon] => 1
There seem to be two places the language tag appears: in a sense, in which case it is displayed in the entry (e.g. https://anglo-norman.net/entry/Scotland), and in <head>, in which case it doesn’t currently seem to get displayed. For example, https://anglo-norman.net/entry/ganeir has:
<head> <language lang="M.E."/>
But ‘M.E.’ doesn’t appear anywhere. I could probably write another little script that moves the sense-level language tags into <head> as above, and then update the XSLT so that this type of language tag gets displayed. Or I could update the XSLT first so we can see how it might look with entries that already have this structure. I’ll need to hear back from Heather before I do more.
For the Dictionaries of the Scots Language I spent quite a bit of time working with the XSLT for the display of bibliographies. There are quite a lot of different structures for bibliographical entries, sometimes where the structure of the XML is the same but a different layout is required, so it proved to be rather tricky to get things looking right. By the end of the week I think I had got everything to display as requested, but I’ll need to see if the team discover any further quirks.
I also wrote a script that extracts citations and their dates from DSL entries. I created a new citations table that stores the dates, the quotes and associated entry and bibliography IDs. The table has 747,868 rows in it. Eventually we’ll be able to use this table for some kind of date search, plus there’s now an easy-to-access record of all of the bib IDs for each entry and entry IDs for each bib, so displaying lists of entries associated with each bibliography should also be straightforward when the time comes. I also added new firstdate and lastdate columns to the entry table, picking out the earliest and latest date associated with each entry and storing these. This means we can add first dates to the browse, something I decided to add in for test purposes later in the week.
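Deriving the new firstdate and lastdate columns from an entry’s citation years amounts to a min/max over the extracted dates; a minimal sketch (names are illustrative):

```javascript
// Given the years extracted from an entry's citations, return the
// earliest and latest, or nulls when no dates could be extracted
// (such entries show no date in the browse).
function dateRange(citationYears) {
  if (citationYears.length === 0) {
    return { firstdate: null, lastdate: null };
  }
  return {
    firstdate: Math.min(...citationYears),
    lastdate: Math.max(...citationYears)
  };
}
```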
I added the first recorded date (the display version not the machine readable version) to the ‘browse’ for DOST and SND. The dates are right-aligned and grey to make them stand out less than the main browse label. This does however make the date of the currently selected entry in the browse a little hard to read. Not all entries have dates available. Any that don’t are entries where either the new date attributes haven’t been applied or haven’t worked. This is really just a proof of concept and I will remove the dates from the browse before we go live, as we’re not going to do anything with the new date information until a later point.
I also processed the ‘History of Scots’ ancillary pages. Someone had gone through these to add in links to entries (hundreds of links), but unfortunately they hadn’t got the structure quite right. The links had been added in Word, meaning regular double quotes had been converted into curly quotes, which are not valid HTML. Also, the links only included the entry ID, rather than the path to the entry page. A couple of quick ‘find and replace’ jobs fixed these issues, but I also needed to update the API to allow old DSL IDs to be passed without also specifying the source. I also set up a Google Analytics account for the DSL’s version of Wordle (https://macwordle.co.uk/).
For the Speak For Yersel project I had a meeting with Mary on Thursday to discuss some new exercises that I’ll need to create. I also spent some time creating the ‘Sounds about right’ activity. This had a slightly different structure to other activities in that the questionnaire has three parts with an introduction for each part. This required some major reworking of the code as things like the questionnaire numbers and the progress bar relied on the fact that there was one block of questions with no non-question screens in between. The activity also featured a new question type with multiple sound clips. I had to process these (converting them from WAV to MP3) and then figure out how to embed them in the questions.
Finally, for the Speech Star project I updated the extIPA chart to improve the layout of the playback speed options. I also made the page remember the speed selection between opening videos – so if you want to view them all ‘slow’ then you don’t need to keep selecting ‘slow’ each time you open one. I also updated the chart to provide an option to switch between MRI and animation videos and added in two animation MP4s that Eleanor had supplied me with. I then added the speed selector to the Normalised Speech Database video popups and then created a new ‘Disordered Paediatric Speech Database’, featuring many videos, filters to limit the display of data and the video speed selector. It was quite a rush to get this finished by the end of the week, but I managed it.
I will be on holiday next week so there will be no post from me then.
With the help of Raymond at Arts IT Support we migrated the test version of the DSL website to the new server this week, and also set up the Solr free-text indexes for the new DSL data. This test version of the site will become the live version when we’re ready to launch in April. The migration went pretty smoothly, although I did encounter an error with the htaccess script that processes URLs for dictionary pages: underscores didn’t need to be escaped on the old server but require a backslash as an escape character on the new one.
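The difference can be illustrated with a hypothetical rewrite rule (these are not the site’s actual rules, just a sketch of the escaping quirk):

```apache
# Hypothetical illustration of the escaping difference.
# Old server: a literal underscore in the pattern worked as-is:
RewriteRule ^dictionary/snd_(.*)$ entry.php?id=$1 [L]
# New server: the underscore had to be escaped with a backslash:
RewriteRule ^dictionary/snd\_(.*)$ entry.php?id=$1 [L]
```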
I also replaced the test version’s WordPress database with a copy of the live site’s WordPress database, plus copied over some of the customisations from the live site such as changes to logos and the content of the header and the footer, bringing the test version’s ancillary content and design into alignment with the live site whilst retaining some of the additional tweaks I’d made to the test site (e.g. the option to hide the ‘browse’ column and the ‘about this entry’ box).
One change to the structure of the DSL data that has been implemented is that dates are now machine readable, with ‘from’, ‘to’ and ‘prefix’ attributes. I had started to look at extracting these for use in the site (e.g. maybe displaying the earliest citation date alongside the headword in the ‘browse’ lists) when I spotted an issue with the data: rather than having a date in the ‘to’ attribute, some entries had an error code – for example there are 6,278 entries that feature a date with ‘PROBLEM6’ as a ‘to’ attribute. I flagged this up with the DSL people and after some investigation they figured out that the date processing script wasn’t expecting to find a circa in a date ending a range (e.g. c1500-c1512). When the script encountered such a case it output an error code instead of a date. The DSL people were able to fix this issue and a new data export was prepared, although I won’t be using it just yet, as they will be sending me a further update before we go live and to save time I decided to just wait until they send this on. I also completed work on the XSLT for displaying bibliography entries and created a new ‘versions and changes’ page, linking to it from a statement in the footer that notes the data version number.
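The fix amounts to accepting an optional circa prefix on both ends of a range. Here is a hedged sketch of such a parser (the ‘from’ and ‘to’ names come from the data’s attributes; the regex, the function name and the boolean circa flags are my assumptions, not the DSL’s actual script):

```javascript
// Parse a citation date or date range, allowing an optional 'c'
// (circa) on BOTH dates – the original script only expected it on the
// first date, so ranges like 'c1500-c1512' produced an error code.
function parseDateRange(str) {
  const m = str.match(/^(c?)(\d{3,4})(?:-(c?)(\d{3,4}))?$/);
  if (m === null) return null; // unparseable date
  return {
    from: parseInt(m[2], 10),
    fromCirca: m[1] === 'c',
    // A single date acts as both 'from' and 'to'.
    to: m[4] ? parseInt(m[4], 10) : parseInt(m[2], 10),
    toCirca: m[3] === 'c'
  };
}
```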
For the ‘Speak For Yersel’ project I made a number of requested updates to the exercises that I’d previously created. I added a border around the selected answer and ensured a button’s ‘active’ state doesn’t persist after it is pressed. I added handy ‘skip to quiz’ and ‘skip to explore’ links underneath the grammar and lexical quizzes so we don’t have to click through all those questions to check out the other parts of the exercise. I italicised ‘you’ and ‘others’ on the activity index pages and fixed a couple of bugs in the grammar questionnaire: previously only the map rolled up, and pressing an answer whilst the map was still animating caused an issue. Now the entire question area animates, so it’s impossible to press an answer when the map isn’t available. I updated the quiz questions so they now have the same layout as the questionnaire, with options on the left and the map on the right, and I made all maps taller to see how this works.
For the ‘Who says what where’ exercise the full sentence text is now included and I made the page scroll to the top of the map if this isn’t visible when you press on an item. I also updated the map and rating colours, although there is still just one placeholder map that loads, so the lexical quiz, with its many possible options, doesn’t yet have a map that represents them. The map still needs some work – e.g. adding in a legend and popups. I also made all requested changes to the lexical question wording, made the ‘v4’ click activity the only version (accessible via the activities menu) and updated the colours for the correct and incorrect click answers.
For the Books and Borrowing project I completed a first version of the requirements for the public website, which has taken a lot of time and a lot of thought to put together, resulting in a document that’s more than 5,000 words long. On Friday I had a meeting with PI Katie and Co-I Matt to discuss the document. We spent an hour going through it and a list of questions I’d compiled whilst writing it, and I’ll need to make some modifications to the document based on our discussions. I also downloaded images of more library registers from St Andrews and one further register from Glasgow that I will need to process when I’m back at work too.
I also spent a bit of time writing a script to export a flat CSV version of the Historical Thesaurus, then made some updates based on feedback from the HT team before exporting a further version. We also spotted that adjectives of ‘parts of insects’ appeared to be missing from the website and I investigated what was going on. It turned out that the main category (which was empty) was missing, and as all of the other data was held in subcategories none of it appeared, because subcategories need a main category to hang off. After adding in a maincat all of the data was restored.
Finally, I did a bit of work for the Speech Star project. Firstly, I fixed a couple of layout issues with the extIPA chart symbols. There was an issue with the diacritics for the symbol that looks like a theta, resulting in them being offset. I reduced the size of the symbol slightly and adjusted the margins of the symbols above and below, which seems to have done the trick. In addition, I did a little bit of research into setting the playback speed and it looks like this will be pretty easy to do whilst still using the default video player. See this page: https://stackoverflow.com/questions/3027707/how-to-change-the-playing-speed-of-videos-in-html5. I added a speed switcher to the popup as a little test to see how it works. The design would still need some work (buttons with the active option highlighted) but it’s good to have a proof of concept. Pressing ‘normal’ or ‘slow’ sets the speed for the current video in the popup and works both when the video is playing and when it’s stopped.
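The speed switcher boils down to the standard HTML5 `video.playbackRate` property. A minimal sketch, assuming ‘slow’ means half speed (only the ‘normal’ and ‘slow’ labels appear in the post; the 0.5 rate and the function names are my assumptions):

```javascript
// Assumed mapping of speed labels to playback rates.
const SPEEDS = { normal: 1.0, slow: 0.5 };

// Resolve a label to a rate, falling back to normal speed.
function resolveSpeed(label) {
  return SPEEDS[label] !== undefined ? SPEEDS[label] : 1.0;
}

// Set the rate on a <video> element. playbackRate can be changed
// whether the video is currently playing or paused.
function setVideoSpeed(video, label) {
  video.playbackRate = resolveSpeed(label);
}
```

In the popup each speed button would simply call `setVideoSpeed(videoElement, 'slow')` (or ‘normal’) in its click handler.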
Also, I was sure that jumping to points in the videos wasn’t working before, but it seems to work fine now – you can click and drag the progress bar and the video jumps to the required point, either when playing or paused. I wonder if there was something in the codec that was previously being used that prevented this. So fingers crossed we’ll be able to just use the standard HTML5 video player to achieve everything the project requires.
I’ll be participating in the UCU strike action for all of next week so it will be the week beginning the 28th of March before I’m back in work again.
This was my first five-day week after the recent UCU strike action and it was pretty full-on, involving many different projects. I spent about a day working on the Speak For Yersel project. I added in the content for all 32 ‘I would never say that’ questions and completed work on the new ‘Give your word’ lexical activity, which features a further 30 questions of several types. This includes questions that have associated images and questions where multiple answers can be selected. For the latter no more than three answers are allowed and this question type needs to be handled differently, as we don’t want the map to load as soon as one answer is selected. Instead the user can select / deselect answers. If at least one answer is selected a ‘Continue’ button appears under the question. When you press on this the answers become read only and the map appears. I made it so that no more than three options can be selected – you need to deselect one before you can add another. I think we’ll need to look into the styling of the buttons, though, as currently ‘active’ (when a button is hovered over or has been pressed and nothing else has yet been pressed) is the same colour as ‘selected’. So if you select ‘ginger’ then deselect it, the button still looks selected until you press somewhere else, which is confusing. Also, if you press a fourth button it looks like it has been selected when in actual fact it’s just ‘active’ and isn’t really selected.
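The select / deselect logic with the cap of three can be sketched as a small pure function (names and the array-based representation are my assumptions; the real code also updates button styling and shows the ‘Continue’ button):

```javascript
const MAX_SELECTIONS = 3;

// Toggle an answer in or out of the current selection, returning the
// new selection. A fourth selection is refused: the click is ignored
// until something is deselected.
function toggleAnswer(selected, answer) {
  if (selected.indexOf(answer) !== -1) {
    // Already selected: deselect it.
    return selected.filter(a => a !== answer);
  }
  if (selected.length >= MAX_SELECTIONS) {
    // At the limit: no change.
    return selected;
  }
  return selected.concat(answer);
}
```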
I also spent about a day continuing to work on the requirements document for the Books and Borrowing project. I haven’t quite finished this initial version of the document but I’ve made good progress and I aim to have it completed next week. Also for the project I participated in a Zoom call with RA Alex Deans and NLS Maps expert Chris Fleet about a subproject of B&B that we’re going to develop for the Chambers Library in Edinburgh. This will feature a map-based interface showing where the borrowers lived and will use a historical map layer for the centre of Edinburgh.
Chris also talked about a couple of projects at the NLS that were very useful to see. The first one was the Jamaica journal of Alexander Innes (https://geo.nls.uk/maps/innes/) which features journal entries plotted on a historical map and a slider allowing you to quickly move through the journal entries. The second was the Stevenson maps of Scotland (https://maps.nls.uk/projects/stevenson/) that provides options to select different subjects and date periods. He also mentioned a new crowdsourcing project to transcribe all of the names on the Roy Military Survey of Scotland (1747-55) maps which launched in February and already has 31,000 first transcriptions in place, which is great. As with the GB1900 project, the data produced here will be hugely useful for things like place-name projects.
I also participated in a Zoom call with the Historical Thesaurus team where we discussed ongoing work. This mainly involves a lot of manual linking of the remaining unlinked categories and looking at sensitive words and categories so there’s not much for me to do at this stage, but it was good to be kept up to date.
I continued to work on the new extIPA charts for the Speech Star project, which I had started on last week. Last week I had some difficulties replicating the required phonetic symbols, but this week Eleanor directed me to an existing site that features the extIPA chart (https://teaching.ncl.ac.uk/ipa/consonants-extra.html). This site uses standard Unicode characters in combinations that work nicely, without requiring any additional fonts. I’ve therefore copied the relevant codes from there (just character codes like b̪ – I haven’t copied anything else from the site). With the symbols in place I managed to complete an initial version of the chart, including pop-ups featuring all of the videos, but unfortunately the videos seem to have been encoded in a format that requires QuickTime for playback. So although the videos are MP4 they’re not playing properly in browsers on my Windows PC – all I can hear is the audio. It’s very odd, as the videos play fine directly from Windows Explorer, but in Firefox, Chrome or MS Edge I just get audio and the static ‘poster’ image. When I access the site on my iPad the videos play fine (as QuickTime is an Apple product). Eleanor is still looking into re-encoding the videos and will hopefully get updated versions to me next week.
I also did a bit more work for the Anglo-Norman Dictionary this week. I fixed a couple of minor issues with the DTD. For example, the ‘protect’ attribute was an enumerated list that could either be ‘yes’ or ‘no’, but for some entries the attribute was present but empty, which is invalid. I looked into whether an enumerated list could also include an empty option (as opposed to the attribute not being present, which is a different matter) but it looks like this is not possible (see for example http://lists.xml.org/archives/xml-dev/200309/msg00129.html). What I did instead was change the ‘protect’ attribute from an enumerated list with options ‘yes’ and ‘no’ to regular character data, meaning the attribute can now contain anything (including being empty). The ‘protect’ attribute is a hangover from the old system and doesn’t do anything whatsoever in the new system, so this shouldn’t really matter. And it does mean that the XML files should now validate.
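In DTD terms the change looks something like the following (a hypothetical illustration – the element name here is invented, and only the ‘protect’ attribute and its yes/no values come from the actual DTD):

```dtd
<!-- Before: an enumerated list, for which an empty value is invalid -->
<!ATTLIST sense protect (yes|no) #IMPLIED>

<!-- After: character data, so any value (including empty) validates -->
<!ATTLIST sense protect CDATA #IMPLIED>
```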
The AND people also noticed that some entries that are present in the old version of the site are missing from the new version. I looked through the database and also older versions of the data from the new site and it looks like these entries have never been present in the new site. The script I ran to originally export the entries from the old site used a list of headwords taken from another dataset (I can’t remember where from exactly) but I can only assume that this list was missing some headwords and this is why these entries are not in the new site. This is a bit concerning, but thankfully the old site is still accessible. I managed to write a little script that grabs the entire contents of the browse list from the old website, separating it into two lists, one for main entries and one for xrefs. I then ran each headword against a local version of the current AND database, separating out homonym numbers then comparing the headword with the ‘lemma’ field in the DB and the hom with the hom. Initially I ran main and xref queries separately, comparing main to main and xref to xref, but I realised that some entries had changed types (legitimately so, I guess) so stopped making a distinction.
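The homonym-splitting step of the comparison script can be sketched like this. It assumes old-site headwords carry a trailing homonym number (e.g. ‘porter 2’); the function name, the regex and the use of 0 for ‘no homonym number’ are my assumptions rather than the script’s actual details:

```javascript
// Split a trailing homonym number off a headword so the lemma and the
// hom can be compared separately against the database's 'lemma' and
// 'hom' fields.
function splitHomonym(headword) {
  const m = headword.match(/^(.*?)\s*(\d+)$/);
  if (m) {
    return { lemma: m[1], hom: parseInt(m[2], 10) };
  }
  // No trailing number: treat as an unnumbered headword.
  return { lemma: headword, hom: 0 };
}
```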
The script output 1,540 missing entries. This initially looks pretty horrifying, but I’m fairly certain most of them are legitimate. There are a whole bunch of weird ‘n’ forms in the old site that have a strange character (e.g. ‘nun⋮abilité’) that are not found in the new site, I guess intentionally so. Also, there are lots of ‘S’ and ‘R’ words, but I think most of these are because of joining or splitting homonyms. Geert, the editor, looked through the output and thankfully it turns out that only a handful of entries are missing, and that these were also missing from the old DMS version of the data, so their omission occurred before I became involved in the project.
Finally this week I worked with a new dataset of the Dictionaries of the Scots Language. I successfully imported the new data and have set up a new ‘dps-v2’ API. There are 80,319 entries in the new data compared to 80,432 in the previous output from DPS. I have updated our test site to use the new API and its new data, although I have not been able to set up the free-text data in Solr yet, so the advanced search for full text / quotations only will not work yet. Everything else should, though.
Also today I began to work on the layout of the bibliography page. I have completed the display of DOST bibs but haven’t started on SND yet. This includes the ‘style guide’ link when a note is present. I think we may still need to tweak the layout, however. I’ll continue to work with the new data next week.
I participated in the UCU strike action from Monday to Wednesday this week, making it a two-day week for me. I’d heard earlier in the week that the paper I’d submitted about the redevelopment of the Anglo-Norman Dictionary had been accepted for DH2022 in Tokyo, which was great. However, the organisers have decided to make the conference online only, which is disappointing, although probably for the best given the current geopolitical uncertainty. I didn’t want to participate in an online only event that would be taking place in Tokyo time (nine hours ahead of the UK) so I’ve asked to withdraw my paper.
On Thursday I had a meeting with the Speak For Yersel project team to discuss the content that the team have prepared and what I’ll need to work on next. I also spent a bit of time looking into creating a geographical word cloud, which would fit word cloud output into a geoJSON polygon shape. I found one possible solution here: https://npm.io/package/maptowordcloud but I haven’t managed to make it work yet.
I also received a new set of videos for the Speech Star project, relating to the extIPA consonants, and I began looking into how to present these. This was complicated by the extIPA symbols not being standard Unicode characters. I did a bit of research into how these could be presented, and found this site http://www.wazu.jp/gallery/Test_IPA.html#ExtIPAChart but here the marks appear to the right of the main symbol rather than directly above or below. I contacted Eleanor to see if she had any other ideas and she got back to me with some alternatives which I’ll need to look into next week.
I spent a bit of time working for the DSL this week too, looking into a question about Google Analytics from Pauline Graham (and finding this very handy suite of free courses on how to interpret Google Analytics here https://analytics.google.com/analytics/academy/). The DSL people had also wanted me to look into creating a Levenshtein distance option, whereby words that are spelled similarly to an entered term are given as suggestions, in a similar way to this page: http://chrisgilmour.co.uk/scots/levensht.php?search=drech. I created a test script that allows you to enter a term and view the SND headwords that have a Levenshtein distance of two or less from your term, with any headwords with a distance of one highlighted in bold. However, Levenshtein is a bit of a blunt tool, and as it stands I’m not sure the results of the script are all that promising. My test term ‘drech’ brings back 84 matches, including things like ‘french’ which is unfortunately only two letters different from ‘drech’. I’m fairly certain my script is using the same algorithm as used by the site linked to above, it’s just that we have a lot more possible matches. However, this is just a simple Levenshtein test – we could also add in further tests to limit (or expand) the output, such as a rule that changes vowels in certain places as in the ‘a’ becomes ‘ai’ example suggested by Rhona at our meeting last week. Or we could limit the output to words beginning with the same letter.
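The test script uses the standard dynamic-programming Levenshtein distance: the minimum number of single-character insertions, deletions and substitutions needed to turn one word into another. A self-contained version (my test script is in PHP; this JavaScript rendering of the same algorithm is for illustration):

```javascript
// Classic Levenshtein distance via a (a.length+1) x (b.length+1)
// table, where dp[i][j] is the distance between the first i characters
// of a and the first j characters of b.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,     // deletion
        dp[i][j - 1] + 1,     // insertion
        dp[i - 1][j - 1] + cost // substitution (or match)
      );
    }
  }
  return dp[a.length][b.length];
}
```

This is why ‘french’ surfaces for ‘drech’: substitute ‘d’ for ‘f’ and insert an ‘n’ and you’re there in two edits, which is exactly why an unfiltered distance threshold of two is such a blunt tool.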
Also this week I had a chat with the Historical Thesaurus people, arranging a meeting for next week and exporting a recent version of the database for them to use offline. I also tweaked a couple of entries for the AND and spent an hour or so upgrading all of the WordPress sites I manage to the latest WordPress version.
I participated in the UCU strike action for all of last week and on Monday and Tuesday this week. I divided the remaining three days between three projects: the Anglo-Norman Dictionary, the Dictionaries of the Scots Language and Books and Borrowing.
For AND I continued to work on the publication of a major update of the letter S. I had deleted all of the existing S entries and had imported all of the new data into our test instance the week before the strike, giving the editors time to check through it all and work on the new data via the content management system of the test instance. They had noticed that some of the older entries hadn’t been deleted, and this was causing some new entries to not get displayed (as both old and new entries had the same ‘slug’ and therefore the older entry was still getting picked up when the entry’s page was loaded). It turned out that I had forgotten that not all S entries actually have a headword beginning with ‘s’ – some begin with brackets or square brackets. There were 119 of these entries still left in the system and I updated my deletion scripts to remove these additional entries, ensuring that only the older versions and not the new ones were removed. This fixed the issues with new entries not appearing. With this task completed and the data approved by the editors we replaced the live data with the data from the test instance.
The update has involved 2,480 ‘main’ entries, containing 4,109 main senses, 1,295 subsenses, 2,627 locutions, 1,753 locution senses, 204 locution subsenses and 16,450 citations. In addition, 4,623 ‘xref’ entries have been created or updated. I also created a link checker which goes through every entry, pulls out all cross references from anywhere in the entry’s XML and checks to see whether each cross-referenced entry actually exists in the system. The vast majority of links were all working fine but there were still a substantial number that were broken (around 800). I’ve passed a list of these over to the editors who will need to manually fix the entries over time.
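The core of the link checker is the extraction step: pull every cross-reference target out of an entry’s XML and flag any that don’t resolve to a known entry. A hedged sketch (the `<xref target="…">` markup shape, the function name and the use of slugs as identifiers are my assumptions about the AND data, not confirmed details):

```javascript
// Extract all cross-reference targets from an entry's XML string and
// return those that do not exist in the set of known entry slugs.
function findBrokenLinks(xml, knownSlugs) {
  const targets = [...xml.matchAll(/<xref\b[^>]*\btarget="([^"]+)"/g)]
    .map(m => m[1]);
  return targets.filter(t => !knownSlugs.has(t));
}
```

Run over every entry, this yields the per-entry lists of broken cross-references for the editors to fix.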
For the DSL I had a meeting on Thursday morning with Rhona, Ann and Pauline to discuss the major update to the DSL’s data that is going to go live soon. This involves a new batch of data exported from their new editing system that will have a variety of significant structural changes, such as a redesigned ‘head’ section, and an overhauled method of recording dates. We will also be migrating the live site to a new server, a new API and a new Solr instance so it’s a pretty major change. We had been planning to have all of this completed by the end of March, but due to the strike we now think it’s best to push this back to the end of April, although we may launch earlier if I manage to get all of the updates sorted before then. Following the meeting I made a few updates to our test instance of the system (e.g. reinstating some superscript numbers from SND that we’d previously hidden) and had a further email conversation with Ann about some ancillary pages.
For the Books and Borrowing project I downloaded a new batch of images for five more registers that had been digitised for us by the NLS. I then processed these, uploaded them to our server and generated register and page records for each page image. I also processed the data from the Royal High School of Edinburgh that had been sent to me in a spreadsheet. There were records from five different registers and it took quite some time to write a script that would process all of the data, including splitting up borrower and book data, generating book items where required and linking everything together so that a borrower and a book only exist once in the system even if they are associated with many borrowing records. Thankfully I’d done this all before for previous external datasets, but the process is always different for each dataset so there was still much in the way of reworking to be done.
I completed my scripts and ran them on a test instance of the database on my local PC to start with. When all was checked and looking good I ran the scripts on the live server to incorporate the new register data into the main project dataset. After completing the task there were 19,994 borrowing records across 1,438 register pages, involving 1,932 books and 2,397 borrowers. Some tweaking of the data may be required (e.g. I noticed there are two ‘Alexander Adam’ borrowers, which seems to have occurred because a space character sometimes preceded the forename) but on the whole it’s all looking good to me.
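The duplicate-borrower problem suggests normalising names before matching. A minimal sketch of the clean-up that would prevent it (not part of the actual import script):

```javascript
// Normalise a borrower name before matching: trim leading/trailing
// whitespace and collapse internal runs of whitespace to one space,
// so ' Alexander  Adam ' and 'Alexander Adam' resolve to one borrower.
function normaliseName(name) {
  return name.trim().replace(/\s+/g, ' ');
}
```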
Next week I’ll be on strike again on Monday to Wednesday.
It’s been a pretty full-on week ahead of the UCU strike action, which begins on Monday. I spent quite a bit of time working on the Speak For Yersel project, starting with a Zoom call on Monday, after which I continued to work on the ‘click’ map I’d developed last week. The team liked what I’d created but wanted some changes to be made. They didn’t like that the area containing the markers was part of the map and you needed to move the map back to the marker area to grab and move a marker. Instead they wanted to have the markers initially stored in a separate section beside the map. I thought this would be very tricky to implement but decided to investigate anyway and unfortunately I was proved right. In the original version the markers are part of the mapping library – all we’re doing is moving them around the map. To have the icons outside the map means the icons initially cannot be part of the mapping library, but instead need to be simple HTML elements, but when they are dragged into the map they then have to become map markers with latitude and longitude values, ideally with a smooth transition from plain HTML to map icon as the element is dragged from the general website into the map pane.
It took many hours to figure out how this might work and to update the map to implement the new way of doing things. I discovered that HTML5’s default drag and drop functionality could be used (see this example: https://jsfiddle.net/430oz1pj/197/), which allows you to drag an HTML element and drop it somewhere. If the element is dropped over the map then a marker can be created at this point. However, this proved to be more complicated than it looks to implement as I needed to figure out a way to pass the ID of the HTML marker to the mapping library, and also handle the audio files associated with the icons. Also, the latitude and longitude generated in the above example was not in any way an accurate representation of the cursor pointer location. For this reason I integrated a Leaflet plugin that displays the coordinates of the mouse cursor (https://github.com/MrMufflon/Leaflet.Coordinates). I hid this on the map, but it still runs in the background, allowing my script to grab the latitude and longitude of the cursor at the point where the HTML element is dropped. I also updated the marker icons to add a number to each one, making it easier to track which icon is which. This also required me to rework the play and pause audio logic. With all of this in place I completed ‘v2’ of the click map and I thought the task was completed until I did some final testing on my iPad and Android phone. And unfortunately I discovered that the icons don’t drag on touchscreen devices (even the touchscreen on my Windows 11 laptop). This was a major setback as clearly we need the resource to work on touchscreens.
I then created a further ‘v4’ version that has the updated areas (Shetland and Orkney, Western Isles and Argyll are now split) and uses the broader areas around Shetland and the Western Isles as the ‘correct’ areas. I’ve also updated the style of the marker box and made it so that the ‘View correct locations’ and ‘Continue’ buttons only become active after the user has dragged all of the markers onto the map.
The ‘View correct locations’ button also now works again. The team had also wanted the correct locations to appear on a new map that would appear beside the existing map. Thinking more about this I really don’t think it’s a good idea. Introducing another map is likely to confuse people and on smaller screens the existing map already takes up a lot of space. A second map would need to appear below the first map and people might not even realise there are two maps as both wouldn’t fit on screen at the same time. What I’ve done instead is to slow down the animation of markers to their correct location when the ‘view’ button is pressed so it’s easier to see which marker is moving where. I think this in combination with the markers now being numbered makes it clearer. Here’s a screenshot of this ‘v4’ version showing two markers on the map, one correct, the other wrong:
There is still the issue of including the transcriptions of the speech. We’d discussed adding popups to the markers to contain these, but again the more I think about this the more I reckon it’s a bad idea. Opening a popup requires a click and the markers already have a click event (playing / stopping the audio). We could change the click event after the ‘View correct locations’ button is pressed, so that from that point onwards clicking on a marker opens a popup instead of playing the audio, but I think this would be horribly confusing. We did talk about maybe always having the markers open a popup when they’re clicked and then having a further button to play the audio in the popup along with the transcription, but requiring two clicks to listen to the audio is pretty cumbersome. Plus marker popups are part of the mapping library so the plain HTML markers outside the map couldn’t have popups, or at least not the same sort.
I wondered if we’re attempting to overcomplicate the map. I would imagine most school children aren’t even going to bother looking at the transcripts and cluttering up the map with them might not be all that useful. An alternative might be to have the transcripts in a collapsible section underneath the ‘Continue’ button that appears after the ‘check answers’ button is pressed. We could have some text saying something like ‘Interested in reading what the speakers said? Look at the transcripts below’. The section could be hidden by default and then pressing on it opens up headings for speakers 1-8. Pressing on a heading then expands a section where the transcript can be read.
On Tuesday I had a call with the PI and Co-I of the Books and Borrowing project about the requirements for the front-end and the various search and browse functionality it would need to have. I’d started writing a requirements document before the meeting and we discussed this, plus their suggestions and input from others. It was a very productive meeting and I continued with the requirements document after the call. There’s still a lot to put into it, and the project’s data and requirements are awfully complicated, but I feel like we’re making good progress and things are beginning to make sense.
I also made some further tweaks to the speech database for the Speech Star project. I’d completed an initial version of this last week, including the option to view multiple selected videos side by side. However, while the videos worked fine in Firefox, in other browsers only the last video loaded in successfully. It turns out that there’s a limit to the number of open connections Chrome will allow. If I set the videos so that the content doesn’t preload then all videos work when you press to play them. However, this does introduce a further problem: without preloading, nothing gets displayed where the video appears unless you add in a ‘poster’, which is an image file to use as a placeholder, usually a still from the video. We had these for all of the videos for Seeing Speech, but we don’t have them for the new STAR videos. I’ve made a couple of posters manually for the test page, but I don’t want to have to manually create hundreds of such images. I did wonder about doing this via YouTube, as it generates placeholder images, but even this would take a long time: you can only upload 15 videos at once to YouTube, then you need to wait for them to be processed, then you need to manually download the image you want.
I found a post that gave some advice on programmatically generating poster images from video files (https://stackoverflow.com/questions/2043007/generate-preview-image-from-video-file), but the PHP library it suggested seemed to require some kind of package installer to be set up before it could function, and it also required FFmpeg (https://ffmpeg.org/download.html) to be installed anyway. I therefore decided not to bother with the PHP library and instead use FFmpeg directly, calling it from the command line via a PHP script that iterated through the hundreds of videos to make the posters. This worked very well and the ‘multivideo’ feature now works perfectly in all browsers.
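The batch approach can be sketched as a small shell loop (my actual script was PHP calling FFmpeg from the command line, and the directory names here are purely illustrative): seek a second into each video and save a single frame as a JPEG to use as the poster.

```shell
#!/bin/sh
# Sketch: generate a poster image (a still) for every MP4 in ./videos,
# writing JPEGs with matching filenames to ./posters.
# Directory names are hypothetical; adjust to suit the real video store.
mkdir -p posters
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg not installed"; exit 0; }
for f in videos/*.mp4; do
  [ -e "$f" ] || continue              # skip if no videos are present
  name=$(basename "$f" .mp4)
  # -ss seeks one second in; -frames:v 1 grabs a single frame
  ffmpeg -y -ss 00:00:01 -i "$f" -frames:v 1 "posters/${name}.jpg" </dev/null
done
```

Seeking slightly past the start (rather than grabbing frame zero) tends to avoid black lead-in frames, which is presumably why a still from a second or so in makes a better placeholder.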
Also this week I had a Zoom call with Ophira Gamliel in Theology about a proposal she’s putting together. After the call I wrote sections of a Data Management Plan for the proposal and answered several emails over the remainder of the week. I also had a chat with the DSL people about the switch to the new server that we have scheduled for March. There’s quite a bit to do with the new data (and the new structures in the new data) before we go live, so March is going to be quite a busy time.
Finally this week I spent some time on the Anglo-Norman Dictionary. I finished generating the KWIC data for one of the textbase texts now that the server will allow scripts to execute for a longer time. I also investigated an issue with the XML proofreader that was giving errors. It turned out that the errors were being caused by errors in the XML files themselves, and I discovered that oXygen offers a very nice batch validation facility that you can run across huge numbers of XML files at once (see https://www.oxygenxml.com/doc/versions/24.0/ug-editor/topics/project-validation-and-transformation.html). I also began working with a new test instance of the AND site, through which I am going to publish the new data for the letter S. There are many thousands of XML files that need to be integrated and it’s taking some time to ensure the scripts that process them work properly, but all is looking encouraging.
I will be participating in the UCU strike action over the coming weeks so that’s all for now.