Week Beginning 7th February 2022

It’s been a pretty full-on week ahead of the UCU strike action, which begins on Monday.  I spent quite a bit of time working on the Speak For Yersel project, starting with a Zoom call on Monday, after which I continued to work on the ‘click’ map I’d developed last week.  The team liked what I’d created but wanted some changes to be made.  They didn’t like that the area containing the markers was part of the map, meaning you needed to pan the map back to the marker area to grab and move a marker.  Instead they wanted the markers initially stored in a separate section beside the map.  I thought this would be very tricky to implement but decided to investigate anyway, and unfortunately I was proved right.  In the original version the markers are part of the mapping library – all we’re doing is moving them around the map.  Having the icons outside the map means they initially cannot be part of the mapping library; instead they need to be simple HTML elements, and only when they are dragged into the map do they become map markers with latitude and longitude values, ideally with a smooth transition from plain HTML to map icon as the element is dragged from the general website into the map pane.

It took many hours to figure out how this might work and to update the map to implement the new approach.  I discovered that HTML5’s default drag and drop functionality could be used (see this example: https://jsfiddle.net/430oz1pj/197/), which allows you to drag an HTML element and drop it somewhere.  If the element is dropped over the map then a marker can be created at that point.  However, this proved to be more complicated to implement than it looks, as I needed to figure out a way to pass the ID of the HTML marker to the mapping library, and also handle the audio files associated with the icons.  Also, the latitude and longitude generated in the above example were not in any way an accurate representation of the cursor location.  For this reason I integrated a Leaflet plugin that displays the coordinates of the mouse cursor (https://github.com/MrMufflon/Leaflet.Coordinates).  I hid this on the map, but it still runs in the background, allowing my script to grab the latitude and longitude of the cursor at the point where the HTML element is dropped.  I also updated the marker icons to add a number to each one, making it easier to track which icon is which.  This also required me to rework the play and pause audio logic.  With all of this in place I completed ‘v2’ of the click map, and I thought the task was finished until I did some final testing on my iPad and Android phone.  Unfortunately I discovered that the icons don’t drag on touchscreen devices (not even on the touchscreen of my Windows 11 laptop).  This was a major setback, as clearly we need the resource to work on touchscreens.
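
In outline, the approach ended up looking something like the sketch below.  This is purely illustrative rather than the project’s actual code: it assumes Leaflet is already loaded and a div with id ‘map’ exists, the class names and the playOrPauseAudio helper are made up, and the drop position is taken straight from the drop event rather than via the hidden Coordinates control described above.

```javascript
// Rough sketch of the HTML5 drag-and-drop approach (illustrative only).
const map = L.map('map').setView([57.5, -4.5], 6);
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

// Each plain HTML speaker icon is draggable and carries its ID in the drag data.
document.querySelectorAll('.speaker-icon').forEach(function (icon) {
  icon.addEventListener('dragstart', function (e) {
    e.dataTransfer.setData('text/plain', icon.id);
  });
});

// The map container must cancel 'dragover' or the 'drop' event never fires.
map.getContainer().addEventListener('dragover', function (e) {
  e.preventDefault();
});

// On drop, turn the dropped HTML element into a Leaflet marker at the cursor position.
map.getContainer().addEventListener('drop', function (e) {
  e.preventDefault();
  const iconId = e.dataTransfer.getData('text/plain');
  const latLng = map.mouseEventToLatLng(e);
  L.marker(latLng, { draggable: true }).addTo(map)
    .on('click', function () { playOrPauseAudio(iconId); }); // hypothetical audio helper
  document.getElementById(iconId).style.visibility = 'hidden'; // hide the plain HTML icon
});
```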

It turns out HTML5 drag and drop simply does not work with touchscreens, so I’m afraid it was back to the drawing board.  I remembered that I had successfully used drag and drop on touchscreens for the MetaphorIC website (see question 5 on this page: https://mappingmetaphor.arts.gla.ac.uk/metaphoric/exercise.html?id=49).  This website uses a different JavaScript library called jQuery UI (see https://jqueryui.com/draggable/), so I figured I’d integrate this with the SFY website.  However, on doing so and updating the HTML and JavaScript to use the new library, the HTML elements still wouldn’t drag on a touchscreen.  Thankfully I remembered that a further JavaScript library called jQuery UI Touch Punch (https://github.com/furf/jquery-ui-touch-punch) was needed for the drag functionality to work on touchscreens.  With this in place the icons could now be dragged around the screen.  However, getting the jQuery UI library to interact with the Leaflet mapping library also proved to be tricky.  The draggable icons ended up disappearing behind the map pane rather than being dragged over the top of it.  They would then drop in the correct location, but having them invisible until you dropped them was no good.  I fixed this by updating the z-index of the icons (this controls the order in which HTML elements appear on the screen), and finally the icon would glide across the map before being dropped.  But this change also prevented the Leaflet Coordinates plugin from picking up the location of the cursor when the icon was dropped, meaning the icon either appeared on the map in the wrong location or simply disappeared entirely.  I almost gave up at this point, but I decided to go back to the method of positioning the marker found in the first link above – the one that positioned a dropped icon, but in the wrong location.  This method did actually work with my new drag and drop approach, which was encouraging.  I also happened to return to the page that linked to the example (https://gis.stackexchange.com/questions/296126/dragging-and-dropping-images-into-leaflet-map-to-create-image-markers) and found a comment further down the page that noted the incorrect location of the dropped marker and proposed a solution.  After experimenting with this I thankfully discovered that it worked, meaning I could finish work on ‘v3’ of the click map, which is identical to ‘v2’ other than the fact that it works on touchscreens.
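
Again as a rough illustration rather than the real code, the jQuery UI version hangs together along these lines.  It assumes jQuery, jQuery UI and Touch Punch are all loaded, that the Leaflet map object from the previous sketch is available as ‘map’, and the selectors are made up.

```javascript
// Sketch of the jQuery UI drag-and-drop approach that works on touchscreens.
$('.speaker-icon').draggable({
  zIndex: 1000,          // keep the icon above the Leaflet map pane while dragging
  stop: function (event) {
    // Work out where the icon was released relative to the map container,
    // then convert that container point into a latitude/longitude.
    const mapPos = $('#map').offset();
    const point = L.point(event.pageX - mapPos.left, event.pageY - mapPos.top);
    const latLng = map.containerPointToLatLng(point);

    // Only create a marker if the icon was actually dropped inside the map.
    if (map.getBounds().contains(latLng)) {
      L.marker(latLng, { draggable: true }).addTo(map);
      $(this).hide();  // remove the plain HTML icon from the box beside the map
    }
  }
});
```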

I then created a further ‘v4’ version, which has the updated areas (Shetland and Orkney, Western Isles and Argyll are now split) and uses the broader areas around Shetland and the Western Isles as the ‘correct’ areas.  I’ve also updated the style of the marker box and made it so that the ‘View correct locations’ and ‘Continue’ buttons only become active after the user has dragged all of the markers onto the map.
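
The button gating itself is simple enough; something along these lines (a sketch with made-up IDs and a hypothetical markerDropped callback, not the actual code):

```javascript
// Enable the buttons only once every marker has been placed on the map.
const totalMarkers = 8;
let droppedMarkers = 0;

function markerDropped() {  // called each time an icon is successfully dropped
  droppedMarkers++;
  if (droppedMarkers === totalMarkers) {
    document.getElementById('view-correct-btn').disabled = false;
    document.getElementById('continue-btn').disabled = false;
  }
}
```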

The ‘View correct locations’ button also now works again.  The team had also wanted the correct locations to appear on a new map beside the existing one.  Thinking more about this, I really don’t think it’s a good idea.  Introducing another map is likely to confuse people, and on smaller screens the existing map already takes up a lot of space.  A second map would need to appear below the first, and people might not even realise there are two maps as both wouldn’t fit on screen at the same time.  What I’ve done instead is slow down the animation of the markers to their correct locations when the ‘view’ button is pressed, so it’s easier to see which marker is moving where.  I think this, in combination with the markers now being numbered, makes things clearer.  Here’s a screenshot of this ‘v4’ version showing two markers on the map, one correct, the other wrong:

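The slowed-down glide itself is just an interpolation between the marker’s current position and its correct one, roughly like this (a sketch, not the actual implementation; the two-second duration is arbitrary):

```javascript
// Slide a Leaflet marker smoothly from its current position to its correct position.
function slideMarker(marker, targetLatLng, duration = 2000) {
  const start = marker.getLatLng();
  const startTime = performance.now();

  function step(now) {
    const t = Math.min((now - startTime) / duration, 1);  // progress from 0 to 1
    marker.setLatLng([
      start.lat + (targetLatLng.lat - start.lat) * t,
      start.lng + (targetLatLng.lng - start.lng) * t
    ]);
    if (t < 1) {
      requestAnimationFrame(step);
    }
  }
  requestAnimationFrame(step);
}
```
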
There is still the issue of including the transcriptions of the speech.  We’d discussed adding popups to the markers to contain these, but again, the more I think about this the more I reckon it’s a bad idea.  Opening a popup requires a click, and the markers already have a click event (playing / stopping the audio).  We could change the click event after the ‘View correct locations’ button is pressed, so that from that point onwards clicking on a marker opens a popup instead of playing the audio, but I think this would be horribly confusing.  We did talk about maybe always having the markers open a popup when they’re clicked and then having a further button in the popup to play the audio alongside the transcription, but requiring two clicks to listen to the audio is pretty cumbersome.  Plus marker popups are part of the mapping library, so the plain HTML markers outside the map couldn’t have popups, or at least not the same sort.

I wondered if we’re attempting to overcomplicate the map.  I would imagine most school children aren’t even going to bother looking at the transcripts and cluttering up the map with them might not be all that useful.  An alternative might be to have the transcripts in a collapsible section underneath the ‘Continue’ button that appears after the ‘check answers’ button is pressed.  We could have some text saying something like ‘Interested in reading what the speakers said?  Look at the transcripts below’.  The section could be hidden by default and then pressing on it opens up headings for speakers 1-8.  Pressing on a heading then expands a section where the transcript can be read.
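
If we do go down this route the behaviour would be trivial to implement; something like the following (a sketch using jQuery, which the exercise already loads for jQuery UI, with made-up IDs and class names):

```javascript
// Collapsible transcripts section: hidden by default, expandable per speaker.
$('#transcripts').hide();
$('#transcripts-toggle').on('click', function () {
  $('#transcripts').slideToggle();                     // open / close the whole section
});
$('.transcript-heading').on('click', function () {
  $(this).next('.transcript-body').slideToggle();      // expand one speaker's transcript
});
```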

On Tuesday I had a call with the PI and Co-I of the Books and Borrowing project about the requirements for the front-end and the various search and browse functionality it would need to have.  I’d started writing a requirements document before the meeting and we discussed this, plus their suggestions and input from others.  It was a very productive meeting and I continued with the requirements document after the call.  There’s still a lot to put into it, and the project’s data and requirements are awfully complicated, but I feel like we’re making good progress and things are beginning to make sense.

I also made some further tweaks to the speech database for the Speech Star project.  I’d completed an initial version of this last week, including the option to view multiple selected videos side by side.  However, while the videos worked fine in Firefox, in other browsers only the last video loaded in successfully.  It turns out that there’s a limit to the number of open connections Chrome will allow.  If I set the videos so that the content doesn’t preload then all videos work when you press to play them.  However, this introduces a further problem: without preloading, nothing gets displayed where the video appears unless you add in a ‘poster’, which is an image file used as a placeholder, usually a still from the video.  We had these for all of the videos for Seeing Speech, but we don’t have them for the new STAR videos.  I’ve made a couple manually for the test page, but I don’t want to have to manually create hundreds of such images.  I did wonder about doing this via YouTube, as it generates placeholder images, but even this is going to take a long time as you can only upload 15 videos at once to YouTube, then you need to wait for them to be processed, then you need to manually download the image you want.
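
For reference, each video element ends up being set up along these lines (a sketch with made-up file names and container ID, not the site’s actual code):

```javascript
// Build a <video> element that doesn't open a connection until the user presses play.
const video = document.createElement('video');
video.controls = true;
video.preload = 'none';                      // avoid hitting Chrome's open-connection limit
video.poster = 'posters/speaker01_bead.jpg'; // placeholder still shown before playback
video.src = 'videos/speaker01_bead.mp4';
document.getElementById('multivideo-area').appendChild(video);
```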

I found a post that gave some advice on programmatically generating poster images from video files (https://stackoverflow.com/questions/2043007/generate-preview-image-from-video-file), but the PHP library suggested there seemed to require some kind of weird package installer to be installed first in order to function.  The library also required FFmpeg (https://ffmpeg.org/download.html) to be installed, so I decided not to bother with the PHP library and just use FFmpeg directly, calling it from the command line via a PHP script and iterating through the hundreds of videos to make the posters.  It worked very well and now the ‘multivideo’ feature works perfectly in all browsers.
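
The script itself is just a loop that calls FFmpeg once per video to grab a single frame.  The version I wrote was PHP, but the same idea sketched in Node.js (with made-up paths and an arbitrary one-second offset) looks like this:

```javascript
// Generate a JPEG poster for every MP4 in a folder by calling FFmpeg on the command line.
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const videoDir = 'videos';
const posterDir = 'posters';

fs.readdirSync(videoDir)
  .filter(f => f.endsWith('.mp4'))
  .forEach(f => {
    const poster = path.join(posterDir, f.replace(/\.mp4$/, '.jpg'));
    // Grab a single frame one second into the video and save it as the poster image.
    execFileSync('ffmpeg', ['-ss', '1', '-i', path.join(videoDir, f), '-frames:v', '1', '-y', poster]);
  });
```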

Also this week I had a Zoom call with Ophira Gamliel in Theology about a proposal she’s putting together.  After the call I wrote sections of a Data Management Plan for the proposal and answered several emails over the remainder of the week.  I also had a chat with the DSL people about the switch to the new server that we have scheduled for March.  There’s quite a bit to do with the new data (and the new structures within it) before we go live, so March is going to be quite a busy time.

Finally this week I spent some time on the Anglo-Norman Dictionary.  I finished generating the KWIC data for one of the textbase texts now that the server will allow scripts to execute for a longer time.  I also investigated an issue with the XML proofreader that was giving errors.  It turned out that the errors were being caused by errors in the XML files themselves, and I found out that oXygen offers a very nice batch validation facility that you can run on massive batches of XML files at the same time (see https://www.oxygenxml.com/doc/versions/24.0/ug-editor/topics/project-validation-and-transformation.html).  I also began working with a new test instance of the AND site, through which I am going to publish the new data for the letter S.  There are many thousands of XML files that need to be integrated and it’s taking some time to ensure the scripts that process these work properly, but all is looking encouraging.

I will be participating in the UCU strike action over the coming weeks so that’s all for now.

Week Beginning 31st January 2022

I split my time over many different projects this week.  For the Books and Borrowing project I completed the work I started last week on processing the Wigtown data, writing a little script that amalgamated borrowing records that had the same page order number on any page.  These occurrences arose when multiple volumes of a book were borrowed by a person at the same time and each volume was recorded separately.  My script worked perfectly and many such records were amalgamated.

I then moved onto incorporating images of register pages from Leighton into the CMS.  This proved to be a rather complicated process for one of the four registers, as around 30 pages for the register had already been manually created in the CMS and had borrowing records associated with them.  However, these pages had been created in a somewhat random order, starting at folio number 25 and mostly being in order down to 43, at which point the numbers are all over the place, presumably because the pages were created in the order in which they were transcribed.  As it stands the CMS relies on the ‘page ID’ order when generating lists of pages, as ‘Folio Number’ isn’t necessarily in numerical order (e.g. front / back matter with Roman numerals).  If out-of-sequence pages crop up a lot we may have to think about adding a new ‘page order’ column, or possibly use the ‘previous’ and ‘next’ IDs to ascertain the order in which pages should be displayed.  After some discussion with the team it looks like pages are usually created in page order and Leighton is an unusual case, so we can keep using the auto-incrementing page ID for listing pages in the contents page.  I therefore generated a fresh batch of pages for the Leighton register, then moved the borrowing records from the existing mixed-up pages to the appropriate new pages, then deleted the existing pages so everything is now in order.

For the Speak For Yersel project I created a new exercise whereby users are presented with a map of Scotland divided into 12 geographical areas and there are eight map markers in a box in the sea to the east of Scotland.  Each marker is clickable, and clicking on it plays a sound file.  Each marker is also draggable and after listening to the sound file the user should then drag the marker to whichever area they think the speaker in the sound file is from.  After dragging all of the markers the user can then press a ‘check answers’ button to see which they got right, and press a ‘view correct locations’ button which animates the markers to their correct locations on the map.  It was a lot of fun making the exercise and I think it works pretty well.  It’s still just an initial version and no doubt we will be changing it, but here’s a screenshot of how it currently looks (with one answer correct and the rest incorrect):

For the Speech Star project I made some further changes to the speech database.  Videos no longer autoplay, as requested.  Also, the tables now feature checkboxes beside them.  You can select up to four videos by pressing on these checkboxes.  If you select more than four, the earliest one you pressed is deselected, keeping a maximum of four no matter how many checkboxes you try to click on.  When at least one checkbox is pressed the tab contents will slide down and a button labelled ‘Open selected videos’ will appear.  If you press on this a wider popup will open, containing all of your chosen videos and the metadata about each.  This required quite a lot of reworking to implement, and it seemed to be working well until I realised that while the multiple videos load and play successfully in Firefox, in Chrome and MS Edge (which is based on Chrome) only the final video loads in properly, with only audio playing on the other videos.  I’ll need to investigate this further next week.  But here’s a screenshot of how things look in Firefox:

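The ‘maximum of four’ checkbox logic is essentially a small first-in, first-out queue; roughly like this (a sketch with made-up selectors, not the site’s actual code):

```javascript
// Keep at most four videos selected: ticking a fifth unticks the earliest selection.
let selected = [];  // values of currently ticked checkboxes, oldest first

$('.video-select').on('change', function () {
  const id = this.value;
  if (this.checked) {
    selected.push(id);
    if (selected.length > 4) {
      const oldest = selected.shift();  // drop the earliest selection
      $('.video-select[value="' + oldest + '"]').prop('checked', false);
    }
  } else {
    selected = selected.filter(v => v !== id);
  }
  // Show the 'Open selected videos' button only when at least one video is ticked.
  $('#open-selected').toggle(selected.length > 0);
});
```
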
Also this week I spoke to Thomas Clancy about the Place-names of Iona project, including discussing how the front-end map will function (Thomas wants an option to view all data on a single map, which should work, although we may need to add in clustering at higher zoom levels).  We also discussed how to handle external links and what to do about the elements database, which includes a lot of irrelevant elements from other projects.

I also had an email conversation with Ophira Gamliel in Theology about a proposal she’s putting together that will involve an interactive map, gave some advice to Diane Scott about cookie policy pages, worked with Raymond in Arts IT Support to fix an issue with a server update that was affecting the playback of videos on the Seeing Speech and Dynamic Dialects websites, and updated a script that Fraser Dallachy needed access to for his work on a Scots Thesaurus.

Finally, I had some email conversations with the DSL people and made an update to the interface of the new DSL website to incorporate an ‘abbreviations’ button, which links to the appropriate DOST or SND abbreviations page.