Week Beginning 7th February 2022

It’s been a pretty full-on week ahead of the UCU strike action, which begins on Monday.  I spent quite a bit of time working on the Speak For Yersel project, starting with a Zoom call on Monday, after which I continued to work on the ‘click’ map I’d developed last week.  The team liked what I’d created but wanted some changes to be made.  They didn’t like that the area containing the markers was part of the map, meaning you needed to move the map back to the marker area to grab and move a marker.  Instead they wanted the markers initially stored in a separate section beside the map.  I thought this would be very tricky to implement but decided to investigate anyway, and unfortunately I was proved right.  In the original version the markers are part of the mapping library – all we’re doing is moving them around the map.  Having the icons outside the map means they initially cannot be part of the mapping library but instead need to be plain HTML elements.  When they are dragged into the map they then have to become map markers with latitude and longitude values, ideally with a smooth transition from plain HTML to map icon as the element is dragged from the general website into the map pane.

It took many hours to figure out how this might work and to update the map to implement the new way of doing things.  I discovered that HTML5’s default drag and drop functionality could be used (see this example: https://jsfiddle.net/430oz1pj/197/), which allows you to drag an HTML element and drop it somewhere.  If the element is dropped over the map then a marker can be created at that point.  However, this proved to be more complicated to implement than it looks, as I needed to figure out a way to pass the ID of the HTML marker to the mapping library and also handle the audio files associated with the icons.  Also, the latitude and longitude generated in the above example were not in any way an accurate representation of the cursor location.  For this reason I integrated a Leaflet plugin that displays the coordinates of the mouse cursor (https://github.com/MrMufflon/Leaflet.Coordinates).  I hid this on the map, but it still runs in the background, allowing my script to grab the latitude and longitude of the cursor at the point where the HTML element is dropped.  I also updated the marker icons to add a number to each one, making it easier to track which icon is which.  This also required me to rework the play and pause audio logic.  With all of this in place I completed ‘v2’ of the click map and thought the task was done, until I did some final testing on my iPad and Android phone.  Unfortunately I discovered that the icons don’t drag on touchscreen devices (not even the touchscreen on my Windows 11 laptop).  This was a major setback, as clearly we need the resource to work on touchscreens.
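A minimal sketch of this first approach, assuming Leaflet and HTML5 drag and drop; the element IDs, class names and the cursor-tracking variable are illustrative rather than the actual project code:

```javascript
// Serialise the data carried from the HTML icon into the map: the marker's
// ID and its associated audio file, packed into the dataTransfer payload.
function makeDragPayload(markerId, audioFile) {
  return JSON.stringify({ id: markerId, audio: audioFile });
}

function readDragPayload(payload) {
  return JSON.parse(payload);
}

// Browser-only wiring, guarded so the helpers above remain usable elsewhere.
if (typeof document !== 'undefined' && typeof L !== 'undefined') {
  let lastCursorLatLng = null;         // stand-in for the Coordinates plugin
  const map = L.map('map').setView([57.5, -4.5], 6);

  map.on('mousemove', function (e) {
    lastCursorLatLng = e.latlng;       // track the cursor as it crosses the map
  });

  document.querySelectorAll('.speaker-icon').forEach(function (icon) {
    icon.addEventListener('dragstart', function (e) {
      e.dataTransfer.setData('text/plain',
        makeDragPayload(icon.id, icon.dataset.audio));
    });
  });

  const mapEl = document.getElementById('map');
  mapEl.addEventListener('dragover', function (e) { e.preventDefault(); });
  mapEl.addEventListener('drop', function (e) {
    e.preventDefault();
    const data = readDragPayload(e.dataTransfer.getData('text/plain'));
    if (lastCursorLatLng) {
      // The real code would also hook up the numbered icon and its audio.
      L.marker(lastCursorLatLng).addTo(map);
    }
  });
}
```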

It turns out HTML5 drag and drop simply does not work with touchscreens, so I’m afraid it was back to the drawing board.  I remembered that I had successfully used drag and drop on touchscreens for the MetaphorIC website (see question 5 on this page: https://mappingmetaphor.arts.gla.ac.uk/metaphoric/exercise.html?id=49).  This website uses a different JavaScript framework called jQuery UI (see https://jqueryui.com/draggable/), so I figured I’d integrate this with the SFY website.  However, after doing so and updating the HTML and JavaScript to use the new library, the HTML elements still wouldn’t drag on a touchscreen.  Thankfully I remembered that a further JavaScript library called jQuery UI Touch Punch (https://github.com/furf/jquery-ui-touch-punch) was needed for the drag functionality to work on touchscreens.  With this in place the icons could now be dragged around the screen.  However, getting the jQuery UI library to interact with the Leaflet mapping library also proved to be tricky.  The draggable icons ended up disappearing behind the map pane rather than being dragged over the top of it.  They would then drop in the correct location, but having them invisible until you dropped them was no good.  I fixed this by updating the z-index of the icons (this controls the order in which HTML elements appear on the screen), and finally the icon would glide across the map before being dropped.  But this also prevented the Leaflet Coordinates plugin from picking up the location of the cursor when the icon was dropped, meaning the icon either appeared on the map in the wrong location or simply disappeared entirely.  I almost gave up at this point, but I decided to go back to the method of positioning the marker found in the first link above – the one that positioned a dropped icon, but in the wrong location.  The method used in this example did actually work with my new drag and drop approach, which was encouraging.
I also happened to return to the page that linked to the example (https://gis.stackexchange.com/questions/296126/dragging-and-dropping-images-into-leaflet-map-to-create-image-markers) and found a comment further down the page that noted the incorrect location of the dropped marker and proposed a solution.  After experimenting with this I thankfully discovered that it worked, meaning I could finish work on ‘v3’ of the click map, which is identical to ‘v2’ other than the fact that it works on touchscreens.
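A sketch of the working combination: jQuery UI ‘draggable’ (with Touch Punch loaded for touchscreens), a high z-index so the icon glides over the map pane, and the drop position converted into a latitude and longitude via the map container’s offset.  The class names, z-index value and handler details are illustrative, not the project’s actual code:

```javascript
// Convert a drop event's client coordinates into a point relative to the
// map container; Leaflet can then turn that point into a lat/lng.
function toContainerPoint(clientX, clientY, containerRect) {
  return {
    x: clientX - containerRect.left,
    y: clientY - containerRect.top
  };
}

// Browser-only wiring (assumes jQuery, jQuery UI, Touch Punch and Leaflet).
if (typeof $ !== 'undefined' && typeof L !== 'undefined') {
  const map = L.map('map').setView([57.5, -4.5], 6);

  $('.speaker-icon').draggable({
    // Leaflet's panes sit at z-indexes in the hundreds, so anything well
    // above that keeps the icon visible while it glides over the map.
    zIndex: 1000,
    stop: function (event) {
      const rect = document.getElementById('map').getBoundingClientRect();
      const pt = toContainerPoint(event.clientX, event.clientY, rect);
      const latlng = map.containerPointToLatLng([pt.x, pt.y]);
      L.marker(latlng).addTo(map);
    }
  });
}
```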

I then created a further ‘v4’ version that has the updated areas (Shetland and Orkney, the Western Isles and Argyll are now split) and uses the broader areas around Shetland and the Western Isles as the ‘correct’ areas.  I’ve also updated the style of the marker box and made it so that the ‘View correct locations’ and ‘Continue’ buttons only become active after the user has dragged all of the markers onto the map.
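One way to gate those buttons is to track placed marker IDs in a set and enable the buttons once the count matches the total; using a set means re-dragging the same marker doesn’t count twice.  A sketch with illustrative names:

```javascript
// Track which markers have been dropped onto the map. place() returns true
// once every marker has been placed at least once.
function makePlacementTracker(totalMarkers) {
  const placed = new Set();
  return {
    place: function (markerId) {
      placed.add(markerId);
      return placed.size === totalMarkers;
    },
    complete: function () { return placed.size === totalMarkers; }
  };
}

// In the drop handler one might then do (browser-only):
// if (tracker.place(markerId)) {
//   document.getElementById('view-correct').disabled = false;
//   document.getElementById('continue').disabled = false;
// }
```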

The ‘View correct locations’ button also now works again.  The team had also wanted the correct locations to appear on a new map that would appear beside the existing map.  Thinking more about this I really don’t think it’s a good idea.  Introducing another map is likely to confuse people and on smaller screens the existing map already takes up a lot of space.  A second map would need to appear below the first map and people might not even realise there are two maps as both wouldn’t fit on screen at the same time.  What I’ve done instead is to slow down the animation of markers to their correct location when the ‘view’ button is pressed so it’s easier to see which marker is moving where.  I think this in combination with the markers now being numbered makes it clearer.  Here’s a screenshot of this ‘v4’ version showing two markers on the map, one correct, the other wrong:
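The slowed-down movement to the correct location can be done as a simple linear interpolation between the dropped and correct positions, calling the Leaflet marker’s setLatLng() at each step.  A sketch, with illustrative step counts and timings:

```javascript
// Linearly interpolate between two lat/lng pairs; t runs from 0 to 1.
function lerpLatLng(from, to, t) {
  return {
    lat: from.lat + (to.lat - from.lat) * t,
    lng: from.lng + (to.lng - from.lng) * t
  };
}

// Move a marker from 'from' to 'to' over a number of steps. With 40 steps
// at 50ms each the glide takes around two seconds.
function animateMarker(marker, from, to, steps, intervalMs) {
  let i = 0;
  const timer = setInterval(function () {
    i++;
    marker.setLatLng(lerpLatLng(from, to, i / steps));
    if (i >= steps) clearInterval(timer);
  }, intervalMs);
}
```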

There is still the issue of including the transcriptions of the speech.  We’d discussed adding popups to the markers to contain these, but again, the more I think about this the more I reckon it’s a bad idea.  Opening a popup requires a click and the markers already have a click event (playing / stopping the audio).  We could change the click event after the ‘View correct locations’ button is pressed, so that from that point onwards clicking on a marker opens a popup instead of playing the audio, but I think this would be horribly confusing.  We did talk about maybe always having the markers open a popup when they’re clicked and then having a further button in the popup to play the audio along with the transcription, but requiring two clicks to listen to the audio is pretty cumbersome.  Plus marker popups are part of the mapping library, so the plain HTML markers outside the map couldn’t have popups, or at least not the same sort.

I wondered if we’re attempting to overcomplicate the map.  I would imagine most school children aren’t even going to bother looking at the transcripts and cluttering up the map with them might not be all that useful.  An alternative might be to have the transcripts in a collapsible section underneath the ‘Continue’ button that appears after the ‘check answers’ button is pressed.  We could have some text saying something like ‘Interested in reading what the speakers said?  Look at the transcripts below’.  The section could be hidden by default and then pressing on it opens up headings for speakers 1-8.  Pressing on a heading then expands a section where the transcript can be read.
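If we went down this route, one lightweight option would be native HTML ‘details’ elements, which give the collapse/expand behaviour without any extra JavaScript.  A sketch that generates the markup from an array of transcripts – the wording and structure are only a suggestion:

```javascript
// Build a collapsible transcript section: an outer <details> wrapping one
// <details> per speaker. Browsers handle the open/close behaviour natively.
function transcriptSectionHtml(transcripts) {
  const inner = transcripts.map(function (text, i) {
    return '<details><summary>Speaker ' + (i + 1) + '</summary>' +
           '<p>' + text + '</p></details>';
  }).join('');
  return '<details><summary>Interested in reading what the speakers said? ' +
         'Look at the transcripts below.</summary>' + inner + '</details>';
}
```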

On Tuesday I had a call with the PI and Co-I of the Books and Borrowing project about the requirements for the front-end and the various search and browse functionality it would need to have.  I’d started writing a requirements document before the meeting and we discussed this, plus their suggestions and input from others.  It was a very productive meeting and I continued with the requirements document after the call.  There’s still a lot to put into it, and the project’s data and requirements are awfully complicated, but I feel like we’re making good progress and things are beginning to make sense.

I also made some further tweaks to the speech database for the Speech Star project.  I’d completed an initial version of this last week, including the option to view multiple selected videos side by side.  However, while the videos worked fine in Firefox, in other browsers only the last video loaded in successfully.  It turns out that there’s a limit to the number of open connections Chrome will allow.  If I set the videos so that the content doesn’t preload then all of the videos work when you press to play them.  However, this does introduce a further problem: without preloading, nothing gets displayed where the video appears unless you add in a ‘poster’, which is an image file used as a placeholder, usually a still from the video.  We had these for all of the videos for Seeing Speech, but we don’t have them for the new STAR videos.  I’ve made a couple of them manually for the test page, but I don’t want to have to manually create hundreds of such images.  I did wonder about doing this via YouTube, as it generates placeholder images, but even this is going to take a long time: you can only upload 15 videos at once to YouTube, then you need to wait for them to be processed, then you need to manually download the image you want.
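The video elements might then be written like this to sidestep Chrome’s connection limit: nothing is preloaded, and the poster image fills the gap until the user presses play.  A sketch with illustrative file paths:

```javascript
// Build a <video> tag that avoids opening a connection until playback:
// preload="none" stops the browser fetching the file up front, and the
// poster image occupies the space the video would otherwise leave blank.
function videoTag(src, poster) {
  return '<video controls preload="none" poster="' + poster + '">' +
         '<source src="' + src + '" type="video/mp4">' +
         '</video>';
}
```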

I found a post that gave some advice on programmatically generating poster images from video files (https://stackoverflow.com/questions/2043007/generate-preview-image-from-video-file), but the PHP library it suggested seemed to require some kind of package installer to be installed first in order to function.  The library also required FFmpeg (https://ffmpeg.org/download.html) to be installed, so I decided not to bother with the PHP library and instead use FFmpeg directly, calling it from the command line via a PHP script and iterating through the hundreds of videos to make the posters.  It worked very well, and now the ‘multivideo’ feature works perfectly in all browsers.
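My script was PHP, but the core of it is just the FFmpeg call, so here is a Node-flavoured sketch of the same idea: grab a single frame one second into each video and save it as a JPEG.  The timing, directory layout and file naming are illustrative:

```javascript
// Build the FFmpeg arguments for one poster: seek one second in (-ss 1),
// extract a single frame (-frames:v 1) and overwrite any existing file (-y).
function posterArgs(videoPath) {
  const poster = videoPath.replace(/\.mp4$/, '.jpg');
  return ['-y', '-ss', '1', '-i', videoPath, '-frames:v', '1', poster];
}

// Iterate over every .mp4 in a directory and generate its poster.
// Requires FFmpeg to be installed and on the PATH.
function makePosters(dir) {
  const { execFileSync } = require('child_process');
  const fs = require('fs');
  const path = require('path');
  fs.readdirSync(dir)
    .filter(function (f) { return f.endsWith('.mp4'); })
    .forEach(function (f) {
      execFileSync('ffmpeg', posterArgs(path.join(dir, f)));
    });
}
```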

Also this week I had a Zoom call with Ophira Gamliel in Theology about a proposal she’s putting together.  After the call I wrote sections of a Data Management Plan for the proposal and answered several emails over the remainder of the week.  I also had a chat with the DSL people about the switch to the new server that we have scheduled for March.  There’s quite a bit to do with the new data (and the new structures in the new data) before we go live, so March is going to be quite a busy time.

Finally this week I spent some time on the Anglo-Norman Dictionary.  I finished generating the KWIC data for one of the textbase texts now that the server will allow scripts to execute for a longer time.  I also investigated an issue with the XML proofreader that was giving errors.  It turned out that the errors were being caused by errors in the XML files themselves, and I found out that oXygen offers a very nice batch validation facility that you can run on massive batches of XML files at the same time (see https://www.oxygenxml.com/doc/versions/24.0/ug-editor/topics/project-validation-and-transformation.html).  I also began working with a new test instance of the AND site, through which I am going to publish the new data for the letter S.  There are many thousands of XML files that need to be integrated and it’s taking some time to ensure the scripts that process these work properly, but all is looking encouraging.

I will be participating in the UCU strike action over the coming weeks so that’s all for now.