I completed an initial version of the Chambers Library map for the Books and Borrowing project this week. It took quite a lot of time and effort to implement the subscription period range slider. Searching for a range when the data also has a range of dates rather than a single date means we needed to make a decision about what data gets returned and what doesn’t. This is because the two ranges (the one chosen as a filter by the user and the one denoting the start and end periods of subscription for each borrower) can overlap in many different ways. For example, the period chosen by the user is 05 1828 to 06 1829. Which of the following borrowers should therefore be returned?
- Borrower’s range is 06 1828 to 02 1829: fully within the selected period, so should definitely be included.
- Borrower’s range is 01 1828 to 07 1828: starts before the selected period and ends within it. Presumably should be included.
- Borrower’s range is 01 1828 to 09 1829: extends beyond the selected period in both directions. Presumably should be included.
- Borrower’s range is 05 1829 to 09 1829: begins during the selected period and ends beyond it. Presumably should be included.
- Borrower’s range is 01 1828 to 04 1828: entirely before the selected period. Should not be included.
- Borrower’s range is 07 1829 to 10 1829: entirely after the selected period. Should not be included.
Basically, if there is any overlap between the selected period and the borrower’s subscription period the borrower will be returned. But this means most borrowers will be returned most of the time. It’s a very different sort of filter from one that focuses on a single date – e.g. filtering the data to only those borrowers whose subscription period *begins* between 05 1828 and 06 1829.
Based on the above assumptions I began to write the logic that would decide which borrowers to include when the range slider is altered. It was further complicated by having to deal with months as well as years. Here’s the logic in full if you fancy getting a headache:
```javascript
// Borrower is included if any of the five overlap cases applies:
if(
	// borrower's range is fully within the selected period
	((mapData[i].sYear>startYear || (mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth<=endMonth) || mapData[i].eYear<endYear)) ||
	// borrower's range extends beyond the selected period in both directions
	((mapData[i].sYear<startYear || (mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth>=endMonth) || mapData[i].eYear>endYear)) ||
	// borrower's range ends within the selected period
	(((mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth) || mapData[i].sYear>startYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth<=endMonth) || mapData[i].eYear<endYear) && ((mapData[i].eYear==startYear && mapData[i].eMonth>=startMonth) || mapData[i].eYear>startYear)) ||
	// borrower's range starts within the selected period and ends at or beyond its end
	(((mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth) || mapData[i].sYear>startYear) && ((mapData[i].sYear==endYear && mapData[i].sMonth<=endMonth) || mapData[i].sYear<endYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth>=endMonth) || mapData[i].eYear>endYear)) ||
	// borrower's range starts before the selected period and is still active at its start
	((mapData[i].sYear<startYear || (mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==startYear && mapData[i].eMonth>=startMonth) || mapData[i].eYear>startYear))
)
```
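For comparison, converting each year/month pair to a single month index collapses all of those cases into one standard interval-overlap test: two ranges overlap if and only if each one starts before the other ends. This is my own simplification for illustration, not the code used on the site:

```javascript
// Flatten a year/month pair into a single month count.
function monthIndex(year, month) {
  return year * 12 + (month - 1);
}

// True if the borrower's subscription period overlaps the filter period at all.
function overlaps(borrower, startYear, startMonth, endYear, endMonth) {
  var bStart = monthIndex(borrower.sYear, borrower.sMonth);
  var bEnd = monthIndex(borrower.eYear, borrower.eMonth);
  var fStart = monthIndex(startYear, startMonth);
  var fEnd = monthIndex(endYear, endMonth);
  return bStart <= fEnd && bEnd >= fStart;
}

// The six example borrowers from above, filtered against 05 1828 - 06 1829:
var examples = [
  { sYear: 1828, sMonth: 6, eYear: 1829, eMonth: 2 },  // fully within: kept
  { sYear: 1828, sMonth: 1, eYear: 1828, eMonth: 7 },  // overlaps the start: kept
  { sYear: 1828, sMonth: 1, eYear: 1829, eMonth: 9 },  // spans the period: kept
  { sYear: 1829, sMonth: 5, eYear: 1829, eMonth: 9 },  // overlaps the end: kept
  { sYear: 1828, sMonth: 1, eYear: 1828, eMonth: 4 },  // entirely before: dropped
  { sYear: 1829, sMonth: 7, eYear: 1829, eMonth: 10 }  // entirely after: dropped
];
var kept = examples.filter(function (b) {
  return overlaps(b, 1828, 5, 1829, 6);
});
```

Filtering the six example borrowers this way keeps the first four and drops the last two, matching the decisions in the bullet list above.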
I also added the subscription period to the popups. The only downside to the range slider is that the occupation marker colours change depending on how many occupations are present during a period, so you can’t always tell an occupation by its colour. I might see if I can fix the colours in place, but it might not be possible.
I also noticed that the jQuery UI sliders weren’t working very well on touchscreens so I installed the jQuery TouchPunch library to fix that (https://github.com/furf/jquery-ui-touch-punch). I also made the library marker bigger and gave it a white border to more easily differentiate it from the borrower markers.
I then moved onto incorporating page images in the resource too. Where a borrower has borrowing records, the relevant pages on which these records are found now appear as thumbnails in the borrower popup. These are generated by the IIIF server based on dimensions passed to it, which is much nicer than having to generate and store thumbnails directly. I also updated the popup to make it wider when required to give more space for the thumbnails. Here’s a screenshot of the new thumbnails in action:
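On the IIIF thumbnails: with the IIIF Image API a thumbnail is just a different URL against the same image, with the requested size encoded in the path, so nothing needs to be generated or stored in advance. A sketch of building such URLs (the server base and identifier below are made up for illustration):

```javascript
// IIIF Image API URL pattern:
//   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
function iiifUrl(base, identifier, size) {
  return base + '/' + identifier + '/full/' + size + '/0/default.jpg';
}

// '!150,150' asks the server for an image that fits within a 150x150 box,
// preserving the aspect ratio; 'max' requests the full-size image.
var thumb = iiifUrl('https://images.example.org/iiif', 'register-3-page-12', '!150,150');
var full = iiifUrl('https://images.example.org/iiif', 'register-3-page-12', 'max');
```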
Clicking on a thumbnail opens a further popup containing a zoomable / pannable image of the page. This proved to be rather tricky to implement. Initially I was going to open a popup in the page (outside of the map container) using a jQuery UI Dialog. However, I realised that this wouldn’t work when the map was being viewed in full-screen mode, as nothing beyond the map container is visible in such circumstances. I then considered opening the image in the borrower popup but this wasn’t really big enough. I then wondered about extending the ‘Map options’ section and replacing the contents of this with the image, but this then caused issues for the contents of the ‘Map options’ section, which didn’t reinitialise properly when the contents were reinstated. I then found a plugin for the Leaflet mapping library that provides a popup within the map interface (https://github.com/w8r/Leaflet.Modal) and decided to use this. However, it’s all a little complex as the popup then has to include another mapping library called OpenLayers that enables the zooming and panning of the page image, all within the framework of the overall interactive map. It is all working and I think it works pretty well, although I guess the map interface is a little cluttered, what with the ‘Map Options’ section, the map legend, the borrower popup and then the page image popup as well. Here’s a screenshot with the page image open:
All that’s left to do now is add in the introductory text once Alex has prepared it and then make the map live. We might need to rearrange the site’s menu to add in a link to the Chambers Map as it’s already a bit cluttered.
Also for the project I downloaded images for two further library registers for St Andrews that had previously been missed. However, there are already records for the registers and pages in the CMS so we’re going to have to work out which image corresponds to which page in the CMS. One register has a different number of pages in the CMS compared to the image files, so we need to work out how to align the start and end and whether there are any gaps or issues in the middle. The other register is more complicated because the images are double pages whereas it looks like the page records in the CMS are for individual pages. I’m not sure how best to handle this. I could either try and batch process the images to chop them up or batch process the page records to join them together. I’ll need to discuss this further with Gerry, who is dealing with the data for St Andrews.
Also this week I prepared for and gave a talk to a group of students from Michigan State University who were learning about digital humanities. I talked to them for about an hour about a number of projects, such as the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/), the digital edition I’d created for New Modernist Editing (https://nme-digital-ode.glasgow.ac.uk/), the Historical Thesaurus (https://ht.ac.uk/), Books and Borrowing (https://borrowing.stir.ac.uk/) and TheGlasgowStory (https://theglasgowstory.com/). It went pretty well and it was nice to be able to talk about some of the projects I’ve been involved with for a change.
I also made some further tweaks to the Gentle Shepherd Performances page which is now ready to launch, and helped Geert out with a few changes to the WordPress pages of the Anglo-Norman Dictionary. I also made a few tweaks to the WordPress pages of the DSL website and finally managed to get a hotel room booked for the DHC conference in Sheffield in September. I also made a couple of changes to the new Gaelic Tongues section of the Seeing Speech website and had a discussion with Eleanor about the filters for Speech Star. Fraser had been in touch about 500 Historical Thesaurus categories that had been newly matched to OED categories, so I created a little script to add these connections to the online database.
I also had a Zoom call with the Speak For Yersel team. They had been testing out the resource at secondary schools in the North East and have come away with lots of suggested changes to the content and structure of the resource. We discussed all of these and agreed that I would work on implementing the changes the week after next.
Next week I’m going to be on holiday, which I have to say I’m quite looking forward to.
I worked on several different projects this week. For the Books and Borrowing project I processed and imported a further register for the Advocates library that had been digitised by the NLS. I also continued with the interactive map of Chambers library borrowers, although I couldn’t spend as much time on this as I’d hoped as my access to Stirling University’s VPN had stopped working and without VPN access I can’t connect to the database and the project server. It took a while to resolve the issue as access needs to be approved by some manager or other, but once it was sorted I got to work on some updates.
One thing I’d noticed last week was that when zooming and panning, the historical map layer was throwing out hundreds of 403 Forbidden errors to the browser console. This was not having any impact on the user experience, but it was still a bit messy and I wanted to get to the bottom of the issue. I had a very helpful (as always) chat with Chris Fleet at NLS Maps, who provided the historical map layer, and he reckoned it was because the historical map only covers a certain area and moving beyond this was still sending requests for map tiles that didn’t exist. Thankfully an option exists in Leaflet that allows you to set the boundaries for a map layer (https://leafletjs.com/reference.html#latlngbounds) and I updated the code to do just that, which seems to have stopped the errors.
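Conceptually, what Leaflet’s layer-bounds option does is a simple box-intersection test before each tile request: if a tile falls entirely outside the configured box, the request is never sent. A minimal sketch of that test (my own illustration with placeholder coordinates, not Leaflet’s internals):

```javascript
// The area the historical map layer covers (placeholder coordinates).
var historicBounds = { south: 55.8, west: -3.4, north: 56.0, east: -3.0 };

// Two boxes intersect unless one lies entirely to one side of the other.
function intersectsBounds(tile, bounds) {
  return tile.west <= bounds.east && tile.east >= bounds.west &&
         tile.south <= bounds.north && tile.north >= bounds.south;
}

// A tile over the covered area is requested; one far outside it is skipped.
var insideTile = { south: 55.9, west: -3.3, north: 55.95, east: -3.1 };
var outsideTile = { south: 60.1, west: -1.4, north: 60.2, east: -1.2 };
```

In Leaflet itself this just means passing a `bounds` option (a LatLngBounds) when creating the tile layer, so requests for non-existent tiles, and the resulting 403s, stop at source.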
I then returned to the occupations categorisation, which was including far too many options. I therefore streamlined the occupations, displaying the top-level occupation only. I think this works a lot better (although I need to change the icon colour for ‘unknown’). Full occupation information is still available for each borrower via the popup.
I also had to change the range slider for opacity as standard HTML range sliders don’t allow for double-ended ranges. We require a double-ended range for the subscription period and I didn’t want to have two range sliders that looked different on one page. I therefore switched to a range slider offered by the jQuery UI interface library (https://jqueryui.com/slider/#range). The opacity slider still works as before, it just looks a little different. Actually, it works better than before, as the opacity now changes as you slide rather than only updating after you mouse-up.
I then began to implement the subscription period slider. This does not yet update the data. It’s been pretty tricky to implement this. The range needs to be dynamically generated based on the earliest and latest dates in the data, and dates are both year and month, which need to be converted into plain integers for the slider and then reinterpreted as years and months when the user updates the end positions. I think I’ve got this working as it should, though. When you update the ends of the slider the text above that lists the months and years updates to reflect this. The next step will be to actually filter the data based on the chosen period. Here’s a screenshot of the map featuring data categorised by the new streamlined occupations and the new sliders displayed:
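The month-index conversion described above can be sketched as follows (function names and details are my own illustration, not necessarily the site’s code): the slider works on plain integers counted from the earliest date in the data, and each integer is turned back into a readable month/year label when an end of the slider moves.

```javascript
var MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
              'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];

// Months elapsed from the base (earliest) date to the given date.
function toSliderValue(year, month, baseYear, baseMonth) {
  return (year - baseYear) * 12 + (month - baseMonth);
}

// Reinterpret a slider integer as a 'Month Year' label.
function toLabel(sliderValue, baseYear, baseMonth) {
  var total = (baseMonth - 1) + sliderValue;
  var year = baseYear + Math.floor(total / 12);
  return MONTHS[total % 12] + ' ' + year;
}

// With the earliest date in the data as May 1828:
var v = toSliderValue(1829, 6, 1828, 5);  // June 1829 is 13 steps along
var label = toLabel(v, 1828, 5);          // and converts back to 'Jun 1829'
```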
For the Speak For Yersel project I made a number of tweaks to the resource, which Jennifer and Mary are piloting with school children in the North East this week. I added in a new grammatical question and seven grammatical quiz questions. I tweaked the homepage text and updated the structure of questions 27-29 of the ‘sound about right’ activity. I ensured that ‘Dumfries’ always appears as ‘Dumfries and Galloway’ in the ‘clever’ activity and follow-on and updated the ‘clever’ activity to remove the stereotype questions. These were the ones where users had to rate the speakers from a region without first listening to any audio clips and Jennifer reckoned these were taking too long to complete. I also updated the ‘clever’ follow-on to hide the stereotype options and switched the order of the listener and speaker options in the other follow-on activity for this type.
For the Speech Star project I replaced the data for the child speech error database with a new, expanded dataset and added in ‘Speaker Code’ as a filter option. I also replicated the child speech and normalised speech databases from the clinical website we’re creating on the more academic teaching site, and also pulled in the IPA chart from Seeing Speech into this resource too. Here’s a screenshot of how the child speech error database looks with the new ‘speaker code’ filter with ‘vowel disorder’ selected:
I also made a couple of tweaks to the DSL this week, installing the TablePress plugin for the ancillary pages and creating a further alternative logo for the DSL’s Facebook posts. I also returned to doing some work for the Anglo-Norman Dictionary, offering some advice to the editor Geert about incorporating publications and overhauling how cross references are displayed in the Dictionary Management System.
I updated the ‘View Entry’ page in the DMS. Previously it only included cross references FROM the entry you’re looking at TO any other entries. I.e. it only displayed content when the entry was of type ‘xref’ rather than ‘main’. Now in addition to this there’s a further section listing all cross references TO the entry you’re looking at from any entry of type ‘xref’ that links to it.
In addition there is a button allowing you to view all entries that include a cross reference to the current entry anywhere in their XML – i.e. where an <xref> tag that features the current entry’s slug is found at any level in any other main entry’s XML. This code is hugely memory intensive to run, as basically all 27,464 main entries need to be pulled into the script, with the full XML contents of each checked for matching xrefs. For this reason the page doesn’t run the code each time the ‘view entry’ page is loaded but instead only runs when you actively press the button. It takes a few seconds for the script to process, but after it does the cross references are listed in the same manner as the ‘pure’ xrefs in the preceding sections.
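The brute-force scan behind that button can be sketched as follows (illustrative JavaScript only, not the DMS’s actual server-side code; the `target` attribute name and data shape are assumptions): pull in every main entry’s XML and keep those containing an `<xref>` pointing at the current entry’s slug, at any depth.

```javascript
// Find all entries whose XML contains an <xref> targeting the given slug.
function entriesLinkingTo(slug, entries) {
  // Match an <xref> tag anywhere in the XML whose target is the slug.
  var pattern = new RegExp('<xref[^>]*target="' + slug + '"');
  return entries.filter(function (e) {
    return pattern.test(e.xml);
  });
}

// A tiny stand-in for the 27,464 main entries:
var entries = [
  { slug: 'ganeir', xml: '<entry><sense><xref target="scotland"/></sense></entry>' },
  { slug: 'abandoner', xml: '<entry><sense>no links here</sense></entry>' }
];
var linking = entriesLinkingTo('scotland', entries);
```

With the full dataset this is exactly the memory-intensive part: every entry’s XML has to be loaded and scanned, which is why it only runs on demand.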
Finally I participated in a Zoom-based focus group for the AHRC about the role of technicians in research projects this week. It was great to participate to share my views on my role and to hear from other people with similar roles at other organisations.
I spent most of the week continuing with the Speak For Yersel website, which is now nearing completion. A lot of my time was spent tweaking things that were already in place, and we had a Zoom call on Wednesday to discuss various matters too. I updated the ‘explore more’ age maps so they now include markers for young and old who didn’t select ‘scunnered’, meaning people can get an idea of the totals. I also changed the labels slightly and the new data types have been given two shades of grey and smaller markers, so the data is there but doesn’t catch the eye as much as the data for the selected term. I’ve updated the lexical ‘explore more’ maps so they now actually have labels and the ‘darker dots’ text (which didn’t make much sense for many maps) has been removed. Kinship terms now allow for two answers rather than one, which took some time to implement in order to differentiate this question type from the existing ‘up to 3 terms’ option. I also updated some of the pictures that are used and added in an ‘other’ option to some questions. I also updated the ‘Sounds about right’ quiz maps so that they display different legends that match the question words rather than the original questionnaire options. I needed to add in some manual overrides to the scripts that generate the data for use in the site for this to work.
I also added in proper text to the homepage and ‘about’ page. The former included a series of quotes above some paragraphs of text and I wrote a little script that highlighted each quote in turn, which looked rather nice. This then led onto the idea of having the quotes positioned on a map on the homepage instead, with different quotes in different places around Scotland. I therefore created an animated GIF based on some static map images that Mary had created and this looks pretty good.
I then spent some time researching geographical word clouds, which we had been hoping to incorporate into the site. After much Googling it would appear that there is no existing solution that does what we want, i.e. take a geographical area and use this as the boundaries for a word cloud, featuring different coloured words arranged at various angles and sizes to cover the area. One potential solution that I was pinning my hopes on was this one: https://github.com/JohnHenryEden/MapToWordCloud which promisingly states “Turn GeoJson polygon data into wordcloud picture of similar shape.” I managed to get the demo code to run, but I can’t get it to actually display a word cloud, even though the specifications for one are in the code. I’ve tried investigating the code but I can’t figure out what’s going wrong. No errors are thrown and there’s very little documentation. All that happens is a map with a polygon area is displayed – no word cloud.
The word cloud aspects of the above are based on another package here: https://npm.io/package/wordcloud and this package allows you to specify a shape to use as an outline for the cloud, and one of the examples shows words taking up the shape of Taiwan: https://wordcloud2-js.timdream.org/#taiwan However, this is a static image not an interactive map – you can’t zoom into it or pan around it. One possible solution may be to create images of our regions, generate static word cloud images as with the above and then stitch the images together to form a single static map of Scotland. This would be a static image, though, and not comparable to the interactive maps we use elsewhere in the website. Programmatically stitching the individual region images together might also be quite tricky. I guess another option would be to just allow users to select an individual region and view the static word cloud (dynamically generated based on the data available when the user selects to view it) for the selected region, rather than joining them all together.
I also looked at some further options that Mary had tracked down. The word cloud on a leaflet map (http://hourann.com/2014/js-devs-dont-get-lost/leaflet-wordcloud.html?sydney) only uses a circle for the boundaries of the word cloud. All of the code is written around the use of a circle (e.g. using diameters to work out placement) so couldn’t really be adapted to work with a complex polygon. We could work out a central point for each region and have a circular word cloud positioned at that point, but we wouldn’t be able to make the words fill the entire region. The second of Mary’s links (https://www.jasondavies.com/wordcloud/) as far as I can tell is just a standard word cloud generator with no geographical options. The third option (https://github.com/peterschretlen/leaflet-wordcloud) has no demo or screenshot or much information about it and I’m afraid I can’t get it to work.
The final option (https://dagjomar.github.io/Leaflet.ParallaxMarker/) is pretty cool but it’s not really a word cloud as such. Instead it’s a bunch of labels set to specific lat/lng points and given different levels which sets their size and behaviour on scroll. We could use this to set the highest rated words to the largest level with lower rated words at lower level and position each randomly in a region, but it’s not really a word cloud and it would be likely that words would spill over into neighbouring regions.
Based on the limited options that appear to be out there, I think creating a working, interactive map-based word cloud would be a research project in itself and would take far more time than we have available.
Later on in the week Mary sent me the spreadsheet she’d been working on to list settlements found in postcode areas and to link these areas to the larger geographical regions we use. This is exactly what we needed to fill in the missing piece in our system and I wrote a script that successfully imported the data. For our 411 areas we now have 957 postcode records and 1,638 settlement records. After that I needed to make some major updates to the system. Previously a person was associated with an area (e.g. ‘Aberdeen Southwest’) but I needed to update this so that a person is associated with a specific settlement (e.g. ‘Ferryhill, Aberdeen’), which is then connected to the area and from the area to one of our 14 regions (e.g. ‘North East (Aberdeen)’).
I updated the system to make these changes and updated the ‘register’ form, which now features an autocomplete for the location – start typing a place and all matches appear. Behind the scenes the location is saved and connected up to areas and regions, meaning we can now start generating real data, rather than a person being assigned a random area. The perception follow-on now connects the respondent up with the larger region when selecting ‘listener is from’, although for now some of this data is not working.
I then needed to further update the registration page to add in an ‘outside Scotland’ option so people who did not grow up in Scotland can use the site. Adding in this option actually broke much of the site: registration requires an area with a geoJSON shape associated with the selected location, otherwise it fails, and the submission of answers requires this shape in order to generate a random marker point, so this failed too when the shape wasn’t present. I updated the scripts to fix these issues, meaning an answer submitted by an ‘outside’ person has a zero for both latitude and longitude. I then also needed to update the script that gets the map data to ensure that none of these ‘outside’ answers are returned in any of the data used in the site (both for maps and for non-map visualisations such as the sliders). So, much has changed and hopefully I haven’t broken anything whilst implementing these changes. It does mean that ‘outside’ people can now be included and we can export and use their data in future, even though it is not used in the current site.
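The final filtering step might look something like this sketch (field names are assumptions): since answers from ‘outside Scotland’ respondents are stored with zero latitude and longitude, every query that feeds the maps and visualisations drops them.

```javascript
// Drop answers submitted by 'outside Scotland' respondents, which are
// stored with zero latitude and longitude.
function excludeOutside(answers) {
  return answers.filter(function (a) {
    return !(a.lat === 0 && a.lng === 0);
  });
}

var answers = [
  { lat: 57.15, lng: -2.1, word: 'loon' },
  { lat: 0, lng: 0, word: 'lad' }  // submitted by an 'outside' respondent
];
var mapped = excludeOutside(answers);
```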
Further tweaks I implemented this week included: changing the font sizes of some headings and buttons; renaming the ‘activities’ and ‘more’ pages as requested; adding ‘back’ buttons from all ‘activity’ and ‘more’ pages back to the index pages; adding an intro page to the click exercise as previously it just launched into the exercise whereas all others have an intro. I also added summary pages to the end of the click and perception activities with links through to the ‘more’ pages and removed the temporary ‘skip to quiz’ option. I also added progress bars to the click and perception activities. Finally, I switched the location of the map legend from top right to top left as I realised when it was in the top right it was always obscuring Shetland whereas there’s nothing in the top left. This has meant I’ve had to move the region label to the top right instead.
Also this week I continued to work on the Allan Ramsay ‘Gentle Shepherd’ performance data. I added in faceted browsing to the tabular view, adding in a series of filter options for location, venue, adaptor and such things. You can select any combination of filters (e.g. multiple locations and multiple years in combination). When you select an item of one sort the limit options of other sorts update to only display those relevant to the limited data. However, the display of limiting options can get a bit confusing once multiple limiting types have been selected. I will try and sort this out next week. There are also multiple occurrences of items in the limiting options (e.g. two Glasgows) because the data has trailing spaces in some rows (‘Glasgow’ vs ‘Glasgow ’) and I’ll need to see about trimming these out next time I import the data.
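The trimming fix could be as simple as the following sketch (illustrative only; field names are assumptions): trim each value while building the distinct filter options so ‘Glasgow’ and ‘Glasgow ’ collapse into a single option.

```javascript
// Build a sorted list of distinct, trimmed values for one facet.
function facetOptions(rows, field) {
  var seen = {};
  var options = [];
  rows.forEach(function (row) {
    var value = String(row[field]).trim();  // 'Glasgow ' -> 'Glasgow'
    if (value !== '' && !seen[value]) {
      seen[value] = true;
      options.push(value);
    }
  });
  return options.sort();
}

var rows = [
  { location: 'Glasgow' },
  { location: 'Glasgow ' },   // trailing space in the source data
  { location: 'Edinburgh' }
];
var locations = facetOptions(rows, 'location');
```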
Also this week I arranged for the old DSL server to be taken offline, as the new website has now been operating successfully for two weeks. I also had a chat with Katie Halsey about timescales for the development of the Books and Borrowers front-end, and imported a new disordered paediatric speech dataset into the Speech Star website. This included around double the number of records, new video files and a new ‘speaker code’ column. Finally, I participated in a Zoom call for the Scottish Place-Names database where we discussed the various place-names surveys that are in progress and the possibility of creating an overarching search across all systems.
I was on strike last week, and I’m going to be on holiday next week, so I had a lot to try and cram into this week. This was made slightly harder when my son tested positive for Covid again on Tuesday evening. It’s his fourth time, and the last bout was only seven weeks ago. Thankfully he wasn’t especially ill, but he was off school from Wednesday onwards.
I worked on several different projects this week. For the Books and Borrowing project I updated the front-end requirements document based on my discussions with the PI and Co-I and sent it on to the rest of the team for feedback. I also uploaded a new batch of register images from St Andrews (more than 2,500 page images taking up about 50Gb) and created all of the necessary register and page records. I also did the same for a couple of smaller registers from Glasgow. I also exported spreadsheets of authors, edition formats and edition languages for the team to edit too.
For the Anglo-Norman Dictionary I fixed an issue with the advanced search for citations, where entries with multiple citations were having the same date and reference displayed for each snippet rather than the individual dates and references. I also updated the display of snippets in the search results so they appear in date order.
I also responded to an email from editor Heather Pagan about how language tags are used in the AND XML. There are 491 entries that have a language tag and I wrote a little script to list the distinct languages and a count of the number of times each appears. Here’s the output (bearing in mind that an entry may have multiple language tags):
```
[Latin] => 79
[M.E.] => 369
[Dutch] => 3
[Arabic] => 12
[Hebrew] => 20
[M.L.] => 4
[Greek] => 2
[A.F._and_M.E.] => 3
[Irish] => 2
[M.E._and_A.F.] => 8
[A-S.] => 3
[Gascon] => 1
```
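A minimal sketch of such a counting script (illustrative JavaScript rather than the actual script, with the tag shape assumed from the example further down):

```javascript
// Count how often each distinct language tag value appears across entries.
function countLanguages(entries) {
  var counts = {};
  entries.forEach(function (xml) {
    var re = /<language lang="([^"]+)"/g;
    var m;
    while ((m = re.exec(xml)) !== null) {
      counts[m[1]] = (counts[m[1]] || 0) + 1;
    }
  });
  return counts;
}

var counts = countLanguages([
  '<entry><head><language lang="M.E."/></head></entry>',
  '<entry><sense><language lang="Latin"/></sense></entry>',
  '<entry><sense><language lang="M.E."/></sense></entry>'
]);
```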
There seem to be two ways the language tag appears: in a sense, in which case it is displayed in the entry (e.g. https://anglo-norman.net/entry/Scotland), and in <head>, in which case it doesn’t currently seem to get displayed. E.g. https://anglo-norman.net/entry/ganeir has:
```xml
<head> <language lang="M.E."/>
```
But ‘M.E.’ doesn’t appear anywhere. I could probably write another little script that moves the language tag into <head> as above, and then I could update the XSLT so that this type of language tag gets displayed. Or I could update the XSLT first so we can see how it might look with entries that already have this structure. I’ll need to hear back from Heather before I do more.
For the Dictionaries of the Scots Language I spent quite a bit of time working with the XSLT for the display of bibliographies. There are quite a lot of different structures for bibliographical entries, sometimes where the structure of the XML is the same but a different layout is required, so it proved to be rather tricky to get things looking right. By the end of the week I think I had got everything to display as requested, but I’ll need to see if the team discover any further quirks.
I also wrote a script that extracts citations and their dates from DSL entries. I created a new citations table that stores the dates, the quotes and associated entry and bibliography IDs. The table has 747,868 rows in it. Eventually we’ll be able to use this table for some kind of date search, plus there’s now an easy-to-access record of all of the bib IDs for each entry / entry IDs for each bib, so displaying lists of entries associated with each bibliography should also be straightforward when the time comes. I also added new firstdate and lastdate columns to the entry table, picking out the earliest and latest date associated with each entry and storing these. This means we can add first dates to the browse, something I decided to add in for test purposes later in the week.
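Deriving the new firstdate and lastdate columns amounts to taking a minimum and maximum over each entry’s citation dates; a sketch (field names assumed, and the real script runs over the database rather than in-memory objects):

```javascript
// Earliest and latest citation date for one entry.
function dateRange(citations) {
  var dates = citations.map(function (c) { return c.year; });
  return {
    firstdate: Math.min.apply(null, dates),
    lastdate: Math.max.apply(null, dates)
  };
}

// Three citations for a hypothetical entry:
var range = dateRange([{ year: 1560 }, { year: 1423 }, { year: 1701 }]);
```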
I added the first recorded date (the display version not the machine readable version) to the ‘browse’ for DOST and SND. The dates are right-aligned and grey to make them stand out less than the main browse label. This does however make the date of the currently selected entry in the browse a little hard to read. Not all entries have dates available. Any that don’t are entries where either the new date attributes haven’t been applied or haven’t worked. This is really just a proof of concept and I will remove the dates from the browse before we go live, as we’re not going to do anything with the new date information until a later point.
I also processed the ‘History of Scots’ ancillary pages. Someone had gone through these to add in links to entries (hundreds of links), but unfortunately they hadn’t got the structure quite right. The links had been added in Word, meaning regular double quotes had been converted into curly quotes, which are not valid HTML. Also the links only included the entry ID, rather than the path to the entry page. A couple of quick ‘find and replace’ jobs fixed these issues, but I also needed to update the API to allow old DSL IDs to be passed without also specifying the source. I also set up a Google Analytics account for the DSL’s version of Wordle (https://macwordle.co.uk/)
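Scripted versions of those two fixes might look like this (a sketch only; the real fixes were quick find-and-replace jobs, and the href pattern and entry-page path here are assumptions for illustration):

```javascript
// Straighten Word's curly quotes, then expand bare entry IDs into
// full entry-page paths (ID format and path are hypothetical).
function fixLinks(html) {
  return html
    .replace(/[\u201C\u201D]/g, '"')          // curly double quotes -> straight
    .replace(/href="(dost|snd)(\d+)"/g,
             'href="/entry/$1/$2"');          // bare ID -> full entry path
}

var before = '<a href=\u201Cdost12345\u201D>entry</a>';
var after = fixLinks(before);
```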
For the Speak For Yersel project I had a meeting with Mary on Thursday to discuss some new exercises that I’ll need to create. I also spent some time creating the ‘Sounds about right’ activity. This had a slightly different structure to other activities in that the questionnaire has three parts with an introduction for each part. This required some major reworking of the code as things like the questionnaire numbers and the progress bar relied on the fact that there was one block of questions with no non-question screens in between. The activity also featured a new question type with multiple sound clips. I had to process these (converting them from WAV to MP3) and then figure out how to embed them in the questions.
Finally, for the Speech Star project I updated the extIPA chart to improve the layout of the playback speed options. I also made the page remember the speed selection between opening videos – so if you want to view them all ‘slow’ then you don’t need to keep selecting ‘slow’ each time you open one. I also updated the chart to provide an option to switch between MRI and animation videos and added in two animation MP4s that Eleanor had supplied me with. I then added the speed selector to the Normalised Speech Database video popups and then created a new ‘Disordered Paediatric Speech Database’, featuring many videos, filters to limit the display of data and the video speed selector. It was quite a rush to get this finished by the end of the week, but I managed it.
I will be on holiday next week so there will be no post from me then.
With the help of Raymond at Arts IT Support we migrated the test version of the DSL website to the new server this week, and also set up the Solr free-text indexes for the new DSL data too. This test version of the site will become the live version when we’re ready to launch it in April and the migration all went pretty smoothly, although I did encounter an error with the htaccess script that processed URLs for dictionary pages due to underscores not needing to be escaped on the old server but requiring a backslash as an escape character on the new server.
I also replaced the test version’s WordPress database with a copy of the live site’s WordPress database, plus copied over some of the customisations from the live site such as changes to logos and the content of the header and the footer, bringing the test version’s ancillary content and design into alignment with the live site whilst retaining some of the additional tweaks I’d made to the test site (e.g. the option to hide the ‘browse’ column and the ‘about this entry’ box).
One change to the structure of the DSL data that has been implemented is that dates are now machine readable, with ‘from’, ‘to’ and ‘prefix’ attributes. I had started to look at extracting these for use in the site (e.g. maybe displaying the earliest citation date alongside the headword in the ‘browse’ lists) when I spotted an issue with the data: Rather than having a date in the ‘to’ attribute, some entries had an error code – for example there are 6,278 entries that feature a date with ‘PROBLEM6’ as a ‘to’ attribute. I flagged this up with the DSL people and after some investigation they figured out that the date processing script wasn’t expecting to find a circa in a date ending a range (e.g. c1500-c1512). When the script encountered such a case it was giving an error instead. The DSL people were able to fix this issue and a new data export was prepared, although I won’t be using it just yet, as they will be sending me a further update before we go live and to save time I decided to just wait until they send this on. I also completed work on the XSLT for displaying bibliography entries and created a new ‘versions and changes’ page, linking to it from a statement in the footer that notes the data version number.
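The bug the DSL people fixed was of this general shape. Here’s a hedged JavaScript sketch of the idea (the real date-processing script isn’t mine and I don’t know its internals, so the function name, return shape and the exact handling of malformed input are all my own assumptions): the key point is that a circa marker has to be stripped from *both* halves of a range like ‘c1500-c1512’, not just the first.

```javascript
// Hypothetical sketch: parse a citation date that may be a single year or a
// range, where either half may carry a leading circa marker ('c').
function parseDateRange(raw) {
  const parts = raw.split('-');
  const clean = (s) => {
    const m = s.trim().match(/^(c?)(\d{1,4})$/);
    if (!m) return null; // unexpected format – the sort of case that previously produced an error code
    return { circa: m[1] === 'c', year: parseInt(m[2], 10) };
  };
  const from = clean(parts[0]);
  const to = parts.length > 1 ? clean(parts[1]) : from;
  if (!from || !to) return null;
  return { from: from.year, to: to.year, prefix: from.circa ? 'c' : '' };
}
```

A script that only cleaned the first half would choke on the ‘to’ side of ‘c1500-c1512’, which matches the behaviour described above.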
For the ‘Speak For Yersel’ project I made a number of requested updates to the exercises that I’d previously created. I added a border around the selected answer, ensured a button’s active state doesn’t persist after it has been pressed, and added handy ‘skip to quiz’ and ‘skip to explore’ links underneath the grammar and lexical quizzes so we don’t have to click through all those questions to check out the other parts of the exercise. I italicised ‘you’ and ‘others’ on the activity index pages and fixed a couple of bugs on the grammar questionnaire: previously only the map rolled up, and an issue arose when an answer was pressed whilst the map was still animating. Now the entire question area animates, so it’s impossible to press on an answer when the map isn’t available. I updated the quiz questions so they now have the same layout as the questionnaire, with options on the left and the map on the right, and I made all maps taller to see how this works.
For the ‘Who says what where’ exercise the full sentence text is now included, and I made the page scroll to the top of the map if this isn’t visible when you press on an item. I also updated the map and rating colours, although for now a single placeholder map loads for everything, so the lexical quiz with its many possible options doesn’t yet have a map that represents them. The map still needs some work – e.g. adding in a legend and popups. I also made all requested changes to the lexical question wording, made the ‘v4’ click activity the only version, making it accessible via the activities menu, and updated the colours for the correct and incorrect click answers.
For the Books and Borrowing project I completed a first version of the requirements for the public website, which has taken a lot of time and a lot of thought to put together, resulting in a document that’s more than 5,000 words long. On Friday I had a meeting with PI Katie and Co-I Matt to discuss the document. We spent an hour going through it and a list of questions I’d compiled whilst writing it, and I’ll need to make some modifications to the document based on our discussions. I also downloaded images of more library registers from St Andrews and one further register from Glasgow that I will need to process when I’m back at work too.
I also spent a bit of time writing a script to export a flat CSV version of the Historical Thesaurus, then made some updates based on feedback from the HT team before exporting a further version. We also spotted that adjectives of ‘parts of insects’ appeared to be missing from the website and I investigated what was going on with it. It turned out that there was an empty main category missing, and as all the other data was held in subcategories these didn’t appear, as all subcategories need a main category to hang off. After adding in a maincat all of the data was restored.
Finally, I did a bit of work for the Speech Star project. Firstly, I fixed a couple of layout issues with the extIPA chart symbols. There was an issue with the diacritics for the symbol that looks like a theta, resulting in them being offset. I reduced the size of the symbol slightly and adjusted the margins of the symbols above and below, and this seems to have done the trick. In addition, I did a little bit of research into setting the playback speed, and it looks like this will be pretty easy to do whilst still using the default video player. See this page: https://stackoverflow.com/questions/3027707/how-to-change-the-playing-speed-of-videos-in-html5. I added a speed switcher to the popup as a little test to see how it works. The design still needs some work (buttons with the active option highlighted) but it’s good to have a proof of concept. Pressing ‘normal’ or ‘slow’ sets the speed for the current video in the popup and works both when the video is playing and when it’s stopped.
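The mechanism behind this is simply the writable playbackRate property that the standard HTML5 video element exposes. A minimal sketch (the helper name and any element selector are mine, not the real site’s code):

```javascript
// The HTML5 <video> element exposes a writable playbackRate property.
// A multiplier of 1 is normal speed; 0.5 is half speed ('slow').
function setVideoSpeed(video, speed) {
  video.playbackRate = speed;
}

// In the popup, the 'slow' button would do something like:
// setVideoSpeed(document.querySelector('video'), 0.5);
```

Because playbackRate can be changed at any time, this works whether the video is currently playing or paused, which matches the behaviour described above.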
Also, I was sure that jumping to points in the videos wasn’t working before, but it seems to work fine now – you can click and drag the progress bar and the video jumps to the required point, whether playing or paused. I wonder if there was something in the codec that was previously being used that prevented this. So fingers crossed we’ll be able to just use the standard HTML5 video player to achieve everything the project requires.
I’ll be participating in the UCU strike action for all of next week so it will be the week beginning the 28th of March before I’m back in work again.
This was my first five-day week after the recent UCU strike action and it was pretty full-on, involving many different projects. I spent about a day working on the Speak For Yersel project. I added in the content for all 32 ‘I would never say that’ questions and completed work on the new ‘Give your word’ lexical activity, which features a further 30 questions of several types. This includes questions that have associated images and questions where multiple answers can be selected. For the latter no more than three answers are allowed to be selected, and this question type needs to be handled differently as we don’t want the map to load as soon as one answer is selected. Instead the user can select / deselect answers. If at least one answer is selected a ‘Continue’ button appears under the question. When you press on this the answers become read only and the map appears. I made it so that no more than three options can be selected – you need to deselect one before you can add another. I think we’ll need to look into the styling of the buttons, though, as currently ‘active’ (when a button is hovered over or has been pressed and nothing else has yet been pressed) is the same colour as ‘selected’. So if you select ‘ginger’ then deselect it the button still looks selected until you press somewhere else, which is confusing. Also if you press a fourth button it looks like it has been selected when in actual fact it’s just ‘active’ and isn’t really selected.
I also spent about a day continuing to work on the requirements document for the Books and Borrowing project. I haven’t quite finished this initial version of the document but I’ve made good progress and I aim to have it completed next week. Also for the project I participated in a Zoom call with RA Alex Deans and NLS Maps expert Chris Fleet about a subproject we’re going to develop for B&B for the Chambers Library in Edinburgh. This will feature a map-based interface showing where the borrowers lived and will use a historical map layer for the centre of Edinburgh.
Chris also talked about a couple of projects at the NLS that were very useful to see. The first one was the Jamaica journal of Alexander Innes (https://geo.nls.uk/maps/innes/) which features journal entries plotted on a historical map and a slider allowing you to quickly move through the journal entries. The second was the Stevenson maps of Scotland (https://maps.nls.uk/projects/stevenson/) that provides options to select different subjects and date periods. He also mentioned a new crowdsourcing project to transcribe all of the names on the Roy Military Survey of Scotland (1747-55) maps which launched in February and already has 31,000 first transcriptions in place, which is great. As with the GB1900 project, the data produced here will be hugely useful for things like place-name projects.
I also participated in a Zoom call with the Historical Thesaurus team where we discussed ongoing work. This mainly involves a lot of manual linking of the remaining unlinked categories and looking at sensitive words and categories so there’s not much for me to do at this stage, but it was good to be kept up to date.
I continued to work on the new extIPA charts for the Speech Star project, which I had started on last week. Last week I had some difficulties replicating the required phonetic symbols but this week Eleanor directed me to an existing site that features the extIPA chart (https://teaching.ncl.ac.uk/ipa/consonants-extra.html). This site uses standard Unicode characters in combinations that work nicely, without requiring any additional fonts to be used. I’ve therefore copied the relevant codes from there (this is just character codes like b̪ – I haven’t copied anything other than this from the site). With the symbols in place I managed to complete an initial version of the chart, including pop-ups featuring all of the videos, but unfortunately the videos seem to have been encoded with an encoder that requires QuickTime for playback. So although the videos are MP4 they’re not playing properly in browsers on my Windows PC – instead all I can hear is the audio. It’s very odd as the videos play fine directly from Windows Explorer, but in Firefox, Chrome or MS Edge I just get audio and the static ‘poster’ image. When I access the site on my iPad the videos play fine (as QuickTime is an Apple product). Eleanor is still looking into re-encoding the videos and will hopefully get updated versions to me next week.
I also did a bit more work for the Anglo-Norman Dictionary this week. I fixed a couple of minor issues with the DTD; for example, the ‘protect’ attribute was an enumerated list that could either be ‘yes’ or ‘no’, but for some entries the attribute was present but empty, which made those files invalid. I looked into whether an enumerated list could also include an empty option (as opposed to the attribute not being present, which is a different matter) but it looks like this is not possible (see for example http://lists.xml.org/archives/xml-dev/200309/msg00129.html). What I did instead was to change the ‘protect’ attribute from an enumerated list with options ‘yes’ and ‘no’ to a regular data (CDATA) field, meaning the attribute can now include anything (including being empty). The ‘protect’ attribute is a hangover from the old system and doesn’t do anything whatsoever in the new system so it shouldn’t really matter. And it does mean that the XML files should now validate.
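For anyone curious, the DTD change was roughly of this shape (the element name here is illustrative – I’m only showing the attribute declaration pattern, not the actual AND DTD):

```xml
<!-- Before: enumerated list – an empty value like protect="" is invalid -->
<!ATTLIST entry protect (yes|no) #IMPLIED>

<!-- After: CDATA – any string, including the empty string, is valid -->
<!ATTLIST entry protect CDATA #IMPLIED>
```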
The AND people also noticed that some entries that are present in the old version of the site are missing from the new version. I looked through the database and also older versions of the data from the new site and it looks like these entries have never been present in the new site. The script I ran to originally export the entries from the old site used a list of headwords taken from another dataset (I can’t remember where from exactly) but I can only assume that this list was missing some headwords and this is why these entries are not in the new site. This is a bit concerning, but thankfully the old site is still accessible. I managed to write a little script that grabs the entire contents of the browse list from the old website, separating it into two lists, one for main entries and one for xrefs. I then ran each headword against a local version of the current AND database, separating out homonym numbers then comparing the headword with the ‘lemma’ field in the DB and the hom with the hom. Initially I ran main and xref queries separately, comparing main to main and xref to xref, but I realised that some entries had changed types (legitimately so, I guess) so stopped making a distinction.
The script outputted 1540 missing entries. This initially looks pretty horrifying, but I’m fairly certain most of them are legitimate. There are a whole bunch of weird ‘n’ forms in the old site that have a strange character (e.g. ‘nun⋮abilité’) that are not found in the new site, I guess intentionally so. Also, there are lots of ‘S’ and ‘R’ words but I think most of these are because of joining or splitting homonyms. Geert, the editor, looked through the output and thankfully it turns out that only a handful of entries are missing, and also that these were also missing from the old DMS version of the data so their omission occurred before I became involved in the project.
Finally this week I worked with a new dataset of the Dictionaries of the Scots Language. I successfully imported the new data and have set up a new ‘dps-v2’ api. There are 80,319 entries in the new data compared to 80,432 in the previous output from DPS. I have updated our test site to use the new API and its new data, although I have not been able to set up the free-text data in Solr yet so the advanced search for full text / quotations only will not work yet. Everything else should, though.
Also today I began to work on the layout of the bibliography page. I have completed the display of DOST bibs but haven’t started on SND yet. This includes the ‘style guide’ link when a note is present. I think we may still need to tweak the layout, however. I’ll continue to work with the new data next week.
I participated in the UCU strike action from Monday to Wednesday this week, making it a two-day week for me. I’d heard earlier in the week that the paper I’d submitted about the redevelopment of the Anglo-Norman Dictionary had been accepted for DH2022 in Tokyo, which was great. However, the organisers have decided to make the conference online only, which is disappointing, although probably for the best given the current geopolitical uncertainty. I didn’t want to participate in an online only event that would be taking place in Tokyo time (nine hours ahead of the UK) so I’ve asked to withdraw my paper.
On Thursday I had a meeting with the Speak For Yersel project to discuss the content that the team have prepared and what I’ll need to work on next. I also spent a bit of time looking into creating a geographical word cloud, which would fit word cloud output into a geoJSON polygon shape. I found one possible solution here: https://npm.io/package/maptowordcloud but I haven’t managed to make it work yet.
I also received a new set of videos for the Speech Star project, relating to the extIPA consonants, and I began looking into how to present these. This was complicated by the extIPA symbols not being standard Unicode characters. I did a bit of research into how these could be presented, and found this site http://www.wazu.jp/gallery/Test_IPA.html#ExtIPAChart but here the marks appear to the right of the main symbol rather than directly above or below. I contacted Eleanor to see if she had any other ideas and she got back to me with some alternatives which I’ll need to look into next week.
I spent a bit of time working for the DSL this week too, looking into a question about Google Analytics from Pauline Graham (and finding this very handy suite of free courses on how to interpret Google Analytics here https://analytics.google.com/analytics/academy/). The DSL people had also wanted me to look into creating a Levenshtein distance option, whereby words that are spelled similarly to an entered term are given as suggestions, in a similar way to this page: http://chrisgilmour.co.uk/scots/levensht.php?search=drech. I created a test script that allows you to enter a term and view the SND headwords that have a Levenshtein distance of two or less from your term, with any headwords with a distance of one highlighted in bold. However, Levenshtein is a bit of a blunt tool, and as it stands I’m not sure the results of the script are all that promising. My test term ‘drech’ brings back 84 matches, including things like ‘french’ which is unfortunately only two letters different from ‘drech’. I’m fairly certain my script is using the same algorithm as used by the site linked to above, it’s just that we have a lot more possible matches. However, this is just a simple Levenshtein test – we could also add in further tests to limit (or expand) the output, such as a rule that changes vowels in certain places as in the ‘a’ becomes ‘ai’ example suggested by Rhona at our meeting last week. Or we could limit the output to words beginning with the same letter.
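For reference, plain Levenshtein distance – the ‘blunt tool’ in question – is the minimum number of single-character insertions, deletions and substitutions needed to turn one string into another. My test script is server-side, but the algorithm is the same in any language; here’s a JavaScript sketch:

```javascript
// Standard dynamic-programming Levenshtein distance, keeping only one
// previous row of the matrix at a time.
function levenshtein(a, b) {
  const m = a.length, n = b.length;
  // prev[j] = distance between the first i-1 chars of a and first j chars of b
  let prev = Array.from({ length: n + 1 }, (_, j) => j);
  for (let i = 1; i <= m; i++) {
    const cur = [i];
    for (let j = 1; j <= n; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      cur[j] = Math.min(
        prev[j] + 1,        // deletion
        cur[j - 1] + 1,     // insertion
        prev[j - 1] + cost  // substitution (free if characters match)
      );
    }
    prev = cur;
  }
  return prev[n];
}
```

This makes the ‘french’ problem obvious: ‘drech’ to ‘french’ is just one substitution and one insertion, so it scores 2 and sneaks into the results despite being an unrelated word.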
Also this week I had a chat with the Historical Thesaurus people, arranging a meeting for next week and exporting a recent version of the database for them to use offline. I also tweaked a couple of entries for the AND and spent an hour or so upgrading all of the WordPress sites I manage to the latest WordPress version.
It’s been a pretty full-on week ahead of the UCU strike action, which begins on Monday. I spent quite a bit of time working on the Speak For Yersel project, starting with a Zoom call on Monday, after which I continued to work on the ‘click’ map I’d developed last week. The team liked what I’d created but wanted some changes to be made. They didn’t like that the area containing the markers was part of the map and you needed to move the map back to the marker area to grab and move a marker. Instead they wanted to have the markers initially stored in a separate section beside the map. I thought this would be very tricky to implement but decided to investigate anyway and unfortunately I was proved right. In the original version the markers are part of the mapping library – all we’re doing is moving them around the map. To have the icons outside the map means the icons initially cannot be part of the mapping library, but instead need to be simple HTML elements, but when they are dragged into the map they then have to become map markers with latitude and longitude values, ideally with a smooth transition from plain HTML to map icon as the element is dragged from the general website into the map pane.
It took many hours to figure out how this might work and to update the map to implement the new way of doing things. I discovered that HTML5’s default drag and drop functionality could be used (see this example: https://jsfiddle.net/430oz1pj/197/), which allows you to drag an HTML element and drop it somewhere. If the element is dropped over the map then a marker can be created at that point. However, this proved to be more complicated to implement than it looks, as I needed to figure out a way to pass the ID of the HTML marker to the mapping library, and also handle the audio files associated with the icons. Also, the latitude and longitude generated in the above example were not in any way an accurate representation of the cursor location. For this reason I integrated a Leaflet plugin that displays the coordinates of the mouse cursor (https://github.com/MrMufflon/Leaflet.Coordinates). I hid this on the map, but it still runs in the background, allowing my script to grab the latitude and longitude of the cursor at the point where the HTML element is dropped. I also updated the marker icons to add a number to each one, making it easier to track which icon is which. This also required me to rework the play and pause audio logic. With all of this in place I completed ‘v2’ of the click map and I thought the task was done, until I did some final testing on my iPad and Android phone and unfortunately discovered that the icons don’t drag on touchscreen devices (even the touchscreen on my Windows 11 laptop). This was a major setback, as clearly we need the resource to work on touchscreens.
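Passing the HTML marker’s ID through to the drop handler is done via the DataTransfer object that the HTML5 drag and drop API provides. A minimal sketch of the pattern (function names are mine, and the createMarker callback stands in for the real Leaflet L.marker(...).addTo(map) call, since the actual code runs against Leaflet in the browser):

```javascript
// On dragstart, stash the plain-HTML marker's ID in the drag event.
function onMarkerDragStart(event, markerId) {
  event.dataTransfer.setData('text/plain', markerId);
}

// On drop over the map, read the ID back and hand it to the mapping layer
// along with the cursor's latitude/longitude (obtained in my version from
// the Leaflet.Coordinates plugin running in the background).
function onMapDrop(event, cursorLatLng, createMarker) {
  event.preventDefault(); // stop the browser's default drop handling
  const markerId = event.dataTransfer.getData('text/plain');
  return createMarker(markerId, cursorLatLng);
}
```

Note that this is exactly the part that fails on touchscreens: HTML5 drag events simply aren’t fired for touch input without extra work.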
I then created a further ‘v4’ version that has the updated areas (Shetland and Orkney, the Western Isles and Argyll are now split) and uses the broader areas around Shetland and the Western Isles as ‘correct’ areas. I’ve also updated the style of the marker box and made it so that the ‘View correct locations’ and ‘Continue’ buttons only become active after the user has dragged all of the markers onto the map.
The ‘View correct locations’ button also now works again. The team had also wanted the correct locations to appear on a new map that would appear beside the existing map. Thinking more about this I really don’t think it’s a good idea. Introducing another map is likely to confuse people and on smaller screens the existing map already takes up a lot of space. A second map would need to appear below the first map and people might not even realise there are two maps as both wouldn’t fit on screen at the same time. What I’ve done instead is to slow down the animation of markers to their correct location when the ‘view’ button is pressed so it’s easier to see which marker is moving where. I think this in combination with the markers now being numbered makes it clearer. Here’s a screenshot of this ‘v4’ version showing two markers on the map, one correct, the other wrong:
There is still the issue of including the transcriptions of the speech. We’d discussed adding popups to the markers to contain these, but again the more I think about this the more I reckon it’s a bad idea. Opening a popup requires a click and the markers already have a click event (playing / stopping the audio). We could change the click event after the ‘View correct locations’ button is pressed, so that from that point onwards clicking on a marker opens a popup instead of playing the audio, but I think this would be horribly confusing. We did talk about maybe always having the markers open a popup when they’re clicked and then having a further button to play the audio in the popup along with the transcription, but requiring two clicks to listen to the audio is pretty cumbersome. Plus marker popups are part of the mapping library so the plain HTML markers outside the map couldn’t have popups, or at least not the same sort.
I wondered if we’re attempting to overcomplicate the map. I would imagine most school children aren’t even going to bother looking at the transcripts and cluttering up the map with them might not be all that useful. An alternative might be to have the transcripts in a collapsible section underneath the ‘Continue’ button that appears after the ‘check answers’ button is pressed. We could have some text saying something like ‘Interested in reading what the speakers said? Look at the transcripts below’. The section could be hidden by default and then pressing on it opens up headings for speakers 1-8. Pressing on a heading then expands a section where the transcript can be read.
On Tuesday I had a call with the PI and Co-I of the Books and Borrowing project about the requirements for the front-end and the various search and browse functionality it would need to have. I’d started writing a requirements document before the meeting and we discussed this, plus their suggestions and input from others. It was a very productive meeting and I continued with the requirements document after the call. There’s still a lot to put into it, and the project’s data and requirements are awfully complicated, but I feel like we’re making good progress and things are beginning to make sense.
I also made some further tweaks to the speech database for the Speech Star project. I’d completed an initial version of this last week, including the option to view multiple selected videos side by side. However, while the videos worked fine in Firefox, in other browsers only the last video loaded successfully. It turns out that there’s a limit to the number of open connections Chrome will allow. If I set the videos so that the content doesn’t preload then all videos work when you press to play them. However, this introduces a further problem: without preloading, nothing gets displayed where the video appears unless you add in a ‘poster’, which is an image file to use as a placeholder, usually a still from the video. We had these for all of the videos for Seeing Speech, but we don’t have them for the new STAR videos. I’ve made a couple manually for the test page, but I don’t want to have to manually create hundreds of such images. I did wonder about doing this via YouTube, as it generates placeholder images, but even this is going to take a long time as you can only upload 15 videos at once to YouTube, then you need to wait for them to be processed, then you need to manually download the image you want.
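In markup terms the workaround looks roughly like this (file names here are made up for illustration):

```html
<!-- preload="none" avoids Chrome's open-connection limit when several
     videos sit on one page, but then nothing is displayed before playback
     starts unless a poster image is supplied -->
<video controls preload="none" poster="posters/bilabial_plosive.jpg">
  <source src="videos/bilabial_plosive.mp4" type="video/mp4">
</video>
```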
I found a post that gave some advice on programmatically generating poster images from video files (https://stackoverflow.com/questions/2043007/generate-preview-image-from-video-file) but the PHP library seemed to require some kind of weird package installer to first be installed in order to function. The library also required https://ffmpeg.org/download.html to be installed to function, and I decided to not bother with the PHP library and just use FFMPEG directly, calling it from the command line via a PHP script and iterating through the hundreds of videos to make the posters. It worked very well and now the ‘multivideo’ feature works perfectly in all browsers.
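For each video the FFMPEG call is a one-liner of roughly this shape (options reproduced from memory, so treat this as a sketch rather than the exact command my script runs):

```shell
# grab a single frame one second into the video and save it as the poster
ffmpeg -ss 00:00:01 -i input.mp4 -frames:v 1 -q:v 2 poster.jpg
```

My PHP script just iterates through the video directory and shells out to a command like this for each file.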
Also this week I had a Zoom call with Ophira Gamliel in Theology about a proposal she’s putting together. After the call I wrote sections of a Data Management Plan for the proposal and answered several emails over the remainder of the week. I also had a chat with the DSL people about the switch to the new server that we have scheduled for March. There’s quite a bit to do with the new data (and new structures in the new data) before we go live, so March is going to be quite a busy time.
Finally this week I spent some time on the Anglo-Norman Dictionary. I finished generating the KWIC data for one of the textbase texts now that the server will allow scripts to execute for a longer time. I also investigated an issue with the XML proofreader that was giving errors. It turned out that the errors were being caused by errors in the XML files themselves, and I found out that oXygen offers a very nice batch validation facility that you can run on massive batches of XML files at the same time (see https://www.oxygenxml.com/doc/versions/24.0/ug-editor/topics/project-validation-and-transformation.html). I also began working with a new test instance of the AND site, through which I am going to publish the new data for the letter S. There are many thousands of XML files that need to be integrated and it’s taking some time to ensure the scripts that process these work properly, but all is looking encouraging.
I will be participating in the UCU strike action over the coming weeks so that’s all for now.
I split my time over many different projects this week. For the Books and Borrowing project I completed the work I started last week on processing the Wigtown data, writing a little script that amalgamated borrowing records that had the same page order number on any page. These occurrences arose when multiple volumes of a book were borrowed by a person at the same time and each volume was recorded separately. My script worked perfectly and many such records were amalgamated.
I then moved onto incorporating images of register pages from Leighton into the CMS. This proved to be a rather complicated process for one of the four registers as around 30 pages for the register had already been manually created in the CMS and had borrowing records associated with them. However, these pages had been created in a somewhat random order, starting at folio number 25 and mostly being in order down to 43, at which point the numbers are all over the place, presumably because the pages were created in the order that they were transcribed. As it stands the CMS relies on the ‘page ID’ order when generating lists of pages as ‘Folio Number’ isn’t necessarily in numerical order (e.g. front / back matter with Roman numerals). If out of sequence pages crop up a lot we may have to think about adding a new ‘page order’ column, or possibly use the ‘previous’ and ‘next’ IDs to ascertain the order pages should be displayed. After some discussion with the team it looks like pages are usually created in page order and Leighton is an unusual case, so we can keep using the auto-incrementing page ID for listing pages in the contents page. I therefore generated a fresh batch of pages for the Leighton register then moved the borrowing records from the existing mixed up pages to the appropriate new page, then deleted the existing pages so everything is all in order.
For the Speak For Yersel project I created a new exercise whereby users are presented with a map of Scotland divided into 12 geographical areas and there are eight map markers in a box in the sea to the east of Scotland. Each marker is clickable, and clicking on it plays a sound file. Each marker is also draggable and after listening to the sound file the user should then drag the marker to whichever area they think the speaker in the sound file is from. After dragging all of the markers the user can then press a ‘check answers’ button to see which they got right, and press a ‘view correct locations’ button which animates the markers to their correct locations on the map. It was a lot of fun making the exercise and I think it works pretty well. It’s still just an initial version and no doubt we will be changing it, but here’s a screenshot of how it currently looks (with one answer correct and the rest incorrect):
For the Speech Star project I made some further changes to the speech database. Videos no longer autoplay, as requested. Also, the tables now feature checkboxes beside them. You can select up to four videos by pressing on these checkboxes. If you select more than four the earliest one you pressed is deselected, keeping a maximum of four no matter how many checkboxes you try to click on. When at least one checkbox is pressed the tab contents will slide down and a button labelled ‘Open selected videos’ will appear. If you press on this a wider popup will open, containing all of your chosen videos and the metadata about each. This has required quite a lot of reworking to implement, but it seemed to be working well, until I realised that while the multiple videos load and play successfully in Firefox, in Chrome and MS Edge (which is based on Chrome) only the final video loads in properly, with only audio playing on the other videos. I’ll need to investigate this further next week. But here’s a screenshot of how things look in Firefox:
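The selection rule (‘no more than four; selecting a fifth drops the earliest’) is essentially a small queue. A sketch of the logic (function and variable names are mine, not the real site’s code):

```javascript
// Maintain at most `max` selected video IDs, in click order.
// Re-clicking a selected ID deselects it; selecting beyond the limit
// drops the earliest selection.
function toggleSelection(selected, id, max = 4) {
  const next = selected.slice();
  const i = next.indexOf(id);
  if (i > -1) {
    next.splice(i, 1);                    // already selected: deselect
  } else {
    next.push(id);                        // newly selected: append
    if (next.length > max) next.shift();  // over the limit: drop the earliest
  }
  return next;
}
```

Keeping the IDs in click order is what makes ‘the earliest one you pressed is deselected’ fall out naturally.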
Also this week I spoke to Thomas Clancy about the Place-names of Iona project, including discussing how the front-end map will function (Thomas wants an option to view all data on a single map, which should work, although we may need to add in clustering at higher zoom levels). We also discussed how to handle external links and what to do about the elements database, which includes a lot of irrelevant elements from other projects.
Finally, I had some email conversations with the DSL people and made an update to the interface of the new DSL website to incorporate an ‘abbreviations’ button, which links to the appropriate DOST or SND abbreviations page.
I had a very busy week this week, working on several different projects. For the Books and Borrowing project I participated in the team Zoom call on Monday to discuss the upcoming development of the front-end and API for the project, which will include many different search and browse facilities, graphs and visualisations. I followed this up with a lengthy email to the PI and Co-I where I listed some previous work I’ve done and discussed some visualisation libraries we could use. In the coming weeks I’ll need to work with them to write a requirements document for the front-end. I also downloaded images from Orkney library, uploaded all of them to the server and generated the necessary register and page records. One register with 7 pages already existed in the system and I ensured that page images were associated with these and the remaining pages of the register fit in with the existing ones. I also processed the Wigtown data that Gerry McKeever had been working on, splitting the data associated with one register into two distinct registers, uploading page images and generating the necessary page records. This was a pretty complicated process, and I still need to complete the work on it next week, as there are several borrowing records listed as separate rows when in actual fact they are merely another volume of the same book borrowed at the same time. These records will need to be amalgamated.
For the Speak For Yersel project I had a meeting with the PI and RA on Monday to discuss updates to the interface I’ve been working on, new data for the ‘click’ exercise and a new type of exercise that will precede the ‘click’ exercise and will involve users listening to sound clips then dragging and dropping them onto areas of a map to see whether they can guess where the speaker is from. I spent some time later in the week making all of the required changes to the interface and the grammar exercise, including updating the style used for the interactive map and using different marker colours.
I also continued to work on the speech database for the Speech Star project based on feedback I received about the first version I completed last week. I added in some new introductory text and changed the order of the filter options. I also made the filter option section hidden by default as it takes up quite a lot of space, especially on narrow screens. There’s now a button to show / hide the filters, with the section sliding down or up. If a filter option is selected the section remains visible by default. I also changed the colour of the filter option section to a grey with a subtle gradient (it gets lighter towards the right) and added a similar gradient to the header, just to see how it looks.
The biggest update was to the filter options, which I overhauled so that instead of a drop-down list where one option in each filter type can be selected, there are checkboxes for each filter option, allowing multiple items of any type to be selected. This was a fairly large change to implement, as the way selected options are passed to the script and the way the database is queried needed to be completely changed. When an option is selected the page immediately reloads to display the results of the selection, and this can also change the contents of the other filter option boxes – e.g. selecting ‘alveolar’ limits the options in the ‘sound’ section. I also removed the ‘All’ option and left all checkboxes unselected by default. This matches how filters on clothes shopping sites work – ‘all’ is the default and a limit is only applied when an option is ticked.
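The “unticked means no limit” behaviour can be sketched as below. The filter and column names are illustrative assumptions, and the real site presumably builds its query in PHP rather than Python; the point is simply that each facet only contributes an `IN (…)` clause when at least one of its checkboxes is ticked.

```python
def build_filter_query(selections):
    """Build a parameterised WHERE fragment from checkbox selections.
    `selections` maps a filter type to the list of ticked options;
    an empty list means no restriction on that facet.
    Returns (sql_fragment, parameters)."""
    clauses = []
    params = []
    for column, options in selections.items():
        if options:  # nothing ticked -> facet imposes no limit
            placeholders = ", ".join("?" for _ in options)
            clauses.append(f"{column} IN ({placeholders})")
            params.extend(options)
    where = " AND ".join(clauses) if clauses else "1=1"
    return where, params
```

Using placeholders rather than interpolating the ticked values keeps the query safe regardless of what arrives from the form submission.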
I also changed the ‘accent’ labels as requested, changed the ‘By Prompt’ header to ‘By Word’ and updated the order of items in the ‘position’ filter. I also fixed an issue where ‘cheap’ and ‘choose’ were appearing in a column instead of the real data. Finally, I made the overlay that appears when a video is clicked on darker so it’s more obvious that you can’t click on the buttons. I did investigate whether it was possible to have the popup open while other page elements were still accessible but this is not something that the Bootstrap interface framework that I’m using supports, at least not without a lot of hacking about with its source code. I don’t think it’s worth pursuing this as the popup will cover much of the screen on tablets / phones anyway, and when I add in the option to view multiple videos the popup will be even larger.
Also this week I made some minor tweaks to the Burns mini-project I was working on last week and had a chat with the DSL people about a few items, such as the data import process that we will be going through again in the next month or so and some of the outstanding tasks that I still need to tackle with the DSL’s interface.
I also did some work for the AND this week, investigating a weird timeout error that cropped up on the new server and discussing how best to tackle a major update to the AND’s data. The team have finished working on a major overhaul of the letter S and this is now ready to go live. We have decided that I will ask for a test instance of the AND to be set up so I can work with the new data, testing out how the DMS runs on the new server and how it will cope with such a large update.
The editor, Geert, had also spotted an issue with the textbase search, which didn’t seem to include one of the texts (Fabliaux) he was searching for. I investigated the issue and it looked like the script that extracted words from pages may have silently failed in some cases. There are 12,633 page records in the textbase, each of which has a word count. When the word count is greater than zero my script processes the contents of the page to generate the data for searching. However, there appear to be 1,889 pages in the system that have a word count of zero, including all of Fabliaux. Further investigation revealed that my scripts expect the XML to be structured with the main content in a <body> tag. This cuts out all of the front matter and back matter from the searches, which is what we’d agreed should happen, and thankfully accounts for many of the supposedly ‘blank’ pages mentioned above, as they’re not part of the actual body of the text.
However, Fabliaux doesn’t include the <body> tag in the standard way. In fact, the XML file consists of multiple individual texts, each of which has a separate <body> tag. As my script didn’t find a <body> in the expected place, no content was processed. I ran a script to check the other texts and the following have a similar issue: gaunt1372 (710 pages) and polsongs (111 pages), in addition to the 37 pages of Fabliaux. Having identified these I updated my script that generates the search words and re-ran it for these texts, fixing the issue.
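The fix amounts to gathering text from every <body> element in a file rather than expecting exactly one in a fixed place. A simplified sketch of that idea, assuming plain un-namespaced XML (the real TEI-style files will have namespaces and much richer markup, so this is illustrative only):

```python
import xml.etree.ElementTree as ET

def extract_body_words(xml_string):
    """Collect searchable words from every <body> element in the file.
    Iterating over all <body> tags means files like Fabliaux, which
    wrap several texts each with its own <body>, are no longer skipped,
    while front and back matter outside <body> is still excluded."""
    root = ET.fromstring(xml_string)
    words = []
    for body in root.iter("body"):  # every <body>, however deeply nested
        text = " ".join(body.itertext())
        words.extend(text.split())
    return words
```

A file with a single top-level <body> behaves exactly as before, so the change is backwards-compatible with the texts that were already indexing correctly.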
Also this week I attended a Zoom-based seminar on ‘Digitally Exhibiting Textual Heritage’ that was being run by Information Studies. This featured four speakers from archives, libraries and museums discussing how digital versions of texts can be exhibited, both in galleries and online. Some really interesting projects were discussed, both past and present, including the BL’s ‘Turning the Pages’ system (http://www.bl.uk/turning-the-pages/) and some really cool transparent LCD display cases (https://crystal-display.com/transparent-displays-and-showcases/) that allow images to be projected on clear glass while objects behind the panel are still visible. 3D representations of gallery spaces were discussed (e.g. https://www.lib.cam.ac.uk/ghostwords), as were ‘long form narrative scrolls’ such as https://www.nytimes.com/projects/2012/snow-fall/index.html#/?part=tunnel-creek, http://www.wolseymanuscripts.ac.uk/ and https://stories.durham.ac.uk/journeys-prologue/. There is a tool that can be used to create these here: https://shorthand.com/. It was a very interesting session!