Week Beginning 9th January 2023

I attended the workshop ‘The impact of multilingualism on the vocabulary and stylistics of Medieval English’ in Zurich this week.  The workshop ran on Tuesday and Wednesday and I travelled to Zurich with my colleagues Marc Alexander and Fraser Dallachy on Monday.  It was really great to travel to a workshop in a different country again as I’d not been abroad since before Lockdown.  I’d never been to Zurich before and it was a lovely city.  The workshop itself was great, with some very interesting papers and good opportunities to meet other researchers and discuss potential future projects.  I gave a paper on the Historical Thesaurus, its categories and data structures and how semantic web technologies may be used to more effectively structure, manage and share the Historical Thesaurus’s semantically arranged dataset.  It was a half-hour paper with 10 minutes for questions afterwards and it went pretty well.  The audience wasn’t especially technical and I’m not sure how interesting the topic was to most people, but it was well received and I’m glad I had the opportunity both to attend the event and to research the topic, as I have greatly increased my knowledge of semantic web technologies such as RDF, graph databases and SPARQL.  As part of the research I managed to write a script that generated an RDF version of the complete HT category data, which may come in handy one day.
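The RDF generation script itself isn’t included in this post, but the heart of it was along the following lines: a minimal sketch that assumes a simple categories table (the table and column names are illustrative rather than the real HT schema) and writes each category out as a SKOS concept in Turtle.

$db = new PDO('mysql:host=localhost;dbname=ht', 'user', 'pass');
$out = fopen('ht-categories.ttl', 'w');
fwrite($out, "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .\n");
fwrite($out, "@prefix htcat: <http://example.org/ht/category/> .\n\n");
// one SKOS concept per category, with the parent category expressed as skos:broader
foreach($db->query('SELECT catid, heading, parentcatid FROM categories') as $row){
    $label = str_replace('"', '\"', $row['heading']);
    fwrite($out, 'htcat:'.$row['catid']." a skos:Concept ;\n");
    fwrite($out, '    skos:prefLabel "'.$label.'"@en');
    if($row['parentcatid'] != ''){
        fwrite($out, " ;\n    skos:broader htcat:".$row['parentcatid']);
    }
    fwrite($out, " .\n\n");
}
fclose($out);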

I got back home just before midnight on the Wednesday and returned to normal work first thing on Thursday.  This included submitting my expenses from the workshop and replying to a few emails that had come in regarding my office (it looks like the dry rot work is going to take a while to resolve and it also looks like I’ll have to share my temporary office) and attempting to set up web hosting for the VARICS project, which Arts IT Support seem reluctant to do.  I also looked into an issue with the DSL that Ann Ferguson had spotted and spoke to the IT people at Stirling about their current progress with setting up a Solr instance for the Books and Borrowing project.  I also replaced a selection of library register images with better versions for that project and arranged a meeting for next Monday with the project’s PI and Co-I to discuss progress with the front-end.

I spent most of Friday writing a Data Management Plan and attending a Zoom call for a new speech therapy project I’m involved with.  It’s an ESRC funding proposal involving Glasgow and Strathclyde and I’ll be managing the technical aspects.  We had a useful call and I managed to complete an initial version of the DMP that the PI is going to adapt if required.

Week Beginning 19th December 2022

This was the last week before the Christmas holidays, and Friday was a holiday.  I spent some time on Monday making further updates to the Speech Star data.  I fixed some errors in the data and made some updates to the error type descriptions.  I also made ‘poster’ images from the latest batch of child speech videos I’d created last week as this was something I’d forgotten to do at the time.  I also fixed some issues with the non-disordered speech data, including changing a dash to an underscore in the filenames of the files for one speaker as there had been a mismatch between filenames and metadata, causing none of the videos to open in the site.  I also created records for two projects (The Gentle Shepherd and Speak For Yersel) on this very site (see https://digital-humanities.glasgow.ac.uk/projects/last-updated/) as these are the projects I’ve been working on that have actually launched in the past year.  Other major ones such as Books and Borrowing and Speech Star are not yet ready to share.  I also updated all of the WordPress sites I manage to the latest version.

On Tuesday I travelled into the University to locate my new office.  My stuff had been moved across last week after a leak in the building resulted in water pouring through my office.  Plus work is ongoing to fix the dry rot in the building and I would have needed to move out for that anyway.  It took a little time to get the new office in order and to get my computer equipment set up, but once it was all done it was actually a very nice location – much nicer than the horrible little room I’m usually stuck in.

I spent most of Tuesday upgrading Google Analytics for all of the sites I manage that use it.  Google’s current analytics system is being retired in July next year and I decided to use the time in the run-up to Christmas to migrate the sites over to the new Google Analytics 4 platform.  This was a mostly straightforward process, although as usual Google’s systems feel clunky and counterintuitive at times.  It was also a fairly lengthy process as I had to update the code for each site in question.  Nevertheless I managed to get it done and informed all of the staff whose websites would be affected by the change.  I also had a further chat with Geert, the editor of the Anglo-Norman Dictionary, about the new citation edit feature I’m planning at the moment.
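For each site the change essentially boils down to swapping the old Universal Analytics code for Google’s standard gtag.js snippet in the page header, along these lines (the measurement ID below is a placeholder rather than a real property):

<!-- Google tag (gtag.js) - 'G-XXXXXXXXXX' is a placeholder measurement ID -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-XXXXXXXXXX');
</script>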

On Wednesday I had a meeting with prospective project partners in Strathclyde about a speech therapy proposal we’re putting together.  It was good to meet people and to discuss things.  I’ll be working on the Data Management Plan for the proposal after the holidays.  I spent the rest of the day working on my paper for the workshop I’m attending in Zurich in the second week of January.  I have now finished the paper, which is quite a relief.

On Thursday I spent some time working for the Dictionaries of the Scots Language.  I responded to an email from Ann Fergusson about how we should handle links to ancillary pages in the XML.  There are two issues here that need to be agreed upon.  The first issue is how to represent links to things other than entries in the entry XML.  We currently have the <ref> element that is used to link from one entry to another (e.g. <ref refid="snd00065761">Chowky</ref>).  We could use the HTML element <a> in the XML for links to things other than entries but I personally think it’s best not to use this as (in my opinion) it’s better for XML elements to be meaningful when you look at them and the meaning of <a> isn’t especially clear.  It might be better to use <ref> with a different attribute instead of ‘refid’, for example <ref url="https://dsl.ac.uk/geographical-labels">.  Reusing <ref> means we don’t need to update the DTD (the rules that define which elements can be used where in the XML) to add a new element.

Of course other people may think that inventing our own way of writing HTML links is daft when everyone is already familiar with <a href="https://dsl.ac.uk/geographical-labels"> and we could use the latter if people prefer.  If this is the case we would need to update the DTD to allow such elements to be used.  If we didn’t update the DTD the XML files would fail to validate.

Whichever way is chosen, there is a second issue that will need to be addressed:  I will need to update the XSLT that transforms the XML into HTML to tell the script how to handle either a <ref> with a ‘url’ attribute or a <a> with an ‘href’ attribute.  Without updating the XSLT the links won’t work.  I can add such a rule in when we decide how best to represent links in the XML.
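To give an idea of the sort of rule that would be needed, here’s a minimal XSLT sketch for the <ref> with a ‘url’ attribute option (an illustration rather than the final template):

<xsl:template match="ref[@url]">
  <a href="{@url}"><xsl:apply-templates/></a>
</xsl:template>

The <a> option would just need an equivalent template matching a[@href].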

I also made a couple of tweaks to the wildcard search term highlighting feature I was working on last week and then published the update on the live DSL site.  Now when you perform a search for something like ‘chr*mas’ and then select an entry to view, any word that matches the wildcard pattern will be highlighted.  For example, go to this page: https://dsl.ac.uk/results/chr*mas/fulltext/withquotes/both/ and then select one of the entries and you’ll see the term highlighted in the entry page.
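The highlighting essentially involves turning the wildcard pattern into a regular expression and wrapping anything that matches in a highlighting tag.  Here’s a simplified sketch of the idea rather than the site’s actual code:

// convert the wildcard term into a regex and wrap matching words in a highlight tag
function highlightWildcard($text, $term){
    // escape regex metacharacters, then turn '*' back into 'zero or more word characters'
    $pattern = str_replace('\*', '\w*', preg_quote($term, '/'));
    return preg_replace('/\b('.$pattern.')\b/iu', '<mark>$1</mark>', $text);
}
echo highlightWildcard('A merry Christmas to one and all', 'chr*mas');
// gives: A merry <mark>Christmas</mark> to one and all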

That’s all from me for this year.  Merry chr*mas one and all!

Week Beginning 24th October 2022

I returned to work this week after having a lovely week’s holiday in the Lake District.  I spent most of the week working for the Books and Borrowing project.  I’d received images for two library registers from Selkirk whilst I was away and I set about integrating them into our system.  This required a bit of work to get the images matched up to the page records for the registers that already exist in the CMS.   Most of the images are double-pages but the records in the CMS are of single pages marked as ‘L’ or ‘R’.  Not all of the double-page images have both ‘L’ and ‘R’ in the CMS and some images don’t have any corresponding pages in the CMS.  For example in Volume 1 we have ‘1010199l’ followed by ‘1010203l’ followed by ‘1010205l’ and then ‘1010205r’.  This seems to be correct as the missing pages don’t contain borrowing records.  However, I still needed to figure out how to match up images and page records.  As with previous situations, the options were either slicing the images down the middle to create separate ‘L’ and ‘R’ images to match each page or joining the ‘L’ and ‘R’ page records in the CMS to make one single record that then matches the double-page image.  There are several hundred images so manually chopping them up wasn’t really an option, and automatically slicing them down the middle wouldn’t work too well as the page divide is often not in the centre of the image.  This left joining up the page records in the CMS as the best option and I wrote a script that joins the page records, renames them to remove the ‘L’ and ‘R’ affixes, moves all borrowing records across, renumbers their page order and then deletes the now empty pages.  Thankfully it all seemed to work well.  I also uploaded the images for the final register from the Royal High School, which thankfully was a much more straightforward process as all image files matched references already stored in the CMS for each page.
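The joining script isn’t reproduced here in full, but its core looked something like the following sketch, in which the table and column names (‘pages’, ‘borrowings’, ‘folio’ and so on) are illustrative rather than the real CMS schema:

$db = new PDO('mysql:host=localhost;dbname=bnb', 'user', 'pass');
$regId = 1; // the register being processed
$pages = $db->query("SELECT page_id, folio FROM pages WHERE register_id = $regId")->fetchAll(PDO::FETCH_ASSOC);

// index the pages by folio so each 'L' page's matching 'R' page is easy to find
$byFolio = array();
foreach($pages as $p){ $byFolio[$p['folio']] = $p; }

foreach($pages as $p){
    if(substr($p['folio'], -1) == 'L'){
        $base = substr($p['folio'], 0, -1);
        // rename the 'L' page so it represents the whole double-page image
        $db->exec("UPDATE pages SET folio = '$base' WHERE page_id = ".$p['page_id']);
        if(isset($byFolio[$base.'R'])){
            $rId = $byFolio[$base.'R']['page_id'];
            // move the borrowing records across, then delete the now empty 'R' page
            $db->exec("UPDATE borrowings SET page_id = ".$p['page_id']." WHERE page_id = $rId");
            $db->exec("DELETE FROM pages WHERE page_id = $rId");
        }
    }
}
// renumbering the page order of the moved borrowing records would follow here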

I then returned to the development of the front-end for the project.  When I looked at the library page I’d previously created I noticed that the interactive map of library locations was failing to load.  After a bit of investigation I realised that this was caused by new line characters appearing in the JSON data for the map, which was invalidating the file structure.  These had been added in via the library ‘name variants’ field in the CMS and were appearing in the data for the library popup on the map.  I needed to update the script that generated the JSON data to ensure that new line characters were stripped out of the data, and after that the maps loaded again.
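The fix itself was tiny: stripping the new line characters out of the offending field before it goes into the JSON, something along these lines (the field value is just an example):

// remove new line characters from the 'name variants' field before it is written into the JSON
$nameVariants = "Example Library\nThe Example Library"; // example value containing a new line
$nameVariants = str_replace(array("\r\n", "\r", "\n"), ' ', $nameVariants);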

Before I went on holiday I’d created a browse page for library books that split the library’s books up based on the initial letter of their titles.  The approach I’d taken worked pretty well, but St Andrews was still a bit of an issue due to it containing many more books than the other libraries (more than 8,500).  Project Co-I Matt Sangster suggested that we should omit some registers from the data as their contents (including book records) are not likely to be worked on during the course of the project.  However, I decided to just leave the data in place for now, as excluding data for specific registers would require quite a lot of reworking of the code.  The book data for a library is associated directly with the library record and not the specific registers and all the queries would need to be rewritten to check which registers a book appears in.  I reckon that if these registers are not going to be tackled by the project it might be better to just delete them, not just to make the system work better but to avoid confusing users with messy data, but I decided to leave everything as it is for now.

This week I added in two further ways of browsing books in a selected library:  By author and by most borrowed.  A drop-down list featuring the three browse options appears at the top of the ‘Books’ page now, and I’ve added in a title and explanatory paragraph about the list type. The ‘by author’ browse works in a similar manner to the ‘by title’ browse, with a list of initial letter tabs featuring the initial letter of the author’s surname and a count of the number of books that have an author with a surname beginning with this letter.  Note that any books that don’t have an associated author do not appear in this list.  I did think about adding a ‘no author’ tab as well, but some libraries (e.g. St Andrews) have so many books without specified authors that the data for this tab would take far too long to load in.  Note also that if a book has multiple authors then the book will appear multiple times – once for each author.  Here’s a screenshot of how the interface currently looks:

The actual list of books works in a similar way to the ‘title’ list but is divided by author, with authors appearing with their full name and dates in red above a list of their books.  The ordering of the records is by author surname then forename then author ID then book title.  This means two authors with the same name will still appear as separate headings with their books ordered alphabetically.  However, this has also uncovered some issues with duplicate author records.

Getting this browse list working actually took a huge amount of effort due to the complex way we store authors.  In our system an author can be associated with any one of four levels of book record (work / edition / holding / item) and an author associated at a higher level needs to cascade down to lower level book records.  Running queries directly on this structure proved to be too resource intensive and slow so instead I wrote a script to generate cached data about authors.  This script goes through every author connection at all levels and picks out the unique authors that should be associated with each book holding record.  It then stores a reference to the ID of the author, the holding record and the initial letter of the author’s surname in a new table that is much more efficient to reference.  This then gets used to generate the letter tabs with the number of book counts and to work out which books to return when an author surname beginning with a letter is selected.
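The caching script is broadly along the lines of the sketch below.  The table and column names are illustrative, and the single query stands in for the rather more involved logic that cascades authors down from work, edition and item level to each holding:

$db = new PDO('mysql:host=localhost;dbname=bnb', 'user', 'pass');
$db->exec("DELETE FROM author_holding_cache"); // rebuild the cache from scratch

// stand-in for the real cascade: resolve each author connection down to a unique (holding, author) pair
$sql = "SELECT DISTINCT h.holding_id, a.author_id, a.surname
    FROM holdings h
    JOIN book_author_links l ON l.book_id IN (h.holding_id, h.edition_id, h.work_id)
    JOIN authors a ON a.author_id = l.author_id";

$ins = $db->prepare("INSERT INTO author_holding_cache (author_id, holding_id, surname_letter) VALUES (?, ?, ?)");
foreach($db->query($sql) as $row){
    // store the author, the holding and the initial letter of the author's surname
    $letter = strtoupper(mb_substr($row['surname'], 0, 1));
    $ins->execute(array($row['author_id'], $row['holding_id'], $letter));
}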

However, one thing we need to consider about using cached tables is that the data only gets updated when I run the script to refresh the cache, so any changes or additions to authors made in the CMS will not be directly reflected in the library books tab.  This is also true of the ‘browse books by title’ lists I previously created.  I noticed when looking at the books beginning with ‘V’ for a library (I can’t remember which) that one of the titles clearly didn’t begin with a ‘V’, which confused me for a while before I realised it was because the title must have been changed in the CMS since I last generated the cached data.

The ‘most borrowed’ page lists the top 100 most borrowed books for the library, from most to least borrowed.  Thankfully this was rather more straightforward to implement as I had already created the cached fields for this view.  I did consider whether to have tabs allowing you to view all of the books by number of borrowings, but I wasn’t really sure how useful this would be.  In terms of the display of the ‘top 100’ the books are listed in the same way as the other lists, but the number of borrowings is highlighted in red text to make it easier to see.  I’ve also added in a number to the top-left of the book record so you can see which place a book has in the ‘hitlist’, as you can see in the following screenshot:

I also added in a ‘to top’ button that appears as you scroll down the page (it appears in the bottom right, as you can see in the above screenshot).  Clicking on this scrolls to the page title, which should make the page easier to use – I’ve certainly been making good use of the button anyway.

Also this week I submitted my paper ‘Speak For Yersel: Developing a crowdsourced linguistic survey of Scotland’ to DH2023.  As it’s the first ‘in person’ DH conference to be held in Europe since 2019 I suspect there will be a huge number of paper submissions, so we’ll just need to see if it gets accepted or not.  Also for Speak For Yersel I had a lengthy email conversation with Jennifer Smith about repurposing the SFY system for use in other areas.  The biggest issue here would be generating the data about the areas:  settlements for the drop-down lists, postcode areas with GeoJSON shape files and larger region areas with appropriate GeoJSON shape files.  It took Mary a long time to gather or create all of this data and someone would have to do the same for any new region.  This might be a couple of weeks of effort for each area.  It turns out that Jennifer has someone in mind for this work, which would mean all I would need to do is plug in a new set of questions, work with the new area data and make some tweaks to the interface.  We’ll see how this develops.  I also wrote a script to export the survey data for further analysis.

Another project I spent some time on this week was Speech Star.  For this I created a new ‘Child Speech Error Database’ and populated it with around 90 records that Eleanor Lawson had sent me.  I imported all of the data into the same database as is used for the non-disordered speech database and have added a flag that decides which content is displayed in which page.  I removed ‘accent’ as a filter option (as all speakers are from the same area) and have added in ‘error type’.  Currently the ‘age’ filter defaults to the age group 0-17 as I wasn’t sure how this filter should work, as all speakers are children.

The display of records is similar to the non-disordered page in that there are two means of listing the data, each with its own tab.  In the new page these tabs are for ‘Error type’ and ‘word’.  I also added in ‘phonemic target’ and ‘phonetic production’ as new columns in the table as I thought it would be useful to include these, and I updated the video pop-up for both the new page and the non-disordered page to bring it into line with the popup for the disordered paediatric database, meaning all metadata now appears underneath the video rather than some appearing in the title bar or above the video and the rest below.  I’ve ensured this is exactly the same for the ‘multiple video’ display too.  At the moment the metadata all just appears on one long line (other than speaker ID, sex and age) so the full width of the popup is used, but we might change this to a two-column layout.

Later in the week Eleanor got back to me to say she’d sent me the wrong version of the spreadsheet and I therefore replaced the data.  However, I spotted something relating to the way I structure the data that might be an issue.  I’d noticed a typo in the earlier spreadsheet (there is a ‘helicopter’ and a ‘helecopter’) and had fixed it, but I forgot to apply the same fix to the newer file before uploading it.   Each prompt is only stored once in the database, even if it is used by multiple speakers, so I was going to go into the database, remove the ‘helecopter’ prompt row that shouldn’t have been generated and point the speaker at the existing ‘helicopter’ prompt.  However, I noticed that ‘helicopter’ in the spreadsheet has ‘k’ as the sound whereas the existing record in the database has ‘l’.  I realised this is because the ‘helicopter’ prompt had been created as part of the non-disordered speech database and there the sound is indeed ‘l’.  It looks like one prompt may have multiple sounds associated with it, which my structure isn’t set up to deal with.  I’m going to have to update the structure next week.

Also this week I responded to a request for advice from David Featherstone in Geography who is putting together some sort of digitisation project.  I also responded to a query from Pauline Graham at the DSL regarding the source data for the Scots School Dictionary.  She wondered whether I had the original XML and I explained that there was no original XML.  The original data was stored in an ancient FoxPro database that ran from a CD.  When I created the original School Dictionary app I managed to find a way to extract the data and I saved it as two CSV files – one English-Scots, the other Scots-English.  I then ran a script to convert this into JSON, which is what the original app uses.  I gave Pauline a link to download all of the data for the app, including both English and Scots JSON files and the sound files, and I also uploaded the English CSV file in case this would be more useful.
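The CSV to JSON conversion was nothing fancy, roughly along these lines (the file name and column order are illustrative):

// read the extracted English-Scots CSV and write it out as the JSON structure used by the app
$rows = array();
if(($fh = fopen('english-scots.csv', 'r')) !== false){
    while(($line = fgetcsv($fh)) !== false){
        $rows[] = array('headword' => $line[0], 'translation' => $line[1]);
    }
    fclose($fh);
}
file_put_contents('english-scots.json', json_encode($rows));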

That’s all for this week.  Next week I’ll fix the issues with the Speech Star database and continue with the development of the Books and Borrowing front-end.

Week Beginning 20th June 2022

I completed an initial version of the Chambers Library map for the Books and Borrowing project this week.  It took quite a lot of time and effort to implement the subscription period range slider.  Searching for a range when the data also has a range of dates rather than a single date means we needed to make a decision about what data gets returned and what doesn’t.  This is because the two ranges (the one chosen as a filter by the user and the one denoting the start and end periods of subscription for each borrower) can overlap in many different ways.  For example, the period chosen by the user is 05 1828 to 06 1829.  Which of the following borrowers should therefore be returned?

  1. Borrower’s range is 06 1828 to 02 1829: the range is fully within the selected period so should definitely be included.
  2. Borrower’s range is 01 1828 to 07 1828: the range extends beyond the selected period at the start and ends within the selected period.  Presumably should be included.
  3. Borrower’s range is 01 1828 to 09 1829: the range extends beyond the selected period in both directions.  Presumably should be included.
  4. Borrower’s range is 05 1829 to 09 1829: the range begins during the selected period and ends beyond the selected period.  Presumably should be included.
  5. Borrower’s range is 01 1828 to 04 1828: the range is entirely before the selected period.  Should not be included.
  6. Borrower’s range is 07 1829 to 10 1829: the range is entirely after the selected period.  Should not be included.

Basically if there is any overlap between the selected period and the borrower’s subscription period the borrower will be returned.  But this means most borrowers will be returned a lot of the time.  It’s a very different sort of filter to one that purely focuses on a single date – e.g. filtering the data to only those borrowers whose subscription period *begins* between 05 1828 and 06 1829.

Based on the above assumptions I began to write the logic that would decide which borrowers to include when the range slider is altered.  It was further complicated by having to deal with months as well as years.  Here’s the logic in full if you fancy getting a headache:

if(((mapData[i].sYear>startYear || (mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth || mapData[i].sYear>startYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)) || (((mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth) || mapData[i].sYear>startYear) && ((mapData[i].sYear==endYear && mapData[i].sMonth <=endMonth) || mapData[i].sYear<endYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)))
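For anyone who would rather avoid the headache, the same ‘any overlap’ rule can be expressed much more compactly by collapsing each year and month pair into a single month count.  This is just an alternative sketch rather than the code the map actually uses:

// months since year zero, so a year/month pair can be compared as a single number
function toMonths(year, month){ return (year * 12) + month; }

// true if the borrower's subscription period overlaps the selected period at all
function overlaps(sYear, sMonth, eYear, eMonth, startYear, startMonth, endYear, endMonth){
    return toMonths(sYear, sMonth) <= toMonths(endYear, endMonth)
        && toMonths(eYear, eMonth) >= toMonths(startYear, startMonth);
}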

I also added the subscription period to the popups.  The only downside to the range slider is that the occupation marker colours change depending on how many occupations are present during a period, so you can’t always tell an occupation by its colour. I might see if I can fix the colours in place, but it might not be possible.

I also noticed that the jQuery UI sliders weren’t working very well on touchscreens so I installed the jQuery TouchPunch library to fix that (https://github.com/furf/jquery-ui-touch-punch).  I also made the library marker bigger and gave it a white border to more easily differentiate it from the borrower markers.

I then moved onto incorporating page images in the resource too.  Where a borrower has borrowing records, the relevant pages where these records are found now appear as thumbnails in the borrower popup.  These are generated by the IIIF server based on dimensions passed to it, which is much nicer than having to generate and store thumbnails directly.  I also updated the popup to make it wider when required to give more space for the thumbnails.  Here’s a screenshot of the new thumbnails in action:
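The thumbnails are simply IIIF Image API requests in which the size section of the URL tells the server how big an image to return, so a thumbnail is just a URL along the lines of the following (the server name and identifier are made up):

https://iiif.example.org/iiif/register-page-0042/full/!150,150/0/default.jpg

The ‘!150,150’ part asks for an image scaled to fit within 150 by 150 pixels while keeping its aspect ratio.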

Clicking on a thumbnail opens a further popup containing a zoomable / pannable image of the page.  This proved to be rather tricky to implement.  Initially I was going to open a popup in the page (outside of the map container) using a jQuery UI Dialog.  However, I realised that this wouldn’t work when the map was being viewed in full-screen mode, as nothing beyond the map container is visible in such circumstances.  I then considered opening the image in the borrower popup but this wasn’t really big enough.  I then wondered about extending the ‘Map options’ section and replacing the contents of this with the image, but this then caused issues for the contents of the ‘Map options’ section, which didn’t reinitialise properly when the contents were reinstated.  I then found a plugin for the Leaflet mapping library that provides a popup within the map interface (https://github.com/w8r/Leaflet.Modal) and decided to use this.  However, it’s all a little complex as the popup then has to include another mapping library called OpenLayers that enables the zooming and panning of the page image, all within the framework of the overall interactive map.  It is all working and I think it works pretty well, although I guess the map interface is a little cluttered, what with the ‘Map Options’ section, the map legend, the borrower popup and then the page image popup as well.  Here’s a screenshot with the page image open:

All that’s left to do now is add in the introductory text once Alex has prepared it and then make the map live.  We might need to rearrange the site’s menu to add in a link to the Chambers Map as it’s already a bit cluttered.

Also for the project I downloaded images for two further library registers for St Andrews that had previously been missed.  However, there are already records for the registers and pages in the CMS so we’re going to have to figure out a way to work out which image corresponds to which page in the CMS.  One register has a different number of pages in the CMS compared to the image files so we need to work out how to align the start and end and if there are any gaps or issues in the middle.  The other register is more complicated because the images are double pages whereas it looks like the page records in the CMS are for individual pages.  I’m not sure how best to handle this.  I could either try and batch process the images to chop them up or batch process the page records to join them together.  I’ll need to discuss this further with Gerry, who is dealing with the data for St Andrews.

Also this week I prepared for and gave a talk to a group of students from Michigan State University who were learning about digital humanities.  I talked to them for about an hour about a number of projects, such as the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/), the digital edition I’d created for New Modernist Editing (https://nme-digital-ode.glasgow.ac.uk/), the Historical Thesaurus (https://ht.ac.uk/), Books and Borrowing (https://borrowing.stir.ac.uk/) and TheGlasgowStory (https://theglasgowstory.com/).  It went pretty well and it was nice to be able to talk about some of the projects I’ve been involved with for a change.

I also made some further tweaks to the Gentle Shepherd Performances page which is now ready to launch, and helped Geert out with a few changes to the WordPress pages of the Anglo-Norman Dictionary.  I also made a few tweaks to the WordPress pages of the DSL website and finally managed to get a hotel room booked for the DHC conference in Sheffield in September.  I also made a couple of changes to the new Gaelic Tongues section of the Seeing Speech website and had a discussion with Eleanor about the filters for Speech Star.  Fraser had been in touch about 500 Historical Thesaurus categories that had been newly matched to OED categories so I created a little script to add these connections to the online database.

I also had a Zoom call with the Speak For Yersel team.  They had been testing out the resource at secondary schools in the North East and have come away with lots of suggested changes to the content and structure of the resource.  We discussed all of these and agreed that I would work on implementing the changes the week after next.

Next week I’m going to be on holiday, which I have to say I’m quite looking forward to.

Week Beginning 6th June 2022

I’d taken Monday off this week to have an extra-long weekend following the jubilee holidays on Thursday and Friday last week.  On Tuesday I returned to work, starting with another meeting for Speak For Yersel and a list of further tweaks to the site, including many changes to three of the five activities and a new set of colours for the map marker icons, which make the markers much easier to differentiate.

I spent most of the week working on the Books and Borrowing project.  We’d been sent a new library register from the NLS and I spent a bit of time downloading the 700 or so images, processing them and uploading them into our system.  As usual, page numbers go a bit weird.  Page 632 is written as 634 and then after page 669 comes not 670 but 700!  I ran my script to bring the page numbers in the system into line with the oddities of the written numbers.  On Friday I downloaded a further library register which I’ll need to process next week.

My main focus for the project was the Chambers Library interactive map sub-site.  The map features the John Ainslie 1804 map from the NLS, and currently it uses the same modern map as I’ve used elsewhere in the front-end for consistency, although this may change.  The map defaults to having a ‘Map options’ pane open on the left, and you can open and close this using the button above it.  I also added a ‘Full screen’ button beneath the zoom buttons in the bottom right, and added this to the other maps in the front-end too. Borrower markers have a ‘person’ icon and the library itself has the ‘open book’ icon as found on other maps.

By default the data is categorised by borrower gender, with somewhat stereotypical (but possibly helpful) blue and pink colours differentiating the two.  There is one borrower with an ‘unknown’ gender and this is set to green.  The map legend in the top right allows you to turn on and off specific data groups.  The screenshot below shows this categorisation:

The next categorisation option is occupation, and this has some problems.  The first is that there are almost 30 different occupations, meaning the legend is awfully long and so many different marker colours are needed that some of them are difficult to differentiate.  Secondly, most occupations only have a handful of people.  Thirdly, some people have multiple occupations, and if so these are treated as one long occupation, so we have both ‘Independent Means > Gentleman’ and then ‘Independent Means > Gentleman, Politics/Office Holders > MP (Britain)’.  It would be tricky to separate these out as the marker would then need to belong to two sets with two colours, plus what happens if you hide one set?  I wonder if we should just use the top-level categorisation for the groupings instead?  This would result in 12 groupings plus ‘unknown’, meaning the legend would be both shorter and narrower.  Below is a screenshot of the occupation categorisation as it currently stands:

The next categorisation is subscription type, which I don’t think needs any explanation.  I then decided to add in a further categorisation for number of borrowings, which wasn’t originally discussed but as I used the page I found myself looking for an option to see who borrowed the most, or didn’t borrow anything.  I added the following groupings, but these may change: 0, 1-10, 11-20, 21-50, 51-70, 70+ and have used a sequential colour scale (darker = more borrowings).  We might want to tweak this, though, as some of the colours are a bit too similar.  I haven’t added in the filter to select subscription period yet, but will look into this next week.

At the bottom of the map options is a facility to change the opacity of the historical map so you can see the modern street layout.  This is handy for example for figuring out why there is a cluster of markers in a field where ‘Ainslie Place’ was presumably built after the historical map was produced.

I decided to not include the marker clustering option in this map for now as clustering would make it more difficult to analyse the categorisation as markers from multiple groupings would end up clustered together and lose their individual colours until the cluster is split.  Marker hover-overs display the borrower name and the pop-ups contain information about the borrower.  I still need to add in the borrowing period data, and also figure out how best to link out to information about the borrowings or page images.  The Chambers Library pin displays the same information as found in the ‘libraries’ page you’ve previously seen.

Also this week I responded to a couple of queries from the DSL people about Google Analytics and the icon that gets used for the site when posting on Facebook.  Facebook was picking out the University of Glasgow logo rather than the DSL one, which wasn’t ideal.  Apparently there’s a ‘meta’ tag that you need to add to the site header in order for Facebook to pick up the correct logo, as discussed here: https://stackoverflow.com/questions/7836753/how-to-customize-the-icon-displayed-on-facebook-when-posting-a-url-onto-wall
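For reference, the tag in question is the Open Graph ‘og:image’ property, which goes in the page header (the image URL here is just a placeholder):

<meta property="og:image" content="https://dsl.ac.uk/path/to/dsl-logo.png" />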

I also created a new user for the Ayr place-names project and dealt with a couple of minor issues with the CMS that Simon Taylor had encountered.  I also investigated a certificate error with the ohos.ac.uk website and responded to a query about QR codes from fellow developer David Wilson.  Also, Craig Lamont in Scottish Literature got in touch about a spreadsheet listing Burns manuscripts that he’s been working on with a view to turning it into a searchable online resource and I gave him some feedback about the structure of the spreadsheet.

Finally, I did a bit of work for the Historical Thesaurus, working on a further script to match up HT and OED categories based on suggestions by researcher Beth Beattie.  I found a script I’d produced in 2018 that ran pattern matching on headings and I adapted this to only look at subcats within 02.02 and 02.03, picking out all unmatched OED subcats from these (there are 627) and then finding all unmatched HT categories where our ‘t’ numbers match the OED path.  Previously the script used the HT oedmaincat column to link up OED and HT but this no longer matches (e.g. HT ‘smarten up’ has ‘t’ nums 02.02.16.02 which matches OED 02.02.16.02 ‘to smarten up’ whereas HT ‘oedmaincat’ is ’02.04.05.02’).

The script lists the various pattern matches at the top of the page and the output is displayed in a table that can be copied and pasted into Excel.  Of the 627 OED subcats there are 528 that match an HT category.  However, some of them potentially match multiple HT categories.  These appear in red while one to one matches appear in green.  Some of these multiple matches are due to Levenshtein matches (e.g. ‘sadism’ and ‘sadist’) but most are due to there being multiple subcats at different levels with the exact same heading.  These can be manually tweaked in Excel and then I could run the updated spreadsheet through a script to insert the connections.  We also had an HT team meeting this week that I attended.
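The pattern matching itself is fairly simple.  Here’s a sketch of the idea, in which the data arrays and field names are just illustrative; the real script pulls the unmatched categories from the database and also requires the ‘t’ numbers to match the OED path:

// sample data standing in for the unmatched categories pulled from the database
$oedSubcats = array(array('path' => '02.02.16.02', 'heading' => 'to smarten up'));
$htCats = array(array('catid' => 123456, 'heading' => 'smarten up'));

// lower-case and strip a leading 'to ' so e.g. 'to smarten up' matches 'smarten up'
function normaliseHeading($h){
    return preg_replace('/^to /', '', strtolower(trim($h)));
}

foreach($oedSubcats as $oed){
    foreach($htCats as $ht){
        $a = normaliseHeading($oed['heading']);
        $b = normaliseHeading($ht['heading']);
        // exact matches plus near misses (e.g. 'sadism' / 'sadist') via Levenshtein distance
        if($a == $b || levenshtein($a, $b) <= 2){
            echo $oed['path'].' "'.$oed['heading'].'" -> HT '.$ht['catid'].' "'.$ht['heading'].'"'."\n";
        }
    }
}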

Week Beginning 16th May 2022

This week I finished off all of the outstanding work for the Speak For Yersel project. The other members of the team (Jennifer and Mary) are both on holiday so I worked through all of the tasks I had on my ‘to do’ list, although there will certainly be more to do once they are both back at work again.  The tasks I completed were a mixture of small tweaks and larger implementations.  I made tweaks to the ‘About’ page text and changed the intro text to the ‘more give your word’ exercise.  I then updated the age maps for this exercise, which proved to be pretty tricky and time-consuming to implement as I needed to pull apart a lot of the existing code.  Previously these maps showed ‘60+’ and ‘under 19’ data for a question, with different colour markers for each age group showing those who would say a term (e.g. ‘Scunnered’) and grey markers for each age group showing those who didn’t say the term.  We have completely changed the approach now.  The maps now default to showing ‘under 19’ data only, with different colours for each different term.  There is now an option in the map legend to switch to viewing the ‘60+’ data instead.  I added in the text ‘press to view’ to try and make it clearer that you can change the map.  Here’s a screenshot:

I also updated the ‘give your word’ follow-on questions so that they are now rated in a new final page that works the same way as the main quiz.  In the main ‘give your word’ exercise I updated the quiz intro text and I ensured that the ‘darker dots’ explanatory text has now been removed for all maps.  I tweaked a few questions to change their text or the number of answers that are selectable and I changed the ‘sounds about right’ follow-on ‘rule’ text and made all of the ‘rule’ words lower case.  I also made it so that when the user presses ‘check answers’ for this exercise a score is displayed to the right and the user is able to proceed directly to the next section without having to correct their answers.  They still can correct their answers if they want.

I then made some changes to the ‘She sounds really clever’ follow-on.  The index for this is now split into two sections, one for ‘stereotype’ data and one for ‘rating speaker’ data and you can view the speaker and speaker/listener results for both types of data.  I added in the option of having different explanatory text for each of the four perception pages (or maybe just two – one for stereotype data, one for speaker ratings) and when viewing the speaker rating data the speaker sound clips now appear beneath the map.  When viewing the speaker rating data the titles above the sliders are slightly different.  Currently when selecting the ‘speaker’ view the title is “This speaker from X sounds…” as opposed to “People from X sound…”.  When selecting the ‘speaker/listener’ view the title is “People from Y think this speaker from X sounds…” as opposed to “People from Y think people from X sound…”.  I also added a ‘back’ button to these perception follow-on pages so it’s easier to choose a different page.  Finally, I added some missing HTML <title> tags to pages (e.g. ‘Register’ and ‘Privacy’) and fixed a bug whereby the ‘explore more’ map sound clips weren’t working.

With my ‘Speak For Yersel’ tasks out of the way I could spend some time looking at other projects that I’d put on hold for a while.  A while back Eleanor Lawson contacted me about adding a new section to the Seeing Speech website where Gaelic speaker videos and data will be accessible, and I completed a first version this week.  I replicated the Speech Star layout rather than the /r/ & /l/ page layout as it seemed more suitable: the latter only really works for a limited number of records while the former works well with lots more (there are about 150 Gaelic records).  What this means is the data has a tabular layout and filter options.  As with Speech Star you can apply multiple filters and you can order the table by a column by clicking on its header (clicking a second time reverses the order).  I’ve also included the option to open multiple videos in the same window.  I haven’t included the playback speed options as the videos already include the clip at different speeds.  Here’s a screenshot of how the feature looks:

On Thursday I had a Zoom call with Laura Rattray and Ailsa Boyd to discuss a new digital edition project they are in the process of planning.  We had a really great meeting and their project has a lot of potential.  I’ve offered to give technical advice and write any technical aspects of the proposal as and when required, and their plan is to submit the proposal in the autumn.

My final major task for the week was to continue to work on the Ramsay ‘Gentle Shepherd’ data.  I overhauled the filter options that I implemented last week so they now work in a less confusing way when multiple types are selected.  I’ve also imported the updated spreadsheet, taking the opportunity to trim whitespace to cut down on strange duplicates in the filter options.  There are still some typos in the spreadsheet that will need to be fixed, though (e.g. we have ‘Glagsgow’ and ‘Glagsow’), plus some dates also still need to be fixed.

I then created an interactive map for the project and have incorporated the data for which there are latitude and longitude values.  As with the Edinburgh Gazetteer map of reform societies (https://edinburghgazetteer.glasgow.ac.uk/map-of-reform-societies/) the number of performances at a venue is displayed in the map marker.  Hover over a marker to see info about the venue.  Click on it to open a list of performances.  Note that when zoomed out it can be difficult to make out individual markers but we can’t really use clustering as on the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/) because this would get confusing:  we’d have clustered numbers representing the number of markers in a cluster and then individual markers with a number representing the number of performances.  I guess we could remove the number of performances from the marker and just have this in the tooltip and / or popup, but it is quite useful to see all the numbers on the map.  Here’s a screenshot of how the map currently looks:

I still need to migrate all of this to the University’s T4 system, which I aim to tackle next week.

Also this week I had discussions about migrating an externally hosted project website to Glasgow for Thomas Clancy.  I received a copy of the files and database for the website and have checked over things and all is looking good.  I also submitted a request for a temporary domain and I should be able to get a version of the site up and running next week.  I also regenerated a list of possible duplicate authors in the Books and Borrowing system after the team had carried out some work to remove duplicates.  I will be able to use the spreadsheet I have now to amalgamate duplicate authors, a task which I will tackle next week.

Week Beginning 31st January 2022

I split my time over many different projects this week.  For the Books and Borrowing project I completed the work I started last week on processing the Wigtown data, writing a little script that amalgamated borrowing records that had the same page order number on any page.  These occurrences arose when multiple volumes of a book were borrowed by a person at the same time and each volume was recorded separately.  My script worked perfectly and many such records were amalgamated.
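The amalgamation worked roughly as in the sketch below, with illustrative table and column names; it also glosses over merging the volume details of the deleted records into the one that is kept:

$db = new PDO('mysql:host=localhost;dbname=bnb', 'user', 'pass');

// find pages where more than one borrowing record shares the same page order number
$dupes = $db->query("SELECT page_id, page_order, MIN(borrowing_id) AS keep_id
    FROM borrowings GROUP BY page_id, page_order HAVING COUNT(*) > 1")->fetchAll(PDO::FETCH_ASSOC);

foreach($dupes as $d){
    // keep the first record and delete the others, which represented further volumes of the same borrowing
    $db->exec("DELETE FROM borrowings WHERE page_id = ".$d['page_id']."
        AND page_order = ".$d['page_order']." AND borrowing_id != ".$d['keep_id']);
}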

I then moved onto incorporating images of register pages from Leighton into the CMS.  This proved to be a rather complicated process for one of the four registers as around 30 pages for the register had already been manually created in the CMS and had borrowing records associated with them.  However, these pages had been created in a somewhat random order, starting at folio number 25 and mostly being in order down to 43, at which point the numbers are all over the place, presumably because the pages were created in the order that they were transcribed.    As it stands the CMS relies on the ‘page ID’ order when generating lists of pages as ‘Folio Number’ isn’t necessarily in numerical order (e.g. front / back matter with Roman numerals).  If out of sequence pages crop up a lot we may have to think about adding a new ‘page order’ column, or possibly use the ‘previous’ and ‘next’ IDs to ascertain the order pages should be displayed.  After some discussion with the team it looks like pages are usually created in page order and Leighton is an unusual case, so we can keep using the auto-incrementing page ID for listing pages in the contents page.  I therefore generated a fresh batch of pages for the Leighton register then moved the borrowing records from the existing mixed up pages to the appropriate new page, then deleted the existing pages so everything is all in order.

For the Speak For Yersel project I created a new exercise whereby users are presented with a map of Scotland divided into 12 geographical areas and there are eight map markers in a box in the sea to the east of Scotland.  Each marker is clickable, and clicking on it plays a sound file.  Each marker is also draggable and after listening to the sound file the user should then drag the marker to whichever area they think the speaker in the sound file is from.  After dragging all of the markers the user can then press a ‘check answers’ button to see which they got right, and press a ‘view correct locations’ button which animates the markers to their correct locations on the map.  It was a lot of fun making the exercise and I think it works pretty well.  It’s still just an initial version and no doubt we will be changing it, but here’s a screenshot of how it currently looks (with one answer correct and the rest incorrect):
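The drag and drop mechanics are the sort of thing jQuery UI’s draggable and droppable handle nicely.  Here’s a rough sketch of the approach rather than the exercise’s actual code (the element IDs and answer data are made up):

// each marker records which geographical area its speaker is actually from (made-up data)
var answers = { 'marker-1': 'glasgow', 'marker-2': 'aberdeen' };

$('.marker').draggable({ containment: '#map-wrapper' });

$('.area').droppable({
    drop: function(event, ui){
        // remember which area the user dropped this marker onto
        ui.draggable.data('placed-on', $(this).attr('id'));
    }
});

$('#check-answers').on('click', function(){
    $('.marker').each(function(){
        var correct = ($(this).data('placed-on') == answers[$(this).attr('id')]);
        $(this).toggleClass('correct', correct).toggleClass('incorrect', !correct);
    });
});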

For the Speech Star project I made some further changes to the speech database.  Videos no longer autoplay, as requested.  Also, the tables now feature checkboxes beside them.  You can select up to four videos by pressing on these checkboxes.  If you select more than four the earliest one you pressed is deselected, keeping a maximum of four no matter how many checkboxes you try to click on.  When at least one checkbox is pressed the tab contents will slide down and a button labelled ‘Open selected videos’ will appear.  If you press on this a wider popup will open, containing all of your chosen videos and the metadata about each.  This has required quite a lot of reworking to implement, but it seemed to be working well, until I realised that while the multiple videos load and play successfully in Firefox, in Chrome and MS Edge (which is based on Chrome) only the final video loads in properly, with only audio playing on the other videos.  I’ll need to investigate this further next week.  But here’s a screenshot of how things look in Firefox:
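The ‘maximum of four’ behaviour is just a question of remembering the order in which the checkboxes were ticked, something along these lines (the class and ID names are illustrative):

// keep track of the order in which the checkboxes were ticked, capped at four
var selected = [];
$('.video-check').on('change', function(){
    var box = this;
    if(box.checked){
        selected.push(box);
        if(selected.length > 4){
            // untick the earliest selection so no more than four are ever active
            selected.shift().checked = false;
        }
    } else {
        selected = selected.filter(function(cb){ return cb !== box; });
    }
    // show or hide the 'Open selected videos' button
    $('#open-selected').toggle(selected.length > 0);
});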

Also this week I spoke to Thomas Clancy about the Place-names of Iona project, including discussing how the front-end map will function (Thomas wants an option to view all data on a single map, which should work although we may need to add in clustering at higher zoom levels).  We also discussed how to handle external links and what to do about the elements database, which includes a lot of irrelevant elements from other projects.

I also had an email conversation with Ophira Gamliel in Theology about a proposal she’s putting together that will involve an interactive map, gave some advice to Diane Scott about cookie policy pages, worked with Raymond in Arts IT Support to fix an issue with a server update that was affecting the playback of videos on the Seeing Speech and Dynamic Dialects websites and updated a script that Fraser Dallachy needed access to for his work on a Scots Thesaurus.

Finally, I had some email conversations with the DSL people and made an update to the interface of the new DSL website to incorporate an ‘abbreviations’ button, which links to the appropriate DOST or SND abbreviations page.

Week Beginning 22nd November 2021

I spent a bit of time this week writing an abstract for the DH2022 conference.  I wrote about how I rescued the data for the Anglo-Norman Dictionary in order to create the new AND website.  The DH abstracts are actually 750-1000 words long so it took a bit of time to write.  I have sent it on to Marc for feedback and I’ll need to run it by the AND editors before submission as well (if it’s worth submitting).  I still don’t know whether there would be sufficient funds for me to attend the event, plus the acceptance rate for papers is very low, so I’ll just need to see how this develops.

Also this week I participated in a Zoom call for the DSL about user feedback and redeveloping the DSL website.  It was a pretty lengthy call, but it was interesting to be a part of.  Marc mentioned a service called Hotjar (https://www.hotjar.com/) that allows you to track how people use your website (e.g. tracking their mouse movements) and this seemed like an interesting way of learning about how an interface works (or doesn’t).  I also had a conversation with Rhona about the updates to the DSL DNS that need to be made to improve the security of their email systems.  Somewhat ironically, recent emails from their IT people had ended up in my spam folder and I hadn’t realised they were asking me for further changes to be made, which unfortunately has caused a delay.

I spoke to Gerry Carruthers about another new project he’s hoping to set up, and we’ll no doubt be having a meeting about this in the coming weeks.  I also gave some advice to the students who are migrating the IJOSTS articles to WordPress and made some updates to the Iona Placenames website in preparation for their conference.

For the Anglo-Norman Dictionary I fixed an issue with one of the textbase texts that had duplicate notes in one of its pages and then I worked on a new feature for the DMS that enables the editors to search the phrases contained in locutions in entries.  Editors can either match locution phrases beginning with a term (e.g. ta*), ending with a term (e.g. *de) or without a wildcard the term can appear anywhere in the phrase.  Other options found on the public site (e.g. single character wildcards and exact matches) are not included in this search.

The first time a search is performed the system needs to query all entries to retrieve only those that feature a locution.  These results are then stored in the session for use the next time a search is performed.  This means subsequent searches in a session should be quicker, and also means if the entries are updated between sessions to add or remove locutions the updates will be taken into consideration.

Search results work in a similar way to the old DMS option:  Any matching locution phrases are listed, together with their translations if present (if there are multiple senses / subsenses for a locution then all translations are listed, separated by a ‘|’ character).  Any cross references appear with an arrow and then the slug of the cross referenced entry.  There is also a link to the entry the locution is part of, which opens in a new tab on the live site.  A count of the total number of entries with locutions, the number of entries your search matched a phrase in and the total number of locutions is displayed above the results.
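In outline the search works something like the sketch below.  The ‘locution_cache’ table is illustrative (the real system extracts the locutions from the entry XML the first time a search is run in a session):

session_start();
if(!isset($_SESSION['locutions'])){
    // first search of the session: gather the locution phrases from entries that have them
    $db = new PDO('mysql:host=localhost;dbname=and', 'user', 'pass');
    $_SESSION['locutions'] = $db->query("SELECT entry_slug, phrase, translations FROM locution_cache")->fetchAll(PDO::FETCH_ASSOC);
}

$term = 'ta*'; // the editor's search term
if(strpos($term, '*') === false){
    $pattern = '/'.preg_quote($term, '/').'/iu'; // no wildcard: match anywhere in the phrase
} else {
    // a leading or trailing '*' becomes an open-ended match anchored to the other end of the phrase
    $pattern = '/^'.str_replace('\*', '.*', preg_quote($term, '/')).'$/iu';
}

$results = array();
foreach($_SESSION['locutions'] as $loc){
    if(preg_match($pattern, $loc['phrase'])){ $results[] = $loc; }
}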

I spent the rest of the week working on the Speak For Yersel project.  We had a Zoom call on Monday to discuss the mockups I’d been working on last week and to discuss the user interface that Jennifer and Mary would like me to develop for the site (previous interfaces were just created for test purposes).  I spent the rest of my available time developing a further version of the grammar exercise with the new interface, that included logos, new fonts and colour schemes, sections appearing in different orders and an overall progress bar for the full exercise rather than individual ones for the questionnaire and the quiz sections.

I added in UoG and AHRC logos underneath the exercise area and added both an ‘About’ and ‘Activities’ menu items with ‘Activities’ as the active item.  The active state of the menu wasn’t mentioned in the document but I gave it a bottom border and made the text green not blue (but the difference is not hugely noticeable).  This is also used when hovering over a menu item.  I made the ‘Let’s go’ button blue not green to make it consistent with the navigation button in subsequent stages.  When a new stage loads the page now scrolls to the top as on mobile phones the content was changing but the visible section remained as it was previously, meaning the user had to manually scroll up.  I also retained the ‘I would never say that!’ header in the top-left corner of all stages rather than having ‘activities’ so it’s clearer what activity the user is currently working on.  For the map in the quiz questions I’ve added the ‘Remember’ text above the map rather than above the answer buttons as this seemed more logical and on the quiz the map pane scrolls up and scrolls down when the next question loads so as to make it clearer that it’s changed.  Also, the quiz score and feedback text now scroll down one after the other and in the final ‘explore’ page the clicked on menu item now remains highlighted to make it clearer which map is being displayed.  Here’s a screenshot of how the new interface looks:

Week Beginning 8th November 2021

I spent a bit of time this week working for the DSL.  I needed to act as the go-between for the DSL’s new IT people who are updating their email system and the University’s IT people who manage the DNS record on behalf of the DSL.  It took a few attempts before the required changes were successfully in place.  I also read through a document that had been prepared about automatically ‘fixing’ the DSL’s dates to make them machine readable, and gave some feedback on the many different procedures that will need to be performed on the various date forms to produce the desired structure.

I also looked into an issue with cross references within citations that work in the live site but are not functioning in the new site or in the DSL’s editing system.  After some investigation it seems like it’s another case of the original API ‘fixing’ the XML in some way each time it’s processed in order for these links to work.  The XML for ‘put_v’ stored in the original API is as follows:

<cit><cref><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

There is a <ref> tag but no other information in this tag.  This is the same for the XML exported from DPS and used in the new DSL site (which has an additional bibliographic reference in):

<cit><cref refid="bib013153"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

The XSLT for both the live and new sites doesn’t include anything to process a <ref> that doesn’t include any attributes so both the live and new sites shouldn’t be displaying a link through to ‘putting’.  But of course the live site does.  I had generated and stored the XML that the original API (which I did not develop) outputs whenever the live site asks for an entry.  When looking at this I found the following:

<cit><cref ref="db674"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref action="link" href="dost/putting">Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

You can see that the original API is injecting both a bibliographical cross-reference and the ‘putting’ reference.  The former we previously identified and sorted but the latter unfortunately hasn’t been, although references that are not in citations do seem to have been fixed.  I updated the XSLT on the new DSL site to process the <ref> so the link now works, however this is not an approach that can be relied upon as all the XSLT is currently doing is taking the contents of the tag (Putting) and making a link out of it.  If the ‘slug’ of the entry doesn’t match the display form then the link is not going to work.  The original API includes a table containing cross references, but this doesn’t differentiate ones in citations from regular ones, and as the ‘putting_v’ entry contains 83 references it’s not going to be easy to pick out from this the ones that still need to be added.  This will need further discussion with the editors.

Continuing on a dictionary theme, I also did some further work for the Anglo-Norman Dictionary.  Last week I processed entries where a varlist date needed to be used as the citation date, but we noticed that the earliest date for entries hadn’t been updated in many cases where it should have been.  This week I figured out what went wrong: my script only updated the entry’s date if the new date from the varlist was earlier than the existing earliest date for the entry.  This is not what we want, as in the majority of cases the varlist date will be later and should replace the earlier, erroneous date.  Thankfully it was easy to pick out all of the entries that have a ‘usevardate’ and I then reran a corrected version of the script that checks and replaces an entry’s earliest date.

The editor spotted a couple of entries that still hadn’t been updated after this process and I then had to investigate them.  One of them had an error in the edited markup that was preventing the update from being applied.  For the other I realised that my code to update the XML wasn’t looking at all senses, just the first in each entry.  My script was attempting to loop through all senses as follows:

foreach($xml->main_entry->sense->attestation as $a){
    //process here
}

This unfortunately only loops through the attestations in the first sense.  What I needed to do was:

foreach($xml->main_entry->sense as $s){
    foreach($s->attestation as $a){
        //process here
    }
}

As the sense that needed updating for ‘aspreté’ was the last one in the entry, the XML wasn’t getting changed.  This meant ‘usevardate’ wasn’t present in the XML, and therefore my update to regenerate the earliest dates didn’t catch this entry (even though all of the citation dates for the entry had been successfully updated in the database).  I then fixed my script and regenerated all of the data again, including fixing the entries with XML errors so that they were updated too.  I then ran a further spreadsheet containing entries that needed updating through the fixed script, resulting in a further 257 citations having their dates updated.

Finally, I updated the Dictionary Management System so that ‘usevardate’ dates are taken into consideration when processing and publishing uploaded XML files.  If a ‘usevardate’ is found then this date is used for the attestation, which automatically affects the earliest date that is generated for the entry and also the dates used for attestations for search purposes.  I tried this out by downloading the XML for ‘admirable’, which features a ‘usevardate’.  I edited the XML to remove the ‘usevardate’ before uploading and publishing this version, and as expected the dates for the attestation and the entry’s earliest date changed.  I then reinstated the ‘usevardate’ and uploaded and published this version; the ‘usevardate’ was taken into consideration when generating the entry’s earliest date and attestation dates, returning the entry to the way it was before the test.

Also this week I set up a WordPress site that will be used for the archive of the International Journal of Scottish Theatre and Screen and migrated one of the issues to WordPress, which required me to do the following:

  1. Open the file in a PDF viewer for reference (e.g. Adobe Acrobat)
  2. Open the file in MS Word, which converts it into an editable format
  3. Create a WordPress page for the article, using the article’s title as the page title and setting the page ‘parent’ to ‘Volume 1’
  4. Copy and paste the article contents from Word into WordPress
  5. Go through the article in WordPress, referencing the file in Acrobat, and manually fix any issues that I spotted (e.g. fixing the display of headings and some line breaks that were erroneously added). Footnotes proved particularly tricky as their layout was not handled very well by Word.  It’s possible that some footnotes are not quite right, especially in the ‘Trainspotting’ article, which has more than 70(!) footnotes.
  6. Publish the WordPress page and update the ‘Volume 1’ page to add a link to it.

None of this was particularly difficult to do, but it was somewhat time-consuming.  There are a further 18 issues left to do (as far as I can tell), although some will take longer as they contain more articles, and some are more structurally complicated (e.g. they include images).  Gerry Carruthers is getting a couple of students to do the rest and we have a meeting scheduled next week where I’ll talk through the process.

I also made some further tweaks to the WordPress site for ‘Our Heritage, Our Stories’ and dealt with renewing the domain for TheGlasgowStory.com, which is now secure for a further nine years.  I also generated an Excel spreadsheet of the full lexical dataset from Mapping Metaphor for Wendy Anderson after she had a request for the data from some researchers in Germany.

I spent the rest of the week working for the Speak For Yersel project, continuing to generate mockups of the interactive exercises.  I completed an initial version of the overall structure for both the acceptability and word choice question types for the grammar exercise, so it will be possible to just ‘plug in’ any number of other questions that fit these templates.  What I haven’t done yet is incorporate the maps, the post-questionnaire ‘explore’ or the final quiz, as these need more content.  Here’s how things currently look:

I used a different font for the heading (Slackey), with the same one used for the ‘Question x of y’ text too.  I also used CSS gradients quite a bit in this version, as the team seemed quite keen on these: there’s a subtle diagonal gradient in the header and footer backgrounds, and a more obvious top-to-bottom one in the answer buttons.  I used different combinations of colours too.  I created a progress bar, which works, but with only two questions in the system it’s not especially obvious what it does.  Rather than having people click an answer and then click a ‘next’ button to continue, I’ve made it so that clicking an answer automatically loads the next step: a panel containing a ‘map’ (just a static image for now) appears, along with a ‘next’ button if there is a next question.  Clicking the ‘next’ button slides up the map panel, loads the next question in and advances the progress bar.  Users will be accessing this on many different screen sizes; I’ve tested it out on my Android phone and my iPad in both portrait and landscape orientations and all seems to work well, although the map panel will be displayed below rather than beside the questions on narrower screens.
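
To give a concrete idea of the flow, here’s a rough sketch of the sort of logic involved.  This is not the actual project code: the element IDs, class names and question data below are invented for illustration, and the real interface animates the map panel rather than simply showing and hiding it.

// Sketch of the question flow: clicking an answer reveals the map panel and a
// 'next' button; clicking 'next' hides the map, loads the next question and
// advances the progress bar. Assumes elements with the IDs/classes used below.
const questions = [
    { text: 'Example question 1', map: 'map-q1.png' },
    { text: 'Example question 2', map: 'map-q2.png' }
];
let current = 0;

function showQuestion(i) {
    document.getElementById('question-text').textContent = questions[i].text;
    document.getElementById('question-number').textContent = 'Question ' + (i + 1) + ' of ' + questions.length;
    document.getElementById('map-image').src = questions[i].map;
    document.getElementById('map-panel').style.display = 'none';
    document.getElementById('next-button').style.display = 'none';
    document.getElementById('progress-bar').style.width = ((i + 1) / questions.length * 100) + '%';
    window.scrollTo(0, 0); // keep the top of the stage visible on small screens
}

document.querySelectorAll('.answer-button').forEach(function (btn) {
    btn.addEventListener('click', function () {
        // the chosen answer would be recorded here, then the map and 'next' button revealed
        document.getElementById('map-panel').style.display = 'block';
        if (current < questions.length - 1) {
            document.getElementById('next-button').style.display = 'block';
        }
    });
});

document.getElementById('next-button').addEventListener('click', function () {
    current++;
    showQuestion(current);
});

showQuestion(current);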

I then began experimenting with randomly positioned markers in polygonal areas.  Initially I wanted to see whether this would be possible in ArcGIS, and a bit of Googling suggested it would be; see for example this post: http://gis.mtu.edu/?p=127.  The post is 10 years old, so the instructions don’t match up with how things work in the current version of ArcGIS, but it at least showed that it should be possible.  I loaded up the desktop version of ArcGIS via Glasgow Anywhere and, after some experimentation and a fair bit of exasperation, I managed to create a polygon shape and add 100 randomly placed marker points to it, which you can see here:

Something we will have to bear in mind is how such points will look when zoomed:

This is just 100 points over a pretty large geographical area.  We might end up with thousands of points, which might make this approach unusable.  Another issue is that it took ArcGIS more than a minute to generate and process these 100 random points.  I don’t know how much of this is down to running the software via Glasgow Anywhere, but if we’re dealing with tens of polygons and hundreds or thousands of data points this is just not going to be feasible.

An issue of greater concern is that, as far as I can tell (after more than an hour of investigation), the ‘create random points’ option is not available via ArcGIS Online, which is the tool we would need to use to generate maps to share online (if we choose to use ArcGIS).  The online version seems to be really pared back in terms of functionality compared to the desktop version and I just couldn’t see any way of incorporating the random points system.  However, I discovered a way of generating random points using Leaflet and another JavaScript-based geospatial library called turf.js (http://turfjs.org/).  The information about how to go about it is here:  https://gis.stackexchange.com/questions/163044/mapbox-how-to-generate-a-random-coordinate-inside-a-polygon
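
The basic approach described there is to generate candidate points within the polygon’s bounding box and keep only those that fall inside the polygon itself.  Here’s a minimal sketch of that idea (not the final code): it assumes the turf.js and Leaflet libraries are loaded, that ‘areaPolygon’ is a GeoJSON polygon feature for the area and that ‘map’ is an existing Leaflet map.

// Generate 'count' random markers inside a polygon using turf.js and Leaflet.
// Candidates are created within the polygon's bounding box and rejected if
// they fall outside the polygon itself.
function randomMarkersInPolygon(areaPolygon, count, map) {
    const bbox = turf.bbox(areaPolygon);
    const markers = [];
    while (markers.length < count) {
        const candidate = turf.randomPoint(1, { bbox: bbox }).features[0];
        if (turf.booleanPointInPolygon(candidate, areaPolygon)) {
            const coords = candidate.geometry.coordinates; // GeoJSON order is [lng, lat]
            markers.push(L.circleMarker([coords[1], coords[0]], { radius: 4 }).addTo(map));
        }
    }
    return markers;
}

Calling a function along these lines with a count of 100 produces the sort of output described below.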

I created a test map using the SCOSYA area for Campbeltown and the SCOSYA base map.  As a solution I’d say it’s working pretty well – it’s very fast and seems to do what we want it to.  You can view an example of the script output here:

The script generates 100 randomly placed markers each time you load the page.  At zoomed-out levels the markers are too big, but I can make them smaller; this is just an initial test.  There is unfortunately going to be some clustering of markers as well, due to the nature of the random number generator, and this may give people the wrong impression.  I could maybe update the code to reject markers that are too close to an existing one, but I’d need to look into that.  I’d say it’s looking promising, anyway!
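
If the clustering does turn out to be a problem, the rejection idea mentioned above could be implemented as a simple minimum-distance check before a candidate point is accepted.  A rough sketch using turf’s distance function (the function name and threshold handling are just illustrative):

// Returns true if the candidate point is at least minKm away from every
// point that has already been accepted. 'accepted' is an array of GeoJSON
// point features and 'candidate' is a new one.
function farEnoughAway(candidate, accepted, minKm) {
    return accepted.every(function (existing) {
        return turf.distance(candidate, existing, { units: 'kilometers' }) >= minKm;
    });
}

This check could be slotted into the generation loop, although setting too high a threshold for the number of points required would slow generation down or prevent it from completing.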

Week Beginning 25th October 2021

I came down with some sort of stomach bug on Sunday and was off work with it on Monday and Tuesday.  Thankfully I was feeling well again by Wednesday and managed to cram quite a lot into the three remaining days of the week.  I spent about a day working on the Data Management Plan for the new Curious Travellers proposal, sending out a first draft on Wednesday afternoon and dealing with responses to the draft during the rest of the week.  I also had some discussions with the Dictionaries of the Scots Language’s IT people about updating the DNS record regarding emails, responded to a query about the technology behind the SCOTS corpus, updated the images used in the mockups of the STAR website and created the ‘attendees only’ page for the Iona Placenames conference and added some content to it.  I also had a conversation with one of the Books and Borrowing researchers about trimming out the blank pages from the recent page image upload, and I’ll need to write a script to implement this next week.

My main task of the week was to develop a test version of the ‘where is the speaker from?’ exercise for the Speak For Yersel project.  This exercise involves the user listening to an audio clip and pressing a button each time they hear something that identifies the speaker as being from a particular area.  In order to create this I needed to generate my own progress bar that tracks the recording as it’s played, implement ‘play’ and ‘pause’ buttons, implement a button that, when pressed, grabs the current point in the audio playback and places a marker in the progress bar, and implement a means of mapping the exact times of each button press to specific sections of the transcription of the audio file, so we can ascertain which section contains the feature the user noted.

It took quite some planning and experimentation to get the various aspects of the feature working, but I managed to complete an initial version that I’m pretty pleased with.  It will still need a lot of work but it demonstrates that we will be able to create such an exercise.  The interface design is not final; it’s just there as a starting point, using the Bootstrap framework (https://getbootstrap.com), the colours from the SCOSYA logo and a couple of fonts from Google Fonts (https://fonts.google.com).  There is a big black bar with a sort of orange vertical line on the right.  Underneath this is the ‘Play’ button and what I’ve called the ‘Log’ button (though we probably want to think of a better name).  I’ve used icons from Font Awesome (https://fontawesome.com/), including a speech bubble icon in the ‘Log’ button.

As discussed previously, when you press the ‘Play’ button the audio plays and the orange line starts moving across the black area.  The ‘Play’ button also turns into a ‘Pause’ button.  The greyed out ‘Log’ button becomes active when the audio is playing.  If you press the ‘Log’ button a speech bubble icon is added to the black area at the point where the orange ‘needle’ is.
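
Behind the scenes this sort of behaviour can be driven by the HTML5 audio element’s ‘timeupdate’ event and ‘currentTime’ property.  The following is only a simplified sketch (the element IDs are invented for illustration, and the real exercise has more going on):

// Move the 'needle' as the audio plays, toggle play/pause, and add a speech
// bubble marker at the needle's position when the 'Log' button is pressed.
// Assumes an <audio id="clip"> element plus #needle, #track, #play-button
// and #log-button in the page.
const audio = document.getElementById('clip');
const logTimes = [];

audio.addEventListener('timeupdate', function () {
    const pct = (audio.currentTime / audio.duration) * 100;
    document.getElementById('needle').style.left = pct + '%';
});

document.getElementById('play-button').addEventListener('click', function () {
    if (audio.paused) { audio.play(); } else { audio.pause(); }
    document.getElementById('log-button').disabled = audio.paused;
});

document.getElementById('log-button').addEventListener('click', function () {
    logTimes.push(audio.currentTime * 1000); // store the click time in milliseconds
    const bubble = document.createElement('span');
    bubble.className = 'log-bubble';
    bubble.style.left = document.getElementById('needle').style.left;
    document.getElementById('track').appendChild(bubble);
});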

For now the exact log times are output in the footer area.  Once the audio clip finishes the ‘Play’ button becomes a ‘Start again’ button.  Pressing this clears the speech bubble icons and the footer and starts the audio from the beginning again.  The log is also processed: currently 1 second is taken off each click time to account for thinking and clicking.  I’ve extracted the data from the transcript of the audio and manually converted it into JSON, which is more easily processed by JavaScript.  Each ‘block’ consists of an ID, the transcribed content and the start and end times of the block in milliseconds.
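
To give a sense of the shape of the data and of the tallying process described below, here’s a simplified sketch; the transcript content and times are invented, as the real data was extracted from the actual transcript.

// Simplified transcript data: each block has an id, the transcribed content
// and start/end times in milliseconds.
const transcript = [
    { id: 1, content: 'first section of the transcript', start: 0, end: 4500 },
    { id: 2, content: 'second section of the transcript', start: 4500, end: 9200 }
];

// Tally the logged clicks against transcript blocks, knocking one second off
// each click time to allow for thinking and pressing the button.
function tallyClicks(logTimes) {
    const tally = {};
    logTimes.forEach(function (t) {
        const adjusted = t - 1000;
        const block = transcript.find(function (b) {
            return adjusted >= b.start && adjusted < b.end;
        });
        if (block) {
            tally[block.id] = (tally[block.id] || 0) + 1;
        }
    });
    return tally; // e.g. { 1: 2, 2: 1 } means two clicks in block 1 and one in block 2
}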

For the time being, for each click the script looks through the transcript data to find an entry where the click time falls between the entry’s start and end times.  A tally of clicks for each transcript entry is then stored and output in the footer so you can see how things are being worked out.  This is of course just test data; we’ll need smaller transcript areas for the real thing.  Currently nothing gets submitted to the server or stored; it’s all just processed in the browser.  I’ve tested the page out in several browsers in Windows, on my iPad and on my Android phone, and the interface works perfectly well on mobile phone screens.  Below is a screenshot showing audio playback and four linguistic features ‘logged’:

Also this week I had a conversation with the editor of the AND about updating the varlist dates, and I updated the DTD to allow the new ‘usevardate’ attribute to be used to identify occasions where a varlist date should be used as the earliest citation date.  We also became aware that a small number of entries in the online dictionary were referencing an old DTD on the wrong server, so I updated these.