Week Beginning 26th September 2022

I spent most of my time this week getting back into the development of the front-end for the Books and Borrowing project.  It’s been a long time since I was able to work on this, due to commitments to other projects and also because processing images and generating associated data in the project’s content management system over the summer involved a lot more work than I was expecting.  However, I was able to get back into the front-end this week and managed to make some pretty good progress.  The first thing I did was to make some changes to the ‘libraries’ page based on feedback I received ages ago from the project’s Co-I Matt Sangster.  The map of libraries used clustering to group libraries that are close together when the map is zoomed out, but Matt didn’t like this.  I therefore removed the clusters and turned the library locations back into regular individual markers.  However, it is now rather difficult to distinguish the markers for a number of libraries.  For example, the markers for Glasgow and the Hunterian libraries (back when the University was still on the High Street) are on top of each other and you have to zoom in a very long way before you can even tell there are two markers there.

I also updated the tabular view of libraries.  Previously the library name was a button that when clicked on opened the library’s page.  Now the name is text and there are two buttons underneath.  The first one opens the library page while the second pans and zooms the map to the selected library, whilst also scrolling the page to the top of the map.  This uses Leaflet’s ‘flyTo’ function which works pretty well, although the map tiles don’t quite load in fast enough for the automatic ‘zoom out, pan and zoom in’ to proceed as smoothly as it ought to.
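
For anyone interested, wiring this up takes very little code.  Here’s a rough sketch of the sort of handler involved (the function and variable names, zoom level and element ID are illustrative rather than the project’s actual code):

// Pan / zoom the Leaflet map to a library and scroll the page to the top of the map.
// 'map' is the Leaflet map instance and 'lib' holds the library's coordinates.
function showLibraryOnMap(map, lib) {
  // Scroll the page so the map container is in view first
  document.getElementById('map').scrollIntoView({ behavior: 'smooth' });
  // flyTo animates the 'zoom out, pan and zoom in' to the target location
  map.flyTo([lib.latitude, lib.longitude], 14, { duration: 2 });
}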

After that I moved onto the library page, which previously just displayed the map and the library name. I updated the tabs for the various sections to display the number of registers, books and borrowers that are associated with the library.  The Introduction page also now features the information recorded about the library that has been entered into the CMS.  This includes location information, dates, links to the library etc.  Beneath the summary info there is the map, and beneath this is a bar chart showing the number of borrowings per year at the library.  Beneath the bar chart you can find the longer textual fields about the library such as descriptions and sources.  Here’s a screenshot of the page for St Andrews:

I also worked on the ‘Registers’ tab, which now displays a tabular list of the selected library’s registers, and I also ensured that when you select one of the tabs other than ‘Introduction’ the page automatically scrolls down to the top of the tabs to avoid the need to manually scroll past the header image (but we still may make this narrower eventually).  The tabular list of registers can be ordered by any of the columns and includes data on the number of pages, borrowers, books and borrowing records featured in each.

When you open a register the information about it is displayed (e.g. descriptions, dates, stats about the number of books etc referenced in the register) and large thumbnails of each page together with page numbers and the number of records on each page are displayed.  The thumbnails are rather large and I could make them smaller, but doing so would mean that all the pages end up looking the same – beige rectangles.  The thumbnails are generated on the fly by the IIIF server and the first time a register is loaded it can take a while for the thumbnails to load in.  However, generated thumbnails are then cached on the server so subsequent page loads are a lot quicker.  Here’s a screenshot of a register page for St Andrews:
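
As an aside, requesting these thumbnails on the fly is just a case of building a URL in the IIIF Image API format, so nothing needs to be pre-generated.  Here’s a minimal sketch (the base URL and image identifier are made up for illustration):

// Build a IIIF Image API URL for a thumbnail of the given maximum width.
// region = 'full', size = '!w,h' (fit within the bounds, preserving aspect ratio),
// rotation = 0, quality / format = default.jpg
function iiifThumbnail(baseUrl, imageId, maxWidth) {
  return baseUrl + '/' + encodeURIComponent(imageId) + '/full/!' + maxWidth + ',' + maxWidth + '/0/default.jpg';
}

// e.g. a 400px-wide thumbnail for a page image
const thumbSrc = iiifThumbnail('https://iiif.example.ac.uk/iiif', 'register1-page001.jpg', 400);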

One thing I also did was write a script to add in a new ‘pageorder’ field to the ‘page’ database table.  I then wrote a script that generated the page order for every page in every register in the system.  This picks out the page that has no preceding page and iterates through pages based on the ‘next page’ ID.  Previously pages in lists were ordered by their auto-incrementing ID, but this meant that if new pages needed to be inserted for a register they ended up stuck at the end of the list, even though the ‘next’ and ‘previous’ links worked successfully.  This new ‘pageorder’ field ensures lists of pages are displayed in the proper order.  I’ve updated the CMS to ensure this new field is used when viewing a register, although I haven’t as of yet updated the CMS to regenerate the ‘pageorder’ for a register if new pages are added out of sequence.  For now if this happens I’ll need to manually run my script again to update things.
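
The ordering logic itself is pretty simple.  Here’s a cut-down sketch of the idea in JavaScript (the real script runs against the database and the field names here are illustrative):

// Assign a sequential 'pageorder' to every page in a register by following the
// 'next page' chain from the page that has no preceding page.
function generatePageOrder(pages) {
  // Index the pages by ID so the chain can be followed quickly
  const byId = new Map(pages.map(p => [p.id, p]));
  // The first page is the one with no preceding page
  let current = pages.find(p => p.prevPageId === null);
  let order = 1;
  while (current) {
    current.pageOrder = order++;             // assign the sequential order value
    current = byId.get(current.nextPageId);  // undefined once the end of the chain is reached
  }
  return pages;
}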

Anyway, back to the front-end:  The new ‘pageorder’ is used in the list of pages mentioned above so the thumbnails are displayed in the correct order.  I may add pagination to this page, as all of the thumbnails are currently on one page and it can take a while to load, although these days people seem to prefer having long pages rather than having data split over multiple pages.

The final section I worked on was the page for viewing an actual page of the register, and this is still very much in progress.  You can open a register page by pressing on its thumbnail and currently you can navigate through the register using the ‘next’ and ‘previous’ buttons or return to the list of pages.  I still need to add in a ‘jump to page’ feature here too.  As discussed in the requirements document, there will be three views of the page: Text, Image and Text and Image side-by-side.  Currently I have implemented the image view only.  Pressing on the ‘Image view’ tab opens a zoomable / pannable interface through which the image of the register page can be viewed.  You can also make this interface full screen by pressing on the button in the top right.  Also, if you’re viewing the image and you use the ‘next’ and ‘previous’ navigation links you will stay on the ‘image’ tab when other pages load.  Here’s a screenshot of the ‘image view’ of the page:

Also this week I wrote a three-page requirements document for the redevelopment of the front-ends for the various place-names projects I’ve created using the system originally developed for the Berwickshire place-names project which launched back in 2018.  The requirements document proposes some major changes to the front-end, moving to an interface that operates almost entirely within the map and enabling users to search and browse all data from within the map view rather than having to navigate to other pages.  I sent the document off to Thomas Clancy, for whom I’m currently developing the systems for two place-names projects (Ayr and Iona) and I’ll just need to wait to hear back from him before I take things further.

I also responded to a query from Marc Alexander about the number of categories in the Thesaurus of Old English, investigated a couple of server issues that were affecting the Glasgow Medical Humanities site, removed all existing place-name elements from the Iona place-names CMS so that the team can start afresh, and responded to a query from Eleanor Lawson about the filenames of video files on the Seeing Speech site.  I also made some further tweaks to the Speak For Yersel resource ahead of its launch next week.  This included adding survey numbers to the survey page, updating the navigation links and writing a script that purges a user and all related data from the system.  I ran this to remove all of my test data from the system.  If we do need to delete a user in future (either because their data is clearly spam or a malicious attempt to skew the results, or because a user has asked us to remove their data) I can run this script again.  I also ran through every single activity on the site to check everything was working correctly.  The only thing I noticed was that I hadn’t updated the script to remove the flags for completed surveys when a user logs out, meaning after logging out and creating a new user the ticks for completed surveys were still displaying.  I fixed this.

I also fixed a few issues with the Burns mini-site about Kozeluch, including updating the table sort options which had stopped working correctly when I added a new column to the table last week and fixing some typos with the introductory text.  I also had a chat with the editor of the Anglo-Norman Dictionary about future developments and responded to a query from Ann Ferguson about the DSL bibliographies.  Next week I will continue with the B&B developments.

Week Beginning 25th July 2022

I was on holiday for most of the previous two weeks, working two days during this period.  I’ll also be on holiday again next week, so I’ve had quite a busy time getting things done.  Whilst I was away I dealt with some queries from Joanna Kopaczyk about the Future of Scots website.  I also had to investigate a request to fill in timesheets for my work on the Speak For Yersel project, as apparently I’d been assigned to the project as ‘Directly incurred’ when I should have been ‘Directly allocated’.  Hopefully we’ll be able to get me reclassified but this is still in-progress.  I also fixed a couple of issues with the facility to export data for publication for the Berwickshire place-name project for Carole Hough, and fixed an issue with an entry in the DSL, which was appearing in the wrong place in the dictionary.  It turned out that the wrong ‘url’ tag had been added to the entry’s XML several years ago and since then the entry was wrongly positioned.  I fixed the XML and this sorted things.  I also responded to a query from Geert of the Anglo-Norman Dictionary about Aberystwyth’s new VPN and whether this would affect his access to the AND.  I also investigated an issue Simon Taylor was having when logging into a couple of our place-names systems.

On the Monday I returned to work I launched two new resources for different projects.  For the Books and Borrowing project I published the Chambers Library Map (https://borrowing.stir.ac.uk/chambers-library-map/) and reorganised the site menu to make space for the new page link.  The resource has been very well received and I’m pretty pleased with how it’s turned out.  For the Seeing Speech project I launched the new Gaelic Tongues resource (https://www.seeingspeech.ac.uk/gaelic-tongues/) which has received a lot of press coverage, which is great for all involved.

I spent the rest of the week dividing my time primarily between three projects:  Speak For Yersel, Books and Borrowing and Speech Star.  For Books and Borrowing I continued processing the backlog of library register image files that had built up.  There were about 15 registers that needed to be processed, and each needed to be handled in a different way.  This included nine registers from Advocates Library that had been digitised by the NLS, for which I needed to batch process the images to rename them, delete blank pages, create page records in the CMS and then tweak the automatically generated folio numbers to account for discrepancies in the handwritten page numbers in the images.  I also processed a register for the Royal High School, which involved renaming the images so they match up with image numbers already assigned to page records in the CMS, inserting new page records, updating the ‘next’ and ‘previous’ links for pages for which new images had been uncovered, and generating new page records for many tens of new pages that follow on from the ones already created in the CMS.  I also uploaded new images for the Craigston register and created a further register for Aberdeen, including all page records and associated image URLs.  I still have some further RHS registers to do and a few from St Andrews, but these will need to wait until I’m back from my holiday.

For Speech Star I downloaded a ZIP containing 500 new ultrasound MP4 videos.  I then had to process them to generate ‘poster’ images for each video (these are images that get displayed before the user chooses to play the video).  I then had to replace the existing normalised speech database with data from a new spreadsheet that included these new videos plus updates to some of the existing data.  This included adding a few new fields and changing the way the age filter works, as much of the new data is for child speakers who have specific ages in months and years, and these all need to be added to a new ‘under 18’ age group.

For Speak For Yersel I had an awful lot to do.  I started with a further large-scale restructuring of the website following feedback from the rest of the team.  This included changing the site menu order, adding in new final pages to the end of surveys and quizzes and changing the text of buttons that appear when displaying the final question.

I then developed the map filter options for age and education for all of the main maps.  This was a major overhaul of the maps.  I removed the slide up / slide down of the map area when an option is selected as this was a bit long and distracting.  Now the map area just updates (although there is a bit of a flicker as the data gets replaced).  The filter options unfortunately make the options section rather big, which is going to be an issue on a small screen.  On my mobile phone the options section takes up 100% of the width and 80% of the height of the map area unless I press the ‘full screen’ button.  However, I figured out a way to ensure that the filter options section scrolls if the content extends beyond the bottom of the map.

I also realised that if you’re in full screen mode and you select a filter option the map exits full screen as the map section of the page reloads.  This is very annoying, but I may not be able to fix it as it would mean completely changing how the maps are loaded.  This is because such filters and options were never intended to be included in the maps and the system was never developed to allow for this.  I’ve had to somewhat shoehorn in the filter options and it’s not how I would have done things had I known from the beginning that these options were required.  However, the filters work and I’m sure they will be useful.  I’ve added in filters for age, education and gender, as you can see in the following screenshot:

I also updated the ‘Give your word’ activity that asks users to identify younger and older speakers so that it uses the new filters too.  The map defaults to showing ‘all’ and the user then needs to choose an age.  I’m still not sure how useful this activity will be as the total number of dots for each speaker group varies considerably, which can easily give the impression that more of one age group use a form compared to another age group purely because one age group has more dots overall.  The questions don’t actually ask anything about geographical distribution so having the map doesn’t really serve much purpose when it comes to answering the question.  I can’t help but think that just presenting people with percentages would work better, or some other sort of visualisation like a bar graph or something.

I then moved on to working on the quiz for ‘she sounds really clever’ and so far I have completed both the first part of the quiz (questions about ratings in general) and the second part (questions about listeners from a specific region and their ratings of speakers from regions).  It’s taken a lot of brain-power to get this working as I decided to make the system work out the correct answer and to present it as an option alongside randomly selected wrong answers.  This has been pretty tricky to implement (especially as depending on the question the ‘correct’ answer is either the highest or the lowest) but will make the quiz much more flexible – as the data changes so will the quiz.
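
In essence the approach is to sort the relevant averages, take the top or bottom entry as the ‘correct’ answer depending on the question, and then mix in wrong answers picked at random from the rest.  Here’s a rough sketch of the idea (the data shape and names are illustrative, and the real code also has to handle the various speaker / listener permutations):

// Build a set of quiz options: one 'correct' answer worked out from the data plus
// randomly selected wrong answers.
function buildAnswerOptions(regionAverages, wantHighest, numOptions) {
  // regionAverages: e.g. [{ region: 'Glasgow', average: 4.2 }, ...]
  const sorted = [...regionAverages].sort((a, b) => b.average - a.average);
  const correct = wantHighest ? sorted[0] : sorted[sorted.length - 1];
  // Pick the wrong answers at random from the remaining regions
  const others = sorted.filter(r => r !== correct);
  const wrong = [];
  while (wrong.length < numOptions - 1 && others.length > 0) {
    wrong.push(others.splice(Math.floor(Math.random() * others.length), 1)[0]);
  }
  // Flag the correct option and shuffle so it isn't always in the same position
  // (a quick and dirty shuffle is fine for a sketch like this)
  return [correct, ...wrong]
    .map(o => ({ region: o.region, average: o.average, isCorrect: o === correct }))
    .sort(() => Math.random() - 0.5);
}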

Part one of the quiz page itself is pretty simple.  There is the usual section on the left with the question and the possible answers.  On the right is a section containing a box to select a speaker and the rating sliders (read-only).  When you select a speaker the sliders animate to their appropriate positions.  I decided not to include the map or the audio file as these didn’t really seem necessary for answering the questions; they would have cluttered up the screen, and people can access them via the maps page anyway (well, once I move things from the ‘activities’ section).  Note that the user’s answers are stored in the database (the region selected and whether this was the correct answer at the time).  Part two of the quiz features speaker/listener true/false questions and this also automatically works out the correct answer (currently based on the 50% threshold).  Note that where there is no data for a listener rating a speaker from a region the rating defaults to 50.  We should ensure that we have at least one rating for a listener in each region before we let people answer these questions.  Here is a screenshot of part one of the quiz in action, with randomly selected ‘wrong’ answers and a dynamically outputted ‘right’ answer:

I also wrote a little script to identify duplicate lexemes in categories in the Historical Thesaurus as it turns out there are some occasions where a lexeme appears more than once (with different dates) and this shouldn’t happen.  These will need to be investigated and the correct dates will need to be established.

I will be on holiday again next week so there won’t be another post until the week after I’m back.

 

Week Beginning 4th July 2022

I had a lovely week’s holiday last week and returned to work for one week only before I head off for a further two weeks.  I spent most of my time this week working on the Speak For Yersel project implementing a huge array of changes that the team wanted to make following the periods of testing in schools a couple of weeks ago.  There were also some new sections of the resource to work on as well.

By Tuesday I had completed the restructuring of the site as detailed in the ‘Roadmap’ document, meaning the survey and quizzes have been separated, as have the ‘activities’ and ‘explore maps’.  This has required quite a lot of restructuring of the code, but I think all is working as it should.  I also updated the homepage text.  One thing I wasn’t sure about is what should happen when the user reaches the end of the survey.  Previously this led into the quiz, but for now I’ve created a page that provides links to the quiz, the ‘more activities’ and the ‘explore maps’ options for the survey in question.

The quizzes should work as they did before, but they now have their own progress bar.  Currently at the end of the quiz the only link offered is to explore the maps, but we should perhaps change this.  The ‘more activities’ work slightly differently to how these were laid out in the roadmap.  Previously a user selected an activity then it loaded an index page with links to the activities and the maps.  As the maps are now separated this index page was pretty pointless, so instead when you select an activity it launches straight into it.  The only one that still has an index page is the ‘Clever’ one as this has multiple options.  However, thinking about this activity:  it’s really just an ‘explore’ like the ‘explore maps’ rather than an actual interactive activity per se, so we should perhaps move this to the ‘explore’ page.

I also made all of the changes to the ‘sounds about right’ survey including replacing sound files and adding / removing questions.  I ended up adding a new ‘question order’ field to the database and questions are now ordered using this, as previously the order was just set by the auto-incrementing database ID which meant inserting a new question to appear midway through the survey was very tricky.  Hopefully this change of ordering hasn’t had any knock-on effects elsewhere.

I then made all of the changes to two other activities:  the ‘lexical’ one and the ‘grammatical’ one.  These included quite a lot of tweaks to questions, question options, question orders and the number of answers that could be selected for questions.  With all of this in place I moved onto the ‘Where do you think this speaker is from’ sections.  The ‘survey’ now only consists of the click map and when you press the ‘Check Answers’ button some text appears under the buttons with links through to where the user can go next.

For the ‘more activities’ section the main click activity is now located here.  It took quite a while to get this to work, as moving sections introduced some conflicts in the code that were a bit tricky to identify.  I replaced the explanatory text and I also added in the limit to the number of presses.  I’ve added a section to the right of the buttons that displays the number of presses the user has left.  Once there are no presses left the ‘Press’ button gets disabled.  I still think people are going to reach the 5 click limit too soon and will get annoyed when they realise they can’t add further clicks and they can’t reset the exercise to give it another go.  After you’ve listened to the four speakers a page is displayed saying you’ve completed the activity and giving links to other parts.  Below is a screenshot of the new ‘click’ activity with the limit in place (and also the new site menu):

 

The ‘Quiz’ has taken quite some time to implement but is now fully operational.  I had to do a lot of work behind the scenes to get the percentages figured out and to get the quiz to automatically work out which answer should be the correct one, but it all works now.  The map displays the ‘Play’ icons as I figured people would want to be able to hear the clips as well as just see the percentages.  Beside each clip icon the percentage of respondents who correctly identified the location of the speaker is displayed.  The markers are placed at the ‘correct’ points on the map, as shown when you view the correct locations in the survey activities.  Question 1 asks you to identify the most recognised speaker, question 2 the least recognised.  Quiz answers are logged in the database so we’ll be able to track them.  Here’s a screenshot of the quiz:

I also added the percentage map to the ‘explore maps’ page, and I gave people the option of focussing on the answers submitted from specific regions.  An ‘All regions’ map displays the same data as the quiz map, but then the user can choose (for example) Glasgow and view the percentages of speakers that respondents from the Glasgow area correctly identified, thus allowing them to compare how well people in each area managed to identify the speakers.  I also decided to add a count of the number of people who have responded.

The ‘explore maps’ for ‘guess the region’ has a familiar layout – buttons on the left that when pressed on load a map on the right.  The buttons correspond to the region of people who completed the ‘guess the region’ survey.  The first option shows the answers of all respondents from all regions.  This is exactly the same as the map in the quiz, except I’ve also displayed the number of respondents above the map.  Two things to be aware of:

Firstly, a respondent can complete the quiz as many times as they want, so each respondent may have multiple datasets.  Secondly, the click map (both quiz and ‘explore maps’) currently includes people from outside of Scotland as well as people who selected an area when registering.  There are currently 18 respondents and 3 of these are outside of Scotland.

When you click on a specific region button in the left-hand column the results of respondents from that specific region only are displayed on the map.  The number of respondents is also listed above the map.  Most of the regions currently have no respondents, meaning an empty map is displayed and a note above the map explains why.  Ayrshire has one respondent.  Glasgow has two.  Note that the reason there are such varied percentages in Glasgow from just two respondents (rather than just 100%, 50% and 0%) is that one or more of the respondents has completed the quiz more than once.  Lothian has two respondents.  North East has 10.  Here’s how the maps look:

On Friday I began to work on the ‘click transcription’ visualisations, which will display how many times users have clicked in each of the sections of the transcriptions they listen to in the ‘click’ activity.  I only managed to get as far as writing the queries and scripts to generate the data, rather than any actual visualisation of it.  When looking at the aggregated data for the four speakers I discovered that the distribution of clicks across sections was a bit more uniform than I thought it might be.  We might need to consider how we’re going to work out the thresholds for different sizes.  I was going to base it purely on the number of clicks, but I realised that this would not work as the more responses we get the more clicks there will be.  Instead I decided to use percentages of the total number of clicks for a speaker.  E.g. for speaker 4 there are currently a total of 65 clicks so the percentages for each section would be:

 

11% Have you seen the TikTok vids with the illusions?
6% They’re brilliant!
9% I just watched the glass one.
17% The guy’s got this big glass full of water in his hands.
8% He then puts it down,
8% takes out one of those big knives
6% and slices right through it.
6% I sometimes get so fed up with Tiktok
8% – really does my head in –
8% but I’m not joking,
14% I want to see more and more of this guy.

 

(which adds up to 101% with rounding).  But what should the thresholds be?  E.g. 0-6% = regular, 7-10% = bigger, 11-15% = even bigger, 16%+ = biggest?  I’ll need input from the team about this.  I’m not a statistician but there may be better approaches, such as using standard deviations.
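
Whatever thresholds we settle on, applying them will be straightforward.  Here’s a quick sketch using the values floated above purely as placeholders:

// Turn raw click counts per transcript section into percentages and display sizes.
// The threshold values are the ones suggested above and will need team input.
function sectionSizes(clickCounts) {
  const total = clickCounts.reduce((sum, n) => sum + n, 0);
  return clickCounts.map(n => {
    const pct = total > 0 ? Math.round((n / total) * 100) : 0;
    let size;
    if (pct >= 16) size = 'biggest';
    else if (pct >= 11) size = 'even bigger';
    else if (pct >= 7) size = 'bigger';
    else size = 'regular';
    return { clicks: n, percent: pct, size: size };
  });
}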

I still have quite a lot of work to do for the project, namely:  Completing the ‘where do you think the speaker is from’ as detailed above; implementing the ‘she sounds really clever’ updates; adding in filter options to the map (age ranges and education levels); investigating dynamically working out the correct answers to map-based quizzes.

In addition to my Speak For Yersel work I participated in an interview with the AHRC about the role of technicians in research projects.  I’d participated in a focus group a few weeks ago and this was a one-on-one follow-up video call to discuss in greater detail some of the points I’d raised in the focus group.  It was a good opportunity to discuss my role and some of the issues I’ve encountered over the years.

I also installed some new themes for the OHOS project website and fixed an issue with the Anglo-Norman Dictionary website, as the editor had noticed that cognate references were not always working.  After some investigation I realised that this was happening when the references for a cognate dictionary included empty tags as well as completed tags.  I had to significantly change how this section of the entry is generated in the XSLT from the XML, which took some time to implement and test.  All seems to be working, though.

I also did some work for the Books and Borrowing project.  Whilst I’d been on holiday I’d been sent page images for a further ten library registers and I needed to process these.  This can be something of a time-consuming process as each set of images needs to be processed in a different way, such as renaming images, removing unnecessary images at the start and end, uploading the images to the server, generating the page images for each register and then bringing the automatically generated page numbers into line with any handwritten page numbers on the images, which may not always be sequentially numbered.  I processed two registers for the Advocates library from the NLS and three registers from Aberdeen library.  I looked into processing the images for a register from the High School of Edinburgh, but I had some questions about the images and didn’t hear back from the researcher before the end of the week, so I needed to leave these.  The remaining registers were from St Andrews and I had further questions about these, as the images are double-page spreads but existing page records in the CMS treat each page separately.  As the researcher dealing with St Andrews was on holiday I’ll need to wait until I’m back to deal with these too.

Also this week I completed the two mandatory Moodle courses about computer security and GDPR, which took a bit longer than I thought they might.

Week Beginning 20th June 2022

I completed an initial version of the Chambers Library map for the Books and Borrowing project this week.  It took quite a lot of time and effort to implement the subscription period range slider.  Searching for a range when the data also has a range of dates rather than a single date meant we needed to make a decision about what data gets returned and what doesn’t.  This is because the two ranges (the one chosen as a filter by the user and the one denoting the start and end periods of subscription for each borrower) can overlap in many different ways.  For example, say the period chosen by the user is 05 1828 to 06 1829.  Which of the following borrowers should therefore be returned?

  1. Borrower’s range is 06 1828 to 02 1829: the range is fully within the selected period so should definitely be included.
  2. Borrower’s range is 01 1828 to 07 1828: the range extends beyond the selected period at the start and ends within the selected period.  Presumably should be included.
  3. Borrower’s range is 01 1828 to 09 1829: the range extends beyond the selected period in both directions.  Presumably should be included.
  4. Borrower’s range is 05 1829 to 09 1829: the range begins during the selected period and ends beyond the selected period.  Presumably should be included.
  5. Borrower’s range is 01 1828 to 04 1828: the range is entirely before the selected period.  Should not be included.
  6. Borrower’s range is 07 1829 to 10 1829: the range is entirely after the selected period.  Should not be included.

Basically if there is any overlap between the selected period and the borrower’s subscription period the borrower will be returned.  But this means most borrowers will be returned a lot of the time.  It’s a very different sort of filter to one that purely focuses on a single date – e.g. filtering the data to only those borrowers whose subscription period *begins* between 05 1828 and 06 1829.

Based on the above assumptions I began to write the logic that would decide which borrowers to include when the range slider is altered.  It was further complicated by having to deal with months as well as years.  Here’s the logic in full if you fancy getting a headache:

if(((mapData[i].sYear>startYear || (mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth || mapData[i].sYear>startYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)) || (((mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth) || mapData[i].sYear>startYear) && ((mapData[i].sYear==endYear && mapData[i].sMonth <=endMonth) || mapData[i].sYear<endYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)))
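
The same ‘any overlap’ rule can be written far more compactly by converting each year / month pair into a single month count first.  Here’s a sketch of that equivalent check, using the same field names purely for illustration rather than being the code that’s actually deployed:

// Convert a year / month pair into a single month count so ranges can be compared directly
const monthIndex = (year, month) => year * 12 + month;

function overlapsSelectedPeriod(borrower, startYear, startMonth, endYear, endMonth) {
  const bStart = monthIndex(borrower.sYear, borrower.sMonth);
  const bEnd = monthIndex(borrower.eYear, borrower.eMonth);
  const selStart = monthIndex(startYear, startMonth);
  const selEnd = monthIndex(endYear, endMonth);
  // Two ranges overlap if each one starts before (or as) the other one ends
  return bStart <= selEnd && selStart <= bEnd;
}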

I also added the subscription period to the popups.  The only downside to the range slider is that the occupation marker colours change depending on how many occupations are present during a period, so you can’t always tell an occupation by its colour. I might see if I can fix the colours in place, but it might not be possible.

I also noticed that the jQuery UI sliders weren’t working very well on touchscreens so installed the jQuery TouchPunch library to fix that (https://github.com/furf/jquery-ui-touch-punch).  I also made the library marker bigger and gave it a white border to more easily differentiate it from the borrower markers.

I then moved on to incorporating page images in the resource too.  Where a borrower has borrowing records, the relevant pages where these records are found now appear as thumbnails in the borrower popup.  These are generated by the IIIF server based on dimensions passed to it, which is much nicer than having to generate and store thumbnails directly.  I also updated the popup to make it wider when required to give more space for the thumbnails.  Here’s a screenshot of the new thumbnails in action:

Clicking on a thumbnail opens a further popup containing a zoomable / pannable image of the page.  This proved to be rather tricky to implement.  Initially I was going to open a popup in the page (outside of the map container) using a jQuery UI Dialog.  However, I realised that this wouldn’t work when the map was being viewed in full-screen mode, as nothing beyond the map container is visible in such circumstances.  I then considered opening the image in the borrower popup but this wasn’t really big enough.  I then wondered about extending the ‘Map options’ section and replacing the contents of this with the image, but this then caused issues for the contents of the ‘Map options’ section, which didn’t reinitialise properly when the contents were reinstated.  I then found a plugin for the Leaflet mapping library that provides a popup within the map interface (https://github.com/w8r/Leaflet.Modal) and decided to use this.  However, it’s all a little complex as the popup then has to include another mapping library called OpenLayers that enables the zooming and panning of the page image, all within the framework of the overall interactive map.  It is all working and I think it works pretty well, although I guess the map interface is a little cluttered, what with the ‘Map Options’ section, the map legend, the borrower popup and then the page image popup as well.  Here’s a screenshot with the page image open:
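
Stripped right down, the pattern is roughly as follows.  This is a sketch rather than the project code: it assumes the full builds of Leaflet, Leaflet.Modal and OpenLayers are loaded, the ‘openModal’ call reflects my reading of the plugin’s documentation, and the element IDs and sizes are made up:

function openPageImage(leafletMap, imageUrl, imgWidth, imgHeight) {
  // Open a modal inside the map container so it still works in full-screen mode
  // (Leaflet.Modal adds openModal() to the map; in practice you may need to wait
  // for the modal's 'show' event before initialising the viewer)
  leafletMap.openModal({
    content: '<div id="page-image-viewer" style="width:100%; height:400px;"></div>'
  });

  // Treat the page image as a flat pixel space for OpenLayers
  const extent = [0, 0, imgWidth, imgHeight];
  const projection = new ol.proj.Projection({ code: 'page-image', units: 'pixels', extent: extent });

  new ol.Map({
    target: 'page-image-viewer',
    layers: [new ol.layer.Image({
      source: new ol.source.ImageStatic({ url: imageUrl, projection: projection, imageExtent: extent })
    })],
    view: new ol.View({ projection: projection, center: ol.extent.getCenter(extent), zoom: 1, maxZoom: 6 })
  });
}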

All that’s left to do now is add in the introductory text once Alex has prepared it and then make the map live.  We might need to rearrange the site’s menu to add in a link to the Chambers Map as it’s already a bit cluttered.

Also for the project I downloaded images for two further library registers for St Andrews that had previously been missed.  However, there are already records for the registers and pages in the CMS so we’re going to have to figure out a way to work out which image corresponds to which page in the CMS.  One register has a different number of pages in the CMS compared to the image files so we need to work out how to align the start and end and if there are any gaps or issues in the middle.  The other register is more complicated because the images are double pages whereas it looks like the page records in the CMS are for individual pages.  I’m not sure how best to handle this.  I could either try and batch process the images to chop them up or batch process the page records to join them together.  I’ll need to discuss this further with Gerry, who is dealing with the data for St Andrews.

Also this week I prepared for and gave a talk to a group of students from Michigan State University who were learning about digital humanities.  I talked to them for about an hour about a number of projects, such as the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/), the digital edition I’d created for New Modernist Editing (https://nme-digital-ode.glasgow.ac.uk/), the Historical Thesaurus (https://ht.ac.uk/), Books and Borrowing (https://borrowing.stir.ac.uk/) and TheGlasgowStory (https://theglasgowstory.com/).  It went pretty well and it was nice to be able to talk about some of the projects I’ve been involved with for a change.

I also made some further tweaks to the Gentle Shepherd Performances page, which is now ready to launch, and helped Geert out with a few changes to the WordPress pages of the Anglo-Norman Dictionary.  I also made a few tweaks to the WordPress pages of the DSL website and finally managed to get a hotel room booked for the DHC conference in Sheffield in September.  I also made a couple of changes to the new Gaelic Tongues section of the Seeing Speech website and had a discussion with Eleanor about the filters for Speech Star.  Fraser had been in touch about 500 Historical Thesaurus categories that had been newly matched to OED categories, so I created a little script to add these connections to the online database.

I also had a Zoom call with the Speak For Yersel team.  They had been testing out the resource at secondary schools in the North East and have come away with lots of suggested changes to the content and structure of the resource.  We discussed all of these and agreed that I would work on implementing the changes the week after next.

Next week I’m going to be on holiday, which I have to say I’m quite looking forward to.

Week Beginning 13th June 2022

I worked on several different projects this week.  For the Books and Borrowing project I processed and imported a further register for the Advocates library that had been digitised by the NLS.  I also continued with the interactive map of Chambers library borrowers, although I couldn’t spend as much time on this as I’d hoped as my access to Stirling University’s VPN had stopped working, and without VPN access I can’t connect to the database and the project server.  It took a while to resolve the issue as access needs to be approved by some manager or other, but once it was sorted I got to work on some updates.

One thing I’d noticed last week was that when zooming and panning the historical map layer was throwing out hundreds of 403 Forbidden errors to the browser console.  This was not having any impact on the user experience, but was still a bit messy and I wanted to get to the bottom of the issue.  I had a very helpful (as always) chat with Chris Fleet at NLS Maps, who provided the historical map layer and he reckoned it was because the historical map only covers a certain area and moving beyond this was still sending requests for map tiles that didn’t exist.  Thankfully an option exists in Leaflet that allows you to set the boundaries for a map layer (https://leafletjs.com/reference.html#latlngbounds) and I updated the code to do just that, which seems to have stopped the errors.
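
For reference, this just means passing a ‘bounds’ option when the tile layer is created.  A minimal sketch (the URL and coordinates here are placeholders rather than the NLS layer’s real values):

// South-west and north-east corners of the area the historical map actually covers
const historicalBounds = L.latLngBounds(L.latLng(55.90, -3.25), L.latLng(55.99, -3.10));

const historicalLayer = L.tileLayer('https://maps.example.ac.uk/historical/{z}/{x}/{y}.png', {
  bounds: historicalBounds,  // tiles outside this box are simply never requested
  maxZoom: 18
});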

I then returned to the occupations categorisation, which was including far too many options.  I therefore streamlined the occupations, displaying the top-level occupation only.  I think this works a lot better (although I need to change the icon colour for ‘unknown’).  Full occupation information is still available for each borrower via the popup.

I also had to change the range slider for opacity as standard HTML range sliders don’t allow for double-ended ranges.  We require a double-ended range for the subscription period and I didn’t want to have two range sliders that looked different on one page.  I therefore switched to a range slider offered by the jQuery UI interface library (https://jqueryui.com/slider/#range).  The opacity slider still works as before, it just looks a little different.  Actually, it works better than before, as the opacity now changes as you slide rather than only updating after you mouse-up.

I then began to implement the subscription period slider.  This does not yet update the data.  It’s been pretty tricky to implement this.  The range needs to be dynamically generated based on the earliest and latest dates in the data, and dates are both year and month, which need to be converted into plain integers for the slider and then reinterpreted as years and months when the user updates the end positions.  I think I’ve got this working as it should, though.  When you update the ends of the slider the text above that lists the months and years updates to reflect this.  The next step will be to actually filter the data based on the chosen period.  Here’s a screenshot of the map featuring data categorised by the new streamlined occupations and the new sliders displayed:
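
The conversion is simple enough once each year / month pair is flattened into a single month count.  Here’s a cut-down sketch of the approach (the element IDs and the earliest / latest dates are illustrative):

const monthNames = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];
const toIndex = (year, month) => year * 12 + (month - 1);                        // e.g. 05 1828 -> 21940
const toLabel = index => monthNames[index % 12] + ' ' + Math.floor(index / 12);  // ...and back again for display

const earliest = toIndex(1827, 11);  // earliest subscription date in the data
const latest = toIndex(1830, 6);     // latest subscription date

$('#period-slider').slider({
  range: true,
  min: earliest,
  max: latest,
  values: [earliest, latest],
  slide: function (event, ui) {
    // Update the text above the slider as the handles move
    $('#period-label').text(toLabel(ui.values[0]) + ' to ' + toLabel(ui.values[1]));
    // (the actual data filtering will hook in here once implemented)
  }
});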

For the Speak For Yersel project I made a number of tweaks to the resource, which Jennifer and Mary are piloting with school children in the North East this week.  I added in a new grammatical question and seven grammatical quiz questions.  I tweaked the homepage text and updated the structure of questions 27-29 of the ‘sounds about right’ activity.  I ensured that ‘Dumfries’ always appears as ‘Dumfries and Galloway’ in the ‘clever’ activity and its follow-on, and updated the ‘clever’ activity to remove the stereotype questions.  These were the ones where users had to rate the speakers from a region without first listening to any audio clips, and Jennifer reckoned these were taking too long to complete.  I also updated the ‘clever’ follow-on to hide the stereotype options and switched the order of the listener and speaker options in the other follow-on activity for this type.

For the Speech Star project I replaced the data for the child speech error database with a new, expanded dataset and added in ‘Speaker Code’ as a filter option.  I also replicated the child speech and normalised speech databases from the clinical website we’re creating on the more academic teaching site we’re creating and also pulled in the IPA chart from Seeing Speech into this resource too.  Here’s a screenshot of how the child speech error database looks with the new ‘speaker code’ filter with ‘vowel disorder’ selected:

I also responded to Craig Lamont in Scottish literature with some further feedback on the structure of his Burns Manuscript Database spreadsheet, which is now shaping up nicely.  Craig had also sent me an updated spreadsheet with data for the Ramsay Gentle Shepherd performances project.  I’d set this up (interactive map, timeline and filterable tabular data) a few weeks ago, migrating it to the University’s T4 website management system.  All had worked then, but when I logged into T4 and previewed the page I’d previously created I discovered it no longer worked.  The page hadn’t been updated since the end of May and I had no idea what had gone wrong.  I can only assume that the linked content (i.e. the links to the JavaScript files) had somehow become unlinked.  I decided, therefore, that it would be easier to just host the JavaScript files on another server I have direct access to rather than having to shoehorn it all into T4.  I made an updated version with the new dataset and this is working well.

I also made a couple of tweaks to the DSL this week, installing the TablePress plugin for the ancillary pages and creating a further alternative logo for the DSL’s Facebook posts.  I also returned to doing some work for the Anglo-Norman Dictionary, offering some advice to the editor Geert about incorporating publications and overhauling how cross references are displayed in the Dictionary Management System.

I updated the ‘View Entry’ page in the DMS.  Previously it only included cross references FROM the entry you’re looking at TO any other entries.  I.e. it only displayed content when the entry was of type ‘xref’ rather than ‘main’.  Now in addition to this there’s a further section listing all cross references TO the entry you’re looking at from any entry of type ‘xref’ that links to it.

In addition there is a button allowing you to view all entries that include a cross reference to the current entry anywhere in their XML – i.e. where an <xref> tag that features the current entry’s slug is found at any level in any other main entry’s XML.  This code is hugely memory intensive to run, as basically all 27,464 main entries need to be pulled into the script, with the full XML contents of each checked for matching xrefs.  For this reason the page doesn’t run the code each time the ‘view entry’ page is loaded but instead only runs when you actively press the button.  It takes a few seconds for the script to process, but after it does the cross references are listed in the same manner as the ‘pure’ xrefs in the preceding sections.

Finally I participated in a Zoom-based focus group for the AHRC about the role of technicians in research projects this week.  It was great to take part, to share my views on my role and to hear from other people with similar roles at other organisations.

Week Beginning 11th April 2022

I was back at work on Monday this week after a lovely week off last week.  It was only a four-day week, however, as the week ended with the Good Friday holiday.  I’ll also be off next Monday.  I had rather a lot to squeeze into the four working days.  For the DSL I did some further troubleshooting for integrating Google Analytics with the DSL’s new https://macwordle.co.uk/ site.  I also had discussions about the upcoming switchover to the new DSL website, which we scheduled in for the week after next, although later in the week it turned out that all of the data had already been finalised so I’ll begin processing it next week.

I participated in a meeting for the Historical Thesaurus on Tuesday, after which I investigated the server stats for the site, which needed fixing.  I also enquired about setting up a domain URL for one of the ‘ac.uk’ sites we host, and it turned out to be something that IT Support could set up really quickly, which is good to know for future reference.  I also had a chat with Craig Lamont about a database / timeline / map interface for some data for the Allan Ramsay project that he would like me to put together to coincide with a book launch at the end of May.  Unfortunately they want this to be part of the University’s T4 website, which makes development somewhat tricky but not impossible.  I had to spend some time familiarising myself with T4 again and arranging for access to the part of the system where the Ramsay content resides.  Now that I have this sorted I’ve agreed to look into developing this in early May.  I also deleted a couple of unnecessary entries from the Anglo-Norman Dictionary after the editor requested their removal and created a new version of the requirements document for the front-end for the Books and Borrowing project following feedback from the project team on the previous version.

The rest of my week was spent on the Speak For Yersel project, for which I still have an awful lot to do and not much time to do it in.  I had a meeting with the team on Monday to go over some recent developments, and following that I tracked down a few bugs in the existing code (e.g. a couple of ‘undefined’ buttons in the ‘explore’ maps).  I then replaced all of the audio files in the ‘click’ exercise as the team had decided to use a standardised sentence spoken by many different regional speakers rather than having different speakers saying different things.  As the speakers were not always from the same region as the previous audio clips I needed to change the ‘correct’ regions and also regenerate the MP3 files and transcript data.

I then moved onto a major update to the system: working on the back end.  This took up the rest of the week and although in terms of the interface nothing much should have changed, behind the scenes things are very different.  I designed and implemented the database that will hold all of the data for the project, including information on respondents, answers and geographical areas.  I also migrated all of the activity and question data to this database too.  This was a somewhat time consuming and tedious task as I needed to input every question and every answer option into the database, but it needed to be done.  If we didn’t have the questions and answer options in the database alongside the answers then it would be rather tricky to analyse the data when the time comes, and this way everything is stored in one place and is all interconnected.  Previously the questions were held as JSON data within the JavaScript code for the site, but this was not ideal for the above reason and also because it made updating and manually accessing the question data a bit tricky.

With the new, much tidier arrangement all of the data is stored in a database on the server and the JavaScript code requests the data for an activity when the user loads the activity’s page.  All answer choices and transcript sections also now have their own IDs, which is what we need for recording which specific answer a user has selected.  For example, for the question with the ID 10 if the user selects ‘bairn’ the answer ID 36 will be logged for that user.  I’ve set up the database structure to hold these answers and have populated the postcode area table with all of the GeoJSON data for each area.
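
Saving an answer will then just be a case of posting the relevant IDs to the server.  Here’s a quick sketch of the sort of call involved (the endpoint and payload field names are made up for illustration):

// Log a user's selected answer for a question
async function logAnswer(userId, questionId, answerId) {
  const response = await fetch('/speak-for-yersel/api/answers', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user: userId, question: questionId, answer: answerId })
  });
  if (!response.ok) {
    console.error('Failed to log answer', response.status);
  }
}

// e.g. logAnswer(currentUserId, 10, 36);  // question 10, answer option 36 ('bairn')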

The next step will be to populate the table holding specific locations within a postcode area once this data is available.  After that I’ll be able to create the user information form and then I’ll need to update the activities so the selected options are actually saved.  In the meantime I began to implement the user management system.  A user icon now appears in the top right of every page, either with a green background and a tick if you’ve registered or a red background and a cross if you haven’t.  I haven’t created the registration form yet, but have just included a button to register, and when you press this you’ll be registered and this will be remembered in your browser even if you close your browser or turn your device off.  Press on the green tick user icon to view the details recorded about the registered person (none yet) and find an option to sign out if this isn’t you or you want to clear your details.  If you’re not registered and you try to access the activities the page will redirect you to the registration form as we don’t want unregistered people completing the activities.  I’ll continue with this next week, hopefully getting to the point where the choices a user makes are actually logged in the database.  After that I’ll be able to generate maps with real data, which will be an important step.
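
Behind the scenes the ‘remembering’ is handled with localStorage, which persists even when the browser is closed.  Here’s a rough sketch of the approach (the key name and registration endpoint are illustrative):

function getRegisteredUserId() {
  return localStorage.getItem('sfy_user_id');  // null if the person hasn't registered yet
}

async function registerUser(details) {
  const response = await fetch('/speak-for-yersel/api/register', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(details)
  });
  const data = await response.json();
  localStorage.setItem('sfy_user_id', data.userId);  // survives closing the browser or turning the device off
  return data.userId;
}

function signOut() {
  localStorage.removeItem('sfy_user_id');  // switches the icon back to the red 'not registered' state
}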


Week Beginning 28th March 2022

I was on strike last week, and I’m going to be on holiday next week, so I had a lot to try and cram into this week.  This was made slightly harder when my son tested positive for Covid again on Tuesday evening.  It’s his fourth time, and the last bout was only seven weeks ago.  Thankfully he wasn’t especially ill, but he was off school from Wednesday onwards.

I worked on several different projects this week.  For the Books and Borrowing project I updated the front-end requirements document based on my discussions with the PI and Co-I and sent it on for the rest of the team to give feedback on.  I also uploaded a new batch of register images from St Andrews (more than 2,500 page images taking up about 50GB) and created all of the necessary register and page records.  I also did the same for a couple of smaller registers from Glasgow.  I also exported spreadsheets of authors, edition formats and edition languages for the team to edit.

For the Anglo-Norman Dictionary I fixed an issue with the advanced search for citations, where entries with multiple citations were having the same date and reference displayed for each snippet rather than the individual dates and references.  I also updated the display of snippets in the search results so they appear in date order.

I also responded to an email from editor Heather Pagan about how language tags are used in the AND XML.  There are 491 entries that have a language tag and I wrote a little script to list the distinct languages and a count of the number of times each appears.  Here’s the output (bearing in mind that an entry may have multiple language tags):

[Latin] => 79
[M.E.] => 369
[Dutch] => 3
[Arabic] => 12
[Hebrew] => 20
[M.L.] => 4
[Greek] => 2
[A.F._and_M.E.] => 3
[Irish] => 2
[M.E._and_A.F.] => 8
[A-S.] => 3
[Gascon] => 1

There seem to be two ways the language tag appears.  One is within a sense, and these appear in the entry (e.g. https://anglo-norman.net/entry/Scotland); the other is within <head>, and these don’t currently seem to get displayed.  E.g. https://anglo-norman.net/entry/ganeir has:

<head>
  <language lang="M.E."/>
  <lemma>ganeir</lemma>
</head>

But ‘M.E.’ doesn’t appear anywhere.  I could probably write another little script that moves the language tag to the head as above, and then I could update the XSLT so that this type of language tag gets displayed.  Or I could update the XSLT first so we can see how it might look with entries that already have this structure.  I’ll need to hear back from Heather before I do more.

For the Dictionaries of the Scots Language I spent quite a bit of time working with the XSLT for the display of bibliographies.  There are quite a lot of different structures for bibliographical entries, sometimes where the structure of the XML is the same but a different layout is required, so it proved to be rather tricky to get things looking right.  By the end of the week I think I had got everything to display as requested, but I’ll need to see if the team discover any further quirks.

I also wrote a script that extracts citations and their dates from DSL entries.  I created a new citations table that stores the dates, the quotes and associated entry and bibliography IDs.  The table has 747,868 rows in it.  Eventually we’ll be able to use this table for some kind of date search, plus there’s now an easy to access record of all of the bib IDs for each entry / entry IDs for each bib, so displaying lists of entries associated with each bibliography should also be straightforward when the time comes.  I also added new firstdate and lastdate columns to the entry table, picking out the earliest and latest date associated with each entry and storing these.  This means we can add first dates to the browse, something I decided to add in for test purposes later in the week.

I added the first recorded date (the display version not the machine readable version) to the ‘browse’ for DOST and SND.  The dates are right-aligned and grey to make them stand out less than the main browse label.  This does however make the date of the currently selected entry in the browse a little hard to read.  Not all entries have dates available.  Any that don’t are entries where either the new date attributes haven’t been applied or haven’t worked.  This is really just a proof of concept and I will remove the dates from the browse before we go live, as we’re not going to do anything with the new date information until a later point.

I also processed the ‘History of Scots’ ancillary pages.  Someone had gone through these to add in links to entries (hundreds of links), but unfortunately they hadn’t got the structure quite right.  The links had been added in Word, meaning regular double quotes had been converted into curly quotes, which are not valid HTML.  Also the links only included the entry ID, rather than the path to the entry page.  A couple of quick ‘find and replace’ jobs fixed these issues, but I also needed to update the API to allow old DSL IDs to be passed without also specifying the source.  I also set up a Google Analytics account for the DSL’s version of Wordle (https://macwordle.co.uk/)

For the Speak For Yersel project I had a meeting with Mary on Thursday to discuss some new exercises that I’ll need to create.  I also spent some time creating the ‘Sounds about right’ activity.  This had a slightly different structure to other activities in that the questionnaire has three parts with an introduction for each part.  This required some major reworking of the code as things like the questionnaire numbers and the progress bar relied on the fact that there was one block of questions with no non-question screens in between.  The activity also featured a new question type with multiple sound clips.  I had to process these (converting them from WAV to MP3) and then figure out how to embed them in the questions.

Finally, for the Speech Star project I updated the extIPA chart to improve the layout of the playback speed options.  I also made the page remember the speed selection between opening videos – so if you want to view them all ‘slow’ then you don’t need to keep selecting ‘slow’ each time you open one.  I also updated the chart to provide an option to switch between MRI and animation videos and added in two animation MP4s that Eleanor had supplied me with.  I then added the speed selector to the Normalised Speech Database video popups and then created a new ‘Disordered Paediatric Speech Database’, featuring many videos, filters to limit the display of data and the video speed selector.  It was quite a rush to get this finished by the end of the week, but I managed it.

I will be on holiday next week so there will be no post from me then.

Week Beginning 7th March 2022

This was my first five-day week after the recent UCU strike action and it was pretty full-on, involving many different projects.  I spent about a day working on the Speak For Yersel project.  I added in the content for all 32 ‘I would never say that’ questions and completed work on the new ‘Give your word’ lexical activity, which features a further 30 questions of several types.  This includes questions that have associated images and questions where multiple answers can be selected.  For the latter no more than three answers can be selected, and this question type needs to be handled differently as we don’t want the map to load as soon as one answer is selected.  Instead the user can select and deselect answers, and if at least one answer is selected a ‘Continue’ button appears under the question.  When you press this the answers become read-only and the map appears.  I made it so that no more than three options can be selected – you need to deselect one before you can add another.  I think we’ll need to look into the styling of the buttons, though, as currently ‘active’ (when a button is hovered over or has been pressed and nothing else has yet been pressed) is the same colour as ‘selected’.  So if you select ‘ginger’ and then deselect it the button still looks selected until you press somewhere else, which is confusing.  Also, if you press a fourth button it looks like it has been selected when in actual fact it’s just ‘active’ and isn’t really selected.

I also spent about a day continuing to work on the requirements document for the Books and Borrowing project.  I haven’t quite finished this initial version of the document but I’ve made good progress and I aim to have it completed next week.  Also for the project I participated in a Zoom call with RA Alex Deans and NLS Maps expert Chris Fleet about a subproject we’re going to develop for B&B for the Chambers Library in Edinburgh.  This will feature a map-based interface showing where the borrowers lived and will use a historical map layer for the centre of Edinburgh.

Chris also talked about a couple of projects at the NLS that were very useful to see.  The first was the Jamaica journal of Alexander Innes (https://geo.nls.uk/maps/innes/), which features journal entries plotted on a historical map and a slider allowing you to move quickly through the entries.  The second was the Stevenson maps of Scotland (https://maps.nls.uk/projects/stevenson/), which provides options to select different subjects and date periods.  He also mentioned a new crowdsourcing project to transcribe all of the names on the Roy Military Survey of Scotland (1747-55) maps; this launched in February and already has 31,000 first transcriptions in place, which is great.  As with the GB1900 project, the data produced here will be hugely useful for things like place-name projects.

I also participated in a Zoom call with the Historical Thesaurus team where we discussed ongoing work.  This mainly involves a lot of manual linking of the remaining unlinked categories and looking at sensitive words and categories so there’s not much for me to do at this stage, but it was good to be kept up to date.

I continued to work on the new extIPA charts for the Speech Star project, which I had started on last week.  Last week I had some difficulties replicating the required phonetic symbols, but this week Eleanor directed me to an existing site that features the extIPA chart (https://teaching.ncl.ac.uk/ipa/consonants-extra.html).  This site uses standard Unicode characters in combinations that work nicely, without requiring any additional fonts.  I’ve therefore copied the relevant codes from there (these are just character codes like &#x62;&#x32A; – I haven’t copied anything else from the site).  With the symbols in place I managed to complete an initial version of the chart, including pop-ups featuring all of the videos, but unfortunately the videos seem to have been encoded with an encoder that requires QuickTime for playback.  So although the videos are MP4 they’re not playing properly in browsers on my Windows PC – instead all I can hear is the audio.  It’s very odd, as the videos play fine directly from Windows Explorer, but in Firefox, Chrome or MS Edge I just get audio and the static ‘poster’ image.  When I access the site on my iPad the videos play fine (as QuickTime is an Apple product).  Eleanor is still looking into re-encoding the videos and will hopefully get updated versions to me next week.
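
As a tiny illustration of the approach, each cell is simply a base letter followed by one or more combining diacritics, which browsers stack without any special font.

```python
# 'b' (U+0062) followed by the combining bridge-below diacritic (U+032A),
# i.e. the &#x62;&#x32A; combination mentioned above.
print("\u0062\u032A")
```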

I also did a bit more work for the Anglo-Norman Dictionary this week.  I fixed a couple of minor issues with the DTD, for example the ‘protect’ attribute was an enumerated list that could either be ‘yes’ or ‘no’ but for some entries the attribute was present but empty, and this was against the rules.  I looked into whether an enumerated list could also include an empty option (as opposed to not being present, which is a different matter) but it looks like this is not possible (see for example http://lists.xml.org/archives/xml-dev/200309/msg00129.html).  What I did instead was to change the ‘protect’ attribute from an enumerated list with options ‘yes’ and ‘no’ to a regular data field, meaning the attribute can now include anything (including being empty).  The ‘protect’ attribute is a hangover from the old system and doesn’t do anything whatsoever in the new system so it shouldn’t really matter.  And it does mean that the XML files should now validate.

The AND people also noticed that some entries that are present in the old version of the site are missing from the new version.  I looked through the database and also older versions of the data from the new site, and it looks like these entries have never been present in the new site.  The script I originally ran to export the entries from the old site used a list of headwords taken from another dataset (I can’t remember where from exactly), and I can only assume that this list was missing some headwords, which is why these entries are not in the new site.  This is a bit concerning, but thankfully the old site is still accessible.  I managed to write a little script that grabs the entire contents of the browse list from the old website, separating it into two lists, one for main entries and one for xrefs.  I then ran each headword against a local version of the current AND database, separating out homonym numbers and then comparing the headword with the ‘lemma’ field in the database and the homonym number with the ‘hom’ field.  Initially I ran the main and xref queries separately, comparing main to main and xref to xref, but I realised that some entries had changed type (legitimately so, I guess), so I stopped making the distinction.
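
A minimal sketch of the comparison logic is below; it assumes the scraped headwords use a ‘#’ suffix for homonym numbers and that the database rows expose ‘lemma’ and ‘hom’ values, both of which are simplifications of the real data.

```python
# Minimal sketch: find old-site headwords with no matching (lemma, hom) pair
# in the current database.  The '#' homonym notation is an assumption.
import re

def split_headword(hw):
    # Separate a trailing homonym number from the headword itself.
    m = re.match(r"^(.*?)(?:#(\d+))?$", hw)
    return m.group(1), int(m.group(2) or 0)

def find_missing(old_headwords, db_rows):
    # db_rows: iterable of (lemma, hom) tuples from the current database.
    existing = {(lemma, hom) for lemma, hom in db_rows}
    return [hw for hw in old_headwords if split_headword(hw) not in existing]

print(find_missing(["aver#1", "sable#2"], [("aver", 1)]))  # -> ['sable#2']
```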

The script output 1,540 missing entries.  This initially looks pretty horrifying, but I’m fairly certain most of them are legitimate.  There are a whole bunch of weird ‘n’ forms in the old site that have a strange character (e.g. ‘nun⋮abilité’) that are not found in the new site, I guess intentionally so.  Also, there are lots of ‘S’ and ‘R’ words, but I think most of these are the result of joining or splitting homonyms.  Geert, the editor, looked through the output and thankfully it turns out that only a handful of entries are genuinely missing, and that these were also missing from the old DMS version of the data, so their omission occurred before I became involved in the project.

Finally this week I worked with a new dataset of the Dictionaries of the Scots Language.  I successfully imported the new data and have set up a new ‘dps-v2’ API.  There are 80,319 entries in the new data compared to 80,432 in the previous output from DPS.  I have updated our test site to use the new API and its new data, although I have not yet been able to set up the free-text data in Solr, so the advanced search for full text / quotations will not work yet.  Everything else should, though.

Also today I began to work on the layout of the bibliography page.  I have completed the display of DOST bibs but haven’t started on SND yet.  This includes the ‘style guide’ link when a note is present.  I think we may still need to tweak the layout, however.  I’ll continue to work with the new data next week.

Week Beginning 28th February 2022

I participated in the UCU strike action from Monday to Wednesday this week, making it a two-day week for me.  I’d heard earlier in the week that the paper I’d submitted about the redevelopment of the Anglo-Norman Dictionary had been accepted for DH2022 in Tokyo, which was great.  However, the organisers have decided to make the conference online only, which is disappointing, although probably for the best given the current geopolitical uncertainty.  I didn’t want to participate in an online only event that would be taking place in Tokyo time (nine hours ahead of the UK) so I’ve asked to withdraw my paper.

On Thursday I had a meeting with the Speak For Yersel project to discuss the content that the team have prepared and what I’ll need to work on next.  I also spent a bit of time looking into creating a geographical word cloud, which would fit word cloud output into a GeoJSON polygon shape.  I found one possible solution here: https://npm.io/package/maptowordcloud, but I haven’t managed to make it work yet.

I also received a new set of videos for the Speech Star project, relating to the extIPA consonants, and I began looking into how to present these.  This was complicated by the extIPA symbols not being standard Unicode characters.  I did a bit of research into how these could be presented, and found this site http://www.wazu.jp/gallery/Test_IPA.html#ExtIPAChart but here the marks appear to the right of the main symbol rather than directly above or below.  I contacted Eleanor to see if she had any other ideas and she got back to me with some alternatives which I’ll need to look into next week.

I spent a bit of time working for the DSL this week too, looking into a question about Google Analytics from Pauline Graham (and finding this very handy suite of free courses on how to interpret Google Analytics: https://analytics.google.com/analytics/academy/).  The DSL people had also wanted me to look into creating a Levenshtein distance option, whereby words that are spelled similarly to an entered term are given as suggestions, in a similar way to this page: http://chrisgilmour.co.uk/scots/levensht.php?search=drech.  I created a test script that allows you to enter a term and view the SND headwords that have a Levenshtein distance of two or less from your term, with any headwords with a distance of one highlighted in bold.  However, Levenshtein is a bit of a blunt tool, and as it stands I’m not sure the results of the script are all that promising.  My test term ‘drech’ brings back 84 matches, including things like ‘french’, which is unfortunately only two letters different from ‘drech’.  I’m fairly certain my script is using the same algorithm as the site linked to above; it’s just that we have a lot more possible matches.  However, this is just a simple Levenshtein test – we could also add further tests to limit (or expand) the output, such as a rule that changes vowels in certain places, as in the ‘a’ becomes ‘ai’ example suggested by Rhona at our meeting last week.  Or we could limit the output to words beginning with the same letter.
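
For reference, here is a minimal sketch of the kind of check the test script performs, using a plain dynamic-programming Levenshtein implementation; the headword list here is just a placeholder, as the real script works against the full set of SND headwords in the database.

```python
# Minimal sketch: suggest headwords within a small edit distance of a search term.
def levenshtein(a, b):
    # Classic edit-distance dynamic programming over two rows.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def suggest(term, headwords, max_distance=2):
    # Return (headword, distance) pairs within the allowed distance, closest
    # first, so that distance-1 matches can be highlighted in bold.
    hits = [(hw, levenshtein(term.lower(), hw.lower())) for hw in headwords]
    return sorted((h for h in hits if h[1] <= max_distance), key=lambda h: h[1])

# 'french' really is only two edits away from 'drech', which is why it turns up.
print(suggest("drech", ["dreich", "drech", "french", "dirk"]))
```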

Also this week I had a chat with the Historical Thesaurus people, arranging a meeting for next week and exporting a recent version of the database for them to use offline.  I also tweaked a couple of entries for the AND and spent an hour or so upgrading all of the WordPress sites I manage to the latest WordPress version.

Week Beginning 21st February 2022

I participated in the UCU strike action for all of last week and on Monday and Tuesday this week.  I divided the remaining three days between three projects:  the Anglo-Norman Dictionary, the Dictionaries of the Scots Language and Books and Borrowing.

For AND I continued to work on the publication of a major update of the letter S.  I had deleted all of the existing S entries and had imported all of the new data into our test instance the week before the strike, giving the editors time to check through it all and work on the new data via the content management system of the test instance.  They had noticed that some of the older entries hadn’t been deleted, and this was causing some new entries to not get displayed (as both old and new entries had the same ‘slug’ and therefore the older entry was still getting picked up when the entry’s page was loaded).  It turned out that I had forgotten that not all S entries actually have a headword beginning with ‘s’ – there are some that have brackets and square brackets.  There were 119 of these entries still left in the system and I updated my deletion scripts to remove these additional entries, ensuring that only the older versions and not the new ones were removed.  This fixed the issues with new entries not appearing.  With this task completed and the data approved by the editors we replaced the live data with the data from the test instance.

The update has involved 2,480 ‘main’ entries, containing 4,109 main senses, 1,295 subsenses, 2,627 locutions, 1,753 locution senses, 204 locution subsenses and 16,450 citations.  In addition, 4,623 ‘xref’ entries have been created or updated.  I also created a link checker which goes through every entry, pulls out all cross-references from anywhere in the entry’s XML and checks whether each cross-referenced entry actually exists in the system.  The vast majority of links were working fine, but there were still a substantial number (around 800) that were broken.  I’ve passed a list of these over to the editors, who will need to fix the entries manually over time.
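
A minimal sketch of how such a link checker can work is below; the ‘xref’ element and ‘target’ attribute names are assumptions for illustration, as the real AND XML structure is rather richer than this.

```python
# Minimal sketch: collect cross-references that point at non-existent entries.
# The 'xref'/'target' names are assumptions about the XML structure.
import xml.etree.ElementTree as ET

def broken_xrefs(entries):
    # entries: dict mapping entry slug -> XML string for every entry in the system.
    known = set(entries)
    broken = []
    for slug, xml in entries.items():
        root = ET.fromstring(xml)
        for xref in root.iter("xref"):
            target = xref.get("target")
            if target and target not in known:
                broken.append((slug, target))
    return broken

sample = {
    "aver_1": "<entry><sense><xref target='possesser_1'/><xref target='gone_1'/></sense></entry>",
    "possesser_1": "<entry/>",
}
print(broken_xrefs(sample))  # -> [('aver_1', 'gone_1')]
```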

For the DSL I had a meeting on Thursday morning with Rhona, Ann and Pauline to discuss the major update to the DSL’s data that is going to go live soon.  This involves a new batch of data exported from their new editing system that will have a variety of significant structural changes, such as a redesigned ‘head’ section, and an overhauled method of recording dates.  We will also be migrating the live site to a new server, a new API and a new Solr instance so it’s a pretty major change.  We had been planning to have all of this completed by the end of March, but due to the strike we now think it’s best to push this back to the end of April, although we may launch earlier if I manage to get all of the updates sorted before then.  Following the meeting I made a few updates to our test instance of the system (e.g. reinstating some superscript numbers from SND that we’d previously hidden) and had a further email conversation with Ann about some ancillary pages.

For the Books and Borrowing project I downloaded a new batch of images for five more registers that had been digitised for us by the NLS.  I then processed these, uploaded them to our server and generated register and page records for each page image.  I also processed the data from the Royal High School of Edinburgh that had been sent to me in a spreadsheet.  There were records from five different registers and it took quite some time to write a script that would process all of the data, including splitting up borrower and book data, generating book items where required and linking everything together so that a borrower and a book only exist once in the system even if they are associated with many borrowing records.  Thankfully I’d done this all before for previous external datasets, but the process is always different for each dataset so there was still much in the way of reworking to be done.

I completed my scripts and ran them on a test instance of the database running on my local PC to start with.  When everything was checked and looking good I ran the scripts on the live server to incorporate the new register data into the main project dataset.  After completing the task there were 19,994 borrowing records across 1,438 register pages, involving 1,932 books and 2,397 borrowers.  Some tweaking of the data may be required (e.g. I noticed there are two ‘Alexander Adam’ borrowers, which seems to have occurred because some records had a stray space before the forename), but on the whole it’s all looking good to me.
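
As an illustration of the sort of normalisation that would catch the duplicate borrower, here is a minimal sketch; the real records obviously carry far more than a forename and surname.

```python
# Minimal sketch: de-duplicate borrowers on a whitespace- and case-normalised name key.
def borrower_key(forename, surname):
    # Collapse stray whitespace (the cause of the duplicate 'Alexander Adam')
    # and ignore case when deciding whether a borrower already exists.
    return (" ".join(forename.split()).lower(), " ".join(surname.split()).lower())

borrowers = {}  # key -> borrower id
next_id = 1
for forename, surname in [(" Alexander", "Adam"), ("Alexander", "Adam"), ("John", "Hill")]:
    key = borrower_key(forename, surname)
    if key not in borrowers:
        borrowers[key] = next_id
        next_id += 1

print(borrowers)  # two distinct borrowers, not three
```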

Next week I’ll be on strike again on Monday to Wednesday.