Week Beginning 26th September 2022

I spent most of my time this week getting back into the development of the front-end for the Books and Borrowing project.  It’s been a long time since I was able to work on this, due to commitments to other projects and also because there was a lot more for me to do than I was expecting regarding processing images and generating associated data in the project’s content management system over the summer.  This week, though, I was able to return to the front-end and made some pretty good progress.  The first thing I did was to make some changes to the ‘libraries’ page based on feedback I received ages ago from the project’s Co-I Matt Sangster.  The map of libraries used clustering to group libraries that are close together when the map is zoomed out, but Matt didn’t like this.  I therefore removed the clusters and turned the library locations back into regular individual markers.  However, it is now rather difficult to distinguish the markers for a number of libraries.  For example, the markers for Glasgow and the Hunterian libraries (back when the University was still on the High Street) sit on top of each other and you have to zoom in a very long way before you can even tell there are two markers there.

I also updated the tabular view of libraries.  Previously the library name was a button that when clicked on opened the library’s page.  Now the name is text and there are two buttons underneath.  The first one opens the library page while the second pans and zooms the map to the selected library, whilst also scrolling the page to the top of the map.  This uses Leaflet’s ‘flyTo’ function which works pretty well, although the map tiles don’t quite load in fast enough for the automatic ‘zoom out, pan and zoom in’ to proceed as smoothly as it ought to.
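For reference, the approach amounts to something like the following sketch (the element ID and marker lookup are illustrative, not the project’s actual code):

// Assumes Leaflet is loaded, 'map' is the existing L.map instance and
// 'libraryMarkers' is a hypothetical lookup of library ID to L.marker.
function goToLibrary(libraryId) {
  var marker = libraryMarkers[libraryId];
  // Scroll the page so the map is visible before the animation starts
  document.getElementById('library-map').scrollIntoView({ behavior: 'smooth' });
  // Animate the zoom-out, pan and zoom-in to the library's location
  map.flyTo(marker.getLatLng(), 15, { duration: 1.5 });
}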

After that I moved onto the library page, which previously just displayed the map and the library name. I updated the tabs for the various sections to display the number of registers, books and borrowers that are associated with the library.  The Introduction page also now features the information recorded about the library that has been entered into the CMS.  This includes location information, dates, links to the library etc.  Beneath the summary info there is the map, and beneath this is a bar chart showing the number of borrowings per year at the library.  Beneath the bar chart you can find the longer textual fields about the library such as descriptions and sources.  Here’s a screenshot of the page for St Andrews:

I also worked on the ‘Registers’ tab, which now displays a tabular list of the selected library’s registers, and I ensured that when you select one of the tabs other than ‘Introduction’ the page automatically scrolls down to the top of the tabs to avoid the need to manually scroll past the header image (though we may still make this narrower eventually).  The tabular list of registers can be ordered by any of the columns and includes data on the number of pages, borrowers, books and borrowing records featured in each.

When you open a register, information about it is displayed (e.g. descriptions, dates, stats about the number of books etc. referenced in the register), along with large thumbnails of each page showing page numbers and the number of records on each page.  The thumbnails are rather large and I could make them smaller, but doing so would mean that all the pages end up looking the same – beige rectangles.  The thumbnails are generated on the fly by the IIIF server and the first time a register is loaded it can take a while for them to load in.  However, generated thumbnails are then cached on the server so subsequent page loads are a lot quicker.  Here’s a screenshot of a register page for St Andrews:
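For anyone unfamiliar with IIIF, the thumbnail URLs are simply built from the Image API’s region/size/rotation/quality segments; a minimal sketch, with the server base URL and identifier as placeholders rather than the project’s real values:

// Builds a IIIF Image API thumbnail URL for a page image.
// Region 'full', size '{width},' (height scaled to match), rotation 0,
// quality 'default', format jpg.
function thumbnailUrl(imageId, width) {
  return 'https://iiif.example.ac.uk/iiif/' + imageId + '/full/' + width + ',/0/default.jpg';
}
console.log(thumbnailUrl('StAndrews_UYLY207-11_0001', 400));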

One thing I also did was write a script to add in a new ‘pageorder’ field to the ‘page’ database table.  I then wrote a script that generated the page order for every page in every register in the system.  This picks out the page that has no preceding page and iterates through pages based on the ‘next page’ ID.  Previously pages in lists were ordered by their auto-incrementing ID, but this meant that if new pages needed to be inserted for a register they ended up stuck at the end of the list, even though the ‘next’ and ‘previous’ links worked successfully.  This new ‘pageorder’ field ensures lists of pages are displayed in the proper order.  I’ve updated the CMS to ensure this new field is used when viewing a register, although I haven’t yet updated the CMS to regenerate the ‘pageorder’ for a register if new pages are added out of sequence.  For now, if this happens I’ll need to manually run my script again to update things.
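The ordering logic itself is just a walk along the ‘next page’ chain; here’s a rough sketch in JavaScript (the real script runs against the project database, and the field names are assumptions):

// 'pages' is an array of page records for one register, each with a pageID
// and a nextPageID (null for the final page in the register).
function generatePageOrder(pages) {
  var byId = {};
  var hasPredecessor = {};
  pages.forEach(function (p) {
    byId[p.pageID] = p;
    if (p.nextPageID) { hasPredecessor[p.nextPageID] = true; }
  });
  // The first page is the one that no other page points to via 'next'
  var current = pages.find(function (p) { return !hasPredecessor[p.pageID]; });
  var order = 1;
  // Walk the 'next page' chain, assigning sequential pageorder values
  while (current) {
    current.pageorder = order++;
    current = current.nextPageID ? byId[current.nextPageID] : null;
  }
  return pages;
}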

Anyway, back to the front-end: the new ‘pageorder’ is used in the list of pages mentioned above so the thumbnails are displayed in the correct order.  I may add pagination to this page, as all of the thumbnails are currently on one page and it can take a while to load, although these days people seem to prefer having long pages rather than having data split over multiple pages.

The final section I worked on was the page for viewing an actual page of the register, and this is still very much in progress.  You can open a register page by pressing on its thumbnail and currently you can navigate through the register using the ‘next’ and ‘previous’ buttons or return to the list of pages.  I still need to add in a ‘jump to page’ feature here too.  As discussed in the requirements document, there will be three views of the page: Text, Image and Text and Image side-by-side.  Currently I have implemented the image view only.  Pressing on the ‘Image view’ tab opens a zoomable / pannable interface through which the image of the register page can be viewed.  You can also make this interface full screen by pressing on the button in the top right.  Also, if you’re viewing the image and you use the ‘next’ and ‘previous’ navigation links you will stay on the ‘image’ tab when other pages load.  Here’s a screenshot of the ‘image view’ of the page:

Also this week I wrote a three-page requirements document for the redevelopment of the front-ends for the various place-names projects I’ve created using the system originally developed for the Berwickshire place-names project which launched back in 2018.  The requirements document proposes some major changes to the front-end, moving to an interface that operates almost entirely within the map and enabling users to search and browse all data from within the map view rather than having to navigate to other pages.  I sent the document off to Thomas Clancy, for whom I’m currently developing the systems for two place-names projects (Ayr and Iona) and I’ll just need to wait to hear back from him before I take things further.

I also responded to a query from Marc Alexander about the number of categories in the Thesaurus of Old English, investigated a couple of server issues that were affecting the Glasgow Medical Humanities site, removed all existing place-name elements from the Iona place-names CMS so that the team can start afresh and responded to a query from Eleanor Lawson about the filenames of video files on the Seeing Speech site.  I also made some further tweaks to the Speak For Yersel resource ahead of its launch next week.  This included adding survey numbers to the survey page, updating the navigation links and writing a script that purges a user and all related data from the system.  I ran this to remove all of my test data from the system.  If we do need to delete a user in future (either because their data is clearly spam or a malicious attempt to skew the results, or because a user has asked us to remove their data) I can run this script again.  I also ran through every single activity on the site to check everything was working correctly.  The only thing I noticed was that I hadn’t updated the script to remove the flags for completed surveys when a user logs out, meaning after logging out and creating a new user the ticks for completed surveys were still displaying.  I fixed this.

I also fixed a few issues with the Burns mini-site about Kozeluch, including updating the table sort options which had stopped working correctly when I added a new column to the table last week and fixing some typos with the introductory text.  I also had a chat with the editor of the Anglo-Norman Dictionary about future developments and responded to a query from Ann Ferguson about the DSL bibliographies.  Next week I will continue with the B&B developments.

Week Beginning 19th September 2022

It was a four-day week this week due to the Queen’s funeral on Monday.  I divided my time for the remaining four days over several projects.  For Speak For Yersel I finally tackled the issue of the way maps are loaded.  The system had been developed for a map to be loaded afresh every time data is requested, with any existing map destroyed in the process.  This worked fine when the maps didn’t contain demographic filters as generally each map only needed to be loaded once and then never changed until an entirely new map was needed (e.g. for the next survey question).  However, I was then asked to incorporate demographic filters (age groups, gender, education level), with new data requested based on the option the user selected.  This all went through the same map loading function, which still destroyed and reinitiated the entire map on each request.  This worked, but wasn’t ideal, as it meant the map reset to its default view and zoom level whenever you changed an option, map tiles were reloaded from the server unnecessarily and if the user was in ‘full screen’ mode they were booted out of this as the full screen map no longer existed.  For some time I’ve been meaning to redevelop this to address these issues, but I’ve held off as there were always other things to tackle and I was worried about essentially ripping apart the code and having to rebuild fundamental aspects of it.  This week I finally plucked up the courage to delve into the code.

I created a test version of the site so as to not risk messing up the live version and managed to develop an updated method of loading the maps.  This method initiates the map only once when a page is first loaded rather than destroying and regenerating the map every time a new question is loaded or demographic data is changed.  This means the number of map tile loads is greatly reduced as the base map doesn’t change until the user zooms or pans.  It also means the location and zoom level a user has left the map on stays the same when the data is changed.  For example, if they’re interested in Glasgow and are zoomed in on it they can quickly flick between different demographic settings and the map will stay zoomed in on Glasgow rather than resetting each time.  Also, if you’re viewing the map in full-screen mode you can now change the demographic settings without the resource exiting out of full screen mode.
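In outline the new approach looks something like the following sketch (variable names and the tile source are illustrative, not the project’s actual code): the map and base tiles are created once, and only a layer group holding the answer data is cleared and refilled when the question or demographic filter changes.

// Created once when the page first loads; assumes Leaflet is available.
var map = L.map('survey-map').setView([56.5, -4.2], 6);
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);
var dataLayer = L.layerGroup().addTo(map);

// Called whenever a new question or demographic filter is selected.
function showAnswers(answers) {
  dataLayer.clearLayers(); // remove only the previous question's markers
  answers.forEach(function (a) {
    L.circleMarker([a.lat, a.lng], { radius: 5 }).addTo(dataLayer);
  });
  // The base map, zoom level and full-screen state are left untouched.
}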

All worked very well, with the only issue being that the transitions between survey questions and quiz questions weren’t as smooth as with the older method.  Previously the map scrolled up and was then destroyed, then a new map was created and the data was loaded into the area before it smoothly scrolled down again.  For various technical reasons this no longer worked quite as well.  The map area still scrolls up and down, but the new data only populates the map as the map area scrolls down, meaning for a brief second you can still see the data and legend for the previous question before it switches to the new data.  However, I spent some further time investigating this issue and managed to fix it, with different fixes required for the survey and the quiz.  I also noticed a bug whereby the map would increase in size to fit the available space but the map layers and data were not extending properly into the newly expanded area.  This is a known issue with Leaflet maps that have their size changed dynamically and there’s actually a Leaflet function that sorts it – I just needed to call map.invalidateSize(); and the map worked properly again.  Of course, it took a bit of time to figure this simple fix out.

I also made some further updates to the site.  Based on feedback about the difficulty some people are having in keeping track of which surveys they’ve done, I updated the site to log when the user completes a survey.  Now when the user goes to the survey index page a count of the number of surveys they’ve completed is displayed in the top right and a green tick has been added to the button of each survey they have completed.  Also, when they reach the ‘what next’ page for a survey a count of their completed surveys is shown.  This should make it much easier for people to track what they’ve done.  I also made a few small tweaks to the data at the request of Jennifer, and created a new version of the animated GIF that has speech bubbles, as the bubble for Shetland needed its text changed.  As I didn’t have the files available I took the opportunity to regenerate the GIF using a larger map, as the older version looked quite fuzzy on a high definition screen like an iPad.  I kept the region outlines on as well to tie it in better with our interactive maps, and the font used in the new version is now the ‘Baloo’ font we use for the site.  I stored all of the individual frames both as images and as PowerPoint slides so I can change them if required.  For future reference, I created the animated GIF using https://ezgif.com/maker with a 150 second delay between slides, crossfade on and a fader delay of 8.

Also this week I researched an issue with the Scots Thesaurus that was causing the site to fail to load.  The WordPress options table had become corrupted and unreadable and needed to be replaced with a version from the backups, which thankfully fixed things.  I also did my expenses from the DHC in Sheffield, which took longer than I thought it would, and made some further tweaks to the Kozeluch mini-site on the Burns C21 website.  This included regenerating the data from a spreadsheet via a script I’d written and tweaking the introductory text.  I also responded to a request from Fraser Dallachy to regenerate some data that a script I’d previously written had output.  I also began writing a requirements document for the redevelopment of the place-names project front-ends to make them more ‘map first’.

I also did a bit more work for Speech Star, making some changes to the database of non-disordered speech and moving the ‘child speech error database’ to a new location.  I also met with Luca to have a chat about the BOSLIT project, its data, the interface and future plans.  We had a great chat and I then spent a lot of Friday thinking about the project and formulating some feedback that I sent in a lengthy email to Luca, Lorna Hughes and Kirsteen McCue on Friday afternoon.

Week Beginning 12th September 2022

I spent a bit of time this week going through my notes from the Digital Humanities Congress last week and writing last week’s lengthy post.  I also had my PDR session on Friday and I needed to spend some time preparing for this, writing all of the necessary text and then attending the session.  It was all very positive and it was a good opportunity to talk to my line manager about my role.  I’ve been in this job for ten years this month and have been writing these blog posts every working week for those ten years, which I think is quite an achievement.

In terms of actual work on projects, it was rather a bitty week, with my time spread across lots of different projects.  On Monday I had a Zoom call for the VariCS project, a phonetics project in collaboration with Strathclyde that I’m involved with.  The project is just starting up and this was the first time the team had all met.  We mainly discussed setting up a web presence for the project and I gave some advice on how we could set up the website, the URL and such things.  In the coming weeks I’ll probably get something set up for the project.

I then moved onto another Burns-related mini-project that I worked on with Kirsteen McCue many months ago – a digital edition of Koželuch’s settings of Robert Burns’s Songs for George Thomson.  We’re almost ready to launch this now and this week I created a page for an introductory essay, migrated a Word document to WordPress to fill the page, including adding in links and tweaking the layout to ensure things like quotes displayed properly.  There are still some further tweaks that I’ll need to implement next week, but we’re almost there.

I also spent some time tweaking the Speak For Yersel website, which is now publicly accessible (https://speakforyersel.ac.uk/) but still not quite finished.  I created a page for a video tour of the resource and made a few tweaks to the layout, such as checking the consistency of font sizes used throughout the site.  I also made some updates to the site text and added in some lengthy static content to the site in the form of a teachers’ FAQ and a ‘more information’ page.  I also changed the order of some of the buttons shown after a survey is completed to hopefully make it clearer that other surveys are available.

I also did a bit of work for the Speech Star project.  There had been some issues with the Central Scottish Phonetic Features MP4s playing audio only on some operating systems and the replacements that Eleanor had generated worked for her but not for me.  I therefore tried uploading them to and re-downloading them from YouTube, which thankfully seemed to fix the issue for everyone.  I then made some tweaks to the interfaces of the two project websites.  For the public site I made some updates to ensure the interface looked better on narrow screens, including changing the appearance of the ‘menu’ button and making the logo and site header font smaller so they take up less space.  I also added an introductory video to the homepage.

For the Books and Borrowing project I processed the images for another library register.  This didn’t go entirely smoothly.  I had been sent 73 images and these were all upside down so needed rotating.  It then transpired that I should have been sent 273 images so needed to chase up the missing ones.  Once I’d been sent the full set I was then able to generate the page images for the register, upload the images and associate them with the records.

I then moved on to setting up the front-end for the Ayr Place-names website.  In the process of doing so I became aware that one of the NLS map layers that all of our place-name projects use had stopped working.  It turned out that the NLS had migrated this map layer to a third party map tile service (https://www.maptiler.com/nls/) and the old URLs these sites were still using no longer worked.  I had a very helpful chat with Chris Fleet at NLS Maps about this and he explained the situation.  I was able to set up a free account with the maptiler service and update the URLs in four place-names websites that referenced the layer (https://berwickshire-placenames.glasgow.ac.uk/, https://kcb-placenames.glasgow.ac.uk/, https://ayr-placenames.glasgow.ac.uk and https://comparative-kingship.glasgow.ac.uk/scotland/).  I’ll need to ensure this is also done for the two further place-names projects that are still in development (https://mull-ulva-placenames.glasgow.ac.uk and https://iona-placenames.glasgow.ac.uk/).
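The change itself just means swapping the tile layer URL in each site’s Leaflet setup for the new MapTiler endpoint, along these lines (the tileset path and key below are placeholders rather than the real values):

// Historical NLS layer now served via MapTiler; URL path and key are placeholders.
var nlsLayer = L.tileLayer(
  'https://api.maptiler.com/tiles/EXAMPLE-NLS-TILESET/{z}/{x}/{y}.jpg?key=YOUR_MAPTILER_KEY',
  { attribution: 'Historical map &copy; National Library of Scotland', maxZoom: 18 }
);
nlsLayer.addTo(map); // 'map' being the existing Leaflet map on each place-names site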

I managed to complete the work on the front-end for the Ayr project, which was mostly straightforward as it was just adapting what I’d previously developed for other projects.  The thing that took the longest was getting the parish data and the locations where the parish three-letter acronyms should appear, but I was able to get this working thanks to the notes I’d made the last time I needed to deal with parish boundaries (as documented here: https://digital-humanities.glasgow.ac.uk/2021-07-05/).  After discussions with Thomas Clancy about the front-end I decided that it would be a good idea to redevelop the map-based interface to display all of the data on the map by default and to incorporate all of the search and browse options within the map itself.  This would be a big change, and it’s one I had been thinking of implementing anyway for the Iona project, but I’ll try and find some time to work on this for all of the place-name sites over the coming months.

Finally, I had a chat with Kirsteen McCue and Luca Guariento about the BOSLIT project.  This project is taking the existing data for the Bibliography of Scottish Literature in Translation (available on the NLS website here: https://data.nls.uk/data/metadata-collections/boslit/) and creating a new resource from it, including visualisations.  I offered to help out with this and will be meeting with Luca to discuss things further, probably next week.


Week Beginning 15th August 2022

I spent the majority of the week continuing to work on the Speak For Yersel resource, working through a lengthy document of outstanding tasks that need to be completed before the site is launched in September.  First up was the output for the ‘Where do you think the speaker is from’ click activity.  The page features some explanatory text and a drop-down through which you can select a speaker.  When a speaker is selected the user is presented with the option to play the audio file and can view the transcript.

I decided to make the transcript chunks visible with a green background that’s slightly different from the colour of the main area.  I thought it would be useful for people to be able to tell which of the ‘bigger’ words was part of which section, as it may well be that the word that caused a user to ‘click’ a section is not the word that we’ve picked for the section.  For example, in the Glasgow transcript ‘water’ is the chosen word for one section but I personally clicked this section because ‘hands’ was pronounced ‘honds’.  Another reason to make the chunks visible is because I’ve managed to set up the transcript to highlight the appropriate section as the audio plays.  Currently the section that is playing is highlighted in white and this really helps to get your eye in whilst listening to the audio.

In terms of resizing the ‘bigger’ words, I chose the following as a starting point:  Less than 5% of the clicks: word is bold but not bigger (default font size for the transcript area is currently 16pt); 5-9%: 20pt;  10-14%: 25pt; 15-19%: 30pt; 20-29%: 35pt; 30-49%: 40pt; 50-74%: 45pt; 75% or more: 50pt.
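In code the mapping is just a series of thresholds; a minimal sketch (the function name is mine, not the site’s actual code):

// Returns a font size in points for a section's 'bigger' word, based on the
// percentage of all clicks for the transcript that fell in that section.
function biggerWordSize(percent) {
  if (percent >= 75) return 50;
  if (percent >= 50) return 45;
  if (percent >= 30) return 40;
  if (percent >= 20) return 35;
  if (percent >= 15) return 30;
  if (percent >= 10) return 25;
  if (percent >= 5)  return 20;
  return 16; // below 5%: bold only, at the default transcript size
}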

I’ve also given the ‘bigger’ word a tooltip that displays the percentage of clicks responsible for its size as I thought this might be useful for people to see.  We will need to change the text, though.  Currently it says something like ‘15% of respondents clicked in this section’ but it’s actually ‘15% of all clicks for this transcript were made in this section’ which is a different thing, but I’m not sure how best to phrase it.  Where there is a pop-up for a word it appears in the blue font and the pop-up text contains the text that the team has specified.  Where the pop-up word is also the ‘bigger’ word (most but not always the case) then the percentage text also appears in the popup, below the text.  Here’s a screenshot of how the feature currently looks:

I then moved onto the ‘I would never say that’ activities.  This is a two-part activity, with the first part involving the user dragging and dropping sentences into either a ‘used in Scots’ or ‘isn’t used in Scots’ column and then checking their answers.  The second part has the user translating a Scots sentence into Standard English by dragging and dropping possible words into a sentence area.  My first task was to format the data used for the activity, which involved creating a suitable data structure in JSON and then migrating all of the data into this structure from a Word document.  With this in place I then began to create the front-end.  I’d created similar drag and drop features before (including for another section of the current resource) and therefore used the same technologies:  The jQuery UI drag and drop library (https://jqueryui.com/draggable/).  This allowed me to set up two areas where buttons could be dropped and then create a list of buttons that could be dragged.  I then had to work on the logic for evaluating the user’s answers.  This involved keeping a tally of the number of buttons that had been dropped into one or other of the boxes (which also had to take into consideration that the user can drop a button back in the original list) and when every button has been placed in a column a ‘check answers’ button appears.  On pressing, the code then fixes the draggable buttons in place and compares the user’s answers with the correct answers, adding a ‘tick’ or ‘cross’ to each button and giving an overall score in the middle.  There are multiple stages to this activity so I also had to work on the logic for loading a new set of sentences with their own introductory text, or moving onto part two of the activity if required.  Below is a screenshot of part 1 with some of the buttons dragged and dropped:
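As a rough sketch of this first approach (element IDs and classes are illustrative, and as noted further down this method was later swapped for ‘sortable’):

// Assumes jQuery and jQuery UI are loaded. '.sentence-btn' buttons start in
// an '#unsorted' list; '#scots' and '#not-scots' are the two answer columns.
var total = $('.sentence-btn').length;
$('.sentence-btn').draggable({ revert: 'invalid' });
$('#scots, #not-scots, #unsorted').droppable({
  drop: function (event, ui) {
    ui.draggable.appendTo(this).css({ top: 0, left: 0 });
    // Count how many buttons currently sit in either answer column
    var placed = $('#scots .sentence-btn, #not-scots .sentence-btn').length;
    $('#check-answers').toggle(placed === total); // show the button once all are placed
  }
});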

Part two of the activity involved creating a sentence by choosing words to add.  The original plan was to have the user click on a word to add it to the sentence, or click on the word in the sentence to remove it if required.  I figured that using a drag and drop method and enabling the user to move words around the sentence after they have dropped them if required would be more flexible and would fit in better with the other activities in the site.  I was just going to use the same drag and drop library that I’d used for part one, but then I spotted a further jQuery interaction called sortable that allowed for connected lists (https://jqueryui.com/sortable/#connect-lists).  This allowed items within a list to be sortable, but also for items to be dragged and dropped from one list to another.  This sounded like the ideal solution, so I set about investigating its usage.

It took some time to style the activity to ensure that empty lists were still given space on the screen, and to ensure the word button layout worked properly, but after that the ‘sentence builder’ feature worked very well – the user could move words between the sentence area and the ‘list of possible words’ area and rearrange their order as required.  I set up the code to ensure a ‘check answers’ button appeared when at least one word had been added to the sentence (disappearing again if the user removes all words).  When the ‘check answers’ button is pressed the code grabs the content of the buttons in the sentence area in the order the buttons have been added and creates a sentence from the text.  It then compares this to one of the correct sentences (of which there may be more than one).  If the answer is correct a ‘tick’ is added after the sentence and if it’s wrong a ‘cross’ is added.  If there are multiple correct answers the other correct possibilities are displayed, and if the answer was wrong all correct answers are displayed.  Then it’s on to the next sentence, or the final evaluation.  Here’s a screenshot of part 2 with some words dragged and dropped:
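A stripped-down sketch of the sortable set-up and the answer check (selectors and the sample answer data are illustrative, not the project’s actual code):

// Assumes jQuery and jQuery UI are loaded; '#word-pool' and '#sentence' are
// two <ul> elements that both carry the class 'word-list'.
$('#word-pool, #sentence').sortable({
  connectWith: '.word-list',
  stop: function () {
    // Show 'check answers' only when at least one word is in the sentence area
    $('#check-answers').toggle($('#sentence li').length > 0);
  }
}).disableSelection();

$('#check-answers').on('click', function () {
  var attempt = $('#sentence li').map(function () {
    return $(this).text().trim();
  }).get().join(' ');
  var correctAnswers = ['I was really tired last night']; // illustrative per-question data
  if (correctAnswers.indexOf(attempt) !== -1) {
    $('#result').text('Correct!');
  } else {
    $('#result').text('Not quite. Correct answer(s): ' + correctAnswers.join(' / '));
  }
});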

Whilst working on part two it became clear that the ‘sortable’ solution I’d developed for part 2 worked better than the other draggable method I’d used for part one.  This is because the ‘sortable’ solution uses HTML lists and ‘snaps’ each item into place whereas the previous method just leaves the draggable item wherever the user drops it (so long as it’s in the confines of the droppable box).  This means things can look a bit messy.  I therefore revisited part 1 and replaced the method.  This took a bit of time to implement as I had to rework a lot of the logic, but I think it was worth it.

Also this week I spent a bit of time working for the Dictionaries of the Scots Language.  I had a conversation with Pauline Graham about the workflow for updates to the online data.  I also investigated a couple of issues with entries for Ann Fergusson.  One entry (sleesh) wasn’t loading as there were spaces in the entry’s ‘slug’.  Spaces in URLs can cause issues and this is what was happening with this entry.  I updated the URL information in the database so that ‘sleesh_n1 and v’ has been changed to ‘sleesh_n1_and_v’ and this has fixed the issue.  I also updated the XML in the online system so the first URL is now <url>sleesh_n1_and_v</url>.  I checked the online database and thankfully no other entries have a space in their ‘slug’ so this issue doesn’t affect anything else.  The second issue related to an entry that doesn’t appear in the online database.  It was not in the data I was sent and wasn’t present in several previous versions of the data, so in this case something must have happened prior to the data getting sent to me.  I also had a conversation about the appearance of yogh characters in the site.

I also did a bit more work for the Books and Borrowing project this week.  I added two further library registers from the NLS to our system.  This means there should now only be one further register to come from the NLS, which is quite a relief as each register takes some time to process.  I also finally got round to processing the four registers for St Andrews, which had been on my ‘to do’ list since late July.  It was very tricky to rename the images into a format that we can use on the server because the lack of leading zeros meant a script to batch process the images loaded them in the wrong order.  This was made worse because rather than just being numbered sequentially the image filenames were further split into ‘parts’.  For example, the images beginning ‘UYLY 207 11 Receipt book part 11’ were being processed before images beginning ‘UYLY 207 11 Receipt book part 2’, as programming languages consider 11, 12 etc. to come before 2 when ordering strings.  This was also then happening within each ‘part’, e.g. ‘UYLY207 15 part 43_11.jpg’ was coming before ‘UYLY207 15 part 43_2.jpg’.  It took most of the morning to sort this out, but I was then able to upload the images to the server and create new registers, generate pages and associate images for the two new registers (207-11 and 207-15).
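The underlying problem is simply that plain string sorting was being used where a numeric (‘natural’) sort was needed; a quick illustration:

// Sort filenames by their numeric parts rather than as plain strings, so
// 'part 2' comes before 'part 11'. A sketch only; the real renaming was done
// by a batch script on the server.
function naturalSort(filenames) {
  return filenames.slice().sort(function (a, b) {
    return a.localeCompare(b, undefined, { numeric: true, sensitivity: 'base' });
  });
}
console.log(naturalSort(['UYLY207 15 part 43_11.jpg', 'UYLY207 15 part 43_2.jpg']));
// -> ['UYLY207 15 part 43_2.jpg', 'UYLY207 15 part 43_11.jpg']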

However, the other two registers already exist in the CMS as page records with associated borrowing records.  Each image of the register is an open spread showing two borrowing pages and we had previously decided that I should run a script to merge pages in the CMS and then associate the merged record with one of the page images.  However, I’m afraid this is going to need some manual intervention.  Looking at the images for 206-1 and comparing them to the existing page records for this register, it’s clear that there are many blank pages in the two-page spreads that have not been replicated in the CMS.  For example, page 164 in the CMS is for ‘Profr Spens’.  The corresponding image (in my renamed images) is ‘UYLY206-1_00000084.jpg’.  The data is on the right-hand page and the left-hand page is blank.  But in the CMS the preceding page is for ‘Prof. Brown’, which is on the left-hand page of the preceding image.  If I attempted to automatically merge these two page records into one this would therefore result in an error.

I’m afraid what I need is for someone who is familiar with the data to look through the images and the pages and create a spreadsheet noting which pages correspond to which image.  Where multiple pages correspond to one page I can then merge the records.  So for example: Pages 159 (id 1087) and 160 (ID 1088) are found on image UYLY206-1_00000082.jpg.  Page 161 (1089) corresponds to UYLY206-1_00000083.jpg.  The next page in the CMS is 164 (1090) and this corresponds to UYLY206-1_00000084.jpg. So a spreadsheet could have two columns:

Page ID       Image
1087          UYLY206-1_00000082.jpg
1088          UYLY206-1_00000082.jpg
1089          UYLY206-1_00000083.jpg
1090          UYLY206-1_00000084.jpg

Also, the page numbers in the CMS don’t tally with the handwritten page numbers in the images (e.g. the page record 1089 mentioned above has page 161 but the image has page number 162 written on it).  And actually, the page numbers would need to include two pages, e.g. 162-163.  Ideally whoever is going to manually create the spreadsheet could add new page numbers as a further column and I could then fix these when I process the spreadsheet too.  This task is still very much in progress.

Also for the project this week I created a ‘full screen’ version of the Chambers map that will be pulled into an iframe on the Edinburgh University Library website when they create an online exhibition based on our resource.

Finally this week I helped out Sofia from the Iona Place-names project who as luck would have it was also wanting help with embedding a map in an iframe.  As I’d already done some investigation about this very issue for the Chambers map I was able to easily set this up for Sofia.


Week Beginning 31st January 2022

I split my time over many different projects this week.  For the Books and Borrowing project I completed the work I started last week on processing the Wigtown data, writing a little script that amalgamated borrowing records that had the same page order number on any page.  These occurrences arose when multiple volumes of a book were borrowed by a person at the same time and each volume was recorded separately.  My script worked perfectly and many such records were amalgamated.
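The amalgamation is essentially a grouping step; in outline it works something like this sketch (field names are assumptions, and the real script worked directly on the project database):

// Group borrowing records by page and page-order number; records sharing
// both belong to volumes of the same borrowing and are folded into one.
function amalgamate(records) {
  var groups = {};
  records.forEach(function (r) {
    var key = r.pageID + ':' + r.pageOrder;
    if (!groups[key]) {
      groups[key] = r;
      groups[key].volumes = [r.volume];
    } else {
      groups[key].volumes.push(r.volume); // merge the duplicate into the first record
    }
  });
  return Object.keys(groups).map(function (k) { return groups[k]; });
}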

I then moved onto incorporating images of register pages from Leighton into the CMS.  This proved to be a rather complicated process for one of the four registers as around 30 pages for the register had already been manually created in the CMS and had borrowing records associated with them.  However, these pages had been created in a somewhat random order, starting at folio number 25 and mostly being in order down to 43, at which point the numbers are all over the place, presumably because the pages were created in the order that they were transcribed.    As it stands the CMS relies on the ‘page ID’ order when generating lists of pages as ‘Folio Number’ isn’t necessarily in numerical order (e.g. front / back matter with Roman numerals).  If out of sequence pages crop up a lot we may have to think about adding a new ‘page order’ column, or possibly use the ‘previous’ and ‘next’ IDs to ascertain the order pages should be displayed.  After some discussion with the team it looks like pages are usually created in page order and Leighton is an unusual case, so we can keep using the auto-incrementing page ID for listing pages in the contents page.  I therefore generated a fresh batch of pages for the Leighton register then moved the borrowing records from the existing mixed up pages to the appropriate new page, then deleted the existing pages so everything is all in order.

For the Speak For Yersel project I created a new exercise whereby users are presented with a map of Scotland divided into 12 geographical areas and there are eight map markers in a box in the sea to the east of Scotland.  Each marker is clickable, and clicking on it plays a sound file.  Each marker is also draggable and after listening to the sound file the user should then drag the marker to whichever area they think the speaker in the sound file is from.  After dragging all of the markers the user can then press a ‘check answers’ button to see which they got right, and press a ‘view correct locations’ button which animates the markers to their correct locations on the map.  It was a lot of fun making the exercise and I think it works pretty well.  It’s still just an initial version and no doubt we will be changing it, but here’s a screenshot of how it currently looks (with one answer correct and the rest incorrect):

For the Speech Star project I made some further changes to the speech database.  Videos no longer autoplay, as requested.  Also, the tables now feature checkboxes beside them.  You can select up to four videos by pressing on these checkboxes.  If you select more than four the earliest one you pressed is deselected, keeping a maximum of four no matter how many checkboxes you try to click on.  When at least one checkbox is pressed the tab contents will slide down and a button labelled ‘Open selected videos’ will appear.  If you press on this a wider popup will open, containing all of your chosen videos and the metadata about each.  This has required quite a lot of reworking to implement, but it seemed to be working well, until I realised that while the multiple videos load and play successfully in Firefox, in Chrome and MS Edge (which is based on Chrome) only the final video loads in properly, with only audio playing on the other videos.  I’ll need to investigate this further next week.  But here’s a screenshot of how things look in Firefox:
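The ‘maximum of four’ behaviour boils down to remembering the order in which checkboxes were ticked; a sketch, assuming jQuery and using illustrative selector names:

var selectionOrder = []; // IDs of checked videos, oldest first
$('.video-check').on('change', function () {
  var id = this.value;
  if (this.checked) {
    selectionOrder.push(id);
    if (selectionOrder.length > 4) {
      var oldest = selectionOrder.shift(); // drop the earliest selection
      $('.video-check[value="' + oldest + '"]').prop('checked', false);
    }
  } else {
    selectionOrder = selectionOrder.filter(function (v) { return v !== id; });
  }
  // Reveal the 'Open selected videos' button whenever at least one is ticked
  $('#open-selected').toggle(selectionOrder.length > 0);
});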

Also this week I spoke to Thomas Clancy about the Place-names of Iona project, including discussing how the front-end map will function (Thomas wants an option to view all data on a single map, which should work although we may need to add in clustering at higher zoom levels).  We also discussed how to handle external links and what to do about the elements database, which includes a lot of irrelevant elements from other projects.

I also had an email conversation with Ophira Gamliel in Theology about a proposal she’s putting together that will involve an interactive map, gave some advice to Diane Scott about cookie policy pages, worked with Raymond in Arts IT Support to fix an issue with a server update that was affecting the playback of videos on the Seeing Speech and Dynamic Dialects websites and updated a script that Fraser Dallachy needed access to for his work on a Scots Thesaurus.

Finally, I had some email conversations with the DSL people and made an update to the interface of the new DSL website to incorporate an ‘abbreviations’ button, which links to the appropriate DOST or SND abbreviations page.


Week Beginning 6th December 2021

I spent a bit of time this week writing a second draft of a paper for DH2022 after receiving feedback from Marc.  This one targets ‘short papers’ (500-750 words) and I managed to get it submitted before the deadline on Friday.  Now I’ll just need to see if it gets accepted – I should find out one way or the other in February.  I also made some further tweaks to the locution search for the Anglo-Norman Dictionary, ensuring that when a term appears more than once the result is repeated for each occurrence, appearing in the results grouped by each word that matches the term.  So for example ‘quatre tempres, tens’ now appears twice, once amongst the ‘tempres’ and once amongst the ‘tens’ results.

I also had a chat with Heather Pagan about the Irish Dictionary eDIL (http://www.dil.ie/), whose team are hoping to rework the way they handle dates in a similar way to the AND.  I said that it would be difficult to estimate how much time it would take without seeing their current data structure and getting more of an idea of how they intend to update it, what updates would be required to their online resource to incorporate the updated date structure (such as enhanced search facilities), whether further updates to their resource would be part of the process, and whether any back-end systems would also need to be updated to manage the new data (e.g. if they have a DMS like the AND).

Also this week I helped out with some issues with the Iona place-names website just before their conference started on Thursday.  Someone had reported that the videos of the sessions were only playing briefly and then cutting out, but they all seemed to work for me, having tried them on my PC in Firefox and Edge and on my iPad in Safari.  Eventually I managed to replicate the issue in Chrome on my desktop and in Chrome on my phone, and it seemed to be an issue specifically related to Chrome, and didn’t affect Edge, which is based on Chrome.  The video file plays and then cuts out due to the file being blocked on the server.  I can only assume that the way Chrome accesses the file is different to other browsers and it’s sending multiple requests to the server which is then blocking access due to too many requests being sent (the console in the browser shows a 403 Forbidden error).  Thankfully Raymond at Arts IT Support was able to increase the number of connections allowed per browser and this fixed the issue.  It’s still a bit of a strange one, though.

I also had a chat with the DSL people about when we might be able to replace the current live DSL site with the ‘new’ site, as the server the live site is on will need to be decommissioned soon.  I also had a bit of a catch-up with Stevie Barrett, the developer in Celtic and Gaelic, and had a video call with Luca and his line-manager Kirstie Wild to discuss the current state of Digital Humanities across the College of Arts.  Luca does a similar job to me at college-level and it was good to meet him and Kirstie to see what’s been going on outside of Critical Studies.  I also spoke to Jennifer Smith about the Speak For Yersel project, as I’d not heard anything about it for a couple of weeks.  We’re going to meet on Monday to take things further.

I spent the rest of the week working on the radar diagram visualisations for the Historical Thesaurus, completing an initial version.  I’d previously created a tree browser for the thematic headings, as I discussed last week.  This week I completed work on the processing of data for categories that are selected via the tree browser.  After the data is returned the script works out which lexemes have dates that fall into the four periods (e.g. a word with dates 650-9999 needs to appear in all four periods).  Words are split by Part of speech, and I’ve arranged the axes so that N, V, Aj and Av appear first (if present), with any others following on.  All verb categories have also been merged.
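The period bucketing itself is a simple overlap test; a sketch for illustration (the period boundaries and field names here are placeholders rather than the Historical Thesaurus’s actual values):

var periods = [
  { label: 'OE',    start: 0,    end: 1149 },
  { label: 'ME',    start: 1150, end: 1499 },
  { label: 'EModE', start: 1500, end: 1699 },
  { label: 'ModE',  start: 1700, end: 9999 }
];
// A lexeme dated 650-9999 overlaps all four periods and is counted in each.
function periodsFor(lexeme) {
  return periods.filter(function (p) {
    return lexeme.startDate <= p.end && lexeme.endDate >= p.start;
  }).map(function (p) { return p.label; });
}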

I’m still not sure how widely useful these visualisations will be as they only really work for categories that have several parts of speech.  But there are some nice ones.  See for example a visualisation of ‘Badness/evil’, ‘Goodness, acceptability’ and ‘Mediocrity’ which shows words for ‘Badness/evil’ being much more prevalent in OE and ME while ‘Mediocrity’ barely registers, only for it and ‘Goodness, acceptability’ to grow in relative size in EModE and ModE:

I also added in an option to switch between visualisations which use total counts of words in each selected category’s parts of speech and visualisations that use percentages.  With the latter the scale is fixed at a maximum of 100% across all periods and the points on the axes represent the percentage of the total words in a category that are in a part of speech in your chosen period.  This means categories of different sizes are easier to compare, but does of course mean that the relative sizes of categories are not visualised.  I could also add a further option that fixes the scale at the maximum number of words in the largest POS so the visualisation still represents relative sizes of categories but the scale doesn’t fluctuate between periods (e.g. if there are 363 nouns for a category across all periods then the maximum on the scale would stay fixed at 363 across all periods, even if the maximum number of nouns in OE, for example, is 128).  Here’s the above visualisation using the percentage scale:

The other thing I did was to add in a facility to select a specific category and turn off the others.  So for example if you’ve selected three categories you can press on a category to make it appear bold in the visualisation and to hide the other categories.  Pressing on a category a second time reverts back to displaying all.  Your selection is remembered if you change the scale type or navigate through the periods.  I may not have much more time to work on this before Christmas, but the next thing I’ll do is to add in access to the lexeme data behind the visualisation.  I also need to fix a bug that is causing the ModE period to be missing a word in its counts sometimes.


Week Beginning 22nd November 2021

I spent a bit of time this week writing an abstract for the DH2022 conference.  I wrote about how I rescued the data for the Anglo-Norman Dictionary in order to create the new AND website.  The DH abstracts are actually 750-1000 words long so it took a bit of time to write.  I have sent it on to Marc for feedback and I’ll need to run it by the AND editors before submission as well (if it’s worth submitting).  I still don’t know whether there would be sufficient funds for me to attend the event, plus the acceptance rate for papers is very low, so I’ll just need to see how this develops.

Also this week I participated in a Zoom call for the DSL about user feedback and redeveloping the DSL website.  It was a pretty lengthy call, but it was interesting to be a part of.  Marc mentioned a service called Hotjar (https://www.hotjar.com/) that allows you to track how people use your website (e.g. tracking their mouse movements) and this seemed like an interesting way of learning about how an interface works (or doesn’t).  I also had a conversation with Rhona about the updates to the DSL DNS that need to be made to improve the security of their email systems.  Somewhat ironically, recent emails from their IT people had ended up in my spam folder and I hadn’t realised they were asking me for further changes to be made, which unfortunately has caused a delay.

I spoke to Gerry Carruthers about another new project he’s hoping to set up, and we’ll no doubt be having a meeting about this in the coming weeks.  I also gave some advice to the students who are migrating the IJOSTS articles to WordPress and made some updates to the Iona Placenames website in preparation for their conference.

For the Anglo-Norman Dictionary I fixed an issue with one of the textbase texts that had duplicate notes in one of its pages and then I worked on a new feature for the DMS that enables the editors to search the phrases contained in locutions in entries.  Editors can either match locution phrases beginning with a term (e.g. ta*), ending with a term (e.g. *de) or without a wildcard the term can appear anywhere in the phrase.  Other options found on the public site (e.g. single character wildcards and exact matches) are not included in this search.
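The wildcard handling is straightforward; a sketch of the matching logic (the function is illustrative, not the actual DMS code):

// 'ta*' matches phrases beginning with 'ta', '*de' matches phrases ending
// with 'de', and a term with no wildcard can appear anywhere in the phrase.
function matchesLocution(phrase, term) {
  if (term.endsWith('*')) {
    return phrase.startsWith(term.slice(0, -1));
  }
  if (term.startsWith('*')) {
    return phrase.endsWith(term.slice(1));
  }
  return phrase.indexOf(term) !== -1;
}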

The first time a search is performed the system needs to query all entries to retrieve only those that feature a locution.  These results are then stored in the session for use the next time a search is performed.  This means subsequent searches in a session should be quicker, and also means if the entries are updated between sessions to add or remove locutions the updates will be taken into consideration.

Search results work in a similar way to the old DMS option:  Any matching locution phrases are listed, together with their translations if present (if there are multiple senses / subsenses for a locution then all translations are listed, separated by a ‘|’ character).  Any cross references appear with an arrow and then the slug of the cross referenced entry.  There is also a link to the entry the locution is part of, which opens in a new tab on the live site.  A count of the total number of entries with locutions, the number of entries your search matched a phrase in and the total number of locutions is displayed above the results.

I spent the rest of the week working on the Speak For Yersel project.  We had a Zoom call on Monday to discuss the mockups I’d been working on last week and to discuss the user interface that Jennifer and Mary would like me to develop for the site (previous interfaces were just created for test purposes).  I spent the rest of my available time developing a further version of the grammar exercise with the new interface, that included logos, new fonts and colour schemes, sections appearing in different orders and an overall progress bar for the full exercise rather than individual ones for the questionnaire and the quiz sections.

I added in UoG and AHRC logos underneath the exercise area and added both an ‘About’ and ‘Activities’ menu items with ‘Activities’ as the active item.  The active state of the menu wasn’t mentioned in the document but I gave it a bottom border and made the text green not blue (but the difference is not hugely noticeable).  This is also used when hovering over a menu item.  I made the ‘Let’s go’ button blue not green to make it consistent with the navigation button in subsequent stages.  When a new stage loads the page now scrolls to the top as on mobile phones the content was changing but the visible section remained as it was previously, meaning the user had to manually scroll up.  I also retained the ‘I would never say that!’ header in the top-left corner of all stages rather than having ‘activities’ so it’s clearer what activity the user is currently working on.  For the map in the quiz questions I’ve added the ‘Remember’ text above the map rather than above the answer buttons as this seemed more logical and on the quiz the map pane scrolls up and scrolls down when the next question loads so as to make it clearer that it’s changed.  Also, the quiz score and feedback text now scroll down one after the other and in the final ‘explore’ page the clicked on menu item now remains highlighted to make it clearer which map is being displayed.  Here’s a screenshot of how the new interface looks:

Week Beginning 15th November 2021

I had an in-person meeting for the Historical Thesaurus on Tuesday this week – the first such meeting I’ve had since the first lockdown began.  It was a much more enjoyable experience than Zoom-based calls and we had some good discussions about the current state of the HT and where we will head next.  I’m going to continue to work on my radar chart visualisations when I have the time and we will hopefully manage to launch a version of the quiz before Christmas.  There has also been some further work on matching categories and we’ll be looking into this in the coming months.

We also discussed the Digital Humanities conference, which will be taking place in Tokyo next summer.  This is always a really useful conference for me to attend and I wondered about writing a paper about the redevelopment of the Anglo-Norman Dictionary.  I’m not sure at this point whether we would be able to afford to send me to the conference, and the deadline for paper submission is the end of this month.  I did start looking through these blog posts and I extracted all of the sections that relate to the redevelopment of the site.  It’s almost 35,000 words over 74 pages, which shows you how much effort has gone into the redevelopment process.

I also had a meeting with Gerry Carruthers and others about the setting up of an archive for the International Journal of Scottish Theatre and Screen.  I’d set up a WordPress site for this and explored how the volumes, issues and articles could be migrated over from PDFs.  We met with the two students who will now do the work.  I spent the morning before the meeting preparing an instruction document for the students to follow and at the meeting I talked through the processes contained in the document.  Hopefully it will be straightforward for the students to migrate the PDFs, although I suspect it may take them an article or two before they get into the swing of things.

Also this week I fixed an issue with the search results tabs in the left-hand panel of the entry page on the DSL website.  There’s a tooltip on the ‘Up to 1700’ link, but on narrow screens the tooltip was ending up positioned over the link, and when you pressed on it the code was getting confused as to whether you’d pressed on the link or the tooltip.  I repositioned the tooltips so they now appear above the links, meaning they should no longer get in the way on narrow screens.  I also looked into an issue with the DSL’s Paypal account, which wasn’t working.  This turned out to be an issue on the Paypal side rather than with the links through from the DSL’s site.

I also had to rerun the varlist date scripts for the AND as we’d noticed that some quotations had a structure that my script was not set up to deal with.  The expected structure is something like this:

<quotation>ou ses orribles pates paracrosçanz <varlist><ms_var id="V-43aaf04a" usevardate="true"><ms_form>par acros</ms_form><ms_wit>BN</ms_wit><ms_date post="1300" pre="1399">s.xiv<sup>in</sup></ms_date></ms_var></varlist> e par ateinanz e par encrés temptacions</quotation>

Where there is one varlist in the quotation, containing one or more ms_var tags.  But the entry ‘purprestur’ has multiple separate varlists in the quotation:

<quotation>Endreit de purprestures voloms qe les nusauntes <varlist><ms_var id="V-66946b02"><ms_form>nusantes porprestures</ms_form><ms_wit>W</ms_wit><ms_date>s.xiv</ms_date></ms_var></varlist> soint ostez a coustages de ceux qi lé averount fet <varlist><ms_var id="V-67f91f67"><ms_form>des provours</ms_form><ms_wit>A</ms_wit><ms_date>s.xiv</ms_date></ms_var><ms_var id="V-ea466d5e"><ms_form>des fesours</ms_form><ms_wit>W</ms_wit><ms_date>s.xiv</ms_date></ms_var><ms_var id="V-88b4b5c2" usevardate="true"><ms_form>dé purpresturs</ms_form><ms_wit>M</ms_wit><ms_date post="1300" pre="1310">s.xiv<sup>in</sup></ms_date></ms_var><ms_var id="V-769400cd"><ms_form>des purpernours</ms_form><ms_wit>C</ms_wit><ms_date>s.xiv<sup>1/3</sup></ms_date></ms_var></varlist> </quotation>

I wasn’t aware that this was a possibility, so my script wasn’t set up to catch such situations.  It therefore only looks at the first <varlist>. And the <ms_var> that needs to be used for dating isn’t contained in this, so gets missed.  I therefore updated the script and have run both spreadsheets through it again.  I also updated the DMS so that quotations with multiple varlists can be processed.
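The fix is essentially to loop over every varlist rather than stopping at the first; sketched here in JavaScript for illustration (the production script does this processing server-side):

// Picks out every <ms_var> flagged with usevardate, across all <varlist>
// elements in a quotation.
function varDatesToUse(quotationXml) {
  var doc = new DOMParser().parseFromString(quotationXml, 'text/xml');
  var results = [];
  doc.querySelectorAll('varlist ms_var[usevardate="true"]').forEach(function (v) {
    var date = v.querySelector('ms_date');
    results.push({
      id: v.getAttribute('id'),
      post: date.getAttribute('post'),
      pre: date.getAttribute('pre')
    });
  });
  return results;
}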

Also this week I updated all of the WordPress sites I manage and helped set up the Our Heritage, Our Stories site, and had a further discussion with Sofia about the conference pages for the Iona place-names project.

I spent the rest of the week continuing to work on the mockups for the Speak For Yersel project, creating a further mockup of the grammar quiz that now features all of the required stages.  The ‘word choice’ type of question now has a slightly different layout, with buttons closer together in a block, and after answering the second question there is now an ‘Explore the answers’ button under the map.  Pressing on this loads the summary maps for each question, which are not live maps yet, and underneath the maps is a button for starting the quiz.  There isn’t enough space to have a three-column layout for the quiz so I’ve placed the quiz above the summary maps.  The progress bar also gets reinstated for the quiz and I’ve added the text ‘Use the maps below to help you’ just to make it clearer what those buttons are for.  The ‘Q1’, ‘Q2’ IDs will probably need to be altered as it just makes it look like the map refers to a particular question in the quiz, which isn’t the case.  It’s possible to keep a map open between quiz questions, and when you press an answer button the ones you didn’t press get greyed out.  If your choice is correct you get a tick, and if not you get a cross and the correct answer gets a tick.  The script keeps track of what questions have been answered correctly in the background and I haven’t implemented a timer yet.  After answering all of the questions (there doesn’t need to be 6 – the code will work with any number) you can finish the section, which displays your score and the ranking.  Here is a screenshot of how the quiz currently looks:

Week Beginning 25th October 2021

I came down with some sort of stomach bug on Sunday and was off work with it on Monday and Tuesday.  Thankfully I was feeling well again by Wednesday and managed to cram quite a lot into the three remaining days of the week.  I spent about a day working on the Data Management Plan for the new Curious Travellers proposal, sending out a first draft on Wednesday afternoon and dealing with responses to the draft during the rest of the week.  I also had some discussions with the Dictionaries of the Scots Language’s IT people about updating the DNS record regarding emails, responded to a query about the technology behind the SCOTS corpus, updated the images used in the mockups of the STAR website and created the ‘attendees only’ page for the Iona Placenames conference and added some content to it.  I also had a conversation with one of the Books and Borrowing researchers about trimming out the blank pages from the recent page image upload, and I’ll need to write a script to implement this next week.

My main task of the week was to develop a test version of the ‘where is the speaker from?’ exercise for the Speak For Yersel project.  This exercise involves the user listening to an audio clip and pressing a button each time they hear something that identifies the speaker as being from a particular area.  In order to create this I needed to generate my own progress bar that tracks the recording as it’s played, implement ‘play’ and ‘pause’ buttons, implement a button that, when pressed, grabs the current point in the audio playback and places a marker in the progress bar, and implement a means of extrapolating the exact times of the button presses to specific sections of the transcription of the audio file so we can ascertain which section contains the feature the user noted.

It took quite some planning and experimentation to get the various aspects of the feature working, but I managed to complete an initial version that I’m pretty pleased with.  It will still need a lot of work but it demonstrates that we will be able to create such an exercise.  The interface design is not final, it’s just there as a starting point, using the Bootstrap framework (https://getbootstrap.com), the colours from the SCOSYA logo and a couple of fonts from Google Fonts (https://fonts.google.com).  There is a big black bar with a sort of orange vertical line on the right.  Underneath this is the ‘Play’ button and what I’ve called the ‘Log’ button (but we probably want to think of something better).  I’ve used icons from Font Awesome (https://fontawesome.com/) including a speech bubble icon in the ‘log’ button.

As discussed previously, when you press the ‘Play’ button the audio plays and the orange line starts moving across the black area.  The ‘Play’ button also turns into a ‘Pause’ button.  The greyed out ‘Log’ button becomes active when the audio is playing.  If you press the ‘Log’ button a speech bubble icon is added to the black area at the point where the orange ‘needle’ is.
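As a rough sketch of how the playback and logging hangs together – the element IDs below are invented for illustration, the audio calls and events (play, pause, currentTime, timeupdate, ended) are standard browser features, and the ‘Start again’ clearing of markers and the footer is omitted:

```javascript
// Sketch only: element IDs and class names are invented.
const audio   = document.getElementById('clip');         // <audio> element holding the sound clip
const playBtn = document.getElementById('play');         // 'Play' / 'Pause' / 'Start again' button
const logBtn  = document.getElementById('log');          // 'Log' button (disabled until playback starts)
const bar     = document.getElementById('progress-bar'); // the black bar
const needle  = document.getElementById('needle');       // the orange vertical line
const logTimes = [];                                      // logged click times in milliseconds

playBtn.addEventListener('click', () => {
  if (audio.paused) {
    audio.play();
    playBtn.textContent = 'Pause';
    logBtn.disabled = false;          // the Log button is only active while the audio plays
  } else {
    audio.pause();
    playBtn.textContent = 'Play';
  }
});

logBtn.addEventListener('click', () => {
  logTimes.push(Math.round(audio.currentTime * 1000));
  // Drop a speech-bubble marker at the needle's current position.
  const marker = document.createElement('span');
  marker.className = 'log-marker';
  marker.style.left = (audio.currentTime / audio.duration * 100) + '%';
  bar.appendChild(marker);
});

// Move the needle as the clip plays, and switch to 'Start again' when it ends.
audio.addEventListener('timeupdate', () => {
  needle.style.left = (audio.currentTime / audio.duration * 100) + '%';
});
audio.addEventListener('ended', () => {
  playBtn.textContent = 'Start again';
  logBtn.disabled = true;
});
```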

For now the exact log times are outputted in the footer area.  Once the audio clip finishes the ‘Play’ button becomes a ‘Start again’ button.  Pressing on this clears the speech bubble icons and the footer and starts the audio from the beginning again.  The log is also processed.  Currently 1 second is taken off each click time to account for thinking and clicking.  I’ve extracted the data from the transcript of the audio and manually converted it into JSON data which is more easily processed by JavaScript.  Each ‘block’ consists of an ID, the transcribed content and the start and end times of the block in milliseconds.

For the time being for each click the script looks through the transcript data to find an entry where the click time is between the entry’s start and end times.  A tally of clicks for each transcript entry is then stored. This then gets outputted in the footer so you can see how things are getting worked out.  This is of course just test data – we’ll need smaller transcript areas for the real thing.  Currently nothing gets submitted to the server or stored – it’s all just processed in the browser.  I’ve tested the page out in several browsers in Windows, on my iPad and on my Android phone and the interface works perfectly well on mobile phone screens.  Below is a screenshot showing audio playback and four linguistic features ‘logged’:
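The matching logic itself is pretty simple.  Here’s a minimal sketch of it; the transcript block shape (id, content, start and end times in milliseconds) follows the description above, while the sample values and function name are invented:

```javascript
// Minimal sketch of the click-matching step.
const transcript = [
  { id: 1, content: 'first transcribed block',  start: 0,    end: 4200 },
  { id: 2, content: 'second transcribed block', start: 4200, end: 9800 }
  // ...one object per block of the real transcript
];

function tallyClicks(clickTimesMs, blocks, offsetMs = 1000) {
  const tally = {}; // block id -> number of logged clicks that fall inside it
  for (const clickTime of clickTimesMs) {
    // Knock a second off each click to allow for thinking and clicking time.
    const t = clickTime - offsetMs;
    const block = blocks.find(b => t >= b.start && t <= b.end);
    if (block) {
      tally[block.id] = (tally[block.id] || 0) + 1;
    }
  }
  return tally;
}

// e.g. tallyClicks(logTimes, transcript) might return { '1': 1, '2': 3 }
```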

Also this week I had a conversation with the editor of the AND about updating the varlist dates.  I also updated the DTD to allow the new ‘usevardate’ attribute to be used to identify occasions where a varlist date should be used as the earliest citation date.  We also became aware that a small number of entries in the online dictionary are referencing an old DTD on the wrong server so I updated these.

Week Beginning 18th October 2021

I was back at work this week after having a lovely holiday in Northumberland last week.  I spent quite a bit of time in the early part of the week catching up with emails that had come in whilst I’d been off.  I fixed an issue with Bryony Randall’s https://imprintsarteditingmodernism.glasgow.ac.uk/ site, which was put together by an external contractor but which I have now inherited.  The site menu would not update via the WordPress admin interface, and after a bit of digging around in the source files for the theme it would appear that the theme doesn’t use the menu that’s editable from the WordPress admin interface at all.  Instead, the menu that’s visible on the public site is generated in a file called ‘header.php’ and only pulls in pages / posts that have been given one of three specific categories: Commissioned Artworks, Commissioned Text or Contributed Text (which appear as ‘Blogs’).  Any post / page that is given one of these categories will automatically appear in the menu; any post / page that is assigned to a different category or has no assigned category doesn’t appear.  I added a new category to the ‘header’ file and the missing posts all automatically appeared in the menu.

I also updated the introductory texts in the mockups for the STAR websites and replied to a query from a student at Newcastle about making a place-names website.  I spoke to Simon Taylor about a talk he’s giving about the place-name database and gave him some information on the database and systems I’d created for the projects I’ve been involved with.  I also spoke to the Iona Place-names people about their conference and getting the website ready for this.

I also had a chat with Luca Guariento about a new project involving the team from the Curious Travellers project.  As this is based in Critical Studies Luca wondered whether I’d write the Data Management Plan for the project and I said I would.  I spent quite a bit of time during the rest of the week reading through the bid documentation, writing lists of questions to ask the PI, emailing the PI, experimenting with different technologies that the project might use and beginning to write the Plan, which I aim to complete next week.

The project is planning on running some pre-digitised images of printed books through an OCR package and I investigated this.  Google owns and uses a program called Tesseract to run OCR for Google Books and Google Docs and it’s freely available (https://opensource.google/projects/tesseract).  It’s part of Google Docs – if you upload an image of text into Google Drive then open it in Google Docs the image will be automatically OCRed.  I took a screenshot of one of the Welsh tour pages (https://viewer.library.wales/4690846#?c=&m=&s=&cv=32&manifest=https%3A%2F%2Fdamsssl.llgc.org.uk%2Fiiif%2F2.0%2F4690846%2Fmanifest.json&xywh=-691%2C151%2C4725%2C3632) and cropped the text and then opened it in Google Docs and even on this relatively low resolution image the OCR results are pretty decent.  It managed to cope with most (but not all) long ‘s’ characters and there are surprisingly few errors – ‘Englija’ and ‘Lotty’ are a couple and have been caused by issues with the original print quality.  I’d say using Tesseract is going to be suitable for the project.
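Tesseract can also be called directly rather than going via Google Docs.  Purely as a rough illustration (not necessarily what the project will end up using), the tesseract.js JavaScript port, following the simple API from its v2 quick-start, looks something like this, with a placeholder filename standing in for the cropped screenshot:

```javascript
// Rough illustration only, using tesseract.js (the JavaScript port of Tesseract).
const Tesseract = require('tesseract.js');

Tesseract.recognize('welsh-tour-page-crop.png', 'eng')
  .then(({ data: { text } }) => {
    console.log(text); // the recognised text
  })
  .catch(err => console.error('OCR failed:', err));
```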

I spent a bit of time working on the Speak For Yersel project.  We had a team meeting on Thursday to go through in detail how one of the interactive exercises will work.  This one will allow people to listen to a sound clip and then relisten to it in order to click whenever they hear something that identifies the speaker as coming from a particular location.  Before the meeting I’d prepared a document giving an overview of the technical specification of the feature and we had a really useful session discussing the feature and exactly how it should function.  I’m hoping to make a start on a mockup of the feature next week.

Also for the project I’d enquired with Arts IT Support as to whether the University held a license for ArcGIS Online, which can be used to publish maps online.  It turns out that there is a University-wide license for this which is managed by the Geography department and a very helpful guy called Craig MacDonell arranged for me and the other team members to be set up with accounts for it.  I spent a bit of time experimenting with the interface and managed to publish a test heatmap based on data from SCOSYA.  I can’t get it to work directly with the SCOSYA API as it stands, but after exporting and tweaking one of the sets of rating data as a CSV I pretty quickly managed to make a heatmap based on the ratings and publish it: https://glasgow-uni.maps.arcgis.com/apps/instant/interactivelegend/index.html?appid=9e61be6879ec4e3f829417c12b9bfe51 This is just a really simple test, but we’d be able to embed such a map in our website and have it pull in data dynamically from CSVs generated in real-time and hosted on our server.

Also this week I had discussions with the Dictionaries of the Scots Language people about how dates will be handled.  Citation dates are being automatically processed to add in dates as attributes that can then be used for search purposes.  Where there are prefixes such as ‘a’ and ‘c’ the dates are going to be given ranges based on values for these prefixes.  We had a meeting to discuss the best way to handle this.  Marc had suggested that having a separate prefix attribute rather than hard-coding the resulting ranges would be best.  I agreed that having a ‘prefix’ attribute would be a good idea, not only because it means we can easily tweak the resulting date ranges at a later point rather than having them hard-coded, but also because it then gives us an easy way to identify ‘a’, ‘c’ and ‘?’ dates if we ever want to do this.  If we only have the date ranges as attributes then picking out all ‘c’ dates (e.g. show me all citations that have a date between 1500 and 1600 that are ‘c’) would require looking at the contents of each date tag for the ‘c’ character, which is messier.

A concern was raised that not having the exact dates as attributes would require a lot more computational work for the search function, but I would envisage generating and caching the full date ranges when the data is imported into the API so this wouldn’t be an issue.  However, there is a potential disadvantage to not including the full date range as attributes in the XML, and this is that if you ever want to use the XML files in another system and search the dates through it the full ranges would not be present in the XML so would require processing before they could be used.  But whether the date range is included in the XML or not I’d say it’s important to have the ‘prefix’ as an attribute, unless you’re absolutely sure that being able to easily identify dates that have a particular prefix isn’t important.

We decided that prefixes would be stored as attributes and that the date ranges for dates with a prefix would be generated whenever the data is exported from the DSL’s editing system, meaning editors wouldn’t have to deal with noting the date ranges and all the data would be fully usable without further processing as soon as it’s exported.
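To give a rough idea of the kind of expansion that would happen on export: the offsets for ‘a’, ‘c’ and ‘?’ below are placeholders (the actual ranges aren’t recorded here), but the point is that because only the prefix is stored in the data, the offsets live in one place and can be tweaked later without re-editing the XML:

```javascript
// Placeholder sketch of expanding a prefixed citation date into a range on export.
function dateRange(year, prefix) {
  switch (prefix) {
    case 'a': return { from: year - 20, to: year };       // placeholder offset
    case 'c': return { from: year - 10, to: year + 10 };  // placeholder offset
    case '?': return { from: year - 25, to: year + 25 };  // placeholder offset
    default:  return { from: year, to: year };            // no prefix: exact year
  }
}

// e.g. dateRange(1500, 'c') -> { from: 1490, to: 1510 }
```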

Also this week I was given access to a large number of images of registers from the Advocates Library that had been digitised by the NLS.  I downloaded these, batch processed them to add in the register numbers as a prefix to the filenames, uploaded the images to our server, created register records for each register and page records for each page.  The registers, pages and associated images can all now be accessed via our CMS.
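The batch processing was essentially just renaming each image to add its register number as a prefix.  A rough Node.js sketch, assuming purely for illustration that the downloaded images sit in one folder per register named after the register, would look something like this:

```javascript
// Rough sketch of the renaming step; the directory layout is an assumption.
const fs = require('fs');
const path = require('path');

const baseDir = '/path/to/advocates-library-images'; // placeholder path

for (const registerDir of fs.readdirSync(baseDir)) {
  const dirPath = path.join(baseDir, registerDir);
  if (!fs.statSync(dirPath).isDirectory()) continue;
  for (const file of fs.readdirSync(dirPath)) {
    // Prefix each page image with its register number, e.g. '12_0001.jpg'.
    fs.renameSync(path.join(dirPath, file),
                  path.join(dirPath, `${registerDir}_${file}`));
  }
}
```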

My final task of the week was to continue work on the Anglo-Norman Dictionary.  I completed work on the script that identifies which citations have varlists and which may need to have their citation date updated based on one of the forms in the varlist.  What the script does is to retrieve all entries that have a <varlist> somewhere in them.  It then grabs all of the forms in the <head> of the entry.  It then goes through every attestation (main sense and subsense plus locution sense and subsense) and picks out each one that has a <varlist> in it.

For each of these it then extracts the <aform> if there is one, or if there’s not then it extracts the final word before the <varlist>.  It runs a Levenshtein test on this ‘aform’ to ascertain how different it is from each of the <head> forms, logging the closest match (0 = exact match of one form, 1 = one character different from one of the forms etc).  It then picks out each <ms_form> in the <varlist> and runs the same Levenshtein test on each of these against all forms in the <head>.

If the score for the ‘aform’ is lower or equal to the lowest score for an <ms_form> then the output is added to the ‘varlist-aform-ok’ spreadsheet.  If the score for one of the <ms_form> words is lower than the ‘aform’ score the output is added to the ‘varlist-vform-check’ spreadsheet.
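Roughly, the scoring and routing works like this – an illustrative JavaScript sketch (the real script’s language and structure aren’t given here, and the function names are mine):

```javascript
// Illustrative sketch of the scoring and routing.
function levenshtein(a, b) {
  // Standard dynamic-programming edit distance.
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) d[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Lowest score of a word against all the forms in the entry's <head>.
const bestScore = (word, headForms) =>
  Math.min(...headForms.map(form => levenshtein(word, form)));

// Decide which spreadsheet a given attestation belongs in.
function route(aform, msForms, headForms) {
  const aScore  = bestScore(aform, headForms);
  const vScores = msForms.map(f => ({ form: f, score: bestScore(f, headForms) }));
  const lowestV = Math.min(...vScores.map(v => v.score));
  return aScore <= lowestV
    ? { sheet: 'varlist-aform-ok',    aScore, vScores }
    : { sheet: 'varlist-vform-check', aScore, vScores };
}
```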

My hope is that by using the scores we can quickly ascertain which are ok and which need to be looked at by ordering the rows by score and dealing with the lowest scores first.  In the first spreadsheet there are 2187 rows that have a score of 0.  This means the ‘aform’ exactly matches one of the <head> forms.  I would imagine that these can safely be ignored.  There are a further 872 that have a score of 1, and we might want to have a quick glance through these to check they can be ignored, but I suspect most will be fine.  The higher the score, the greater the likelihood that the ‘aform’ is not the form that should be used for dating purposes and that one of the <varlist> forms should be used instead.  These would need to be checked and potentially updated.

The other spreadsheet contains rows where a <varlist> form has a lower score than the ‘aform’ – i.e. one of the <varlist> forms is closer to one of the <head> forms than the ‘aform’ is.  These are the ones that are more likely to have a date that needs updated. The ‘Var forms’ column lists each var form and its corresponding score.  It is likely that the var form with the lowest score is the form that we would need to pick the date out for.

In terms of what the editors could do with the spreadsheets: my plan was that we’d add an extra column – maybe called ‘update’ – to note whether a row needs updating or not, left blank for rows that they think look ok as they are and containing a ‘Y’ for rows that need to be updated.  For such rows they could manually update the XML column to add in the necessary date attributes.  Then I could process the spreadsheet in order to replace the quotation XML for any attestations that need to be updated.
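Processing the spreadsheet would then just be a matter of filtering on that ‘update’ column.  A purely hypothetical sketch (the row field names and the replaceQuotationXml helper are invented for illustration):

```javascript
// Purely illustrative: pick out the rows flagged for updating.
function rowsToProcess(rows) {
  return rows.filter(row => (row.update || '').trim().toUpperCase() === 'Y');
}

// e.g. rowsToProcess(spreadsheetRows).forEach(row =>
//   replaceQuotationXml(row.attestationId, row.xml)); // hypothetical helper
```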

For the ‘vform-check’ spreadsheet I could update my script to automatically extract the dates for the lowest-scoring form and attempt to add in the required XML attributes for further manual checking, but I think this task will require quite a lot of manual checking from the outset so it may be best to just manually edit the spreadsheet here too.