Week Beginning 11th April 2022

I was back at work on Monday this week after a lovely week off last week.  It was only a four-day week, however, as the week ended with the Good Friday holiday.  I’ll also be off next Monday too.  I had rather a lot to squeeze into the four working days.  For the DSL I did some further troubleshooting for integrating Google Analytics with the DSL’s new https://macwordle.co.uk/ site.  I also had discussions about the upcoming switchover to the new DSL website, which we had scheduled for the week after next, although later in the week it turned out that all of the data had already been finalised, so I’ll begin processing it next week.

I participated in a meeting for the Historical Thesaurus on Tuesday, after which I investigated the server stats for the site, which needed fixing.  I also enquired about setting up a domain URL for one of the ‘ac.uk’ sites we host, and it turned out to be something that IT Support could set up really quickly, which is good to know for future reference.  I also had a chat with Craig Lamont about a database / timeline / map interface for some Allan Ramsay project data that he would like me to put together to coincide with a book launch at the end of May.  Unfortunately they want this to be part of the University’s T4 website, which makes development somewhat tricky but not impossible.  I had to spend some time familiarising myself with T4 again and arranging for access to the part of the system where the Ramsay content resides.  Now I have this sorted I’ve agreed to look into developing this in early May.  I also deleted a couple of unnecessary entries from the Anglo-Norman Dictionary after the editor requested their removal, and created a new version of the requirements document for the front-end for the Books and Borrowing project following feedback from the project team on the previous version.

The rest of my week was spent on the Speak For Yersel project, for which I still have an awful lot to do and not much time to do it in.  I had a meeting with the team on Monday to go over some recent developments, and following that I tracked down a few bugs in the existing code (e.g. a couple of ‘undefined’ buttons in the ‘explore’ maps).  I then replaced all of the audio files in the ‘click’ exercise as the team had decided to use a standardised sentence spoken by many different regional speakers rather than having different speakers saying different things.  As the speakers were not always from the same region as the previous audio clips I needed to change the ‘correct’ regions and also regenerate the MP3 files and transcript data.

I then moved onto a major update to the system: working on the back end.  This took up the rest of the week and although in terms of the interface nothing much should have changed, behind the scenes things are very different.  I designed and implemented the database that will hold all of the data for the project, including information on respondents, answers and geographical areas.  I also migrated all of the activity and question data to this database.  This was a somewhat time-consuming and tedious task as I needed to input every question and every answer option into the database, but it needed to be done.  If we didn’t have the questions and answer options in the database alongside the answers then it would be rather tricky to analyse the data when the time comes, and this way everything is stored in one place and is all interconnected.  Previously the questions were held as JSON data within the JavaScript code for the site, but this was not ideal for the above reason and also because it made updating and manually accessing the question data a bit tricky.
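
To give a flavour of what this migration involved, here’s a minimal sketch, assuming the question data has first been saved out of the JavaScript as a plain JSON file.  The table names, column names and JSON structure are all invented for the example rather than being the project’s actual schema:

<?php
// Hypothetical migration sketch: assumed JSON structure is
// [{"id": 1, "questions": [{"text": "...", "options": ["...", "..."]}]}]
$pdo = new PDO('mysql:host=localhost;dbname=sfy_example;charset=utf8mb4', 'user', 'pass');
$activities = json_decode(file_get_contents('questions.json'), true);
$insertQ = $pdo->prepare('INSERT INTO question (activity_id, question_order, question_text) VALUES (?, ?, ?)');
$insertA = $pdo->prepare('INSERT INTO answer_option (question_id, option_order, option_text) VALUES (?, ?, ?)');
foreach ($activities as $activity) {
    foreach ($activity['questions'] as $qOrder => $q) {
        $insertQ->execute([$activity['id'], $qOrder + 1, $q['text']]);
        $questionId = $pdo->lastInsertId();
        foreach ($q['options'] as $oOrder => $optionText) {
            // each answer option gets its own ID, which is what the
            // front-end will log when a user picks it
            $insertA->execute([$questionId, $oOrder + 1, $optionText]);
        }
    }
}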

With the new, much tidier arrangement all of the data is stored in a database on the server and the JavaScript code requests the data for an activity when the user loads the activity’s page.  All answer choices and transcript sections also now have their own IDs, which is what we need for recording which specific answer a user has selected.  For example, for the question with ID 10, if the user selects ‘bairn’ then answer ID 36 will be logged for that user.  I’ve set up the database structure to hold these answers and have populated the postcode area table with all of the GeoJSON data for each area.
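
Recording an answer then just becomes a matter of inserting the relevant IDs – something along these lines, again with illustrative table and column names:

<?php
// Minimal sketch of logging a selected answer (e.g. question 10, option 36)
$pdo = new PDO('mysql:host=localhost;dbname=sfy_example;charset=utf8mb4', 'user', 'pass');
$respondentId = 1; // in practice this would come from the registered user
$stmt = $pdo->prepare('INSERT INTO answer (respondent_id, question_id, answer_option_id, submitted) VALUES (?, ?, ?, NOW())');
$stmt->execute([$respondentId, 10, 36]);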

The next step will be to populate the table holding specific locations within a postcode area once this data is available.  After that I’ll be able to create the user information form and then I’ll need to update the activities so the selected options are actually saved.  In the meantime I began to implement the user management system.  A user icon now appears in the top right of every page, either with a green background and a tick if you’ve registered or a red background and a cross if you haven’t.  I haven’t created the registration form yet, but have just included a button to register.  When you press this you’ll be registered, and this will be remembered in your browser even if you close the browser or turn your device off.  Pressing on the green tick user icon displays the details recorded about the registered person (none yet) and offers an option to sign out if this isn’t you or you want to clear your details.  If you’re not registered and you try to access the activities, the page will redirect you to the registration form, as we don’t want unregistered people completing the activities.  I’ll continue with this next week, hopefully getting to the point where the choices a user makes are actually logged in the database.  After that I’ll be able to generate maps with real data, which will be an important step.

Week Beginning 28th March 2022

I was on strike last week, and I’m going to be on holiday next week, so I had a lot to try and cram into this week.  This was made slightly harder when my son tested positive for Covid again on Tuesday evening.  It’s his fourth time, and the last bout was only seven weeks ago.  Thankfully he wasn’t especially ill, but he was off school from Wednesday onwards.

I worked on several different projects this week.  For the Books and Borrowing project I updated the front-end requirements document based on my discussions with the PI and Co-I and sent it on to the rest of the team for feedback.  I also uploaded a new batch of register images from St Andrews (more than 2,500 page images taking up about 50GB) and created all of the necessary register and page records.  I also did the same for a couple of smaller registers from Glasgow.  I also exported spreadsheets of authors, edition formats and edition languages for the team to edit.

For the Anglo-Norman Dictionary I fixed an issue with the advanced search for citations, where entries with multiple citations were having the same date and reference displayed for each snippet rather than the individual dates and references.  I also updated the display of snippets in the search results so they appear in date order.

I also responded to an email from editor Heather Pagan about how language tags are used in the AND XML.  There are 491 entries that have a language tag and I wrote a little script to list the distinct languages and a count of the number of times each appears.  Here’s the output (bearing in mind that an entry may have multiple language tags):

[Latin] => 79
[M.E.] => 369
[Dutch] => 3
[Arabic] => 12
[Hebrew] => 20
[M.L.] => 4
[Greek] => 2
[A.F._and_M.E.] => 3
[Irish] => 2
[M.E._and_A.F.] => 8
[A-S.] => 3
[Gascon] => 1
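
For reference, the counting script was essentially just the following sort of thing – a simplified sketch that assumes the entry XML sits in a directory of files and that the language tag takes the <language lang="..."/> form shown in the example further down:

<?php
// Count how many times each language tag appears across the entries
$counts = [];
foreach (glob('entries/*.xml') as $file) {
    $xml = simplexml_load_file($file);
    if ($xml === false) {
        continue;
    }
    // pick up every <language> element anywhere in the entry
    foreach ($xml->xpath('//language[@lang]') as $lang) {
        $code = (string)$lang['lang'];
        $counts[$code] = ($counts[$code] ?? 0) + 1;
    }
}
arsort($counts);
print_r($counts);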

There seem to be two ways the language tag appears: one within a sense, where it does appear in the entry (e.g. https://anglo-norman.net/entry/Scotland), and one within <head>, which doesn’t currently seem to get displayed.  E.g. https://anglo-norman.net/entry/ganeir has:

<head>
  <language lang="M.E."/>
  <lemma>ganeir</lemma>
</head>

But ‘M.E.’ doesn’t appear anywhere.  I could probably write another little script that moves the language tag to the head as above, and then I could update the XSLT so that this type of language tag gets displayed.  Or I could update the XSLT first so we can see how it might look with entries that already have this structure.  I’ll need to hear back from Heather before I do more.

For the Dictionaries of the Scots Language I spent quite a bit of time working with the XSLT for the display of bibliographies.  There are quite a lot of different structures for bibliographical entries, sometimes where the structure of the XML is the same but a different layout is required, so it proved to be rather tricky to get things looking right.  By the end of the week I think I had got everything to display as requested, but I’ll need to see if the team discover any further quirks.

I also wrote a script that extracts citations and their dates from DSL entries.  I created a new citations table that stores the dates, the quotes and associated entry and bibliography IDs.  The table has 747,868 rows in it.  Eventually we’ll be able to use this table for some kind of date search, plus there’s now an easy to access record of all of the bib IDs for each entry / entry IDs for each bib, so displaying lists of entries associated with each bibliography should also be straightforward when the time comes.  I also added new firstdate and lastdate columns to the entry table, picking out the earliest and latest date associated with each entry and storing these.  This means we can add first dates to the browse, something I decided to add in for test purposes later in the week.
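
Populating the new firstdate and lastdate columns from the citations table is then a simple pass over the entries – something like the sketch below, which uses invented table and column names and a single year value standing in for the real display and machine-readable dates:

<?php
// Hypothetical sketch: set each entry's first and last citation dates
$pdo = new PDO('mysql:host=localhost;dbname=dsl_example;charset=utf8mb4', 'user', 'pass');
$entries = $pdo->query('SELECT id FROM entry')->fetchAll(PDO::FETCH_COLUMN);
$dates = $pdo->prepare('SELECT MIN(citation_year) AS first, MAX(citation_year) AS last FROM citation WHERE entry_id = ?');
$update = $pdo->prepare('UPDATE entry SET firstdate = ?, lastdate = ? WHERE id = ?');
foreach ($entries as $entryId) {
    $dates->execute([$entryId]);
    $row = $dates->fetch(PDO::FETCH_ASSOC);
    if ($row && $row['first'] !== null) {
        $update->execute([$row['first'], $row['last'], $entryId]);
    }
}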

I added the first recorded date (the display version not the machine readable version) to the ‘browse’ for DOST and SND.  The dates are right-aligned and grey to make them stand out less than the main browse label.  This does however make the date of the currently selected entry in the browse a little hard to read.  Not all entries have dates available.  Any that don’t are entries where either the new date attributes haven’t been applied or haven’t worked.  This is really just a proof of concept and I will remove the dates from the browse before we go live, as we’re not going to do anything with the new date information until a later point.

I also processed the ‘History of Scots’ ancillary pages.  Someone had gone through these to add in links to entries (hundreds of links), but unfortunately they hadn’t got the structure quite right.  The links had been added in Word, meaning regular double quotes had been converted into curly quotes, which are not valid HTML.  Also the links only included the entry ID, rather than the path to the entry page.  A couple of quick ‘find and replace’ jobs fixed these issues, but I also needed to update the API to allow old DSL IDs to be passed without also specifying the source.  I also set up a Google Analytics account for the DSL’s version of Wordle (https://macwordle.co.uk/).

For the Speak For Yersel project I had a meeting with Mary on Thursday to discuss some new exercises that I’ll need to create.  I also spent some time creating the ‘Sounds about right’ activity.  This had a slightly different structure to other activities in that the questionnaire has three parts with an introduction for each part.  This required some major reworking of the code as things like the questionnaire numbers and the progress bar relied on the fact that there was one block of questions with no non-question screens in between.  The activity also featured a new question type with multiple sound clips.  I had to process these (converting them from WAV to MP3) and then figure out how to embed them in the questions.

Finally, for the Speech Star project I updated the extIPA chart to improve the layout of the playback speed options.  I also made the page remember the speed selection between opening videos – so if you want to view them all ‘slow’ then you don’t need to keep selecting ‘slow’ each time you open one.  I also updated the chart to provide an option to switch between MRI and animation videos and added in two animation MP4s that Eleanor had supplied me with.  I then added the speed selector to the Normalised Speech Database video popups and then created a new ‘Disordered Paediatric Speech Database’, featuring many videos, filters to limit the display of data and the video speed selector.  It was quite a rush to get this finished by the end of the week, but I managed it.

I will be on holiday next week so there will be no post from me then.

Week Beginning 7th March 2022

This was my first five-day week after the recent UCU strike action and it was pretty full-on, involving many different projects.  I spent about a day working on the Speak For Yersel project.  I added in the content for all 32 ‘I would never say that’ questions and completed work on the new ‘Give your word’ lexical activity, which features a further 30 questions of several types.  This includes questions that have associated images and questions where multiple answers can be selected.  For the latter no more than three answers are allowed to be selected and this question type needs to be handled differently as we don’t want the map to load as soon as one answer is selected. Instead the user can select / deselect answers.  If at least one answer is selected a ‘Continue’ button appears under the question.  When you press on this the answers become read only and the map appears.  I made it so that no more than three options can be selected – you need to deselect one before you can add another.  I think we’ll need to look into the styling of the buttons, though, as currently ‘active’ (when a button is hovered over or has been pressed and nothing else has yet been pressed) is the same colour as ‘selected’.  So if you select ‘ginger’ then deselect it the button still looks selected until you press somewhere else, which is confusing.  Also if you press a fourth button it looks like it has been selected when in actual fact it’s just ‘active’ and isn’t really selected.

I also spent about a day continuing to work on the requirements document for the Books and Borrowing project.  I haven’t quite finished this initial version of the document but I’ve made good progress and I aim to have it completed next week.  Also for the project I participated in a Zoom call with RA Alex Deans and NLS Maps expert Chris Fleet about a subproject we’re going to develop for B&B for the Chambers Library in Edinburgh.  This will feature a map-based interface showing where the borrowers lived and will use a historical map layer for the centre of Edinburgh.

Chris also talked about a couple of projects at the NLS that were very useful to see.  The first one was the Jamaica journal of Alexander Innes (https://geo.nls.uk/maps/innes/) which features journal entries plotted on a historical map and a slider allowing you to quickly move through the journal entries.  The second was the Stevenson maps  of Scotland (https://maps.nls.uk/projects/stevenson/) that provides options to select different subjects and date periods.  He also mentioned a new crowdsourcing project to transcribe all of the names on the Roy Military Survey of Scotland (1747-55) maps which launched in February and already has 31,000 first transcriptions in place, which is great.  As with the GB1900 project, the data produced here will be hugely useful for things like place-name projects.

I also participated in a Zoom call with the Historical Thesaurus team where we discussed ongoing work.  This mainly involves a lot of manual linking of the remaining unlinked categories and looking at sensitive words and categories so there’s not much for me to do at this stage, but it was good to be kept up to date.

I continued to work on the new extIPA charts for the Speech Star project, which I had started on last week.  Last week I had some difficulties replicating the required phonetic symbols but this week Eleanor directed me to an existing site that features the extIPA chart (https://teaching.ncl.ac.uk/ipa/consonants-extra.html).  This site uses standard Unicode characters in combinations that work nicely, without requiring any additional fonts to be used.  I’ve therefore copied the relevant codes from there (these are just character codes like &#x62;&#x32A; – I haven’t copied anything other than this from the site).  With the symbols in place I managed to complete an initial version of the chart, including pop-ups featuring all of the videos, but unfortunately the videos seem to have been encoded with an encoder that requires QuickTime for playback.  So although the videos are MP4 they’re not playing properly in browsers on my Windows PC – instead all I can hear is the audio.  It’s very odd as the videos play fine directly from Windows Explorer, but in Firefox, Chrome or MS Edge I just get audio and the static ‘poster’ image.  When I access the site on my iPad the videos play fine (as QuickTime is an Apple product).  Eleanor is still looking into re-encoding the videos and will hopefully get updated versions to me next week.

I also did a bit more work for the Anglo-Norman Dictionary this week.  I fixed a couple of minor issues with the DTD, for example the ‘protect’ attribute was an enumerated list that could either be ‘yes’ or ‘no’ but for some entries the attribute was present but empty, and this was against the rules.  I looked into whether an enumerated list could also include an empty option (as opposed to not being present, which is a different matter) but it looks like this is not possible (see for example http://lists.xml.org/archives/xml-dev/200309/msg00129.html).  What I did instead was to change the ‘protect’ attribute from an enumerated list with options ‘yes’ and ‘no’ to a regular data field, meaning the attribute can now include anything (including being empty).  The ‘protect’ attribute is a hangover from the old system and doesn’t do anything whatsoever in the new system so it shouldn’t really matter.  And it does mean that the XML files should now validate.

The AND people also noticed that some entries that are present in the old version of the site are missing from the new version.  I looked through the database and also older versions of the data from the new site and it looks like these entries have never been present in the new site.  The script I ran to originally export the entries from the old site used a list of headwords taken from another dataset (I can’t remember where from exactly) but I can only assume that this list was missing some headwords and this is why these entries are not in the new site.  This is a bit concerning, but thankfully the old site is still accessible.  I managed to write a little script that grabs the entire contents of the browse list from the old website, separating it into two lists, one for main entries and one for xrefs.  I then ran each headword against a local version of the current AND database, separating out homonym numbers then comparing the headword with the ‘lemma’ field in the DB and the hom with the hom.  Initially I ran main and xref queries separately, comparing main to main and xref to xref, but I realised that some entries had changed types (legitimately so, I guess) so stopped making a distinction.
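
The comparison stage worked along the following lines – a simplified sketch that assumes the old site’s browse list has already been scraped to a text file of headwords, one per line, with any homonym number appended after a ‘#’ (an invented convention for the example):

<?php
// Check each old-site headword against the current database
$pdo = new PDO('mysql:host=localhost;dbname=and_example;charset=utf8mb4', 'user', 'pass');
$lookup = $pdo->prepare('SELECT COUNT(*) FROM entry WHERE lemma = ? AND hom = ?');
$missing = [];
foreach (file('old-site-headwords.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    $parts = explode('#', $line);
    $lemma = $parts[0];
    $hom = isset($parts[1]) ? (int)$parts[1] : 0; // 0 where there is no homonym number
    $lookup->execute([$lemma, $hom]);
    if ((int)$lookup->fetchColumn() === 0) {
        $missing[] = $line;
    }
}
echo count($missing) . " headwords from the old browse list not found\n";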

The script outputted 1540 missing entries.  This initially looks pretty horrifying, but I’m fairly certain most of them are legitimate.  There are a whole bunch of weird ‘n’ forms in the old site that have a strange character (e.g. ‘nun⋮abilité’) that are not found in the new site, I guess intentionally so.  Also, there are lots of ‘S’ and ‘R’ words but I think most of these are because of joining or splitting homonyms.  Geert, the editor, looked through the output and thankfully it turns out that only a handful of entries are missing, and also that these were also missing from the old DMS version of the data so their omission occurred before I became involved in the project.

Finally this week I worked with a new dataset of the Dictionaries of the Scots Language.  I successfully imported the new data and have set up a new ‘dps-v2’ API.  There are 80,319 entries in the new data compared to 80,432 in the previous output from DPS.  I have updated our test site to use the new API and its new data, although I have not been able to set up the free-text data in Solr yet, so the advanced search for full text / quotations will not work for now.  Everything else should, though.

Also today I began to work on the layout of the bibliography page.  I have completed the display of DOST bibs but haven’t started on SND yet.  This includes the ‘style guide’ link when a note is present.  I think we may still need to tweak the layout, however.  I’ll continue to work with the new data next week.

Week Beginning 28th February 2022

I participated in the UCU strike action from Monday to Wednesday this week, making it a two-day week for me.  I’d heard earlier in the week that the paper I’d submitted about the redevelopment of the Anglo-Norman Dictionary had been accepted for DH2022 in Tokyo, which was great.  However, the organisers have decided to make the conference online only, which is disappointing, although probably for the best given the current geopolitical uncertainty.  I didn’t want to participate in an online only event that would be taking place in Tokyo time (nine hours ahead of the UK) so I’ve asked to withdraw my paper.

On Thursday I had a meeting with the Speak For Yersel project to discuss the content that the team have prepared and what I’ll need to work on next.  I also spent a bit of time looking into creating a geographical word cloud which would fit word cloud output into a GeoJSON polygon shape.  I found one possible solution here: https://npm.io/package/maptowordcloud but I haven’t managed to make it work yet.

I also received a new set of videos for the Speech Star project, relating to the extIPA consonants, and I began looking into how to present these.  This was complicated by the extIPA symbols not being standard Unicode characters.  I did a bit of research into how these could be presented, and found this site http://www.wazu.jp/gallery/Test_IPA.html#ExtIPAChart but here the marks appear to the right of the main symbol rather than directly above or below.  I contacted Eleanor to see if she had any other ideas and she got back to me with some alternatives which I’ll need to look into next week.

I spent a bit of time working for the DSL this week too, looking into a question about Google Analytics from Pauline Graham (and finding this very handy suite of free courses on how to interpret Google Analytics here https://analytics.google.com/analytics/academy/).  The DSL people had also wanted me to look into creating a Levenshtein distance option, whereby words that are spelled similarly to an entered term are given as suggestions, in a similar way to this page: http://chrisgilmour.co.uk/scots/levensht.php?search=drech.  I created a test script that allows you to enter a term and view the SND headwords that have a Levenshtein distance of two or less from your term, with any headwords with a distance of one highlighted in bold.  However, Levenshtein is a bit of a blunt tool, and as it stands I’m not sure the results of the script are all that promising.  My test term ‘drech’ brings back 84 matches, including things like ‘french’ which is unfortunately only two letters different from ‘drech’.  I’m fairly certain my script is using the same algorithm as used by the site linked to above, it’s just that we have a lot more possible matches.  However, this is just a simple Levenshtein test – we could also add in further tests to limit (or expand) the output, such as a rule that changes vowels in certain places as in the ‘a’ becomes ‘ai’ example suggested by Rhona at our meeting last week.  Or we could limit the output to words beginning with the same letter.
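
The test script itself is very simple – something like the sketch below, using PHP’s built-in levenshtein() function (the table and column names are invented for the example):

<?php
// Suggest SND headwords within a Levenshtein distance of two of the search term
$pdo = new PDO('mysql:host=localhost;dbname=dsl_example;charset=utf8mb4', 'user', 'pass');
$term = strtolower(trim($_GET['search'] ?? 'drech'));
$matches = [];
foreach ($pdo->query('SELECT headword FROM snd_entry') as $row) {
    $distance = levenshtein($term, strtolower($row['headword']));
    if ($distance <= 2) {
        $matches[$row['headword']] = $distance;
    }
}
asort($matches);
foreach ($matches as $headword => $distance) {
    // distance-one matches are highlighted in bold, as in the test script
    echo $distance === 1 ? "<strong>$headword</strong><br>" : "$headword<br>";
}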

Also this week I had a chat with the Historical Thesaurus people, arranging a meeting for next week and exporting a recent version of the database for them to use offline.  I also tweaked a couple of entries for the AND and spent an hour or so upgrading all of the WordPress sites I manage to the latest WordPress version.

Week Beginning 21st February 2022

I participated in the UCU strike action for all of last week and on Monday and Tuesday this week.  I divided the remaining three days between three projects:  the Anglo-Norman Dictionary, the Dictionaries of the Scots Language and Books and Borrowing.

For AND I continued to work on the publication of a major update of the letter S.  I had deleted all of the existing S entries and had imported all of the new data into our test instance the week before the strike, giving the editors time to check through it all and work on the new data via the content management system of the test instance.  They had noticed that some of the older entries hadn’t been deleted, and this was causing some new entries to not get displayed (as both old and new entries had the same ‘slug’ and therefore the older entry was still getting picked up when the entry’s page was loaded).  It turned out that I had forgotten that not all S entries actually have a headword beginning with ‘s’ – there are some that have brackets and square brackets.  There were 119 of these entries still left in the system and I updated my deletion scripts to remove these additional entries, ensuring that only the older versions and not the new ones were removed.  This fixed the issues with new entries not appearing.  With this task completed and the data approved by the editors we replaced the live data with the data from the test instance.

The update has involved 2,480 ‘main’ entries, containing 4,109 main senses, 1,295 subsenses, 2,627 locutions, 1,753 locution senses, 204 locution subsenses and 16,450 citations.  In addition, 4,623 ‘xref’ entries have been created or updated.  I also created a link checker which goes through every entry, pulls out all cross references from anywhere in the entry’s XML and checks to see whether each cross-referenced entry actually exists in the system.  The vast majority of links were all working fine but there were still a substantial number that were broken (around 800).  I’ve passed a list of these over to the editors who will need to manually fix the entries over time.
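
The link checker boils down to something like the following – note that the cross-reference markup used here (an <xref> element with a ‘ref’ attribute holding the target slug) is an assumption for the purposes of the sketch rather than the actual AND markup:

<?php
// Find cross-references that point at entries which don't exist
$pdo = new PDO('mysql:host=localhost;dbname=and_example;charset=utf8mb4', 'user', 'pass');
$exists = $pdo->prepare('SELECT COUNT(*) FROM entry WHERE slug = ?');
$broken = [];
foreach ($pdo->query('SELECT slug, xml FROM entry') as $entry) {
    $doc = simplexml_load_string($entry['xml']);
    if ($doc === false) {
        continue;
    }
    foreach ($doc->xpath('//xref[@ref]') as $xref) {
        $target = (string)$xref['ref'];
        $exists->execute([$target]);
        if ((int)$exists->fetchColumn() === 0) {
            $broken[] = $entry['slug'] . ' -> ' . $target;
        }
    }
}
echo count($broken) . " broken cross-references found\n";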

For the DSL I had a meeting on Thursday morning with Rhona, Ann and Pauline to discuss the major update to the DSL’s data that is going to go live soon.  This involves a new batch of data exported from their new editing system that will have a variety of significant structural changes, such as a redesigned ‘head’ section, and an overhauled method of recording dates.  We will also be migrating the live site to a new server, a new API and a new Solr instance so it’s a pretty major change.  We had been planning to have all of this completed by the end of March, but due to the strike we now think it’s best to push this back to the end of April, although we may launch earlier if I manage to get all of the updates sorted before then.  Following the meeting I made a few updates to our test instance of the system (e.g. reinstating some superscript numbers from SND that we’d previously hidden) and had a further email conversation with Ann about some ancillary pages.

For the Books and Borrowing project I downloaded a new batch of images for five more registers that had been digitised for us by the NLS.  I then processed these, uploaded them to our server and generated register and page records for each page image.  I also processed the data from the Royal High School of Edinburgh that had been sent to me in a spreadsheet.  There were records from five different registers and it took quite some time to write a script that would process all of the data, including splitting up borrower and book data, generating book items where required and linking everything together so that a borrower and a book only exist once in the system even if they are associated with many borrowing records.  Thankfully I’d done this all before for previous external datasets, but the process is always different for each dataset so there was still much in the way of reworking to be done.

I completed my scripts and ran them on a test instance of the database running on my local PC to start with.  When all was checked and looking good I ran the scripts on the live server to incorporate the new register data with the main project dataset.  After completing the task there were 19,994 borrowing records across 1,438 register pages, involving 1,932 books and 2,397 borrowers.  Some tweaking of the data may be required (e.g. I noticed there are two ‘Alexander Adam’ borrowers, which seems to have occurred because there was a space character before the forename sometimes) but on the whole it’s all looking good to me.

Next week I’ll be on strike again on Monday to Wednesday.

Week Beginning 7th February 2022

It’s been a pretty full-on week ahead of the UCU strike action, which begins on Monday.  I spent quite a bit of time working on the Speak For Yersel project, starting with a Zoom call on Monday, after which I continued to work on the ‘click’ map I’d developed last week.  The team liked what I’d created but wanted some changes to be made.  They didn’t like that the area containing the markers was part of the map and you needed to move the map back to the marker area to grab and move a marker.  Instead they wanted to have the markers initially stored in a separate section beside the map.  I thought this would be very tricky to implement but decided to investigate anyway and unfortunately I was proved right.  In the original version the markers are part of the mapping library – all we’re doing is moving them around the map.  To have the icons outside the map means the icons initially cannot be part of the mapping library, but instead need to be simple HTML elements, but when they are dragged into the map they then have to become map markers with latitude and longitude values, ideally with a smooth transition from plain HTML to map icon as the element is dragged from the general website into the map pane.

It took many hours to figure out how this might work and to update the map to implement the new way of doing things.  I discovered that HTML5’s default drag and drop functionality could be used (see this example: https://jsfiddle.net/430oz1pj/197/), which allows you to drag an HTML element and drop it somewhere.  If the element is dropped over the map then a marker can be created at this point.  However, this proved to be more complicated than it looks to implement as I needed to figure out a way to pass the ID of the HTML marker to the mapping library, and also handle the audio files associated with the icons.  Also, the latitude and longitude generated in the above example were not in any way an accurate representation of the cursor pointer location.  For this reason I integrated a Leaflet plugin that displays the coordinates of the mouse cursor (https://github.com/MrMufflon/Leaflet.Coordinates).  I hid this on the map, but it still runs in the background, allowing my script to grab the latitude and longitude of the cursor at the point where the HTML element is dropped.  I also updated the marker icons to add a number to each one, making it easier to track which icon is which.  This also required me to rework the play and pause audio logic.  With all of this in place I completed ‘v2’ of the click map and I thought the task was complete until I did some final testing on my iPad and Android phone.  And unfortunately I discovered that the icons don’t drag on touchscreen devices (even the touchscreen on my Windows 11 laptop).  This was a major setback as clearly we need the resource to work on touchscreens.

It turns out HTML5 drag and drop simply does not work with touchscreens.  So I’m afraid it was back to the drawing board.  I remembered that I successfully used drag and drop on touchscreens for the MetaphorIC website (see question 5 on this page: https://mappingmetaphor.arts.gla.ac.uk/metaphoric/exercise.html?id=49).  This website uses a different JavaScript framework called jQuery UI (see https://jqueryui.com/draggable/) so I figured I’d integrate this with the SFY website.  However, on doing so and updating the HTML and JavaScript to use the new library the HTML elements still wouldn’t drag on a touchscreen.  Thankfully I remembered that a further JavaScript library called jQuery UI Touch Punch (https://github.com/furf/jquery-ui-touch-punch) was needed for the drag functionality to work on touchscreens.  With this in place the icons could now be dragged around the screen.  However, getting the jQuery UI library to interact with the Leaflet mapping library also proved to be tricky.  The draggable icons ended up disappearing behind the map pane rather than being dragged over the top.  They would then drop in the correct location, but having them invisible until you dropped them was no good.  I fixed this by updating the z-index of the icons (this controls the order in which HTML elements are stacked on the screen).  Finally the icon would glide across the map before being dropped.  But this also prevented the Leaflet Coordinates plugin from picking up the location of the cursor when the icon was dropped, meaning the icon either appeared on the map in the wrong location or simply disappeared entirely.  I almost gave up at this point, but I decided to go back to the method of positioning the marker as found in the first link above – the one that positioned a dropped icon, but in the wrong location.  The method used in this example did actually work with my new drag and drop approach, which was encouraging.  I also happened to return to the page that linked to the example: https://gis.stackexchange.com/questions/296126/dragging-and-dropping-images-into-leaflet-map-to-create-image-markers and found a comment further down the page that noted the incorrect location of the dropped marker and proposed a solution.  After experimenting with this I thankfully discovered that it worked, meaning I could finish work on ‘v3’ of the click map, which is identical to ‘v2’ other than the fact that it works on touchscreens.

I then created a further ‘v4’ version which has the updated areas (Shetland and Orkney, Western Isles and Argyll are now split) and uses the broader areas around Shetland and the Western Isles for the ‘correct’ areas.  I’ve also updated the style of the marker box and made it so that the ‘View correct locations’ and ‘Continue’ buttons only become active after the user has dragged all of the markers onto the map.

The ‘View correct locations’ button also now works again.  The team had also wanted the correct locations to appear on a new map that would appear beside the existing map.  Thinking more about this I really don’t think it’s a good idea.  Introducing another map is likely to confuse people and on smaller screens the existing map already takes up a lot of space.  A second map would need to appear below the first map and people might not even realise there are two maps as both wouldn’t fit on screen at the same time.  What I’ve done instead is to slow down the animation of markers to their correct location when the ‘view’ button is pressed so it’s easier to see which marker is moving where.  I think this in combination with the markers now being numbered makes it clearer.  Here’s a screenshot of this ‘v4’ version showing two markers on the map, one correct, the other wrong:

There is still the issue of including the transcriptions of the speech.  We’d discussed adding popups to the markers to contain these, but again the more I think about this the more I reckon it’s a bad idea.  Opening a popup requires a click and the markers already have a click event (playing / stopping the audio).  We could change the click event after the ‘View correct locations’ button is pressed, so that from that point onwards clicking on a marker opens a popup instead of playing the audio, but I think this would be horribly confusing.  We did talk about maybe always having the markers open a popup when they’re clicked and then having a further button to play the audio in the popup along with the transcription, but requiring two clicks to listen to the audio is pretty cumbersome.  Plus marker popups are part of the mapping library so the plain HTML markers outside the map couldn’t have popups, or at least not the same sort.

I wondered whether we were attempting to overcomplicate the map.  I would imagine most school children aren’t even going to bother looking at the transcripts, and cluttering up the map with them might not be all that useful.  An alternative might be to have the transcripts in a collapsible section underneath the ‘Continue’ button that appears after the ‘check answers’ button is pressed.  We could have some text saying something like ‘Interested in reading what the speakers said?  Look at the transcripts below’.  The section could be hidden by default and then pressing on it opens up headings for speakers 1-8.  Pressing on a heading then expands a section where the transcript can be read.

On Tuesday I had a call with the PI and Co-I of the Books and Borrowing project about the requirements for the front-end and the various search and browse functionality it would need to have.  I’d started writing a requirements document before the meeting and we discussed this, plus their suggestions and input from others.  It was a very productive meeting and I continued with the requirements document after the call.  There’s still a lot to put into it, and the project’s data and requirements are awfully complicated, but I feel like we’re making good progress and things are beginning to make sense.

I also made some further tweaks to the speech database for the Speech Star project.  I’d completed an initial version of this last week, including the option to view multiple selected videos side by side.  However, while the videos worked fine in Firefox, in other browsers only the last video loaded in successfully.  It turns out that there’s a limit to the number of open connections Chrome will allow.  If I set the videos so that the content doesn’t preload then all videos work when you press to play them.  However, this does introduce a further problem:  without preloading the video nothing gets displayed where the video appears unless you add in a ‘poster’, which is an image file to use as a placeholder, usually a still from the video.  We had these for all of the videos for Seeing Speech, but we don’t have them for the new STAR videos.  I’ve made a couple manually for the test page, but I don’t want to have to manually create hundreds of such images.  I did wonder about doing this via YouTube as it generates placeholder images, but even this is going to take a long time as you can only upload 15 videos at once to YouTube, then you need to wait for them to be processed, then you need to manually download the image you want.

I found a post that gave some advice on programmatically generating poster images from video files (https://stackoverflow.com/questions/2043007/generate-preview-image-from-video-file) but the PHP library it suggested seemed to require some kind of weird package installer to first be installed in order to function, and it also required FFmpeg (https://ffmpeg.org/download.html) to be installed.  I therefore decided not to bother with the PHP library and just use FFmpeg directly, calling it from the command line via a PHP script and iterating through the hundreds of videos to make the posters.  It worked very well and now the ‘multivideo’ feature works perfectly in all browsers.
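
The poster-generation loop was essentially along these lines – a sketch with an invented directory layout, assuming ffmpeg is available on the command line:

<?php
// Grab a frame one second into each video to use as its poster image
foreach (glob('videos/*.mp4') as $video) {
    $poster = 'posters/' . basename($video, '.mp4') . '.jpg';
    if (file_exists($poster)) {
        continue; // don't regenerate posters that already exist
    }
    $cmd = sprintf(
        'ffmpeg -ss 00:00:01 -i %s -frames:v 1 %s',
        escapeshellarg($video),
        escapeshellarg($poster)
    );
    $output = [];
    exec($cmd, $output, $status);
    if ($status !== 0) {
        echo "Failed to generate a poster for $video\n";
    }
}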

Also this week I had a Zoom call with Ophira Gamliel in Theology about a proposal she’s putting together.  After the call I wrote sections of a Data Management Plan for the proposal and answered several emails over the remainder of the week.  I also had a chat with the DSL people about the switch to the new server that we have scheduled for March.  There’s quite a bit to do with the new data (and new structures in the new data) before we go live, so March is going to be quite a busy time.

Finally this week I spent some time on the Anglo-Norman Dictionary.  I finished generating the KWIC data for one of the textbase texts now that the server will allow scripts to execute for a longer time.  I also investigated an issue with the XML proofreader that was giving errors.  It turned out that the errors were being caused by errors in the XML files and I found out that oXygen offers a very nice batch validation facility that you can run on massive batches of XML files at the same time (see https://www.oxygenxml.com/doc/versions/24.0/ug-editor/topics/project-validation-and-transformation.html).  I also began working with a new test instance of the AND site, through which I am going to publish the new data for the letter S.  There are many thousands of XML files that need to be integrated and it’s taking some time to ensure the scripts to process these work properly, but all is looking encouraging.

I will be participating in the UCU strike action over the coming weeks so that’s all for now.

Week Beginning 24th January 2022

I had a very busy week this week, working on several different projects.  For the Books and Borrowing project I participated in the team Zoom call on Monday to discuss the upcoming development of the front-end and API for the project, which will include many different search and browse facilities, graphs and visualisations.  I followed this up with a lengthy email to the PI and Co-I where I listed some previous work I’ve done and discussed some visualisation libraries we could use.  In the coming weeks I’ll need to work with them to write a requirements document for the front-end.  I also downloaded images from Orkney library, uploaded all of them to the server and generated the necessary register and page records.  One register with 7 pages already existed in the system and I ensured that page images were associated with these and the remaining pages of the register fit in with the existing ones.  I also processed the Wigtown data that Gerry McKeever had been working on, splitting the data associated with one register into two distinct registers, uploading page images and generating the necessary page records.  This was a pretty complicated process, and I still need to complete the work on it next week, as there are several borrowing records listed as separate rows when in actual fact they are merely another volume of the same book borrowed at the same time.  These records will need to be amalgamated.

For the Speak For Yersel project I had a meeting with the PI and RA on Monday to discuss updates to the interface I’ve been working on, new data for the ‘click’ exercise and a new type of exercise that will precede the ‘click’ exercise and will involve users listening to sound clips then dragging and dropping them onto areas of a map to see whether they can guess where the speaker is from.  I spent some time later in the week making all of the required changes to the interface and the grammar exercise, including updating the style used for the interactive map and using different marker colours.

I also continued to work on the speech database for the Speech Star project based on feedback I received about the first version I completed last week.  I added in some new introductory text and changed the order of the filter options.  I also made the filter option section hidden by default as it takes up quite a lot of space, especially on narrow screens.  There’s now a button to show / hide the filters, with the section sliding down or up.  If a filter option is selected the section remains visible by default.  I also changed the colour of the filter option section to a grey with a subtle gradient (it gets lighter towards the right) and added a similar gradient to the header, just to see how it looks.

The biggest update was to the filter options, which I overhauled so that instead of a drop-down list where one option in each filter type can be selected there are checkboxes for each filter option, allowing multiple items of any type to be selected.  This was a fairly large change to implement as the way selected options are passed to the script and the way the database is queried needed to be completely changed.  When an option is selected the page immediately reloads to display the results of the selection and this can also change the contents of the other filter option boxes – e.g. selecting ‘alveolar’ limits the options in the ‘sound’ section.  I also removed the ‘All’ option and left all checkboxes unselected by default.  This is how filters on clothes shopping sites do it – ‘all’ is the default and a limit is only applied if an option is ticked.
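
On the querying side this works roughly as in the sketch below (table and column names are invented): each filter type arrives as an array of checked values and only contributes a clause if at least one box is ticked, so leaving everything unticked is the same as ‘all’.

<?php
// Build the WHERE clause from whichever checkboxes have been ticked
$pdo = new PDO('mysql:host=localhost;dbname=star_example;charset=utf8mb4', 'user', 'pass');
$filters = [
    'accent'   => $_GET['accent'] ?? [],
    'sound'    => $_GET['sound'] ?? [],
    'position' => $_GET['position'] ?? [],
];
$where = [];
$params = [];
foreach ($filters as $column => $values) {
    // column names come from the fixed keys above, never from user input
    if (!empty($values)) {
        $placeholders = implode(',', array_fill(0, count($values), '?'));
        $where[] = "$column IN ($placeholders)";
        $params = array_merge($params, $values);
    }
}
$sql = 'SELECT * FROM video';
if ($where) {
    $sql .= ' WHERE ' . implode(' AND ', $where);
}
$stmt = $pdo->prepare($sql);
$stmt->execute($params);
$videos = $stmt->fetchAll(PDO::FETCH_ASSOC);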

I also changed the ‘accent’ labels as requested, changed the ‘By Prompt’ header to ‘By Word’ and updated the order of items in the ‘position’ filter.  I also fixed an issue where ‘cheap’ and ‘choose’ were appearing in a column instead of the real data.  Finally, I made the overlay that appears when a video is clicked on darker so it’s more obvious that you can’t click on the buttons.  I did investigate whether it was possible to have the popup open while other page elements were still accessible but this is not something that the Bootstrap interface framework that I’m using supports, at least not without a lot of hacking about with its source code.  I don’t think it’s worth pursuing this as the popup will cover much of the screen on tablets / phones anyway, and when I add in the option to view multiple videos the popup will be even larger.

Also this week I made some minor tweaks to the Burns mini-project I was working on last week and had a chat with the DSL people about a few items, such as the data import process that we will be going through again in the next month or so and some of the outstanding tasks that I still need to tackle with the DSL’s interface.

I also did some work for the AND this week, investigating a weird timeout error that cropped up on the new server and discussing how best to tackle a major update to the AND’s data.  The team have finished working on a major overhaul of the letter S and this is now ready to go live.  We have decided that I will ask for a test instance of the AND to be set up so I can work with the new data, testing out how the DMS runs on the new server and how it will cope with such a large update.

The editor, Geert, had also spotted an issue with the textbase search, which didn’t seem to include one of the texts (Fabliaux) he was searching for.  I investigated the issue and it looked like the script that extracted words from pages may have silently failed in some cases.  There are 12,633 page records in the textbase, each of which has a word count.  When the word count is greater than zero my script processes the contents of the page to generate the data for searching.  However, there appear to be 1889 pages in the system that have a word count of zero, including all of Fabliaux.  Further investigation revealed that my scripts expect the XML to be structured with the main content in a <body> tag.  This cuts out all of the front matter and back matter from the searches, which is what we’d agreed should happen and thankfully accounts for many of the supposedly ‘blank’ pages listed above as they’re not the actual body of the text.

However, Fabliaux doesn’t include the <body> tag in the standard way.  In fact, the XML file consists of multiple individual texts, each of which has a separate <body> tag.  As my script didn’t find a <body> in the expected place, no content was processed.  I ran a script to check the other texts and the following also have a similar issue:  gaunt1372 (710 pages) and polsongs (111 pages), in addition to the 37 pages of Fabliaux.  Having identified these I updated my script that generates search words and re-ran it for these texts, fixing the issue.
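
The fix was simply to process every <body> element in a file rather than just the first one – roughly as in this sketch, which glosses over namespaces and the details of the word extraction:

<?php
// Extract words from all <body> elements in each textbase file
foreach (glob('textbase/*.xml') as $file) {
    $xml = simplexml_load_file($file);
    if ($xml === false) {
        continue;
    }
    $words = [];
    foreach ($xml->xpath('//body') as $body) {
        // strip the markup and split the running text into words
        $text = strip_tags($body->asXML());
        foreach (preg_split('/\s+/u', $text, -1, PREG_SPLIT_NO_EMPTY) as $word) {
            $words[] = $word;
        }
    }
    // ...then generate the search data for this text from $words as before
}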

Also this week I attended a Zoom-based seminar on ‘Digitally Exhibiting Textual Heritage’ that was being run by Information Studies.  This featured four speakers from archives, libraries and museums discussing how digital versions of texts can be exhibited, both in galleries and online.  Some really interesting projects were discussed, both past and present.  These included the BL’s ‘Turning the Pages’ system (http://www.bl.uk/turning-the-pages/) and some really cool transparent LCD display cases (https://crystal-display.com/transparent-displays-and-showcases/) that allow images to be projected on clear glass while objects behind the panel are still visible.  3D representations of gallery spaces were discussed (e.g. https://www.lib.cam.ac.uk/ghostwords), as were ‘long form narrative scrolls’ such as https://www.nytimes.com/projects/2012/snow-fall/index.html#/?part=tunnel-creek, http://www.wolseymanuscripts.ac.uk/ and https://stories.durham.ac.uk/journeys-prologue/.  There is a tool that can be used to create these here: https://shorthand.com/.  It was a very interesting session!

Week Beginning 10th January 2022

I continued to work on the Books and Borrowing project for a lot of this week, completing some of the tasks I began last week and working on some others.  We ran out of server space for digitised page images last week, and although I freed up some space by deleting a bunch of images that were no longer required we still have a lot of images to come.  The team estimates that a further 11,575 images will be required.  If the images we receive for these pages are comparable to the ones from the NLS, which average around 1.5MB each, then 30GB should give us plenty of space.  However, after checking through the images we’ve received from other digitisation units it turns out that the NLS images are a bit of an outlier in terms of file size and generally 8-10MB is more usual.  If we use this as an estimate then we would maybe require 120GB-130GB of additional space.  I did some experiments with resizing and changing the image quality of one of the larger images, managing to bring an 8.4MB image down to 2.4MB while still retaining its legibility.  If we apply this approach to the tens of thousands of larger images we have then this would result in a considerable saving of storage.  However, Stirling’s IT people very kindly offered to give us a further 150GB of space for the images so this resampling process shouldn’t be needed for now at least.

Another task for the project this week was to write a script to renumber the folio numbers for the 14 volumes from the Advocates Library that I noticed had irregular numbering.  Each of the 14 volumes had different issues with their handwritten numbering, so I had to tailor my script to each volume in turn, and once the process was complete the folio numbers used to identify page images in the CMS (and eventually in the front-end) entirely matched the handwritten numbers for each volume.

My next task for the project was to import the records for several volumes from the Royal High School of Edinburgh but I ran into a bit of an issue.  I had previously been intending to extract the ‘item’ column and create a book holding record and a single book item record for each distinct entry in the column.  This would then be associated with all borrowing records in RHS that also feature this exact ‘item’.  However, this is going to result in a lot of duplicate holding records due to the contents of the ‘item’ column including information about different volumes of a book and/or sometimes using different spellings.

For example, in SL137142 the book ‘Banier’s Mythology’ appears four times as follows (assuming ‘Banier’ and ‘Bannier’ are the same):

  1. Banier’s Mythology v. 1, 2
  2. Banier’s Mythology v. 1, 2
  3. Bannier’s Myth 4 vols
  4. Bannier’s Myth. Vol 3 & 4

My script would create one holding and item record for ‘Banier’s Mythology v. 1, 2’ and associate it with the first two borrowing records but the 3rd and 4th items above would end up generating two additional holding / item records which would then be associated with the 3rd and 4th borrowing records.

No script I can write (at least not without a huge amount of work) would be able to figure out that all four of these books are actually the same, or that there are actually 4 volumes for the one book, each requiring its own book item record, and that volumes 1 & 2 need to be associated with borrowing records 1 & 2 while all 4 volumes need to be associated with borrowing record 3 and volumes 3 & 4 need to be associated with borrowing record 4.  I did wonder whether I might be able to automatically extract volume data from the ‘item’ column but there is just too much variation.

We’re going to have to tackle the normalisation of book holding names and the generation of all required book items for volumes at some point and this either needs to be done prior to ingest via the spreadsheets or after ingest via the CMS.

My feeling is that it might be simpler to do it via the spreadsheets before I import the data.  If we were to do this then the ‘Item’ column would become the ‘original title’ and we’d need two further columns, one for the ‘standardised title’ and one listing the volumes, consisting of a number of each volume separated with a comma.  With the above examples we would end up with the following (with a | representing a column division):

  1. Banier’s Mythology v. 1, 2 | Banier’s Mythology | 1,2
  2. Banier’s Mythology v. 1, 2 | Banier’s Mythology | 1,2
  3. Bannier’s Myth 4 vols | Banier’s Mythology | 1,2,3,4
  4. Bannier’s Myth. Vol 3 & 4 | Banier’s Mythology | 3,4

If each sheet of the spreadsheet is ordered alphabetically by the ‘item’ column it might not take too long to add in this information.  The additional fields could also be omitted where the ‘item’ column has no volumes or different spellings.  E.g. ‘Hederici Lexicon’ may be fine as it is.  If the ‘standardised title’ and ‘volumes’ columns are left blank in this case then when my script reaches the record it will know to use ‘Hederici Lexicon’ as both original and standardised titles and to generate one single unnumbered book item record for it.  We agreed that normalising the data prior to ingest would be the best approach and I will therefore wait until I receive updated data before I proceed further with this.
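
Once the updated spreadsheets arrive, the import script would handle the extra columns along these lines – a sketch with invented table and column names, assuming the sheet has been saved as a CSV with item, standardised title and volumes columns:

<?php
// Create (or reuse) holdings and book items from the normalised columns
$pdo = new PDO('mysql:host=localhost;dbname=bnb_example;charset=utf8mb4', 'user', 'pass');
$findHolding = $pdo->prepare('SELECT id FROM book_holding WHERE standardised_title = ?');
$newHolding  = $pdo->prepare('INSERT INTO book_holding (standardised_title) VALUES (?)');
// <=> is MySQL's null-safe comparison, so unnumbered (null volume) items match
$findItem    = $pdo->prepare('SELECT id FROM book_item WHERE holding_id = ? AND volume <=> ?');
$newItem     = $pdo->prepare('INSERT INTO book_item (holding_id, volume) VALUES (?, ?)');
$handle = fopen('rhs-register.csv', 'r');
while (($row = fgetcsv($handle)) !== false) {
    [$originalTitle, $standardisedTitle, $volumes] = array_pad($row, 3, '');
    // blank extra columns: use the item as-is with one unnumbered book item
    if (trim($standardisedTitle) === '') {
        $standardisedTitle = $originalTitle;
    }
    $volumeNumbers = trim($volumes) === '' ? [null] : array_map('intval', explode(',', $volumes));
    // reuse the holding if this standardised title has been seen before
    $findHolding->execute([$standardisedTitle]);
    $holdingId = $findHolding->fetchColumn();
    if ($holdingId === false) {
        $newHolding->execute([$standardisedTitle]);
        $holdingId = $pdo->lastInsertId();
    }
    // make sure a book item exists for every volume listed on this row
    foreach ($volumeNumbers as $volume) {
        $findItem->execute([$holdingId, $volume]);
        if ($findItem->fetchColumn() === false) {
            $newItem->execute([$holdingId, $volume]);
        }
    }
    // ...the borrowing record for this row would then be linked to these items
}
fclose($handle);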

Also this week I generated a new version of a spreadsheet containing the records for one register for Gerry McKeever, who wanted borrowers, book items and book holding details to be included in addition to the main borrowing record.  I also made a pretty major update to the CMS to enable books and borrower listings for a library to be filtered by year of borrowing in addition to filtering by register.  Users can either limit the data by register or year (not both).  They need to ensure the register drop-down is empty for the year filter to work, otherwise the selected register will be used as the filter.  On either the ‘books’ or ‘borrowers’ tab in the year box they can add either a single year (e.g. 1774) or a range (e.g. 1770-1779).  Then when ‘Go’ is pressed the data displayed is limited to the year or years entered.  This also includes the figures in the ‘borrowing records’ and ‘Total borrowed items’ columns.  Also, the borrowing records listed when a related pop-up is opened will only feature those in the selected years.
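
The year filter itself just involves spotting whether a single year or a range has been entered and limiting the borrowing query accordingly – roughly like this sketch (the table and column names are assumptions):

<?php
// Limit borrowings by a single year (1774) or a range (1770-1779)
$year = trim($_GET['year'] ?? '');
$where = '';
$params = [];
if (preg_match('/^(\d{4})\s*-\s*(\d{4})$/', $year, $m)) {
    $where = 'WHERE YEAR(borrowed_date) BETWEEN ? AND ?';
    $params = [(int)$m[1], (int)$m[2]];
} elseif (preg_match('/^\d{4}$/', $year)) {
    $where = 'WHERE YEAR(borrowed_date) = ?';
    $params = [(int)$year];
}
$pdo = new PDO('mysql:host=localhost;dbname=bnb_example;charset=utf8mb4', 'user', 'pass');
$stmt = $pdo->prepare("SELECT * FROM borrowing $where");
$stmt->execute($params);
$borrowings = $stmt->fetchAll(PDO::FETCH_ASSOC);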

I also worked with Raymond in Arts IT Support and Geert, the editor of the Anglo-Norman Dictionary to complete the process of migrating the AND website to the new server.  The website (https://anglo-norman.net/) is now hosted on the new server and is considerably faster than it was previously.  We also took the opportunity the launch the Anglo-Norman Textbase, which I had developed extensively a few months ago.  Searching and browsing can be found here: https://anglo-norman.net/textbase/ and this marks the final major item in my overhaul of the AND resource.

My last major task of the week was to start work on a database of ultrasound video files for the Speech Star project.  I received a spreadsheet of metadata and the video files from Eleanor this week and began processing everything.  I wrote a script to export the metadata into a three-table related database (speakers, prompts and individual videos of speakers saying the prompts) and began work on the front-end through which this database and the associated video files will be accessed.  I’ll be continuing with this next week.

In addition to the above I also gave some advice to the students who are migrating the IJOSTS journal over to WordPress, had a chat with the DSL people about when we’ll make the switch to the new API and data, set up a WordPress site for Joanna Kopaczyk for the International Conference on Middle English, upgraded all of the WordPress sites I manage to the latest version of WordPress, made a few tweaks to the 17th Century Symposium website for Roslyn Potter, spoke to Kate Simpson in Information Studies about speaking to her Digital Humanities students about what I do and arranged server space to be set up for the Speak For Yersel project website and the Speech Star project website.  I also helped launch the new Burns website: https://burnsc21-letters-poems.glasgow.ac.uk/ and updated the existing Burns website to link into it via new top-level tabs.  So a pretty busy week!

Week Beginning 20th December 2021

This was the last week before Christmas and it’s a four-day week as the University has generously given us all an extra day’s holiday on Christmas Eve.  I also lost a bit of time due to getting my Covid booster vaccine on Wednesday.  I was booked in for 9:50 and got there at 9:30 to find a massive queue snaking round the carpark.  It took an hour to queue outside, plus about 15 minutes inside, but I finally got my booster just before 11.  The after-effects kicked in during Wednesday night and I wasn’t feeling great on Thursday, but I managed to work.

My major task of the week was to deal with the new Innerpeffray data for the Books and Borrowing project.  I’d previously uploaded data from an existing spreadsheet in the early days of the project, but it turned out that there were quite a lot of issues with the data, and therefore one of the RAs has been creating a new spreadsheet containing reworked data.  The RA, Kit, got back to me this week after I’d checked some issues with her last week, and I therefore began the process of deleting the existing data and importing the new data.

It was a pretty torturous process, but I managed to finish deleting the existing Innerpeffray data and imported the new data.  This required a fair amount of complex processing and checking via a script I wrote this week.  I managed to retain superscript characters in the transcriptions, something that proved to be very tricky as there is no way to find and replace superscript characters in Excel.  Eventually I ended up copying the transcription column into Word, then saving the table as HTML, stripping out all of the rubbish Word adds in when it generates an HTML file and then using this resulting file alongside the main spreadsheet file that I saved as a CSV.  After several attempts at running the script on my local PC, then fixing issues, then rerunning, I eventually reckoned the script was working as it should – adding page, borrowing, borrower, borrower occupation, book holding and book item records as required.  I then ran the script on the server and the data is now available via the CMS.
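
To give a flavour of the superscript-preserving step, here is a minimal sketch of the approach, assuming the Word-exported HTML contains a one-column table holding the transcriptions in the same row order as the CSV.  The file names and the way the superscripts are marked up are placeholders of my own, not the actual script.

```python
import csv
from bs4 import BeautifulSoup

# Sketch of the superscript-preserving merge: the Word-exported HTML holds the
# transcription column (with <sup> elements intact), the CSV holds everything
# else, and the rows are assumed to be in the same order.  File names are
# placeholders.
with open('transcriptions.html', encoding='utf-8') as f:
    soup = BeautifulSoup(f, 'html.parser')

transcriptions = []
for cell in soup.find_all('td'):
    # Keep the superscript markup as plain-text markers, dropping Word's styling.
    for sup in cell.find_all('sup'):
        sup.replace_with('<sup>' + sup.get_text() + '</sup>')
    transcriptions.append(' '.join(cell.get_text().split()))

with open('innerpeffray.csv', newline='', encoding='utf-8') as f:
    rows = list(csv.DictReader(f))

# Pair each cleaned transcription with its CSV row (same order assumed).
for row, transcription in zip(rows, transcriptions):
    row['transcription'] = transcription
```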

There were a few normalised occupations that weren’t right and I updated these.  There were also 287 standardised titles that didn’t match any existing book holding records in Innerpeffray.  For these I created new holding records and (where there was an ESTC number) linked them to a corresponding edition.

Also this week I completed work on the ‘Guess the Category’ quizzes for the Historical Thesaurus.  Fraser had got back to me about the spreadsheets of categories and lexemes that might cause offence and should therefore never appear in the quiz.  I added a new ‘inquiz’ column to both the category and lexeme tables, which has been set to ‘N’ for each matching category and lexeme.  I also updated the code behind the quiz so that only categories and lexemes with ‘inquiz’ set to ‘Y’ are picked up.

The category exclusions are pretty major – a total of 17,111 categories are now excluded.  This is due to the inclusion of child categories where noted, and 8,340 of these are within ‘03.08 Faith’.  For lexemes there are a total of 2,174 that are specifically noted as excluded based on both tabs of the spreadsheet (but note that all lexemes in excluded categories are excluded by default – a total of 69,099).  The quiz picks a category first and then a lexeme within it, so there should never be a case where a lexeme in an excluded category is displayed.  I also ensured that when a non-noun category is returned and there isn’t a full trail of categories (because there isn’t a parent in the same part of speech), the trail is populated from the noun categories instead.
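
As a rough sketch of how the selection works (an included category is picked first, then an included lexeme from within it), something along these lines is involved.  Apart from the ‘inquiz’ flag, the table and column names here are my own assumptions rather than the actual HT schema.

```python
import sqlite3

# Sketch of the selection logic: pick a random category that is still in the
# quiz, then a random lexeme from within it, so a lexeme in an excluded
# category can never appear.  Apart from 'inquiz', the table and column names
# are assumptions rather than the actual HT schema.
conn = sqlite3.connect('ht.db')

category = conn.execute(
    "SELECT id, heading FROM categories WHERE inquiz = 'Y' ORDER BY RANDOM() LIMIT 1"
).fetchone()

lexeme = conn.execute(
    "SELECT word FROM lexemes WHERE catid = ? AND inquiz = 'Y' ORDER BY RANDOM() LIMIT 1",
    (category[0],)
).fetchone()

print(category[1], '->', lexeme[0])
```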

The two quizzes (a main one and an Old English one) are now live and can be viewed here:

https://ht.ac.uk/guess-the-category/

https://ht.ac.uk/guess-the-oe-category/

Also this week I made a couple of tweaks to the Comparative Kingship place-names systems, adding in Pictish as a language and tweaking how ‘codes’ appear in the map.  I also helped Raymond migrate the Anglo-Norman Dictionary to the new server that was purchased earlier this year.  We had to make a few tweaks to get the site to work at a temporary URL but it’s looking good now.  We’ll update the DNS and make the URL point to the new server in the New Year.

That’s all for this year.  If there is anyone reading this (doubtful, I know) I wish you a merry Christmas and all the best for 2022!

Week Beginning 13th December 2021

My big task of the week was to return to working on the Speak For Yersel project after a couple of weeks when my services hadn’t been required.  I had a meeting with PI Jennifer Smith and RA Mary Robinson on Monday where we discussed the current status of the project and the tasks I should focus on next.  Mary had finished work on the geographical areas we are going to use.  These are based on postcode areas, but a number of areas have been amalgamated.  We’ll use these to register where a participant is from and also to generate a map marker representing their responses at a random location within their selected area, based on the research I did a few weeks ago into randomly positioning a marker within a polygon.

The original files that Mary sent me were two exports from ArcGIS, one as JSON and one as GeoJSON.  Unfortunately both files used a different coordinate system rather than latitude and longitude, the GeoJSON file didn’t include any identifiers for the areas so couldn’t really be used, and while the JSON file looked promising, when I tried to use it in Leaflet it gave me an ‘invalid GeoJSON object’ error.  Mary then sent me the original ArcGIS file for me to work with and I spent some time in ArcGIS figuring out how to export the shapefile data as GeoJSON with latitude and longitude.

Using ArcGIS I exported the data by typing ‘to json’ in the ‘Geoprocessing’ pane on the right of the map and then selecting ‘Features to JSON’.  I selected ‘output to GeoJSON’ and also checked ‘Project to WGS_1984’, which converts the ArcGIS coordinates to latitude and longitude.  When not using the ‘formatted JSON’ option (which adds in line breaks and tabs) this gave me a file size of 115MB.  As a starting point I created a Leaflet map that uses this GeoJSON file, but I ran into a bit of a problem: the data takes a long time to load into the map – about 30-60 seconds for me – and the map feels a bit sluggish to navigate around even after it’s loaded in.  And this is without there being any actual data.  The map is going to be used by school children, potentially on low-spec mobile devices connecting to slow internet services (or even worse, mobile data that they may have to pay for per MB).  We may have to think about whether using these areas is going to be feasible.  An option might be to reduce the detail in the polygons, which would reduce the size of the JSON file.  The boundaries in the current file are extremely detailed, and each twist and turn in a polygon requires a latitude / longitude pair in the data, and there are a lot of twists and turns.  The polygons we used in SCOSYA are much more simplified (see for example https://scotssyntaxatlas.ac.uk/atlas/?j=y#9.75/57.6107/-7.1367/d3/all/areas) but would still suit our needs well enough.  However, manually simplifying each and every polygon would be a monumental and tedious task.  But perhaps there’s a method in ArcGIS that could do this for us.  There’s a tool called ‘Simplify Polygon’ (https://desktop.arcgis.com/en/arcmap/latest/tools/cartography-toolbox/simplify-polygon.htm) which might work.

I spoke to Mary about this and she agreed to experiment with the tool.  Whilst she worked on this I continued to work with the data.  I extracted all of the 411 areas and stored these in a database, together with all 954 postcode components that are related to these areas.  This will allow us to generate a drop-down list of options as the user types – e.g.  type in ‘G43’ and options ‘G43 2’ and ‘G43 3’ will appear, and both of these are associated with ‘Glasgow South’.
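
The look-ahead itself is just a prefix match against the postcode components, with each match pointing at its amalgamated area.  A minimal sketch, assuming a simple two-table structure (the table and column names here are my own, not the project’s actual schema):

```python
import sqlite3

# Sketch of the postcode look-ahead: prefix-match the postcode components
# and return each with its amalgamated area.  Table and column names are
# placeholders rather than the project's actual schema.
conn = sqlite3.connect('speakforyersel.db')

def suggest(prefix):
    return conn.execute(
        """SELECT p.postcode, a.name
             FROM postcodes p
             JOIN areas a ON a.id = p.area_id
            WHERE p.postcode LIKE ?
            ORDER BY p.postcode""",
        (prefix + '%',)
    ).fetchall()

# e.g. suggest('G43') -> [('G43 2', 'Glasgow South'), ('G43 3', 'Glasgow South')]
```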

I also wrote a script to generate sample data for each of the 411 areas using the ‘turf.js’ library I’d previously used.  For each area a random number of markers (between 0 and 100) is generated and stored in the database, each with a random rating of between 1 and 4.  This resulted in 19,946 sample ratings, which I then added to the map along with the polygonal area data, as you can see here:

Currently these are given the colours red=1, orange=2, light blue=3, dark blue=4, purely for test purposes.  As you can see, including almost 20,000 markers swamps the map when it’s zoomed out, but when you zoom in things look better.  I also realised that we might not even need to display the area boundaries to users.  They can be used in the background to work out where a marker should be positioned (as is the case with the map above), but perhaps they’re not needed for any other reason?  It might be sufficient to include details of the area in a popup or sidebar, and if so we might not need to rework the areas at all.
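
For reference, the sample-data generation described above boils down to picking a random point within each area’s polygon and attaching a random rating to it.  The site does this with turf.js in JavaScript; the sketch below shows the same rejection-sampling idea in Python using shapely, purely for illustration, and the GeoJSON file name and the ‘code’ property are placeholders of my own.

```python
import json
import random
from shapely.geometry import shape, Point

# The site does this with turf.js in the browser; this is the same
# rejection-sampling idea in Python using shapely, for illustration only.
def random_point_in(feature):
    polygon = shape(feature['geometry'])
    min_x, min_y, max_x, max_y = polygon.bounds
    while True:
        candidate = Point(random.uniform(min_x, max_x),
                          random.uniform(min_y, max_y))
        if polygon.contains(candidate):
            return candidate

with open('areas.geojson', encoding='utf-8') as f:        # placeholder file name
    areas = json.load(f)['features']

sample = []
for area in areas:
    for _ in range(random.randint(0, 100)):                # 0-100 markers per area
        point = random_point_in(area)
        sample.append((area['properties'].get('code'),     # 'code' is a placeholder property
                       point.y, point.x,                    # latitude, longitude
                       random.randint(1, 4)))               # random rating 1-4
```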

However, whilst working on this Mary had created four different versions of the area polygons using four different algorithms.  These differ in how they simplify the polygons and therefore result in different boundaries – some missing out details such as lochs and inlets.  All four versions were considerably smaller in file size than the original, ranging from 4MB to 20MB.  I created new maps for each of the four simplified polygon outputs.  For each of these I regenerated new random marker data.  For algorithms ‘DP’ and ‘VW’ I limited the number of markers to between 0 and 20 per area, giving around 4,000 markers in each map.  For ‘WM’ and ‘ZJ’ I limited the number to between 0 and 50 per area, giving around 10,000 markers per map.

All four new maps look pretty decent to me, with even the smaller JSON files (‘DP’ and ‘VW’) containing a remarkable level of detail.  I think the ‘DP’ one might be the one to go for.  It’s the smallest (just under 4MB compared to 115MB for the original) yet also seems to have more detail than the others.  For example, for the smaller lochs to the east of Loch Ness the original and ‘DP’ include the outlines of four lochs while the other three only include two.  ‘DP’ also includes more of the smaller islands around the Outer Hebrides.

We decided that we don’t need to display the postcode areas on the map to users, but instead we’ll just use these to position the map markers.  However, we decided that we do want to display the local authority areas so people have a general idea of where the markers are positioned.  My next task was to add these in.  I downloaded the administrative boundaries for Scotland from here: https://raw.githubusercontent.com/martinjc/UK-GeoJSON/master/json/administrative/sco/lad.json as referenced on this website: https://martinjc.github.io/UK-GeoJSON/ and added them into my ‘DP’ sample map, giving the boundaries a dashed light-green outline that turns a darker green when you hover over the area, as you can see from the screenshot below:

Also this week I added in a missing text to the Anglo-Norman Dictionary’s Textbase.  To do this I needed to pass the XML text through several scripts to generate page records and all of the search words and ‘keyword in context’ data for search purposes.  I also began to investigate replacing the Innerpeffray data for Books and Borrowing with a new dataset that Kit has worked on.  This is going to be quite a large and complicated undertaking and after working through the data I had a set of questions to ask Kit before I proceeded to delete any of the existing data.  Unfortunately she is currently on jury duty so I’ll need to wait until she’s available again before I can do anything further.  Also this week a huge batch of images became available to us from the NLS and I spent some time downloading these and moving them to an external hard drive as they’d completely filled up the hard drive of my PC.
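
For the ‘keyword in context’ data, each word on a page is stored together with a handful of words either side of it so that search results can show a snippet.  A rough, self-contained illustration of the idea is below; the real scripts work from the XML texts and write the results to the database.

```python
import re

# Rough illustration of building 'keyword in context' rows for a page of text;
# the real scripts work from the XML texts and store the results in the
# database for the Textbase search.
def kwic(text, window=5):
    words = re.findall(r"[\w'-]+", text.lower())
    rows = []
    for i, word in enumerate(words):
        before = ' '.join(words[max(0, i - window):i])
        after = ' '.join(words[i + 1:i + 1 + window])
        rows.append((word, before, after))
    return rows

# Each ('word', 'words before', 'words after') triple can then be indexed
# against its page record for searching.
```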

I also spoke to Fraser about the new radar diagrams I had been working on for the Historical Thesaurus and also about the ‘guess the category’ quiz that we’re hoping to launch soon.  Fraser sent on a list of categories and words that we want to exclude from the quiz (anything that might cause offence) but I had some questions about this that will need clarification before I take things further.  I’d suggested to Fraser that I could update the radar diagrams to include not only the selected category but also all child categories and he thought this would be worth investigating so I spent some time updating the visualisations.

I was a little worried about the amount of processing that would be required to include child categories but thankfully things seem pretty speedy, even when multiple top-level categories are chosen.  See for example the visualisation of everything within ‘Food and drink’, ‘Faith’ and ‘Leisure’:

This brings back many tens of thousands of lexemes but doesn’t take too long to generate.  I think including child categories will really help make the visualisations more useful as we’re now visualising data at a scale that’s very difficult to get a grasp on simply by looking at the underlying words.  It’s interesting to note in the above visualisation how ‘Leisure’ increases in size dramatically throughout the time periods while ‘Faith’ shrinks in comparison (but still grows overall).  With this visualisation the ‘totals’ rather than the ‘percents’ view is much more revealing.
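
As a footnote on the child-category processing: the gist is simply to gather the selected category’s descendants before counting lexemes, rather than pre-computing anything.  A minimal sketch is below, with table and column names that are my own assumptions rather than the actual HT schema.

```python
import sqlite3

# Sketch of gathering a category plus all of its child categories before
# counting lexemes; the table and column names here ('parent_id', 'catid')
# are assumptions rather than the actual HT schema.
conn = sqlite3.connect('ht.db')

def descendant_ids(cat_id):
    ids = [cat_id]
    children = conn.execute(
        'SELECT id FROM categories WHERE parent_id = ?', (cat_id,)
    ).fetchall()
    for (child_id,) in children:
        ids.extend(descendant_ids(child_id))
    return ids

def lexeme_count(cat_id):
    ids = descendant_ids(cat_id)
    placeholders = ','.join('?' * len(ids))
    return conn.execute(
        f'SELECT COUNT(*) FROM lexemes WHERE catid IN ({placeholders})', ids
    ).fetchone()[0]
```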