Week Beginning 6th September 2021

I spent more than a day this week preparing my performance and development review form. It’s the first time there’s been a PDR since before Covid and it took some time to prepare everything. Thankfully this blog provides a good record of everything I’ve done, so I could base my form almost entirely on the material found here, which helped considerably.

Also this week I investigated and fixed an issue with the SCOTS corpus for Wendy Anderson. One of the transcriptions of two speakers had the speaker IDs the wrong way round compared to the IDs in the metadata. This was slightly complicated to sort out as I wasn’t sure whether it was better to change the participant metadata to match the IDs used in the text or vice-versa. It turned out to be very difficult to change the IDs in the metadata as they are used to link numerous tables in the database, so instead I updated the text that’s displayed. Rather strangely, the ‘download plain text’ file contained different incorrect IDs. I fixed this as well, but it does make me worry that the IDs might be off in other plain text transcriptions too. I looked at a couple of others and they seem ok, though, so perhaps it’s an isolated case.

I was contacted this week by a lecturer in English Literature who is intending to put a proposal together for a project to transcribe an author’s correspondence, and I spent some time writing a lengthy email with some helpful advice. I also spoke to Jennifer Smith about her ‘Speak for Yersel’ project that’s starting this month, and we arranged to have a meeting the week after next. I also spent quite a bit of time continuing to work on mockups for the STAR project’s websites based on feedback I’d received on the mockups I completed last week. I created another four mockups with different colours, fonts and layouts, which should give the team plenty of options to choose from. I also received more than a thousand new page images of library registers for the Books and Borrowing project and processed these and uploaded them to the server. I’ll need to generate page records for them next week.

Finally, I continued to make updates to the Textbase search facilities for the Anglo-Norman Dictionary. I updated the genre headings to make them bigger and bolder, with more of a gap between each heading and the preceding items. I also added a larger indent to the items within a genre and reordered the genres based on a new suggested order. For each book I included the siglum as a link through to the book’s entry on the bibliography page, and in the search results, where a result’s page reference contains an underscore, the reference now displays volume and page number (e.g. 3_801 displays as ‘Volume 3, page 801’). I updated the textbase text page so that page dividers in the continuous text also display volume and page in such cases.
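
The underscore handling is just a simple split, roughly along these lines (a minimal sketch rather than the actual site code):

// e.g. '3_801' becomes 'Volume 3, page 801'; references without an underscore are left alone
function formatPageRef(string $ref): string {
    if (strpos($ref, '_') === false) {
        return $ref;
    }
    [$volume, $page] = explode('_', $ref, 2);
    return 'Volume ' . $volume . ', page ' . $page;
}
echo formatPageRef('3_801'); // Volume 3, page 801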

Highlighted terms in the textbase text page no longer have padding around them (which was causing what looked like spaces when the term appears mid-word). The text highlighting is unfortunately a bit of a blunt instrument, as one of the editors discovered by searching for the terms ‘le’ and ‘fable’: term 1 (‘le’) is located and highlighted first, then term 2 (‘fable’). The ‘le’ in ‘fable’ is therefore highlighted during the first sweep, and ‘fable’ itself then isn’t highlighted as it has already had the markup for the ‘le’ highlighting added to it and no longer matches ‘fable’. Also, ‘le’ is matching some HTML tags buried in the text (‘style’), which breaks the HTML and is why some raw HTML is getting displayed. I’m not sure much can be done about any of this without a massive reworking of things, but it’s only an issue when searching for things like ‘le’ rather than actual content words, so hopefully it’s not such a big deal.
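
To illustrate the problem, this is roughly what a naive sequential highlighter ends up doing (a simplified sketch, not the actual site code):

$html  = '<span style="font-style:italic">la fable</span>';
$terms = ['le', 'fable'];
foreach ($terms as $term) {
    // naive replacement: matches inside longer words and inside HTML tags
    $html = str_ireplace($term, '<mark>' . $term . '</mark>', $html);
}
// The 'le' inside 'style' and inside 'fable' gets wrapped on the first pass,
// so the second pass no longer finds 'fable' and the <span> tag is broken.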

The editor also wondered whether it would be possible to add in an option for searching for and viewing multiple terms at once, but this would require me to rework the entire search and it’s not something I want to tackle if I can avoid it. If a user wants to view the search results for different terms they can select two terms and open the full results in a new tab, repeating the process for each pair of terms they’re interested in and switching from tab to tab as required. Next week I’ll need to rename some of the textbase texts and split one of the texts into two separate texts, which is going to require me to regenerate the entire dataset.

Week Beginning 30th August 2021

This week I completed work on the proximity search of the Anglo-Norman textbase. Thankfully the performance issues I’d feared might crop up haven’t occurred at all. The proximity search allows you to search for term 1 up to 10 words to the left or right of term 2 using ‘after’ or ‘before’. If you select ‘after or before’ then (as you might expect) the search looks 10 words in each direction. This ties in nicely with the KWIC display, which displays 10 words either side of your term. As mentioned last week, unless you search for exact terms (surrounded by double quotes) you’ll reach an intermediary page that lists all possible matching forms for terms 1 and 2. Select one of each and you can press the ‘Continue’ button to perform the actual search. What this does is find all occurrences of term 2 (term 2 is the fixed anchor point; it’s term 1 that can be variable in position), then for each one it checks the necessary words before or after (or before and after) the term for the presence of term 1. When generating the search words I generated and stored the position each word appears at on the page, which made it relatively easy to pinpoint nearby words. What is trickier is dealing with words near the beginning or the end of a page, as in such cases the next or previous page must also be looked at. I hadn’t previously generated a total count of the number of words on a page, which was needed to ascertain whether a word was close to the end of the page, so I ran a script that generated and stored the word count for each page. The search seems to be working as it should for words near the beginning and end of a page.
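
Roughly speaking the check works like this (a simplified sketch – the table and column names here are illustrative rather than the actual schema, and the page-boundary handling is only noted in a comment):

// $pdo is a PDO connection; $term1 and $term2 are the selected exact forms;
// $distance is the maximum number of words; $direction is 'before', 'after' or 'both'
$anchors = $pdo->prepare("SELECT page_id, word_order FROM search_words WHERE word_stripped = :t2");
$anchors->execute([':t2' => $term2]);
foreach ($anchors->fetchAll(PDO::FETCH_ASSOC) as $hit) {
    $from = ($direction === 'after')  ? $hit['word_order'] + 1 : $hit['word_order'] - $distance;
    $to   = ($direction === 'before') ? $hit['word_order'] - 1 : $hit['word_order'] + $distance;
    $check = $pdo->prepare("SELECT COUNT(*) FROM search_words
        WHERE page_id = :pid AND word_stripped = :t1 AND word_order BETWEEN :f AND :t");
    $check->execute([':pid' => $hit['page_id'], ':t1' => $term1, ':f' => $from, ':t' => $to]);
    $isMatch = $check->fetchColumn() > 0;
    // if $from < 1 or $to exceeds the stored word count for the page, the previous
    // or next page of the same text needs to be queried in the same way (omitted here)
}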

The results page is displayed in the same way as the regular search, complete with KWIC and sorting options.  Both terms 1 and 2 are bold, and if you sort the results the relevant numbered word left or right of term 2 is highlighted, as with the regular search.  When you click through to the actual text all occurrences of both term 1 and term 2 are highlighted (not just those in close proximity), but the page centres on the part of the text that meets the criteria, so hopefully this isn’t a problem – it is quite useful to see other occurrences of the terms after all.  There are still some tweaks I need to make to the search based on feedback I received during the week, and I’ll look at these next week, but on the whole the search facility (and the textbase facility in general) is just about ready to launch, which is great as it’s the last big publicly facing feature of the AND that I needed to develop.

Also this week I spent some time working on the Books and Borrowing project. I created a new user account for someone who will be working for the project and I also received the digitised images for another library register, this time from the NLS. I downloaded these and then uploaded them to the server, associating the images with the page records that were already in the system. The process was a little more complicated and time-consuming than I’d anticipated as the register has several blank pages in it that are not in our records but have been digitised. Therefore the number of page images didn’t match up with the number of pages, and page images were getting associated with the wrong pages. I had to manually look through the page images and delete the blanks, but I was still off by one image. I then had to manually check through the contents of the images to compare them with the transcribed text to see where the missing image should have gone. Thankfully I managed to track it down and reinstate it (it had one very faint record on it, which I hadn’t noticed when viewing and deleting blank thumbnails). With that in place all images and page records aligned and I could make the associations in the database. I also sent Gerry McKeever the zipped up images (several gigabytes) for a couple of the St Andrews registers as he prefers to have the complete set when working on the transcriptions.

I had a meeting with Gerry Carruthers and Pauline McKay this week to discuss further developments of the ‘phase 2’ Burns website, which they are hoping to launch in the new year, and also to discuss the hosting of the Scottish theatre studies journal that Gerry is sorting out.

I spent the rest of the week working on mockups for the two websites for the STAR speech and language therapy project. Firstly there’s the academic site, which is going to sit alongside Seeing Speech and Dynamic Dialects, and as such it should have the same interface as these sites. Therefore I’ve made a site that is pretty much identical in terms of the overall theme. I added in a new ‘site tab’ for the site, which sits at the top of the page, and have added in the temporary logo as a site logo and favicon (the latter may need a dark background to make it stand out). I created menu items for all of the items in Eleanor Lawson’s original mockup image. These all work, although they just lead to empty pages for now, and I added the star logo to the ‘Star in-clinic’ menu item as in the mockup too. In the footer I made a couple of tweaks to the layout – the logos are all centre aligned and have a white border. I added in the logo for Strathclyde and have only included the ESRC logo, but can add others in if required. The actual content of the homepage is identical to Seeing Speech for now – I haven’t changed any images or text.

For the clinic website I’ve taken Eleanor’s mockup as a starting point again and have so far made two variations. I will probably work on at least one more different version (with multiple variations) next week. I haven’t added in the ‘site tabs’ to either version as I didn’t want to clutter things up, and I’m imagining that there will be a link somewhere to the STAR academic site for those that want it, and from there people would be able to find Seeing Speech and Dynamic Dialects. The first version of the mockup has a top-level menu bar (we will need such a menu listing the pages the site features, otherwise people may get confused) and the main body of the page is blue, as in the mockup. I used the same logo, and the font for the header is this Google font: https://fonts.google.com/?query=rampart+one&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  Other headers on the page use this font: https://fonts.google.com/specimen/Annie+Use+Your+Telescope?query=annie&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  I added in a thick dashed border under the header. The intro text is just some text I’ve taken from one of the Seeing Speech pages, and the images are still currently just the ones in the mockup. Hovering over an image causes the same dashed border to appear. The footer is a kind of pink colour, which is supposed to suggest those blue and pink rubbers you used to get in schools.

The second version uses the ‘rampart one’ font just for ‘STAR’ in the header, with the other font used for the rest of the text. The menu bar is moved to underneath the header and the dashed line is gone. The main body of the page is white rather than continuing the blue of the header, and ‘rampart one’ is used for the in-page headers. The images now have rounded edges, as do the text blocks in the images. Hovering over an image brings up a red border, the same shade as used in the active menu item. The pink footer has been replaced with the blue from the navbar. Both versions are ‘responsive’ and work on all screen sizes.

I’ll be continuing to work on the mockups next week.

Week Beginning 23rd August 2021

This week I completed work on a first version of the textbase search facilities for the Anglo-Norman Dictionary.  I’ve been working on this over the past three weeks and it’s now fully operational, quick to use and does everything that was required of it.  I completed work on the KWIC ordering facilities, adding in a drop-down list that enables the user to order the results either by the term or any word to the left or right of the term.  When results are ordered by a word to the left or right of the search term that word is given a yellow highlight so you can easily get your eye on the word that each result is being ordered by.  I ran into a few difficulties with the ordering, for example accented initial characters were being sorted after ‘z’, and upper case characters were all sorted before lower case characters, but I’ve fixed these issues.  I also updated the textbase page so that when you load a text from the results a link back to the search results appears at the top of the page.  You can of course just use the ‘back’ button to return to the search results. Also, all occurrences of the search term throughout the text are highlighted in yellow.  There are possibly some further enhancements that could be made here (e.g. we could have a box that hovers on the screen like the ‘Top’ button that contains a summary of your search and a link back to the results, or options to load the next or previous result) but I’ll leave things as they are for now as what’s there might be good enough.  I also fixed some bugs that were cropping up, such as an exact search term not appearing in the search box when you return to refine your results (caused by double quotes needing to be changed to the code ‘%22’).
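
One way of handling this kind of ordering problem is to sort on a normalised key rather than the raw form – a minimal sketch (not the actual code), assuming the things being ordered are just an array of word forms and with only a partial accent map:

// sort on a lower-cased, de-accented copy of each word so accented characters don't
// end up after 'z' and upper case doesn't sort before lower case
function sortKey(string $word): string {
    $word = mb_strtolower($word, 'UTF-8');
    return strtr($word, ['á' => 'a', 'à' => 'a', 'â' => 'a', 'é' => 'e', 'è' => 'e',
                         'ê' => 'e', 'ï' => 'i', 'ô' => 'o', 'û' => 'u', 'ç' => 'c']);
}
usort($results, fn($a, $b) => strcmp(sortKey($a), sortKey($b)));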

I then began thinking about the development of a proximity search for the textbase. As with the old site, this will allow the user to enter two search terms and specify the maximum number of words before or after the first term within which the second one can appear. The results will then be displayed in a KWIC form with both terms highlighted. It took quite some time to think through the various possibilities for this feature. The simplest option from a technical point of view would be to process the first term as with the regular search, retrieve the KWIC for each result and then search this for the second term. However, this wouldn’t allow the user to search for an exact match for the second term, or use wildcards, as the KWIC only contains the full text as written, complete with punctuation. Instead I decided to make the proximity search as similar to and as consistent with the regular textbase search as possible. This means the user will be able to enter the two terms with wildcards and two lists of possible exact matches will be displayed, from which the user can select term 1 and term 2. At this point the exact matches for term 1 will be returned and in each case a search will be performed to see whether term 2 is found within however many words the user specified before or after term 1. This will rely on the ‘word order’ column that I already added to the database, but will involve some complications when term 1 is near the very start or end of a page (as the search will then need to look at the preceding or following page). I ran a few tests of this process directly via the database and it seemed to work ok, but I’ll just need to see whether there are any speed issues when running such queries on potentially thousands of results.

With this possible method in place I began working on a new version of the textbase search page that will provide both the regular concordance search and the new proximity search. As with the advanced search on the AND website, these will be presented on one page in separate tabs, and this required much reworking of the existing page and processing scripts. I had to ensure that HTML elements that previously used IDs would remain valid once they were replicated in each tab and were therefore no longer unique. This meant some major reworking of the genre and book selection options, both in the HTML and in the JavaScript that handles the selection and deselection. I also had to ensure that the session variables relating to the search could handle multiple types of search and that links would return the user to the correct type of search. By the end of the week I had got a search form for the proximity search in place, with facilities to limit the search to specific texts or genres and options to enter two terms, the maximum number of words between the terms and whether term 1 should appear before or after term 2 (or either). Next week I’ll need to update the API to provide the endpoint to actually run such a search.

Also this week I had an email from Bryony Randall about her upcoming exhibition for her New Modernist Editing project.  The exhibition will feature a live website (https://www.blueandgreenproject.com/) running on a tablet in the venue and Bryony was worried that the wifi at the venue wouldn’t be up to scratch.  She asked whether I could create a version of the site that would run locally without an internet connection, and I spent some time working on this.

Looking at the source of the website it would appear to have been constructed using the online website creation platform https://www.wix.com/.  I’d never used this before, but it will have an admin interface where you can create and manage pages and such things. The resulting website is integrated with the online Wix platform and (after a bit of Googling) it looked like there isn’t a straightforward way to export pages created using Wix for use elsewhere. However, the site only consisted of 20 or so static pages (i.e. no interactive elements other than links to other pages), so I thought it would be possible to just save each page as HTML, go through each of the files and update the links, and the resulting pages could then potentially run directly in a browser. However, after trying this out I realised that there were some issues. Looking at the source there are numerous references to externally hosted scripts and files, such as JavaScript files, fonts and images, that were not downloaded when the webpage was saved; these would all be inaccessible if the internet connection was lost, which would likely result in a broken website. I also realised that the HTML generated by Wix is a pretty horrible tangled mess, and getting this to work nicely would take a lot of work. I therefore decided to just create a replica of the site from scratch using Bootstrap.

However, it was only after this point that I was informed that the local site would need to run on a tablet rather than a full PC. The tablet is an Android one, which seriously complicates matters as, unlike a proper computer, Android imposes restrictions on what you can and can’t do, and one of the things you can’t do is run locally hosted websites in the browser. I tried several approaches to get my test site working on my Android phone, but with all of the straightforward ones I could get the HTML file to load into the browser and nothing else – no images, stylesheets or JavaScript. This is obviously not acceptable. I did manage to get it to work, but only by using an app that runs a server on the device and by using absolute file references to the IP address the server app uses in the files (relative file references just did not work). The app I used was called Simple HTTP Server (https://play.google.com/store/apps/details?id=jp.ubi.common.http.server) and once configured it worked pretty well.

I continued to work on my replica of the site, getting all of the content transferred over.  This took longer than I anticipated, as some of the pages are quite complicated (artworks including poetry, images, text and audio) but I managed to get everything done before the end of the week.  In the end it turned out that the wifi at the venue was absolutely fine so my replica site wasn’t needed, but it was still a good opportunity to learn about hosting a site on an Android device and to hone my Bootstrap skills.

Also this week I helped Katie Halsey of the Books and Borrowing project with a query about access to images, had a look through the final version of Kirsteen McCue’s AHRC proposal and spoke to Eleanor Lawson about creating some mockups of the interface to the STAR project websites, which I will start on next week.

Week Beginning 16th August 2021

I continued to work on the new textbase search facilities for the Anglo-Norman Dictionary this week. I completed work on the required endpoints for the API, creating the facilities that would process a search term (with optional wildcards), limit the search to selected books and/or genres and return either full search results (in the case of an exact search for a term) or a list of possible matching terms and the number of occurrences of each term. I then worked on the front-end to enable a query to be processed and submitted to the API based on the choices made by the user.

By default any text entered will match any form that contains the text – e.g. enter ‘jour’ (without the quotes) and you’ll find all forms containing the characters ‘jour’ anywhere, e.g. ‘adjourner’, ‘journ’. If you want to do an exact match you have to use double quotes – “jour”. You can also use an asterisk at the beginning or end to match forms ending or starting with the term (‘*jour’ and ‘jour*’), while an asterisk at both ends (‘*jour*’) will only find forms that contain the term somewhere in the middle. You can also use a question mark wildcard to denote any single character, e.g. ‘am?n*’ will find words beginning ‘aman’, ‘amen’ etc.

If your search term matches multiple forms in your selected books / genres then an intermediary page will be displayed, listing the matching forms and a count of the number of times each one appears. This is the same as how the ‘translation’ advanced search works, for example, and I wanted to maintain a consistent way of doing things across the site. Select a specific form and the actual occurrences of that form in the texts will appear. Above this list is a ‘Select another form’ button that returns you to the intermediary page. If your search only brings back one form the intermediary page is skipped, and as all selection options appear in the URL it’s possible to bookmark / cite the search results too.
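
Under the hood the wildcard syntax maps fairly directly onto a SQL LIKE pattern, and the intermediary page is essentially a grouped count of matching forms. Something along these lines (a rough sketch with illustrative table and column names – the real rules are a bit more involved, e.g. ‘*jour*’ excluding forms that start or end with the term):

function toLikePattern(string $term): string {
    if (preg_match('/^".+"$/', $term)) {                      // "jour" = exact match
        return trim($term, '"');
    }
    $pattern = str_replace(['*', '?'], ['%', '_'], $term);    // * = any run of characters, ? = one character
    // a bare term with no wildcards matches anywhere within a form
    return (strpbrk($pattern, '%_') === false) ? '%' . $pattern . '%' : $pattern;
}

$stmt = $pdo->prepare("SELECT word_stripped, COUNT(*) AS occurrences
    FROM search_words WHERE word_stripped LIKE :pattern
    GROUP BY word_stripped ORDER BY word_stripped");
$stmt->execute([':pattern' => toLikePattern($userTerm)]);
$forms = $stmt->fetchAll(PDO::FETCH_ASSOC);  // one row per matching form, for the intermediary page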

Whilst working on this I realised that I’d need to regenerate the data, as it became clear that many words had been erroneously joined together due to there being no space between words when one tag is closed and a following one is opened. When the tags are then stripped out the forms get squashed together, which has led to some crazy forms such as ‘amendeezreamendezremaundez’. Previously I’d not added spaces between tags as I was thinking that a space would have to be added before every closing tag (e.g. ‘</’ becomes ‘ </’) and this would potentially mess up words that have tags in them, such as superscript tags in names like McDonald. However, I realised I could instead do a find and replace to add spaces between a closing tag and an opening tag (‘><’ becomes ‘> <’), which would not mess up individual tags within words and wouldn’t have any further implications as I strip out all additional spaces when processing the texts for search purposes anyway.
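
The find and replace itself is tiny – something like this, run before the tags are stripped (illustrative, not the exact script):

// add a space between a closing tag and the following opening tag so that
// 'dEspayne</item><item>La charge' doesn't collapse into 'dEspayneLa charge'
$xml  = str_replace('><', '> <', $xml);
$text = strip_tags($xml);
// collapse any runs of whitespace introduced along the way
$text = trim(preg_replace('/\s+/', ' ', $text));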

I also decided that I should generate the ‘key-word in context’ (KWIC) for each word and store this in the database. I was going to generate this on the fly every time a search results page was displayed, but it seems more efficient to generate and store it once rather than do it every time. I therefore updated my data processing script to generate the KWIC for each of the 3.5 million words as they were extracted from the texts. This took some time to both implement and execute. I decided to pull out the 10 words on either side of the term, which used the ‘word order’ column that gets generated as each page is processed. Some complications were introduced in cases where the term is either before the tenth word on the page or there are fewer than ten words after the term on the page. In such cases the script needed to look at the page before or after the current page in order to pull out the words and fill out the KWIC with the appropriate words on the other pages.
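
In outline the KWIC generation looks something like this, working from the ordered list of words on a page (a simplified sketch – the pull from the preceding or following page is only noted in a comment):

// $pageWords is the ordered array of words (as written) for one page,
// $index is the position of the matched term within that array
function kwicFor(array $pageWords, int $index, int $context = 10): array {
    $start = max(0, $index - $context);
    $left  = array_slice($pageWords, $start, $index - $start);
    $right = array_slice($pageWords, $index + 1, $context);
    // if count($left) or count($right) falls short of $context, the real script
    // tops the context up with words from the previous or next page
    return ['left' => implode(' ', $left), 'right' => implode(' ', $right)];
}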

With the updates to data processing in place and a fair bit of testing of the KWIC facility carried out, I re-ran my scripts to regenerate the data and all looked good.  However, after inserting the KWIC data the querying of the tables slowed to a crawl.  On my local PC queries which were previously taking 0.5 seconds were taking more than 10 seconds, while on the server execution time was almost 30 seconds.  It was really baffling as the only difference was the search words table now had two additional fields (KWIC left and KWIC right), neither of which were being queried or returned in the query.  It seemed really strange that adding new columns could have such an effect if they were not even being used in a query.  I had to spend quite a bit of time investigating this, including looking at MySQL settings such as key buffer size and trying again to change storage engines, switching from MyISAM to InnoDB and back again to see what was going on.  Eventually I looked again at the indexes I’d created for the table, and decided to delete them and start over, in case this somehow jump-started the search speed.  I previously had the ‘word stripped’ column indexed in a multiple column index with page ID and word type (either main page or textual apparatus).  Instead I created an index of the ‘word stripped’ column on its own, and this immediately boosted performance.  Queries that were previously taking close to 30 seconds to execute on the server were now taking less than a second.  It was such a relief to have figured out what the issue was, as I had been considering whether my whole approach would need to be dropped and replaced by something completely different.
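
For reference, the change that made the difference boiled down to replacing a multi-column index with a single-column one – along these lines (the index and table names here are illustrative rather than the real ones):

// drop the old combined index ('word stripped' + page ID + word type) and
// index the 'word stripped' column on its own
$pdo->exec("DROP INDEX idx_word_page_type ON search_words");
$pdo->exec("CREATE INDEX idx_word_stripped ON search_words (word_stripped)");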

As I now had a useable search facility I continued to develop the front-end that would use this facility. Previously the exact match for a term was bringing up just the term in question and a link through to the page the term appeared on, but now I could begin to incorporate the KWIC text too. My initial idea was to use a tabular layout, with each word of the KWIC in a different column, with a clickable table heading that would allow the data to be ordered by any of the columns (e.g. order the data alphabetically by the first word to the left of the term). However, after creating such a facility I realised it didn’t work very well. The text just didn’t scan very well due to columns having to be the width of whatever the longest word in the column was, and the text took up too much horizontal space. Instead, I decided to revert to using an unordered list, with the KWIC left and KWIC right in separate spans, with the KWIC left text right-aligned to push it up against the search term no matter what the length of the KWIC left text. I split the KWIC text up into individual words and stored this in an array to enable each search result to be ordered by any word in the KWIC, and began working on a facility to change the order using a select box above the search results. This is as far as I got this week, but I’m pretty confident that I’ll get things finished next week.

Also this week I had an email conversation with the other College of Arts developers about professional web designers after Stevie Barrett enquired about them, arranged to meet with Gerry Carruthers to discuss the journal he would like us to host, gave some advice to Thomas Clancy about mailing lists and spoke to Joanna Kopaczyk about a website she would like to set up for a conference she’s organising next year.

Week Beginning 9th August 2021

I’d taken last week off as our final break of the summer, and we spent it on the Kintyre peninsula.  We had a great time and were exceptionally lucky with the weather.  The rains began as we headed home and I returned to a regular week of work.  My major task for the week was to begin work on the search facilities for the Anglo-Norman Dictionary’s textbase, a collection of almost 80 lengthy texts for which I had previously created facilities to browse and view texts.  The editors wanted me to replicate the search options that were available through the old site, which enabled a user to select which texts to search (either individual texts or groups of texts arranged by genre), enter a single term to search (either a full match or partial match at the beginning or end of a word), select a specific term from a list of possible matches and then view each hit via a keyword in context (KWIC) interface, showing a specific number of words before and after the hit, with a link through to the full text opened at that specific point.

This is a pretty major development and I decided initially that I’d have two major tasks to tackle.  I’d have to categorise the texts by their genre and I’d have to research how best to handle full text searching including limiting to specific texts, KWIC and reordering KWIC, and linking through to specific pages and highlighting the results.  I reckoned it was potentially going to be tricky as I don’t have much experience with this kind of searching.  My initial thought was to see whether Apache Solr might be able to offer the required functionality.  I used this for the DSL’s advanced search, which searches the full text of the entries and returns snippets featuring the word, with the word highlighted and the word then highlighted throughout the entry when an entry in the results is loaded (e.g. https://dsl.ac.uk/results/dreich/fulltext/withquotes/both/).  This isn’t exactly what is required here, but I hoped that there might be further options I can explore.  Failing that I wondered whether I could repurpose the code for the Scottish Corpus of Texts and Speech.  I didn’t create this site, but I redeveloped it significantly a few years ago and may be able to borrow parts from the concordance search. E.g. https://scottishcorpus.ac.uk/advanced-search/ and select ‘general’ then ‘word search’ then ‘word / phrase (concordance)’ then search for ‘haggis’ and scroll down to the section under the map.  When opening a document you can then cycle through the matching terms, which are highlighted, e.g. https://scottishcorpus.ac.uk/document/?documentid=1572&highlight=haggis#match1.

After spending some further time with the old search facility and considering the issues I realised there are a lot of things to be considered regarding preparing the texts for search purposes.  I can’t just plug the entire texts in as only certain parts of them should be used for searching – no front or back matter, no notes, textual apparatus or references.  In addition, in order to properly ascertain which words follow on from each other all XML tags need to be removed too, and this introduces issues where no space has been entered between tags but a space needs to exist between the contents of the tags, e.g. ‘dEspayne</item><item>La charge’ would otherwise become ‘dEspayneLa charge’.

As I’d need to process the texts no matter which search facility I end up using I decided to focus on this first, and set up some processing scripts and a database on my local PC to work with the texts.  Initially I managed to extract the page contents for each required page, remove notes etc and strip the tags and line breaks so that the page content is one continuous block of text.

I realised that the old search seems to be case sensitive, which doesn’t seem very helpful.  E.g. search for ‘Leycestre’ and you find nothing – you need to enter ‘leycestre’, even though all 264 occurrences actually have a capital L.  I decided to make the new search case insensitive – so searching for ‘Leycestre’, ‘leycestre’ or ‘LEYCESTRE’ will bring back the same results.  Also, the old search limits the keyword in context display to pages.  E.g. the first ‘Leycestre’ hit has no text after it as it’s the last word on the page.  I’m intending to take the same approach as I’m processing text on a page-by-page basis.  I may be able to fill out the KWIC with text from the preceding / subsequent page if you consider this to be important, but it would be something I’d have to add in after the main work is completed.  The old search also limits the KWIC to text that’s on the same line, e.g. in a search for ‘arcevesque’ the result ‘L’arcevesque puis metre en grant confundei’ has no text before because it’s on a different line (it also chops off the end of ‘confundeisun’ for some reason).  The new KWIC will ignore breaks in the text (other than page breaks) when displaying the context.  I also realised that I need to know what to do about words that have apostrophes in them.  The old search splits words on the apostrophe, so for example you can search for arcevesque but not l’arcevesque.  I’m intending to do the same.  The old search retains both parts before and after the apostrophe as separate search terms, so for example in “qu’il” you can search for “qu” and “il” (but not “qu’il”).

After some discussions with the editor, I updated my system to include textual apparatus, stored in a separate field to the main page text.  With all of the text extracted I decided that I’d just try and make my own system initially, to see whether it would be possible.  I therefore created a script that would take each word from the extracted page and textual apparatus fields and store this in a separate table, ensuring that words with apostrophes in them are split into separate words and for search purposes all non-alphanumeric characters are removed and the text is stored as lower-case.  I also needed to store the word as it actually appears in the text, the word order on the page and whether the word is a main page word or in the textual apparatus.  This is because after finding a word I’ll need to extract those around it for the KWIC display.  After running my script I ended up with around 3.5 million rows in the ‘words’ table, and this is where I ran into some difficulties.
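
The extraction script boils down to something like the following for each page (a sketch rather than the real script – the table and column names are illustrative):

// split the page text into words, treating apostrophes as word boundaries,
// and store both the form as written and a lower-cased, stripped search form
$order  = 0;
$insert = $pdo->prepare("INSERT INTO search_words (page_id, word, word_stripped, word_order, word_type)
    VALUES (:pid, :word, :stripped, :ord, :type)");
foreach (preg_split("/[\s']+/u", $pageText, -1, PREG_SPLIT_NO_EMPTY) as $word) {
    $stripped = mb_strtolower(preg_replace('/[^\p{L}\p{N}]+/u', '', $word), 'UTF-8');
    if ($stripped === '') { continue; }
    $order++;
    $insert->execute([':pid' => $pageId, ':word' => $word, ':stripped' => $stripped,
                      ':ord' => $order, ':type' => 'main']);
}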

I ran some test queries on the local version of the database and all looked pretty promising, but after copying the data to the server and running the same queries it appeared that the server is unusably slow.  On my desktop a query  to find all occurrences of ‘jour’, with the word table joined to the page table and then to the text table completed in less than 0.5 seconds but on the server the same query took more than 16 seconds, so about 32 times slower.  I tried the same query a couple of times and the results are roughly the same each time.  My desktop PC is a Core i5 with 32GB of RAM, and the database is running on an NVMe M.2 SSD, which no doubt makes things quicker, but I wouldn’t expect it to be 32 times quicker.
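
The query in question was essentially of this shape (a sketch with illustrative table and column names):

// find every occurrence of a form together with its page and text details
$stmt = $pdo->prepare("SELECT w.word, w.word_order, p.page_number, t.title
    FROM search_words w
    JOIN pages p ON p.id = w.page_id
    JOIN texts t ON t.id = p.text_id
    WHERE w.word_stripped = :term
    ORDER BY t.title, p.page_number, w.word_order");
$stmt->execute([':term' => 'jour']);
$results = $stmt->fetchAll(PDO::FETCH_ASSOC);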

I then did some further experiments with the server.  When I query the table containing the millions of rows on its own the query is fast (much less than a second).  I added a further index to the column that is used for the join to the page table (previously it was indexed, but in combination with other columns) and then when limiting the query to just these two tables the query runs at a fairly decent speed (about 0.5 seconds).  However, the full query involving all three tables still takes far too long, and I’m not sure why.  It’s very odd as there are indexes on the joining columns and the additional table is not big – it only has 77 rows.  I read somewhere that ordering the results by a column in the joined table can make things slower, as can using descending order on a column, so I tried updating the ordering but this has had no effect.  It’s really weird – I just can’t figure out why adding the table has such a negative effect on the performance and I may end up just having to incorporate some of the columns from the text table into the page table, even though it will mean duplicating data.  I also still don’t know why the performance is so different on my local PC either.

One final thing I tried was to change the database storage engine. I noticed that the three tables were set to use MyISAM storage rather than InnoDB, which the rest of the database was set to. I migrated the tables to InnoDB in the hope that this might speed things up, but it actually slowed things down, both on my local PC and on the server. On the server the two-table query then took several seconds while the three-table query took about the same, so it was quicker than before but still too slow, and on my desktop PC the execution time doubled to about 1 second. I therefore reverted back to using MyISAM.

I decided to leave the issue of database speed at that point and to focus on other things instead.  I added a new ‘genre’ column to the texts and added in the required categorisation.  I then updated the API to add in this new column and updated the ‘browse’ and ‘view’ front-ends so that genre now gets displayed.  I then began work on the front-end for the search, focussing on the options for listing texts by genre and adding in the options to select / deselect specific texts or entire genres of text.  This required quite a bit of HTML, JavaScript and CSS work and made a nice change from all of the data processing.  By the end of the week I’d completed work on the text selection facility, and next week I’ll tackle the actual processing of the search, at which point I’ll know whether my database way of handling things will be sufficiently speedy.

Also this week I had a chat with Eleanor Lawson about the STAR project that has recently begun.  There was a project meeting last week that unfortunately I wasn’t able to attend due to my holiday, so we had an email conversation about some of the technical issues that were raised at the meeting, including how it might be possible to view videos side by side and how a user may choose to select multiple videos to be played automatically one after the other.

I also fixed a couple of minor formatting issues for the DSL people and spoke to Katie Halsey, PI of the Books and Borrowing project about the development of the API for the project and the data export facilities.  I also received further feedback from Kirsteen McCue regarding the Data Management Plan for her AHRC proposal and went through this, responding to the comments and generating a slightly tweaked version of the plan.


Week Beginning 26th July 2021

I returned to Glasgow for a more regular week of working from home, after spending a delightful time at my parents’ house in Yorkshire for the past two weeks.  I continued to work on the Comparative Kingship front-ends this week.  I fixed a couple of issues with the content management systems, such as ensuring that the option to limit the list of place-names by parish worked for historical parishes and fixing an issue whereby searching by sources was returning zero results.  Last week I’d contacted Chris Fleet at NLS Maps to ask whether we might be able to incorporate a couple of maps of Ireland that they host into our resource, and Chris got back to me with a very helpful reply, giving us permission to use the maps and also pointing out some changes to the existing map layers that I could make.

I updated the attribution links on the site, and pointed the OS six-inch map links to the NLS’s hosting on AWS. I also updated these things on the other place-name resources I’ve created. We had previously been using a modern OS map layer hosted by the NLS, and Chris pointed out that a more up-to-date version can now be accessed directly from the OS website (see Chris’s blog post here: https://www.ordnancesurvey.co.uk/newsroom/blog/comparing-past-present-new-os-maps-api-layers).  I followed the instructions and signed up for an OS API key, and it was then a fairly easy process to replace the OS layer with the new one. I did the same with the other place-name resources too, and it looks pretty good. See for example how it looks on a map showing placenames beginning with ‘B’ on the Berwickshire site: https://berwickshire-placenames.glasgow.ac.uk/place-names/?p=results&source=browse&reels_name=B*#13/55.7939/-2.2884/resultsTabs-0/code/tileOS//

With these changes and the Irish historical maps in place I continued to work on the Irish front-end. I added in the parish boundaries for all of the currently required parishes and also added in the three-letter acronyms that the researcher Nick Evans had created for each parish. These are needed to identify the parishes on the map, as full parish names would clutter things up too much. I then needed to manually position each of the acronyms on the map, and to do so I updated the Irish map to print the latitude and longitude of a point to the console whenever a mouse click is made. This made it very easy to grab the coordinates of an ideal location for each acronym.

There were a few issues with the parish boundaries, and Nick wondered whether the boundary shapefiles he was using might work better. I managed to open the parish boundary shapefile in QGIS, converted the boundary data to WGS84 (latitude / longitude) and then extracted the boundaries as a GeoJSON file that I can use with my system. I then replaced the previous parish boundaries with the ones from this dataset, but unfortunately something was not right with the positioning. The northern Irish ones appear to be too far north and east, with the boundary for BNT extending into the sea rather than following the coast and ARM not even including the town of Armoy.

In QGIS I needed to change the coordinate reference system from TM65 / Irish Grid to WGS84 to give me latitude and longitude values, and I wondered whether this process had caused the error, so I loaded the parish data into QGIS again and added an OpenStreetMap base map to it too. The issue with the positioning was still apparent in the original data, so the conversion doesn’t appear to be to blame.

I can’t quite tell if the same problem exists with the southern parishes.  I’d positioned the acronyms in the middle of the parishes and they mostly still seem to be in the middle, which suggests these boundaries may be ok, although I’m not sure how some could be wrong while others are correct as everything is joined together.  After consultation with Nick I reverted to the original boundaries, but kept a copy of the other ones in case we want to reinstate them in future.

Also this week I investigated a strange issue with the Anglo-Norman Dictionary, whereby a quick search for ‘resoler’ brings back an ‘autocomplete’ match, but then finds zero results if you click on it.  ‘Resoler’ is a cross-reference entry and works in the ‘browse’ option too.  It seemed very strange that the redirect from the ‘browse’ would work, and also that a quick search for ‘resolut’, which is another variant of ‘resoudre’ was also working.  It turns out that it’s an issue with the XML for the entry for ‘resoudre’.  It lists ‘resolut’ as a variant, but does not include ‘resoler’ as you can see:

<variant gram="imp.5">resolvez</variant>
<deviant gram="imp.5">resoylez</deviant>
<varref><reference><source siglum="Alchimie"><loc>380.5</loc></source></reference></varref>
<variant gram="p.p.">resolé</variant>
<variant>resolu</variant>
<variant>resolut</variant>
<newvargroup/>
<variant gram="p.p.pl.">resolous</variant>
<deviant gram="p.p.pl.">resouz</deviant>
<varref><reference><source siglum="Secr1"><loc>1524</loc></source></reference></varref>
<deviant>resus</deviant>
<varref><reference><source siglum="Alchimie"><loc>379.1</loc></source></reference></varref>

The search uses the variants / deviants from the XML to figure out which main entry to load from a cross reference.  As ‘resoler’ is not present the system doesn’t know what entry ‘resoler’ refers to and therefore displays no results.  I pointed this out to the editor, who changed the XML to add in the missing variant, which fixed the issue.

Also this week I responded to some feedback on the Data Management Plan for Kirsteen’s project, which took a little time to compile, and spoke to Jennifer Smith about her upcoming follow-on project for SCOSYA, which begins in September and I’ll be heavily involved with.  I also had a chat with Rhona about the ancient DSL server that we should now be able to decommission.

Finally, Gerry Carruthers sent me some further files relating to the International Journal of Scottish Theatre, which he is hoping we will be able to host an archive of at Glasgow.  It consisted of a database dump, which I imported into a local database and had a look at it.  It mostly consists of tables used to manage some sort of editorial system and doesn’t seem to contain the full text of the articles.  Some of the information contained in it may be useful, though – e.g. it stores information about article titles, authors, the issues articles appear in, the original PDF filenames for each article etc.

In addition, the full text of the articles is available as both PDF and HTML in the folder ‘1>articles’. Each article has a numerically named folder (e.g. 109) that contains two folders: ‘public’ and ‘submission’. ‘public’ contains the PDF version of the article. ‘submission’ contains two further folders: ‘copyedit’ and ‘layout’. ‘copyedit’ contains an HTML version of the article while ‘layout’ contains a further PDF version. It would be possible to use each HTML version as a basis for a WordPress version of the article. However, some things need to be considered:

Does the HTML version always represent the final published version of the article?  The fact that it exists in folders labelled ‘submission’ and ‘copyedit’ and not ‘public’ suggests that the HTML version is likely to be a work in progress version and editorial changes may have been made to the PDF in the ‘public’ folder that are not present in the HTML version.  Also, there are sometimes multiple HTML versions of the article.  E.g. in the folder ‘1>articles>154>submission>copyedit’ there are two HTML files: ‘164-523-1-CE.htm’ and ‘164-523-2-CE.htm’.  These both contain the full text of the article but have different formatting (and may have differences in the content, but I haven’t checked this).

After looking at the source of the HTML versions I realised these have been auto-generated from MS Word. Word generates really messy, verbose HTML with lots of unnecessary tags and I therefore wanted to see what would happen if I copied and pasted it into WordPress. My initial experiment was mostly a success, but WordPress treats line breaks in the pasted file as actual line breaks, meaning the text didn’t display as it should. What I needed to do in my text editor was find and replace all line break characters (\r and \n) with spaces. I also had to make sure I only copied the contents within the HTML <body> tag rather than the whole text of the file. After that the process worked quite well.
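
If I do end up scripting the migration, the clean-up would be along these lines (a rough sketch assuming a saved Word-generated HTML file – the filename is just one from the export, used for illustration):

$html = file_get_contents('164-523-1-CE.htm');
if (preg_match('/<body[^>]*>(.*)<\/body>/si', $html, $matches)) {
    $body = $matches[1];                               // only the contents of the <body> tag
    $body = preg_replace('/[\r\n]+/', ' ', $body);     // line breaks become spaces so WordPress doesn't insert breaks
    file_put_contents('article-clean.html', $body);    // ready to paste into the WordPress editor
}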

However, there are other issues with the dataset.  For example, article 138 only has Word files rather than HTML or PDF files and article 142 has images in it, and these are broken in the HTML version of the article.  Any images in articles will probably have to be manually added in during proofreading.  We’ll need to consider whether we’ll have to get someone to manually migrate the data, or whether I can write a script that will handle the bulk of the process.

I had my second vaccination jab on Wednesday this week, which thankfully didn’t hit me as hard as the first one did. I still felt rather groggy for a couple of days, though. Next week I’m on holiday again, this time heading to the Kintyre peninsula to a cottage with no internet or mobile signal, so I’ll be unreachable until the week after next.

Week Beginning 19th July 2021

This was my second and final week staying at my parents’ house in Yorkshire, where I’m working a total of four days over the two weeks.  This week I had an email conversation with Eleanor Lawson about her STAR project, which will be starting very shortly.  We discussed the online presence for the project, which will be split between a new section on the Seeing Speech website and an entirely new website, the project’s data and workflows and my role over the 24 months of the project.  I also created a script to batch process some of the Edinburgh registers for the Books and Borrowing project.  The page images are double spreads and had been given a number for both the recto and the verso (e.g. 1-2, 3-4), but the student registers only ever use the verso page.  I was therefore asked to write a script to renumber all of these (e.g. 1-2 becomes 1, 3-4 becomes 2), which I created and executed on a test version of the site before applying to the live data.
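
The renumbering itself is just arithmetic on the first number of each spread – roughly as follows (a minimal sketch, not the actual CMS code):

// spreads are labelled with both page numbers, e.g. '1-2', '3-4'; only the verso is used,
// so '1-2' becomes 1, '3-4' becomes 2, '5-6' becomes 3 and so on
$spreads = ['1-2', '3-4', '5-6'];
foreach ($spreads as $label) {
    [$first] = explode('-', $label);
    $newNumber = ((int) $first + 1) / 2;
    echo $label . ' becomes ' . $newNumber . PHP_EOL;
}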

I also continued to make tweaks to the front-ends for the Comparative Kingship project. I fixed a bug with the Elements glossary of the Irish site, which was loading the Scottish version instead. I also contacted Chris Fleet at NLS Maps to enquire about using a couple of their historical Irish maps with the site. I also fixed the ‘to top’ buttons in the CMSes, which weren’t working; the buttons now actually scroll the page to the top as they should. I also fixed some issues relating to parish names no longer being unique in the system (e.g. the parish of Gartly is in the system twice due to it changing county at some point). This was causing issues with the browse option as data was being grouped by parish name. Changing the grouping to the parish ID thankfully fixed the issue.

I also had a chat with Ann Fergusson at the DSL about multi-item bibliographical entries in the existing DSL data.  These are being split into individual items, and a new ‘sldid’ attribute in the new data will be used to specify which item in the old entry the new entry corresponds to.  We agreed that I would figure out a way to ensure that these IDs can be used in the new website once I receive the updated data.

My final task of the week was to investigate a problem with Rob Maslen’s City of Lost Books blog (https://thecityoflostbooks.glasgow.ac.uk/), which went offline this week and only displayed a ‘database error’. Usually when this happens it’s a problem with the MySQL database and it takes down all of the sites on the server, but this time it was only Rob’s site that was being affected. I tried accessing the WP admin pages and this gave a different error about the database being corrupted. I needed to update the WordPress config file to add the line define('WP_ALLOW_REPAIR', true); and upon reloading the page WordPress attempted to fix the database. After doing so it stated that “The wp_options table is not okay. It is reporting the following error: Table is marked as crashed and last repair failed. WordPress will attempt to repair this table… Failed to repair the wp_options table. Error: Wrong block with wrong total length starting at 10356”. WordPress appeared to regenerate the table, as after this the table existed and was populated with data, and the blog went online again and could be logged into. I’ll have to remember this if it happens again in future.
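
For the record, enabling the repair tool just needs this line in wp-config.php (this is standard WordPress rather than anything site-specific), after which the repair screen is available at /wp-admin/maint/repair.php; the line should be removed again once the repair is done:

// temporarily enable WordPress's built-in database repair tool
define('WP_ALLOW_REPAIR', true);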

Next week I’ll be back in Glasgow.

Week Beginning 12th July 2021

I’m down visiting my parents in Yorkshire for the first time in 18 months this week and next, working a total of four days over the two-week period. This week I mainly focussed on the Irish front-end for the Comparative Kingship place-names project, but I also added in some updates to the Scotland system that I recently set up, such as making the Gaelic forms of the classification codes visible, adding options to browse Gaelic forms of place-names and historical forms to the ‘Browse’ facility and ensuring the other place-name and historical form browses only bring back English forms.

The Irish system is mostly identical to the Scottish system, but I did need to make some changes that took a bit of time to implement. As the place-names covered appear to be much more geographically spread out, I’ve allowed the map to be zoomed out further. I’ve also had to remove the modern OS and historical OS map layers as they don’t cover Ireland, so currently there are only three map layers available (the default view, satellite view and satellite view with labels). The Ordnance Survey of Ireland provides access to some historical map layers here: https://geohive.maps.arcgis.com/apps/webappviewer/index.html?id=9def898f708b47f19a8d8b7088a100c4 but their terms and conditions make it clear that you can’t use the maps on another online resource. However, there are a couple of Irish maps on the NLS website, the Bartholomew Quarter-Inch 1940 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=13&b=1) and the GSGS One-Inch 1941-3 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=14&b=1), and we could investigate integrating these as the NLS maps people have always been very helpful.

I also updated the map pop-ups to include the new Irish data fields, such as baronies, townlands and the different map types.  Both English and Gaelic forms of things like parishes, baronies and classification codes are displayed throughout the site and on the Record page the ITM figures also appear.  I updated the ‘Browse’ page so that it features baronies and the element glossary should work, but I haven’t tested it out as there is no data yet.  The Advanced search features a selectable list of baronies and currently a simple textbox for townlands.  I may change this to an autocomplete (whereby you start typing and townlands that include the letters appear in a selectable list), or I may leave it as it is, meaning multiple townlands can be searched for and wildcard characters can be used.

I managed to locate downloadable files containing parish boundaries for Ireland here: https://www.townlands.ie/page/download/ and have added these to the data for the two parishes that currently contain data. I haven’t added in any other parish boundaries yet, as there are over 200 parishes in our database and I don’t want to have to manually add in the boundaries for all of these if it won’t be necessary. Also, on the Scotland maps the three-letter acronym appears in the middle of each parish in order to identify it, but the Irish parishes don’t have TLAs so currently don’t have any labels. The full text of each parish name would clutter up the map too much if I used it, so I’m not sure what we could do to label the parishes.

Also this week I responded to some feedback about the Data Management Plan for Kirsteen McCue’s proposal and created a slightly revised version.  I also had an email conversation with Eleanor Lawson about her new speech project and how the web presence for the project may function.  Finally, I made some tweaks to the Dictionary of the Scots Language, updating the layout of the ‘Contact’ page and updating the bibliography page on the new website so that URLs that use the old style IDs will continue to work.  I also had a chat with Rhona Alcorn about some new search options that we are going to add in to the new site before it goes live, although probably not until the autumn.


Week Beginning 5th July 2021

After a lovely week’s holiday in the East Neuk of Fife last week I returned to a full week of work. I spent Monday catching up with emails and making some updates to two project websites. Firstly, for the Anglo-Norman Dictionary I updated the Textbase to add in the additional PDF texts. As these are not part of the main Textbase I created a separate page that lists and links to them, and added a reference to the page to the introductory paragraph of the main Textbase page. Secondly, I made some further updates to the content management system for the Books and Borrowing project. There was a bug in the ‘clear borrower’ feature that resulted in the normalised occupation fields not getting cleared. This meant that unless a researcher noticed and manually removed the selected occupations it would be very easy to end up with occupations assigned to the wrong borrower. I implemented a fix for this bug, so all is well now. I had also been alerted to an issue with the library’s ‘books’ tab. When limiting the listed books to only those mentioned in a specific register, the list of associated borrowing records that appears in a popup was not limiting the records to those in the specified register. I fixed this, and also made a comparable fix to the ‘borrowers’ tab.

During the week I also had an email conversation with Kirsteen McCue about her ‘Singing the Nation’ AHRC proposal, and made a new version of the Data Management Plan for her.  I also investigated some anomalies with the stats for the Dictionary of the Scots Language website for Rhona Alcorn.  Usage figures were down compared to last year, but it looks like last year may have been a blip caused by Covid, as figures for this year match up pretty well with the figures for the years before the dreaded 2020.

On Wednesday I was alerted to an issue with the Historical Thesaurus website, which appeared to be completely inaccessible.  Further investigation revealed that other sites on the server were also all down.  Rather strangely, the Arts IT Support team could all access the sites without issue, and I realised that if I turned off wifi on my phone and used mobile data I could access them too.  I had thought it was an issue with my ISP, but Marc Alexander reported that he used a different ISP and could also not access the sites.  Marc pointed me in the direction of two very handy websites that are useful for checking whether websites are online or not: https://downforeveryoneorjustme.com checks the site and lets you know whether it’s working, while https://www.uptrends.com/tools/uptime is a little more in-depth and checks whether the site is available from various locations across the globe.  I’ll need to remember these in future.

The sites were still inaccessible on Thursday morning, and after some Googling I found an answer from someone with a similar issue here: https://webmasters.stackexchange.com/questions/104092/why-is-my-site-showing-as-being-down-for-some-places-and-not-others.  I asked Arts IT Support to check with central IT Services to see whether any DNS settings had been changed recently or if they knew what might be causing the issue, as it turned out to be more widespread than I had thought and was affecting sites on different servers too.  A quick check of the sites linked to from this site showed that around 20 websites were inaccessible.

Thankfully by Thursday lunchtime the sites had begun to be accessible again, although not for everyone: I could access them, but Marc Alexander still couldn’t.  By Friday morning all of the sites were fully accessible again from locations around the globe, and Arts IT Support got back to me with a cause for the issue.  Apparently a server in the Boyd Orr that controls DNS records for the University had gone wrong and sent out garbled instructions to other DNS servers around the world, which knocked out access to our sites even though the sites themselves were working perfectly.

I spent the rest of the week working on the front-end for the Scotland data for the Comparative Kingship project, a task that I’d begun before I went away on my holiday.  I managed to complete an initial version of the Scotland front-end, which involved taking the front-end from one of the existing place-names websites (e.g. https://kcb-placenames.glasgow.ac.uk/) and adapting it.  I had to make a number of adaptations, such as ensuring that two parallel interfaces and APIs could function on one site (one for Scotland, one for Ireland), updating a lot of the site text, creating a new, improved menu system and updating the maps so that they defaulted to the new area of research.  I also needed to add in facilities to search, return data for and display new Gaelic fields, e.g. Gaelic versions of place-names and historical forms.  This meant updating the advanced search to add in a new ‘language’ choice option, to enable a user to limit their search to just English or Gaelic place-name forms or historical forms.  This in turn meant updating the API to add in this additional option.
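In outline, the new option flows from the search form through to the API along the lines of the sketch below.  This is illustrative only; the endpoint and parameter names are not the project’s actual API:

```typescript
// Hypothetical sketch of passing the new 'language' choice to the API.
type LanguageOption = 'english' | 'gaelic' | 'both';

async function searchPlaceNames(term: string, language: LanguageOption): Promise<unknown> {
  const params = new URLSearchParams({ search: term, language });
  const response = await fetch(`/api/scotland/placenames?${params.toString()}`);
  return response.json();
}

// e.g. limit the search to Gaelic place-name forms only
searchPlaceNames('baile', 'gaelic').then(results => console.log(results));
```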

An additional complication came when I attempted to grab the parish boundary data, which for previous projects I’d successfully exported from the Scottish Government’s Spatial Data website (https://www.spatialdata.gov.scot/geonetwork/srv/eng/catalog.search#/metadata/c1d34a5d-28a7-4944-9892-196ca6b3be0c) via a handy API (https://maps.gov.scot/server/rest/services/ScotGov/AgricultureEnvironment/MapServer/1/query).  However, the parish boundary data was not being returned with latitude / longitude pairs marking the parish shape, but used esriMeters instead.  I found someone else who wanted to convert esriMeters into lat/lng (https://gis.stackexchange.com/questions/54534/how-can-i-convert-esrimeters-to-lat-lng) and one of the responses was that with an ArcGIS service (which the above API appears to be) you should be able to set the ‘output spatial reference’, with the code 4326 being used for WGS84, which would give lat/lng values.  The API form does indeed have an ‘Output Spatial Reference’ field, but unfortunately it doesn’t seem to do anything.  I did lots of further Googling and tried countless different ways of entering the code, but nothing changed the output.
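For reference, if the esriMeters values are Web Mercator (EPSG:3857) metres, which is an assumption on my part but is the usual projection for ArcGIS services, the conversion to WGS84 could also be done after the data is retrieved, using the standard spherical Mercator inverse (this is what setting the output spatial reference to 4326 should have done on the server):

```typescript
// Convert Web Mercator (EPSG:3857) metres to WGS84 lat/lng, assuming the
// esriMeters values really are Web Mercator coordinates.
const EARTH_RADIUS = 6378137; // metres, the sphere used by Web Mercator

function mercatorToLatLng(x: number, y: number): { lat: number; lng: number } {
  const lng = (x / EARTH_RADIUS) * (180 / Math.PI);
  const lat = (2 * Math.atan(Math.exp(y / EARTH_RADIUS)) - Math.PI / 2) * (180 / Math.PI);
  return { lat, lng };
}

// e.g. a point in central Scotland
console.log(mercatorToLatLng(-433000, 7600000)); // roughly { lat: 56.2, lng: -3.9 }
```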

Eventually I gave up and tried an alternative approach.  The site also provides the parish data as an ESRI Shapefile (https://maps.gov.scot/ATOM/shapefiles/SG_AgriculturalParishes_2016.zip) and I wondered whether I could plug this into a desktop GIS package and use it to migrate the coordinates to lat/lng.  I installed the free GIS package QGIS (https://www.qgis.org/en/site/forusers/download.html) and after opening it I went to the ‘Layer’ menu, selected ‘Add Layer’, then ‘Add Vector Layer’, then selected the zip file and pressed ‘Add’, at which point all of the parish data loaded in, allowing me to select a parish and view its details.  What I then needed to do was find a means of changing the spatial reference and saving a geoJSON file.  After much trial and error I discovered that in the ‘Layer’ menu there is a ‘Save as’ option.  This allowed me to specify the output format (geoJSON) and change the ‘CRS’, which is the ‘Coordinate Reference System’.  In the drop-down list I located EPSG:4326 / WGS84 and selected it.  I then specified a filename (the folder defaults to a Windows system folder and needs to be updated too), pressed ‘OK’, and after a long wait the geoJSON file was generated, with latitude and longitude values for all parishes.  Phew!  It was quite a relief to get this working.

With access to a geoJSON file containing parishes with lat/lng pairings I could then find and import the parishes that we needed for the current project, of which there were 28.  It took a bit of time to grab all of these, and I then needed to figure out where the three-letter acronym for each parish should be displayed, which I worked out using the National Library of Scotland’s parish boundaries map (https://maps.nls.uk/geo/boundaries/), which helpfully displays lat/lng coordinates for your cursor position in the bottom right.  With all of the parish boundary data in place the infrastructure for the Scotland front-end is now more or less complete and I await feedback from the project team.  I will begin work on the Ireland section next, which will take quite some work as the data fields are quite different.  I’m only going to be working a total of four days over the next two weeks (probably as half-days) so my reports for the next couple of weeks are likely to be a little shorter!

Week Beginning 21st June 2021

This was a four-day week for me as I’m off on Friday and will be off all of next week too.  A big thing I ticked off my ‘to do’ list this week was completing work on the ‘Browse’ facility for the Anglo-Norman Textbase, featuring each text on its own continuous page rather than split into sometimes hundreds of individual pages.  I finished updating the way footnotes work, and they are now renumbered starting at [1] on each page of each text no matter what format they originally had.  All of the issues I’d noted about footnote numbers in my previous couple of blog posts have now been addressed (e.g. numbering out of sequence, numbers getting erroneously repeated).
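The renumbering itself boils down to something like the simplified sketch below, which replaces each footnote marker on a page with the next sequential number.  The marker pattern shown is illustrative only and not the Textbase’s actual markup:

```typescript
// Much-simplified sketch: every footnote marker on a page gets the next
// sequential [1], [2], [3]... regardless of what the source originally used.
function renumberFootnotes(pageHtml: string): string {
  let counter = 0;
  return pageHtml.replace(/<sup class="footnote">[^<]*<\/sup>/g, () => {
    counter += 1;
    return `<sup class="footnote">[${counter}]</sup>`;
  });
}
```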

With the footnotes in place I then went through each of the 77 texts to check their layout, which took quite some time but also raised a few issues that needed to be fixed.  The biggest thing was that I needed to regenerate the page number data (used in the ‘jump to page’ feature), as I realised the previously generated data was including all <pb> tags, but some of the texts, such as ‘indentures’, use <pb> to mean something else.  For example, ‘<pb ed="MS" n="Dorse"/>’ is not an actual page break and there are numerous occurrences of these throughout the text, resulting in lots of ‘Dorse’ options in the ‘jump to page’ list.  Instead I limited the page breaks to just those that have ‘ed="base"’ in them, e.g. ‘<pb n="49" ed="base"/>’, and this seems to have done the trick.
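The filter amounts to something along these lines (a simplified illustration rather than the actual Textbase code):

```typescript
// Only <pb> tags with ed="base" are treated as real page breaks when
// building the 'jump to page' list, so tags like <pb ed="MS" n="Dorse"/>
// are ignored.
function extractPageNumbers(xml: string): string[] {
  const pages: string[] = [];
  for (const tag of xml.match(/<pb\b[^>]*>/g) ?? []) {
    if (!/ed="base"/.test(tag)) continue;   // skip non-base page breaks
    const n = tag.match(/n="([^"]+)"/);
    if (n) pages.push(n[1]);
  }
  return pages;
}

// extractPageNumbers('<pb ed="MS" n="Dorse"/><pb n="49" ed="base"/>') returns ['49']
```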

I also noticed some issues with paragraph and table tags in footnotes causing the notes to display in the wrong place or only partially, and the ‘Dorse’ issue was also resulting in footnotes sometimes getting added to the wrong page.  Thankfully I managed to fix these issues, and as far as I can tell that’s the ‘browse’ facility of the Textbase complete.  The editors don’t want to launch the Textbase until the search facilities have also been developed, so it’s going to be a while until they’re actually available, what with summer holidays and commitments to other projects.

Also this week I continued to work on the Books and Borrowing project, having an email conversation with the digitisers at the NLS about file formats and methods of transferring files, and making further updates to the CMS to add features and make things run quicker.  I managed to reduce the number of database calls on the ‘Books’ tab in the library view again, which should mean the page loads faster.  Previously all book holding records were returned and then a separate query was executed for each one to count its borrowings; I’ve now nested the count query in the initial query.  For St Andrews, with its 7471 books, this cuts out 7471 individual queries.
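It’s essentially the standard fix for an ‘N+1 queries’ problem.  An illustrative version of the query, with hypothetical table and column names rather than the project’s actual schema, would be:

```typescript
// Illustrative only: the per-holding borrowing count becomes a correlated
// subquery inside the main query, so the page issues a single query rather
// than an extra query per book holding.
const booksWithBorrowingCounts = `
  SELECT h.holding_id,
         h.title,
         (SELECT COUNT(*)
            FROM borrowings b
            JOIN book_items i ON b.item_id = i.item_id
           WHERE i.holding_id = h.holding_id) AS borrowing_count
    FROM book_holdings h
   WHERE h.library_id = ?
   ORDER BY borrowing_count DESC`;
```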

I’d realised that the ‘borrowing records’ count column in this ‘Books’ table wasn’t actually a count of borrowing records at all, but a count of the number of book items that have been borrowed for the book holding.  I figured out a way to return a count of borrowing records instead and replaced the old way with the new one, so the ‘Borrowing Records’ column now does what it should do.  This means the numbers listed have changed, e.g. ‘Universal History’ now has 177 borrowing records rather than 269 and is no longer the most borrowed book holding at St Andrews.  I also changed the popup so that each borrowing record only appears once (e.g. David Gregory on 1748-6-7 now only has one borrowing record listed).  I added a further ‘Total borrowed items’ column to hold the information that was previously in the ‘Borrowing Records’ column, and it’s possible to order the table by this column too.  I also noticed that I’d accidentally removed columns displaying additional fields from the table, so I have reinstated these; for St Andrews this means the ‘Classmark’ column is now back in the table.  I also realised that my new nested count queries were not limiting their counts when a specific register was selected, so I updated them to take this into consideration too.

Also this week I updated all of the WordPress sites I manage to the latest version and ensured all plugins were updated too.  I then began working on the public interfaces for the Comparative Kingship place-names project, which will have separate interfaces for its Scotland and Ireland data.  So far I’ve modified the existing place-names API so that it works with a database table prefix and got the API working for the Scotland data.  I then began working on the front-end that connects to this API and have managed to get the ‘browse’ option sort of working, although there are still some issues with layout and JavaScript due to the site using a different theme to the other place-names sites.  I’ll continue looking into this once I’m back from my holidays on the 5th of July.