Week Beginning 6th September 2021

I spent more than a day this week preparing my performance and development review form.  It’s the first time there’s been a PDR since before covid and it took some time to prepare everything.  Thankfully this blog provides a good record of everything I’ve done so I could base my form almost entirely on the material found here, which helped considerably.

Also this week I investigated and fixed an issue with the SCOTS corpus for Wendy Anderson.  One of the transcriptions of two speakers had the speaker IDs the wrong way round compared to the IDs in the metadata.  This was slightly complicated to sort out as I wasn’t sure whether it was better to change the participant metadata to match the IDs used in the text or vice versa.  It turned out to be very difficult to change the IDs in the metadata as they are used to link numerous tables in the database, so instead I updated the text that’s displayed.  Rather strangely, the ‘download plain text’ file contained different incorrect IDs.  I fixed this as well, but it does make me worry that the IDs might be wrong in other plain text transcriptions too.  However, I looked at a couple of others and they seem ok, so perhaps it’s an isolated case.

I was contacted this week by a lecturer in English Literature who is intending to put a proposal together for a project to transcribe an author’s correspondence, and I spent some time writing a lengthy email with some helpful advice.  I also spoke to Jennifer Smith about her ‘Speak for Yersel’ project that’s starting this month, and we arranged to have a meeting the week after next.  I also spent quite a bit of time continuing to work on mockups for the STAR project’s websites based on feedback I’d received on the mockups I completed last week.  I created another four mockups with different colours, fonts and layouts, which should give the team plenty of options to choose from.  I also received more than a thousand new page images of library registers for the Books and Borrowing project, which I processed and uploaded to the server.  I’ll need to generate page records for them next week.

Finally, I continued to make updates to the Textbase search facilities for the Anglo-Norman Dictionary.  I updated genre headings to make them bigger and bolder, with more of a gap between each heading and the preceding items.  I also added a larger indent to the items within a genre and reordered the genres based on a new suggested order.  For each book I included the siglum as a link through to the book’s entry on the bibliography page, and in the search results, where a result’s page number contains an underscore, the reference now displays volume and page number (e.g. 3_801 displays as ‘Volume 3, page 801’).  I updated the textbase text page so that page dividers in the continuous text also display volume and page in such cases.

Highlighted terms in the textbase text page no longer have padding around them (which was causing what looked like spaces when a term appears mid-word).  The text highlighting is unfortunately a bit of a blunt instrument, as one of the editors discovered by searching for the terms ‘le’ and ‘fable’: the first term is located and highlighted first, then the second.  The ‘le’ in ‘fable’ is therefore highlighted during the first sweep, and ‘fable’ itself then isn’t highlighted because it has already had the markup for the ‘le’ highlighting added to it and no longer matches ‘fable’.  Also, ‘le’ matches some HTML tags buried in the text (‘style’), which breaks the HTML and is why some raw HTML gets displayed.  I’m not sure much can be done about any of this without a massive reworking of things, but it’s only an issue when searching for fragments like ‘le’ rather than actual content words, so hopefully it’s not such a big deal.
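
To illustrate the mechanism, here is a minimal sketch of the kind of sequential string replacement that produces the behaviour described above (this is not the site’s actual highlighting code, just the general shape of the problem):

// Each term is wrapped in markup in turn.  Once 'le' has been wrapped,
// the string 'fable' no longer exists in the text, so the second pass
// finds nothing to highlight.
$text  = 'une fable de le romanz';
$terms = ['le', 'fable'];
foreach ($terms as $term) {
    $text = str_replace($term, '<span class="highlight">' . $term . '</span>', $text);
}
// The 'le' inside 'fable' gets wrapped on the first pass, so the second
// pass never finds 'fable' to highlight.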

The editor also wondered whether it would be possible to add in an option for searching for and viewing multiple terms at once, but this would require me to rework the entire search and it’s not something I want to tackle if I can avoid it.  If a user wants to view the search results for different terms they can select two terms then open the full results in a new tab, repeating the process for each pair of terms they’re interested in, switching from tab to tab as required.  Next week I’ll need to rename some of the textbase texts and split one of the texts into two separate texts, which is going to require me to regenerate the entire dataset.

Week Beginning 16th August 2021

I continued to work on the new textbase search facilities for the Anglo-Norman Dictionary this week.  I completed work on the required endpoints for the API, creating the facilities that would process a search term (with optional wildcards), limit the search to selected books and or genres and return either full search results in the case of an exact search for a term or a list of possible matching terms and the number of occurrences of each term.  I then worked on the front-end to enable a query to be processed and submitted to the API based on the choices made by the user.

By default any text entered will match any term that contains the text – e.g. enter ‘jour’ (without apostrophes) and you’ll find all forms containing the characters ‘jour’ anywhere e.g. ‘adjourner’, ‘journ’.  If you want to do an exact match you have to use double quotes – “jour”.  You can also use an asterisk at the beginning or end to match forms starting or ending with the term – ‘jour*’ and ‘*jour’ or an asterisk at both ends ‘*jour*’ will only find forms that contain the term somewhere in the middle.  You can also use a question mark wildcard to denote any single character, e.g. ‘am?n*’ will find words beginning ‘aman’, ‘amen’ etc.
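
As a rough illustration of how this syntax maps onto a database query, here is a sketch of the kind of term-to-pattern conversion involved, assuming the search is ultimately run as a MySQL LIKE query (the function name and the exact handling are illustrative rather than the site’s actual code):

// Convert the search syntax described above into a MySQL LIKE pattern.
function termToLikePattern(string $term): string {
    if (preg_match('/^"(.*)"$/', $term, $m)) {
        return str_replace('?', '_', $m[1]);              // "jour" = exact form
    }
    $term   = str_replace('?', '_', $term);               // ? = any single character
    $starts = substr($term, 0, 1) === '*';
    $ends   = substr($term, -1) === '*';
    $core   = trim($term, '*');
    if ($starts && $ends) { return '_%' . $core . '%_'; } // *jour* = mid-word only
    if ($ends)            { return $core . '%'; }         // jour*  = starts with
    if ($starts)          { return '%' . $core; }         // *jour  = ends with
    return '%' . $core . '%';                             // bare term = contains anywhere
}
// e.g. termToLikePattern('am?n*') gives 'am_n%'

The resulting pattern would then be used in something like WHERE word_stripped LIKE :pattern, with ‘word_stripped’ standing in for whatever the stripped word column is actually called.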

If your selected form in your selected books / genres matches multiple forms then an intermediary page bringing up a list of matching forms and a count of the number of times each form appears will be displayed.  This is the same as how the ‘translation’ advanced search works, for example, and I wanted to maintain a consistent way of doing things across the site.  Select a specific form and the actual occurrences of each item in the texts will appear.  Above this list is a ‘Select another form’ button that returns you to the intermediary page.  If your search only brings back one form the intermediary page is skipped, and as all selection options appear in the URL it’s possible to bookmark / cite the search results too.

Whilst working on this I realised that I’d need to regenerate the data, as it became clear that many words have been erroneously joined together due to there being no space between words when one tag is closed and a following one is opened.  When the tags are then stripped out the forms get squashed together, which has led to some crazy forms such as ‘amendeezreamendezremaundez’.  Previously I’d not added spaces between tags as I was thinking that a space would have to be added before every closing tag (e.g. ‘</’ becomes ‘ </’) and this would potentially mess up words that have tags in them, such as superscript tags in names like McDonald.  However, I realised I could instead do a find and replace to add spaces between a closing tag and an opening tag (‘><’ becomes ‘> <’), which would not mess up individual tags within words and wouldn’t have any further implications, as I strip out all additional spaces when processing the texts for search purposes anyway.
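
In code terms the fix is a single replacement run over each text before the tags are stripped (a sketch; the variable name is illustrative):

// Add a space between a closing tag and an immediately following opening tag.
// Tags inside words (e.g. superscripts in names) are left untouched, and any
// extra spaces this creates get stripped out later anyway.
$xml = str_replace('><', '> <', $xml);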

I also decided that I should generate the ‘key-word in context’ (KWIC) for each word and store this in the database.  I was going to generate this on the fly every time a search results page was displayed, but it seems more efficient to generate and store it once rather than do it every time.  I therefore updated my data processing script to generate the KWIC for each of the 3.5 million words as they were extracted from the texts.  This took some time to both implement and execute.  I decided to pull out the 10 words on either side of the term, using the ‘word order’ column that gets generated as each page is processed.  Some complications arose in cases where the term is before the tenth word on the page or there are fewer than ten words after the term on the page.  In such cases the script needed to look at the page before or after the current page and fill out the KWIC with the appropriate words from those pages.
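
A minimal sketch of the extraction logic, assuming each page’s words have already been loaded into arrays ordered by the ‘word order’ column (the function and variable names are illustrative, and the real script obviously does more than this):

// Pull out up to $width words either side of the word at position $pos,
// spilling over onto the previous / next page when the term sits near a page edge.
function kwic(array $pageWords, int $pos, array $prevWords, array $nextWords, int $width = 10): array {
    $left  = array_slice($pageWords, max(0, $pos - $width), min($width, $pos));
    $right = array_slice($pageWords, $pos + 1, $width);
    if (count($left) < $width && $prevWords) {
        $left = array_merge(array_slice($prevWords, -($width - count($left))), $left);
    }
    if (count($right) < $width && $nextWords) {
        $right = array_merge($right, array_slice($nextWords, 0, $width - count($right)));
    }
    return ['kwic_left' => implode(' ', $left), 'kwic_right' => implode(' ', $right)];
}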

With the updates to data processing in place and a fair bit of testing of the KWIC facility carried out, I re-ran my scripts to regenerate the data and all looked good.  However, after inserting the KWIC data the querying of the tables slowed to a crawl.  On my local PC queries which were previously taking 0.5 seconds were taking more than 10 seconds, while on the server execution time was almost 30 seconds.  It was really baffling as the only difference was the search words table now had two additional fields (KWIC left and KWIC right), neither of which were being queried or returned in the query.  It seemed really strange that adding new columns could have such an effect if they were not even being used in a query.  I had to spend quite a bit of time investigating this, including looking at MySQL settings such as key buffer size and trying again to change storage engines, switching from MyISAM to InnoDB and back again to see what was going on.  Eventually I looked again at the indexes I’d created for the table, and decided to delete them and start over, in case this somehow jump-started the search speed.  I previously had the ‘word stripped’ column indexed in a multiple column index with page ID and word type (either main page or textual apparatus).  Instead I created an index of the ‘word stripped’ column on its own, and this immediately boosted performance.  Queries that were previously taking close to 30 seconds to execute on the server were now taking less than a second.  It was such a relief to have figured out what the issue was, as I had been considering whether my whole approach would need to be dropped and replaced by something completely different.
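
For the record, the change that made the difference amounts to roughly the following, expressed against an illustrative schema (the real table, column and index names differ):

// Replace the old multi-column index (stripped word + page ID + word type)
// with a single-column index on the stripped word form.
$db = new PDO('mysql:host=localhost;dbname=and_textbase;charset=utf8mb4', 'user', 'password');
$db->exec('ALTER TABLE search_words DROP INDEX idx_word_page_type');
$db->exec('ALTER TABLE search_words ADD INDEX idx_word_stripped (word_stripped)');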

As I now had a useable search facility I continued to develop the front-end that would use it.  Previously the exact match for a term was bringing up just the term in question and a link through to the page the term appeared on, but now I could begin to incorporate the KWIC text too.  My initial idea was to use a tabular layout, with each word of the KWIC in a different column, with clickable table headings that would allow the data to be ordered by any of the columns (e.g. order the data alphabetically by the first word to the left of the term).  However, after creating such a facility I realised it didn’t work very well.  The text just didn’t scan very well due to columns having to be the width of whatever the longest word in the column was, and the text took up too much horizontal space.  Instead, I decided to revert to using an unordered list, with the KWIC left and KWIC right in separate spans, with the KWIC left span right-aligned to push it up against the search term no matter what the length of the KWIC left text.  I split the KWIC text up into individual words and stored this in an array to enable each search result to be ordered by any word in the KWIC, and began working on a facility to change the order using a select box above the search results.  This is as far as I got this week, but I’m pretty confident that I’ll get things finished next week.  Here’s a screenshot of how the KWIC looks so far:

Also this week I had an email conversation with the other College of Arts developers about professional web designers after Stevie Barrett enquired about them, arranged to meet with Gerry Carruthers to discuss the journal he would like us to host, gave some advice to Thomas Clancy about mailing lists and spoke to Joanna Kopaczyk about a website she would like to set up for a conference she’s organising next year.

Week Beginning 9th August 2021

I’d taken last week off as our final break of the summer, and we spent it on the Kintyre peninsula.  We had a great time and were exceptionally lucky with the weather.  The rains began as we headed home and I returned to a regular week of work.  My major task for the week was to begin work on the search facilities for the Anglo-Norman Dictionary’s textbase, a collection of almost 80 lengthy texts for which I had previously created facilities to browse and view texts.  The editors wanted me to replicate the search options that were available through the old site, which enabled a user to select which texts to search (either individual texts or groups of texts arranged by genre), enter a single term to search (either a full match or partial match at the beginning or end of a word), select a specific term from a list of possible matches and then view each hit via a keyword in context (KWIC) interface, showing a specific number of words before and after the hit, with a link through to the full text opened at that specific point.

This is a pretty major development and I decided initially that I’d have two major tasks to tackle.  I’d have to categorise the texts by their genre and I’d have to research how best to handle full text searching including limiting to specific texts, KWIC and reordering KWIC, and linking through to specific pages and highlighting the results.  I reckoned it was potentially going to be tricky as I don’t have much experience with this kind of searching.  My initial thought was to see whether Apache Solr might be able to offer the required functionality.  I used this for the DSL’s advanced search, which searches the full text of the entries and returns snippets featuring the word, with the word highlighted and the word then highlighted throughout the entry when an entry in the results is loaded (e.g. https://dsl.ac.uk/results/dreich/fulltext/withquotes/both/).  This isn’t exactly what is required here, but I hoped that there might be further options I can explore.  Failing that I wondered whether I could repurpose the code for the Scottish Corpus of Texts and Speech.  I didn’t create this site, but I redeveloped it significantly a few years ago and may be able to borrow parts from the concordance search. E.g. https://scottishcorpus.ac.uk/advanced-search/ and select ‘general’ then ‘word search’ then ‘word / phrase (concordance)’ then search for ‘haggis’ and scroll down to the section under the map.  When opening a document you can then cycle through the matching terms, which are highlighted, e.g. https://scottishcorpus.ac.uk/document/?documentid=1572&highlight=haggis#match1.

After spending some further time with the old search facility and considering the issues I realised there are a lot of things to be considered regarding preparing the texts for search purposes.  I can’t just plug the entire texts in as only certain parts of them should be used for searching – no front or back matter, no notes, textual apparatus or references.  In addition, in order to properly ascertain which words follow on from each other all XML tags need to be removed too, and this introduces issues where no space has been entered between tags but a space needs to exist between the contents of the tags, e.g. ‘dEspayne</item><item>La charge’ would otherwise become ‘dEspayneLa charge’.

As I’d need to process the texts no matter which search facility I end up using I decided to focus on this first, and set up some processing scripts and a database on my local PC to work with the texts.  Initially I managed to extract the page contents for each required page, remove notes etc and strip the tags and line breaks so that the page content is one continuous block of text.

I realised that the old search seems to be case sensitive, which doesn’t seem very helpful.  E.g. search for ‘Leycestre’ and you find nothing – you need to enter ‘leycestre’, even though all 264 occurrences actually have a capital L.  I decided to make the new search case insensitive – so searching for ‘Leycestre’, ‘leycestre’ or ‘LEYCESTRE’ will bring back the same results.  Also, the old search limits the keyword in context display to pages.  E.g. the first ‘Leycestre’ hit has no text after it as it’s the last word on the page.  I’m intending to take the same approach as I’m processing text on a page-by-page basis.  I may be able to fill out the KWIC with text from the preceding / subsequent page if you consider this to be important, but it would be something I’d have to add in after the main work is completed.  The old search also limits the KWIC to text that’s on the same line, e.g. in a search for ‘arcevesque’ the result ‘L’arcevesque puis metre en grant confundei’ has no text before because it’s on a different line (it also chops off the end of ‘confundeisun’ for some reason).  The new KWIC will ignore breaks in the text (other than page breaks) when displaying the context.  I also realised that I need to know what to do about words that have apostrophes in them.  The old search splits words on the apostrophe, so for example you can search for arcevesque but not l’arcevesque.  I’m intending to do the same.  The old search retains both parts before and after the apostrophe as separate search terms, so for example in “qu’il” you can search for “qu” and “il” (but not “qu’il”).

After some discussions with the editor, I updated my system to include textual apparatus, stored in a separate field to the main page text.  With all of the text extracted I decided that I’d just try and make my own system initially, to see whether it would be possible.  I therefore created a script that would take each word from the extracted page and textual apparatus fields and store this in a separate table, ensuring that words with apostrophes in them are split into separate words and for search purposes all non-alphanumeric characters are removed and the text is stored as lower-case.  I also needed to store the word as it actually appears in the text, the word order on the page and whether the word is a main page word or in the textual apparatus.  This is because after finding a word I’ll need to extract those around it for the KWIC display.  After running my script I ended up with around 3.5 million rows in the ‘words’ table, and this is where I ran into some difficulties.
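
A rough sketch of the per-word processing described above (the function name is illustrative, and the real script also records whether each word comes from the main page or the textual apparatus):

// Split a page's text into words, splitting on apostrophes, and store both the
// form as it appears and a lower-cased form with non-alphanumeric characters removed.
function extractWords(string $pageText): array {
    $words = [];
    $order = 0;
    foreach (preg_split("/[\s'’]+/u", $pageText, -1, PREG_SPLIT_NO_EMPTY) as $raw) {
        $stripped = mb_strtolower(preg_replace('/[^\p{L}\p{N}]/u', '', $raw));
        if ($stripped === '') {
            continue;
        }
        $words[] = ['word' => $raw, 'word_stripped' => $stripped, 'word_order' => $order++];
    }
    return $words;
}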

I ran some test queries on the local version of the database and all looked pretty promising, but after copying the data to the server and running the same queries it appeared that the server is unusably slow.  On my desktop a query  to find all occurrences of ‘jour’, with the word table joined to the page table and then to the text table completed in less than 0.5 seconds but on the server the same query took more than 16 seconds, so about 32 times slower.  I tried the same query a couple of times and the results are roughly the same each time.  My desktop PC is a Core i5 with 32GB of RAM, and the database is running on an NVMe M.2 SSD, which no doubt makes things quicker, but I wouldn’t expect it to be 32 times quicker.

I then did some further experiments with the server.  When I query the table containing the millions of rows on its own the query is fast (much less than a second).  I added a further index to the column that is used for the join to the page table (previously it was indexed, but in combination with other columns) and then when limiting the query to just these two tables the query runs at a fairly decent speed (about 0.5 seconds).  However, the full query involving all three tables still takes far too long, and I’m not sure why.  It’s very odd as there are indexes on the joining columns and the additional table is not big – it only has 77 rows.  I read somewhere that ordering the results by a column in the joined table can make things slower, as can using descending order on a column, so I tried updating the ordering but this has had no effect.  It’s really weird – I just can’t figure out why adding the table has such a negative effect on the performance and I may end up just having to incorporate some of the columns from the text table into the page table, even though it will mean duplicating data.  I also still don’t know why the performance is so different on my local PC either.

One final thing I tried was to change the database storage type.  I noticed that the three tables were set to use MyISAM storage rather than InnoDB, which the rest of the database was set to.  I migrated the tables to InnoDB in the hope that this might speed things up, but it’s actually slowed things down, both on my local PC and the server.  The two-table query now takes several seconds while the three-table query now takes about the same, so is quicker, but still too slow.  On my desktop PC the speed has doubled to about 1 second.  I therefore reverted back to using MyISAM.

I decided to leave the issue of database speed at that point and to focus on other things instead.  I added a new ‘genre’ column to the texts and added in the required categorisation.  I then updated the API to add in this new column and updated the ‘browse’ and ‘view’ front-ends so that genre now gets displayed.  I then began work on the front-end for the search, focussing on the options for listing texts by genre and adding in the options to select / deselect specific texts or entire genres of text.  This required quite a bit of HTML, JavaScript and CSS work and made a nice change from all of the data processing.  By the end of the week I’d completed work on the text selection facility, and next week I’ll tackle the actual processing of the search, at which point I’ll know whether my database way of handling things will be sufficiently speedy.

Also this week I had a chat with Eleanor Lawson about the STAR project that has recently begun.  There was a project meeting last week that unfortunately I wasn’t able to attend due to my holiday, so we had an email conversation about some of the technical issues that were raised at the meeting, including how it might be possible to view videos side by side and how a user may choose to select multiple videos to be played automatically one after the other.

I also fixed a couple of minor formatting issues for the DSL people and spoke to Katie Halsey, PI of the Books and Borrowing project about the development of the API for the project and the data export facilities.  I also received further feedback from Kirsteen McCue regarding the Data Management Plan for her AHRC proposal and went through this, responding to the comments and generating a slightly tweaked version of the plan.

 

Week Beginning 26th July 2021

I returned to Glasgow for a more regular week of working from home, after spending a delightful time at my parents’ house in Yorkshire for the past two weeks.  I continued to work on the Comparative Kingship front-ends this week.  I fixed a couple of issues with the content management systems, such as ensuring that the option to limit the list of place-names by parish worked for historical parishes and fixing an issue whereby searching by sources was returning zero results.  Last week I’d contacted Chris Fleet at NLS Maps to ask whether we might be able to incorporate a couple of maps of Ireland that they host into our resource, and Chris got back to me with a very helpful reply, giving us permission to use the maps and also pointing out some changes to the existing map layers that I could make.

I updated the attribution links on the site, and pointed the OS six-inch map links to the NLS’s hosting on AWS.  I updated these things on the other place-name resources I’ve created too.  We had previously been using a modern OS map layer hosted by the NLS, and Chris pointed out that a more up-to-date version could now be accessed directly from the OS website (see Chris’s blog post here: https://www.ordnancesurvey.co.uk/newsroom/blog/comparing-past-present-new-os-maps-api-layers).  I followed the instructions and signed up for an OS API key, and it was then a fairly easy process to replace the OS layer with the new one.  I did the same with the other place-name resources too, and it looks pretty good.  See for example how it looks on a map showing placenames beginning with ‘B’ on the Berwickshire site: https://berwickshire-placenames.glasgow.ac.uk/place-names/?p=results&source=browse&reels_name=B*#13/55.7939/-2.2884/resultsTabs-0/code/tileOS//

With these changes and the Irish historical maps in place I continued to work on the Irish front-end.  I added in the parish boundaries for all of the currently required parishes and also added in three-letter acronyms that the researcher Nick Evans had created for each parish.  These are needed to identify the parishes on the map, as full parish names would clutter things up too much.  I then needed to manually position each of the acronyms on the map, and to do so I updated the Irish map to print the latitude and longitude of a point to the console whenever a mouse click is made.  This made it very easy to grab the coordinates of an ideal location for each acronym.

There were a few issues with the parish boundaries, and Nick wondered whether the boundary shapefiles he was using might work better.  I managed to open the parish boundary shapefile in QGIS, converted the boundary data to WGS84 (latitude / longitude) and then extracted the boundaries as a GeoJSON file that I can use with my system.  I then replaced the previous parish boundaries with the ones from this dataset, but unfortunately something was not right with the positioning.  The northern Irish ones appear to be too far north and east, with the boundary for BNT extending into the sea rather than following the coast and ARM not even including the town of Armoy, as the following screenshot demonstrates:

In QGIS I needed to change the coordinate reference system from TM65 / Irish Grid to WGS84 to give me latitude and longitude values, and I wondered whether this process had caused the error, therefore I loaded the parish data into QGIS again and added an OpenStreetMap base map to it too, and the issue with the positioning is still apparent in the original data, as you can see from the following QGIS screenshot:

I can’t quite tell if the same problem exists with the southern parishes.  I’d positioned the acronyms in the middle of the parishes and they mostly still seem to be in the middle, which suggests these boundaries may be ok, although I’m not sure how some could be wrong while others are correct as everything is joined together.  After consultation with Nick I reverted to the original boundaries, but kept a copy of the other ones in case we want to reinstate them in future.

Also this week I investigated a strange issue with the Anglo-Norman Dictionary, whereby a quick search for ‘resoler’ brings back an ‘autocomplete’ match, but then finds zero results if you click on it.  ‘Resoler’ is a cross-reference entry and works in the ‘browse’ option too.  It seemed very strange that the redirect from the ‘browse’ would work, and also that a quick search for ‘resolut’, which is another variant of ‘resoudre’ was also working.  It turns out that it’s an issue with the XML for the entry for ‘resoudre’.  It lists ‘resolut’ as a variant, but does not include ‘resoler’ as you can see:

<variant gram="imp.5">resolvez</variant>

<deviant gram="imp.5">resoylez</deviant>

<varref><reference><source siglum="Alchimie"><loc>380.5</loc></source></reference></varref>

<variant gram="p.p.">resolé</variant>

<variant>resolu</variant>

<variant>resolut</variant>

<newvargroup/>

<variant gram="p.p.pl.">resolous</variant>

<deviant gram="p.p.pl.">resouz</deviant>

<varref><reference><source siglum="Secr1"><loc>1524</loc></source></reference></varref>

<deviant>resus</deviant>

<varref><reference><source siglum="Alchimie"><loc>379.1</loc></source></reference></varref>

The search uses the variants / deviants from the XML to figure out which main entry to load from a cross reference.  As ‘resoler’ is not present the system doesn’t know what entry ‘resoler’ refers to and therefore displays no results.  I pointed this out to the editor, who changed the XML to add in the missing variant, which fixed the issue.

Also this week I responded to some feedback on the Data Management Plan for Kirsteen’s project, which took a little time to compile, and spoke to Jennifer Smith about her upcoming follow-on project for SCOSYA, which begins in September and I’ll be heavily involved with.  I also had a chat with Rhona about the ancient DSL server that we should now be able to decommission.

Finally, Gerry Carruthers sent me some further files relating to the International Journal of Scottish Theatre, which he is hoping we will be able to host an archive of at Glasgow.  It consisted of a database dump, which I imported into a local database and had a look at.  It mostly consists of tables used to manage some sort of editorial system and doesn’t seem to contain the full text of the articles.  Some of the information contained in it may be useful, though – e.g. it stores information about article titles, authors, the issues articles appear in, the original PDF filenames for each article etc.

In addition, the full text of the articles is available as both PDF and HTML in the folder ‘1>articles’.  Each article has a numerically numbered folder (e.g. 109) that contains two folders: ‘public’ and ‘submission’.  ‘public’ contains the PDF version of the article.  ‘submission’ contains two further folders: ‘copyedit’ and ‘layout’.  ‘copyedit’ contains an HTML version of the article while ‘layout’ contains a further PDF version.  It would be possible to use each HTML version as a basis for a WordPress version of the article.  However, some things need to be considered:

Does the HTML version always represent the final published version of the article?  The fact that it exists in folders labelled ‘submission’ and ‘copyedit’ and not ‘public’ suggests that the HTML version is likely to be a work in progress version and editorial changes may have been made to the PDF in the ‘public’ folder that are not present in the HTML version.  Also, there are sometimes multiple HTML versions of the article.  E.g. in the folder ‘1>articles>154>submission>copyedit’ there are two HTML files: ‘164-523-1-CE.htm’ and ‘164-523-2-CE.htm’.  These both contain the full text of the article but have different formatting (and may have differences in the content, but I haven’t checked this).

After looking at the source of the HTML versions I realised these have been auto-generated from MS Word.  Word generates really messy, verbose HTML with lots of unnecessary tags and I therefore wanted to see what would happen if I copied and pasted it into WordPress.  My initial experiment was mostly a success, but WordPress treats line breaks in the pasted file as actual line breaks, meaning the text didn’t display as it should.  What I needed to do in my text editor was find and replace all line break characters (\r and \n) with spaces.  I also had to make sure I only copied the contents within the HTML <body> tag rather than the whole text of the file.  After that the process worked quite well.
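
If this clean-up ends up being scripted rather than done by hand in a text editor, it would amount to something like the following sketch (the input filename is one of the examples mentioned above; the output name is illustrative):

// Keep only the contents of the <body> tag and collapse line breaks into spaces
// so WordPress doesn't treat them as actual breaks when the HTML is pasted in.
$html = file_get_contents('164-523-1-CE.htm');
if (preg_match('/<body[^>]*>(.*)<\/body>/is', $html, $m)) {
    $html = $m[1];
}
$html = str_replace(["\r\n", "\r", "\n"], ' ', $html);
file_put_contents('164-523-1-CE-clean.html', $html);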

However, there are other issues with the dataset.  For example, article 138 only has Word files rather than HTML or PDF files and article 142 has images in it, and these are broken in the HTML version of the article.  Any images in articles will probably have to be manually added in during proofreading.  We’ll need to consider whether we’ll have to get someone to manually migrate the data, or whether I can write a script that will handle the bulk of the process.

I had my second vaccination jab on Wednesday this week, which thankfully didn’t hit me as hard as the first one did.  I still felt rather groggy for a couple of days, though.  Next week I’m on holiday again, this time heading to the Kintyre peninsula to a cottage with no internet or mobile signal, so I’ll be unreachable until the week after next.

Week Beginning 19th July 2021

This was my second and final week staying at my parents’ house in Yorkshire, where I’m working a total of four days over the two weeks.  This week I had an email conversation with Eleanor Lawson about her STAR project, which will be starting very shortly.  We discussed the online presence for the project, which will be split between a new section on the Seeing Speech website and an entirely new website, the project’s data and workflows and my role over the 24 months of the project.  I also created a script to batch process some of the Edinburgh registers for the Books and Borrowing project.  The page images are double spreads and had been given a number for both the recto and the verso (e.g. 1-2, 3-4), but the student registers only ever use the verso page.  I was therefore asked to write a script to renumber all of these (e.g. 1-2 becomes 1, 3-4 becomes 2), which I created and executed on a test version of the site before applying to the live data.
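
The renumbering itself boils down to a simple calculation (a sketch only; the real script updated the project’s page records rather than working on strings in isolation, and the function name is illustrative):

// A double-spread numbered '1-2' becomes page 1, '3-4' becomes page 2, and so on.
function spreadToPage(string $spread): int {
    [$first] = array_map('intval', explode('-', $spread));
    return intdiv($first + 1, 2);
}
// spreadToPage('1-2') returns 1; spreadToPage('3-4') returns 2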

I also continued to make tweaks to the front-ends for the Comparative Kingship project.  I fixed a bug with the Elements glossary of the Irish site, which was loading the Scottish version instead.  I also contacted Chris Fleet at NLS Maps to enquire about using a couple of their historical Irish maps with the site.  I also fixed the ‘to top’ button in the CMSes, which wasn’t working; the buttons now actually scroll the page to the top as they should.  I also fixed some issues relating to parish names no longer being unique in the system (e.g. the parish of Gartly is in the system twice due to it changing county at some point).  This was causing issues with the browse option as data was being grouped by parish name.  Changing the grouping to the parish ID thankfully fixed the issue.

I also had a chat with Ann Fergusson at the DSL about multi-item bibliographical entries in the existing DSL data.  These are being split into individual items, and a new ‘sldid’ attribute in the new data will be used to specify which item in the old entry the new entry corresponds to.  We agreed that I would figure out a way to ensure that these IDs can be used in the new website once I receive the updated data.

My final task of the week was to investigate a problem with Rob Maslen’s City of Lost Books blog (https://thecityoflostbooks.glasgow.ac.uk/), which went offline this week and only displayed a ‘database error’.  Usually when this happens it’s a problem with the MySQL database and it takes down all of the sites on the server, but this time it was only Rob’s site that was being affected.  I tried accessing the WP admin pages and this gave a different error about the database being corrupted.  I needed to update the WordPress config file to add the line define('WP_ALLOW_REPAIR', true); and upon reloading the page WordPress attempted to fix the database.  After doing so it stated that “The wp_options table is not okay. It is reporting the following error: Table is marked as crashed and last repair failed. WordPress will attempt to repair this table… Failed to repair the wp_options table. Error: Wrong block with wrong total length starting at 10356”.  WordPress appeared to regenerate the table, as after this the table existed and was populated with data, and the blog went online again and could be logged into.  I’ll have to remember this if it happens again in future.
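
For future reference, a minimal sketch of the wp-config.php change: once the line is in place the repair screen becomes available at /wp-admin/maint/repair.php, and the line should be removed again once the repair is done.

// Add above the "That's all, stop editing!" comment in wp-config.php
define('WP_ALLOW_REPAIR', true);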

Next week I’ll be back in Glasgow.

Week Beginning 12th July 2021

I’m down visiting my parents in Yorkshire for the first time in 18 months this week and next, working a total of four days over the two-week period.  This week I mainly focussed on the Irish front-end for the Comparative Kingship place-names project, while also adding in some updates to the Scotland system that I recently set up, such as making the Gaelic forms of the classification codes visible, adding options to browse Gaelic forms of place-names and historical forms to the ‘Browse’ facility, and ensuring the other place-name and historical form browses only bring back English forms.

The Irish system is mostly identical to the Scottish system, but I did need to make some changes that took a bit of time to implement.  As the place-names covered appear to be much more geographically spread out, I’ve allowed the map to be zoomed out further.  I’ve also had to remove the modern OS and historical OS map layers as they don’t cover Ireland, so currently there are only three map layers available (the default view, satellite view and satellite view with labels).  The Ordnance Survey of Ireland provides access to some historical map layers here: https://geohive.maps.arcgis.com/apps/webappviewer/index.html?id=9def898f708b47f19a8d8b7088a100c4 but their terms and conditions makes it clear that you can’t use the maps on another online resource.  However, there are a couple of Irish maps on the NLS website, the Bartholomew Quarter-Inch 1940 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=13&b=1) and the GSGS One-Inch 1941-3 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=14&b=1) and we could investigate integrating these as the NLS maps people have always been very helpful.

I also updated the map pop-ups to include the new Irish data fields, such as baronies, townlands and the different map types.  Both English and Gaelic forms of things like parishes, baronies and classification codes are displayed throughout the site and on the Record page the ITM figures also appear.  I updated the ‘Browse’ page so that it features baronies and the element glossary should work, but I haven’t tested it out as there is no data yet.  The Advanced search features a selectable list of baronies and currently a simple textbox for townlands.  I may change this to an autocomplete (whereby you start typing and townlands that include the letters appear in a selectable list), or I may leave it as it is, meaning multiple townlands can be searched for and wildcard characters can be used.

I managed to locate downloadable files containing parish boundaries for Ireland here: https://www.townlands.ie/page/download/ and have added these to the data for the two parishes that currently contain data.  I haven’t added in any other parish boundaries yet as there are over 200 parishes in our database and I don’t want to have to manually add in the boundaries for all of these if it won’t be necessary.  Also, on the Scotland maps the three-letter acronym appears in the middle of each parish in order to identify it, but the Irish parishes don’t have TLAs so currently don’t have any labels.  The full name of each parish will clutter up the map too much if I use it, so I’m not sure what we could do to label the parishes.

Also this week I responded to some feedback about the Data Management Plan for Kirsteen McCue’s proposal and created a slightly revised version.  I also had an email conversation with Eleanor Lawson about her new speech project and how the web presence for the project may function.  Finally, I made some tweaks to the Dictionary of the Scots Language, updating the layout of the ‘Contact’ page and updating the bibliography page on the new website so that URLs that use the old style IDs will continue to work.  I also had a chat with Rhona Alcorn about some new search options that we are going to add in to the new site before it goes live, although probably not until the autumn.

 

Week Beginning 14th June 2021

I divided my time this week primarily into three.  Firstly, I wrote a Data Management Plan for Craig Lamont’s proposal.  I can’t really say much about it at this stage, but it took about a day to write, including several email conversations with Craig.

Secondly, I made some updates to the Books and Borrowing CMS.  This took some time to get started on as my access to the Stirling VPN had been cancelled, and without such access I couldn’t access the project’s server.  Thankfully with the help of Stirling’s Information Services people my access was reinstated on Monday and I could start working on the updates.  After familiarising myself with the systems again I had some further questions about the updates suggested by Matt Sangster, resulting in an email conversation and a suggestion by him that he discusses things further with the team next Monday.  Gerry McKeever had suggested some further updates, though, and I worked on these.

The first issue was the ordering of the ‘Books’ tab when viewing a library.  This list of books (of which there can be thousands) is paginated with 200 books per page, with options to order the table by a variety of columns (e.g. book name and number of associated borrowings).  However, the ordering was only being applied to the subset of 200 books on the current page rather than the whole set.

I updated the page so that the complete dataset is reordered rather than just the 200 records that are displayed per page.  However, this has a massive performance hit that wipes out the page loading speed increase that was gained from paginating the list in the first place.  To reorder the data the page needs to load the entire dataset and then reorder it.  In the case of St Andrews this means that more than 7,200 book records need to be loaded, with multiple sub-queries for each of these records required to bring back the counts of borrowing records and information about book items, book editions and authors.

With the previous paginated way of viewing the data the CMS was taking a couple of seconds to load the ‘Books’ page for St Andrews.  With the new update in place it was taking more than 1 minute and 20 seconds for the page to load.  When running the exact same code and database on my local PC it was taking 10 seconds to load, so presumably the spec of my local PC is considerably better than the server (either that or it’s having to handle a lot of other database requests at the same time, which is affecting performance).

I had considered storing the data in a session variable, which would mean after the first horrendous load time the data would be ready and waiting in the server’s memory to be used until you closed your browser, however, as the data is continuously being worked on this would mean the information displayed would possibly not accurately reflect the current state of the data, which may be confusing.  What I am planning on doing when I develop the front-end is to create a cached version of the data, so counts of borrowing records etc won’t need to be recalculated each time a user queries something, but creating such a cached version wouldn’t really work whilst the data is still being worked on.  I could set the system up to refresh the cache every night, but that would mean the CMS would again not reflect the current state of the data, which isn’t good.  I also updated the ‘Borrowers’ page to allow full reordering of data here too.  This isn’t quite as slow as the books page.

I spoke to the server admin people to see if they could think of a reason why the server loading speed was so much worse than on my local PC.  They reckoned it was because the database is stored on a different server to the code, and the sheer number of individual queries being sent meant that small delays in connecting between servers were mounting up.  I reworked the code somewhat to try and streamline the number of database queries that need to be made.  Only two of the columns can now be selected to order the data by: Book Holding title and number of borrowing records.  I’m hoping these are the most important anyway.  I have updated the queries so that the bulk of the data is only retrieved for the 200 records that are on the visible page (as used to be the case) with only a single query of the holding table and then a further query for each relevant holding record to bring back a count of its borrowing records now being made on the full dataset (e.g. for St Andrews for each of the 7,391 books).  This has made a huge difference and has brought the page loading times back down to a more acceptable few seconds.
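
As an aside, per-book borrowing counts are the sort of thing that can be pulled back in a single grouped query rather than one sub-query per record.  A sketch along those lines, with entirely illustrative table and column names and assuming $db is an existing PDO connection (the CMS’s actual schema and approach differ):

// Fetch a borrowing count for every holding in a library in one query,
// so the full list can be ordered by count without a query per book.
$sql = 'SELECT h.holding_id, h.title, COUNT(b.borrowing_id) AS borrowing_count
          FROM book_holdings h
          LEFT JOIN borrowings b ON b.holding_id = h.holding_id
         WHERE h.library_id = :library
         GROUP BY h.holding_id, h.title
         ORDER BY borrowing_count DESC';
$stmt = $db->prepare($sql);
$stmt->execute(['library' => $libraryId]);
$counts = $stmt->fetchAll(PDO::FETCH_ASSOC);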

Gerry’s second request was that when the book list is limited to a specific register the counts of borrowings updated to reflect this.  I updated the code so that counts of borrowing records on both the ‘Books’ and ‘Borrowers’ tabs get limited to just the selected register and thankfully there was no performance hit associated with this update.

The third project of the week for me was the Anglo-Norman Dictionary.  As mentioned in last week’s lengthy post, I had discovered a fourth version of the texts for the textbase that appear to be the ones that the old site actually used.  I spent most of Tuesday splitting this fourth version of the texts into individual pages and preparing them for display.  They had new issues that needed to be tackled (following the previous process resulted in about 2,000 fewer pages and it turned out that this was caused by some page breaks in the fourth version not having ‘n’ numbers).  By the end of the day I’d managed to get the same number of pages as with my initial version, with the new pages available via the front-end and all working with spacing issues resolved.

I discovered that the weird spacing issue that I had previously thought was an issue with the first version of the texts I was working with had actually been introduced via the ‘Tidy’ library I’d used to remove mismatched opening and closing tags from sections of the XML that I’d split into pages.  It’s really bizarre, but the library was inserting space characters and rearranging existing space characters between tags in a way that completely destroyed the integrity of the data.  After some Googling I came across this item about the issue: https://stackoverflow.com/questions/15147711/php-tidy-removes-whitespace-and-inserts-newlines and a suggested way around the issue is to enclose the XML in a <pre> tag before passing it through the Tidy library, which means the library doesn’t mess about with the layout.  The placement of spaces in a text can be of vital importance so why the library by default messes with spaces and doesn’t even provide an option to stop the library doing so is baffling.  However, the <pre> hack worked, thankfully.
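
In code terms the workaround looks something like the following sketch, assuming the page chunk is in $chunk (the ‘show-body-only’ option is just one way of getting back only the fragment rather than a full document, and may not be exactly the configuration I used):

// Wrap the chunk in <pre> so Tidy leaves the whitespace alone, repair the
// mismatched tags, then strip the wrapper off again afterwards.
$wrapped  = '<pre>' . $chunk . '</pre>';
$repaired = tidy_repair_string($wrapped, ['show-body-only' => true], 'utf8');
$chunk    = preg_replace('#^\s*<pre>|</pre>\s*$#', '', $repaired);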

However, on Wednesday I received an email from the editor Geert to say that they had received approval for the AND to display each of the textbase texts in full on one page, rather than being split up into individual pages.  This was great news, but did mean that all my work on splitting up and reformatting the pages was all for nothing.  Still, that’s the way it goes sometimes.  As the week drew to a close I began working on a new version of the textbase, and by the end of the week I had completed a preliminary version of the textbase featuring the full content of each text on one long page.  I have to say it’s a lot easier to use now and is a massive improvement on having to navigate through hundreds of individual small pages.

The contents page is pretty much the same, and still includes a ‘jump to page’ feature, although this now takes you to the relevant section of the long page rather than an individual page.  When you load a text, either by clicking on its title or selecting a page the full text will load.

I added the copyright statement to the top as well as the bottom of the text to make it more visible, and have given it a blue background for a similar reason.  There is also a ‘jump to page’ feature on this page too, which takes you directly to the appropriate section of the text.  I also added an option to show / hide notes so you can hide them to declutter the page a bit.  The individual pages are divided with a horizontal line with the page number centred in the middle of this.  Explanatory notes appear in a grey section at the foot of each page.  There are still some things I need to work on, namely to go through each text to check that the formatting is correct throughout and to fix the footnote numbering and ordering.  I think I have a plan for this, but will need to look into this next week.

Also this week I heard that a proposal involving Jane Stuart-Smith and Eleanor Lawson at QMU that I helped put together last year has been funded and is due to start in July, which is great news.  I also made a few further tweaks to the Dictionary of the Scots Language and had a chat about some new dictionaries that are going to be added to the site.

Week Beginning 7th June 2021

This week I finished an initial version of the ‘Browse Textbase’ feature for the Anglo-Norman Dictionary. Processing the XML proved to be rather tricky as I couldn’t just use the old XSLT file as it included a lot of stuff that wasn’t needed in the new site (e.g. formatting headers and footers) and gave errors when plugged directly into the new system.  For these reasons I had to adapt the XSLT.  Also, I’d split up the full XML files into chunks for each page, resulting in more than 12,700 chunks.  However, the XML often included elements that extended across pages, and when the content was extracted on a per-page basis this led to an invalid XML structure, as some tags ended up missing their closing tags, or closed without featuring an opening tag.  XSLT only works on valid XML files so I needed to find a way to fix this tag issue.  After some Googling I discovered that there is a PHP extension called Tidy that can take an invalid XML file and fix it.  What this does is to strip out all tags that don’t have an opening or closing tag, which is exactly what I wanted.  I wrote a little script that used the extension, tested it successfully on a few files and then ran all of the 12,700 pages through it.

With a full set of valid XML page files I then began work on the XSLT to display the documents as required.  This has been a very laborious process as I needed to go through each of the 77 documents and check the layout for any issues, and fix these as they cropped up.  With more than 12,700 pages I couldn’t look at each individually, but instead I generally looked at every page of the front matter, and then a random selection of pages in the main body of the text, as generally the structure is more consistent here.  I think this approach has worked well as most formatting issues were to be found in the front matter (e.g. some tables were split across multiple pages and needed table tags to be inserted at the top and bottom).

With regards to the main body of the texts the largest challenge has been getting the explanatory notes to appear correctly, as these had been tagged in at least nine different ways throughout the documents, sometimes with entirely different XML structures and content.  One possible issue is that I dealt with new XML features as they cropped up as I worked through the books, but in dealing with these features I may have inadvertently messed up how things looked in earlier books.  One example that I thankfully spotted is that I wanted <bibl> tags to start on a new line as this would make the bibliographies easier to read, but other texts have the <bibl> tag mid-sentence and my change resulted in lines breaking where they shouldn’t.

There are some other issues that have cropped up that we may still need to address.  There are many spacing issues caused by whoever tagged the documents not leaving spaces between tags, or adding spaces between tags where there shouldn’t be spaces.  It’s a bit of a strange issue as it doesn’t seem to exhibit itself on the old site, but isn’t something that is dealt with by the scripts I have access to.  I don’t know if perhaps the texts were ‘fixed’ at some point and I just don’t have access to the fixed versions.  It’s not something that can be fixed automatically (at least not without coming up with a set of rules for fixing) as it’s not always the case that a tag should always have (or not have) a space after it.  Here are some examples, with the text as displayed before the colon and the XML after:

  1. ‘M cMoroug’: M <hi rend="sup">c</hi>Moroug
  2. ‘Lettres et pétitions( Legge’: <title lang="FR" rend="italic">Lettres et p&#xE9;titions</title>( <editor>Legge</editor>
  3. ‘CDqui’: <title type="MS">CD</title>qui
  4. ‘( 17et 22)’: ( <ref target="D1396_17">17</ref>et <ref target="D1396_22">22</ref>)
  5. ‘n o2’: n <hi rend="sup">o</hi>2</ref>
  6. ‘Sire’: <hi rend="bold">S</hi>ire
  7. ‘T hepresent’: T <hi rend="sc">he</hi>present
  8. ‘Le xxx eiour ’: Le xxx <hi rend="sup">e</hi>iour

Another issue is that the speed of loading a page is erratic.  Sometimes it’s instant, other times it takes several agonising seconds.  It’s really frustrating, and it’s not caused by my code.  I’m hoping that when we get the new server (which we now have a quote for) this issue will resolve itself.  Also, some of the pages are split at different points in two of the texts.  This must be due to the structure of the XML, but despite this all of the content is still included.  In addition, a couple of texts in the old system were broken – either the navigation just did not work or page contents were displaying multiple times.  I’m afraid I didn’t make a note of which these were, but they’re all sorted in the new system anyway.

There are currently some issues with footnote numbers due to all of the different ways these are tagged (sometimes with multiple ways being used on a single page).  Some examples:

  1. If multiple ways of tagging are used in the same page this can result in footnotes appearing out of order. This can be because some notes are <note> and others are <app>.  This is also causing some issues with the numbering (e.g. there are two [1] footnotes but the first listed should actually be [3]).  This clearly needs some work, but I’m not sure how best to fix the issue.  On the old site notes of different types are given letters, but I’m not sure which letters to use for what, or whether we want to continue using letters.
  2. In some places note numbers are being displayed where they weren’t previously being displayed. I’m not sure what should be done about this – I could for example add in an option to show / hide the notes.
  3. I’ve ensured all footnotes appear on a new line rather than having some that run on one line and others (sometimes in the same page) that have their own line.
  4. Sometimes an extended form of a footnote number appears where one didn’t previously (e.g. ‘[p2n5]’ rather than just ‘[5]’).
  5. Sometimes multiple notes appear straight after each other, and currently in such cases the numbering appears correctly in the text, but in the footnotes the first number in the line is duplicated. For example [2] and [3] in the text appear as [2] and [2] in the footnotes.

After spending a lot of time over the past two weeks working through the XML texts and wondering why the old site doesn’t display the spacing errors found in the texts I had access to, I did some further investigation into this.  It would appear that the old site uses different versions of the XML files to the ones I’ve been using.  I’m not sure why there are multiple versions of the XML files, but I’ve discovered that there are XML files in the ‘reduce’ folder that Heather gave me access to a couple of weeks ago, and these are different to the ones I have been using and must have been stored somewhere else on the server.

For example, the file ‘kingscouncil.xml’ that I have been using exhibits the spacing issue, see for example ‘M <hi rend=”sup”>c</hi>Moroug‘ and ‘xxx <hi rend=”sup”>e</hi>jour’ in this snippet:


<p> <hi lang=”LA” rend=”italic”>indorsacio</hi>. Eient les supplians la garde et la mariage dedens cestes contenues, selonc la purport de ceste peticion, pour xx. s. vi. d. appaier en le Haneper pur le fyn, par les lettres patentes notre Seignour le Roy souz son grant seel en Irland en due fourme. Doune a Dyvelyn le xxx <hi rend=”sup”>e</hi>jour Doctobre, lan notre dit Seignour le Roy Richard Seconde seszisme. <anchor id=”P4A1″ type=”note”/> <note place=”foot” target=”P4A1″>The <date>30th of October 1392</date>. The regnal years of this king commenced on the <date>22nd of June</date>in each year. Here, and elsewhere throughout the Roll, the year of the present style is used, but no rectification of the day of the month has been attempted.</note>A tresreverent pere <anchor id=”P4A2″ type=”note”/> <note place=”foot” target=”P4A2″>As letters patent were to issue, Robert Archbishop of Dublin, Chancellor of Ireland, must have been the person here addressed. See enrolment No. 15, <hi rend=”italic”>infra</hi>.</note>&amp;c., comme desus.</p> <div n=”2″> <p> <note place=”omargin”> <date>A.D. 1392</date> </note>A tresnobles Justice et Consel notre Seignour le Roy en Irland supplie Johan Creef de Ballaghmoun, que comme sa ville, sa mansion, ses blees et diverses autres benes furent arses, degastes et destruys par M <hi rend=”sup”>c</hi>Moroug et autres Irrois enemys notre Seignour le Roy, comme est comme est cognuz et notifie a vous, tresnobles</p> </div>


But the ‘reduce’ folder contains two further versions of this (and every other) textbase file.  One is named ‘kingscouncil.xml’ but is different to the one I’ve been using: it has different TEIHeader data and doesn’t exhibit the spacing issue; see for example:


<p><hi lang="LA" rend="italic">indorsacio</hi>. Eient les supplians la garde et la mariage dedens cestes contenues, selonc la purport de ceste peticion, pour xx. s. vi. d. appaier en le Haneper pur le fyn, par les lettres patentes notre Seignour le Roy souz son grant seel en Irland en due fourme. Doune a Dyvelyn le xxx<hi rend="sup">e</hi> jour Doctobre, lan notre dit Seignour le Roy Richard Seconde seszisme.<anchor id="P4A1" type="note"/><note place="foot" target="P4A1">The <date>30th of October 1392</date>. The regnal years of this king commenced on the <date>22nd of June</date> in each year. Here, and elsewhere throughout the Roll, the year of the present style is used, but no rectification of the day of the month has been attempted.</note> A tresreverent pere<anchor id="P4A2" type="note"/><note place="foot" target="P4A2">As letters patent were to issue, Robert Archbishop of Dublin, Chancellor of Ireland, must have been the person here addressed. See enrolment No. 15, <hi rend="italic">infra</hi>.</note> &amp;c., comme desus.</p></div>

<div n="2"><p><note place="omargin"><date>A.D. 1392</date></note> A tresnobles Justice et Consel notre Seignour le Roy en Irland supplie Johan Creef de Ballaghmoun, que comme sa ville, sa mansion, ses blees et diverses autres benes furent arses, degastes et destruys par M<hi rend="sup">c</hi>Moroug et autres Irrois enemys notre Seignour le Roy, comme est comme est cognuz et notifie a vous, tresnobles


Finally, there is a further version named ‘kingscouncil-apps.xml’ that appears to contain just the text (no TEIHeader).  It again doesn’t exhibit the spacing issue, but in addition it seems to use different tags in places; see the tag around ‘indorsacio’, for example:


<p><term lang="LA" rend="i">Indorsacio</term>. Eient les supplians la garde et la mariage dedens cestes contenues, selonc la purport de ceste peticion, pour xx. s. vi. d. appaier en le Haneper pur le fyn, par les lettres patentes notre Seignour le Roy souz son grant seel en Irland en due fourme. Doune a Dyvelyn le xxx<hi rend="sup">e</hi> jour Doctobre, lan notre dit Seignour le Roy Richard Seconde seszisme.<anchor id="P4A1" type="note"/><note place="foot" target="P4A1">The <date>30th of October 1392</date>. The regnal years of this king commenced on the <date>22nd of June</date> in each year. Here, and elsewhere throughout the Roll, the year of the present style is used, but no rectification of the day of the month has been attempted.</note> A tresreverent pere<anchor id="P4A2" type="note"/><note place="foot" target="P4A2">As letters patent were to issue, Robert Archbishop of Dublin, Chancellor of Ireland, must have been the person here addressed. See enrolment No. 15, <hi rend="italic">infra</hi>.</note> &amp;c., comme desus.</p></div>

<div n="2"><p><note place="omargin"><date>A.D. 1392</date></note> A tresnobles Justice et Consel notre Seignour le Roy en Irland supplie Johan Creef de Ballaghmoun, que comme sa ville, sa mansion, ses blees et diverses autres benes furent arses, degastes et destruys par M<hi rend="sup">c</hi>Moroug et autres Irrois enemys notre Seignour le Roy, comme est comme est cognuz et notifie a vous, tresnobles
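
As an aside, checking which copies of a text exhibit the spacing problem doesn’t need to be done by eye – a quick scan over a folder of XML files along the following lines would flag likely candidates.  This is just a rough sketch in PHP; the folder path and the exact pattern are assumptions rather than anything taken from the old system:

<?php
// Rough sketch: flag textbase XML files containing a space immediately before
// a superscript <hi> tag, e.g. 'xxx <hi rend="sup">e</hi>jour'.
// The folder path and pattern below are illustrative assumptions only.
$folder = '/path/to/reduce';
$pattern = '/\s<hi rend="sup">/';

foreach (glob($folder . '/*.xml') as $file) {
    $count = preg_match_all($pattern, file_get_contents($file));
    if ($count > 0) {
        echo basename($file) . ': ' . $count . " possible spacing issues\n";
    }
}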


So yet again the old site has me wanting to tear my hair out in exasperation at how badly organised, maintained and thought out it is.  It’s looking like I’ll have to replace all of the content I’ve been working on over the past couple of weeks with different versions.  But the question is: which version?  Should it be the ‘apps’ version or the other one?  I realise now that the ‘apps’ version is referenced in the URLs used by the old site.  However, what is confusing is that the ‘apps’ version doesn’t include the front-matter, yet this is included on the old site, meaning the old site can’t be purely using the ‘apps’ version of the XML.  Even more strangely, the ‘kingscouncil.xml’ file in the ‘reduce’ folder has a different structure to the version published on the old site, which is in fact closer to the version of the XML I have been using.  On the old site the first page begins:


“[p.xxvi]

INTRODUCTION.

[…]

Whether the Roll…”


But the ‘reduce’ version of ‘kingscouncil.xml’ includes two previous pages:


<pb n="ix"/><div lang="EN" type="Introduction"><head>INTRODUCTION.</head>

<pb n="xxv"/><p>It may be mentioned here that the folios are all mounted on linen guards, and that no part of the parchment has been inserted into the back, and none cut away at the fore-edge, top, or bottom, of the volume.</p>

<pb n="xxvi"/><p>Whether the Roll…


Whereas the XML I’ve been using matches the published text:


<pb n="xxvi" ed="base"/><div lang="EN" type="Introduction"><head>INTRODUCTION.</head>

<p>[…]</p>

<p>Whether the Roll…


I had been intending to extract pages from the non-apps files in the ‘reduce’ folder and to present these alongside the existing pages in the front-end so the editors could look at them, but I’m encountering difficulties right from the start.  The first XML file in the data I originally had is ‘albus.xml’, which I expected to find as ‘albus-apps.xml’, yet there is no such file in the ‘reduce’ folder, nor a non-app ‘albus.xml’ file.  There are files called ‘libalbapp.xml’ and ‘libalbapp-apps.xml’, which would seem to correspond to the AND Source reference (Lib_Alb).  However, the contents of these files in no way correspond to the contents of the ‘albus.xml’ file I have, nor do they correspond to the text that is displayed on the old site at the above URL.

I can only conclude that there is yet another version of the files stored in another location that the old site uses.  It’s definitely not the same file as I have been using as the text on the old site has the spacing issue corrected.  I have done a ‘find in files’ for certain strings found in the ‘Albus’ text across all files in the ‘reduce’ folder and the text is definitely not found there.  It’s very confusing as the scripts suggest they are processing files only in this folder.  The script ‘and-getloc’ uses the variable ‘filename’ from the URL and passes this to the script ‘and-fetcher’ in the ‘reduce’ folder.  This in turn loads the file, finds and processes the required page.

As I was working through this I managed to figure things out.  It looks like I was right – there is yet another version of the files stored somewhere else that the old system actually uses.  Buried towards the end of the ‘and-fetcher’ script is this:

##############################################
## TODO !!!!
## HARDCODED TEXTS LOCATION HERE!
## SHIFT THIS TO CONSTANTS SYSTEM!!!
##
my $textpath = "/and/reduce/ready1/$text";
##
##############################################

So the texts that are actually used are in a folder called ‘ready1’ within the ‘reduce’ folder.  However, there were no subfolders in the zip file of the ‘reduce’ folder that Heather sent me a couple of weeks ago, so this fourth(!) version of the files needed to be tracked down before I could make any more progress.  Thankfully Heather managed to get access to the server again and located the additional folder, which did indeed include yet another version of the XML files.  It looks like this fourth version is the correct one: these appear to be the files that are published on the old website, with the spacing corrected and all front matter included (despite every filename ending in ‘apps’, even though the other ‘apps’ versions didn’t include the front matter).  Looking at the files discussed above:

The file ‘albus-apps.xml’ is present and includes all of the front-matter, the same as both the file I was previously working with and the old site, but with the spacing issues fixed.  The file ‘kingscouncil-apps.xml’ also appears to be structurally identical to the ‘kingscouncil’ file I was originally working with (unlike the other two versions in ‘reduce’) and has the spacing issues fixed (e.g. M<hi rend="sup">c</hi>Moroug).

So now I’ll be able to begin again with the process I started a couple of weeks ago.  It’s going to take some time again, although hopefully most of the XSLT issues will be the same as before and will already be sorted.

Also this week I read through the bib documentation for Craig Lamont’s project and had a chat with him about a data management plan, which I’ll have to work on next week.  I also fixed a couple of issues on the SCOCO website for Matthew Creasy and spoke to Mike Black about the quote for a new server, which will hopefully be purchased soon.  I gave some advice to Katie Halsey about file formats and data transfer options for a new digitisation unit that will be working with the Books and Borrowing project, and also spent some time trying to sort out access to the server at Stirling for this project as it turned out that my access privileges had been removed midway through last month.

I also fixed an issue with the bibliography search on the new DSL website.  This occurred when a search for ‘author or title’ was performed, which prefixes ‘Author: ’ or ‘Title: ’ to each entry in the autocomplete to help users differentiate between the two.  Selecting from the autocomplete list ran the search fine, as this was based on the bibliographical ID hidden in the autocomplete, but if you pressed the ‘search’ button before the selection event had fired the search used the full contents of the box – i.e. it looked for authors and titles beginning with ‘Author: ’ or ‘Title: ’.  The same thing happened if you pressed the browser’s back button from the results, as the textbox would still contain the full prefixed text.  I fixed this issue.  So it’s been a pretty busy week.
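
Going back to the autocomplete issue for a moment: one way a prefix problem like this can be handled is to strip the label from the submitted text before the query runs.  The following is only a sketch of the general idea rather than the actual DSL code (the function name and example value are my own):

<?php
// Hypothetical helper: normalise the submitted bibliography search text by
// removing the 'Author: ' or 'Title: ' label that the autocomplete adds for display.
function normaliseBibSearchTerm(string $term): string {
    return preg_replace('/^(Author|Title):\s*/', '', trim($term));
}

echo normaliseBibSearchTerm('Author: Example Author'); // prints 'Example Author'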

Week Beginning 31st May 2021

It was the late May bank holiday on Monday, so this was a four-day week.  On Tuesday I decided to try working at my office at the University – my first full day back at my office since the first lockdown began.  All went very smoothly; I didn’t meet anyone in the building and it seemed very quiet on campus generally.  The only issue was the number of updates my computer had to install, which caused some delays.  I’m probably going to try and come back to work on Tuesdays on a semi-regular basis now to see how things go.

I had some discussions with Marc and Arts IT Support this week about the possibility of purchasing a new server, and some progress is being made there.  I also responded to a query regarding the Scots Syntax Atlas that Jennifer Smith forwarded on to me and spoke to Roslyn Potter about a project that a lecturer in History needs a website for.

Other than these tasks I spent the week continuing to work on the Textbase feature of the Anglo-Norman Dictionary.  Last week I’d left off with the infrastructure in place to browse texts, display the raw XML of pages and navigate between pages.  My task for this week was to ensure that the XML displayed properly.  This proved to be rather tricky as although I had managed to get access to the XSLT file that the Textbase on the old site used to transform the XML to HTML, it included a lot of stuff that wasn’t needed in the new site (e.g. formatting headers and footers) and also gave errors when plugged directly into the new system.  For these reasons I had to adapt the XSLT.  Also, I’d split up the full XML files into chunks for each page, resulting in more than 12,000 chunks.  However, the XML often included elements that extended across pages, and when the content was extracted on a per-page basis this led to an invalid XML structure, as some tags ended up missing their closing tags, or had closing tags without a corresponding opening tag.  XSLT only works on valid XML files so I needed to find a way to fix this tag issue.  After some Googling I discovered that there is a PHP extension called Tidy (https://www.php.net/manual/en/intro.tidy.php) that can take an invalid XML file and fix it.  What this does is strip out all tags that don’t have an opening or closing tag, which is exactly what I wanted.  I wrote a little script that used the extension, tested it successfully on a few files and then ran all of the 12,000 pages through it.
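
The Tidy part needs very little code.  The following is roughly the approach, though the filenames are placeholders and the exact configuration I used may have differed slightly:

<?php
// Repair an XML page chunk whose cross-page elements were left unbalanced.
// The idea is that Tidy strips out tags that are missing their opening or
// closing counterpart, leaving a well-formed document behind.
$config = array(
    'input-xml'  => true,  // treat the input as XML rather than HTML
    'output-xml' => true,
    'wrap'       => 0      // don't re-wrap long lines
);
$broken = file_get_contents('page-chunk.xml');           // placeholder filename
$fixed  = tidy_repair_string($broken, $config, 'utf8');
file_put_contents('page-chunk-fixed.xml', $fixed);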

With a full set of valid XML page files I then began work on the XSLT to display the documents as required.  This has been a very laborious process as I needed to go through each of the more than 70 documents and check the layout for any issues, fixing these as they cropped up.  With more than 12,000 pages I couldn’t look at each individually, but instead took a random selection, a process that’s working pretty well so far.  The largest challenge was getting the explanatory notes to appear correctly, as these had been tagged in at least eight different ways throughout the documents, sometimes with entirely different XML structures and content.  So far all is looking good, and I’m about halfway through checking the documents.  I’ll continue with this task next week.

Week Beginning 24th May 2021

I had my first dose of the Covid vaccine on Tuesday morning this week (the AstraZeneca one), so I lost a bit of time whilst going to get that done.  Unfortunately I had a bit of a bad reaction to it and ended up in bed all day Wednesday with a pretty nasty fever.  I had Covid in October last year but only experienced mild symptoms and wasn’t even off work for a day with it, so in my case the cure has been much worse than the disease.  However, I was feeling much better again by Thursday, so I guess I lost a total of about a day and a half of work, which is a small price to pay if it helps to ensure I don’t catch Covid again and (what would be worse) pass it on to anyone else.

In terms of work this week I continued to work on the Anglo-Norman Dictionary, beginning with a few tweaks to the data builder that I had completed last week.  I’d forgotten to add a bit of processing to the MS date that was present in the Text Date section to handle fractions, so I added that in.  I also updated the XML output so that ‘pref’ and ‘suff’ only appear if they have content now, as the empty attributes were causing issues in the XML editor.

I then began work on the largest outstanding task I still have to tackle for the project: the migration of the textbase texts to the new site.  There are about 80 lengthy XML digital editions on the old site that can be searched and browsed, and I need to ensure these are also available on the new site.  I managed to grab a copy of all of the source XML files and I tracked down a copy of the script that the old site used to process the files.  At least I thought I had.  It turned out that this file actually references another file that must do most of the processing, including the application of an XSLT file to transform the XML into HTML, which is the thing I really could do with getting access to.  Unfortunately this file was not in the data from the server that I had been given access to, which somewhat limited what I could do.  I still have access to the old site and whilst experimenting with the old textbase I managed to make it display an error message that gives the location of the file: [DEBUG: Empty String at /var/and/reduce/and-fetcher line 486. ].  With this location available I asked Heather, the editor who has access to the server, if she might be able to locate this file and others in the same directory.  She had to travel to her University in order to be able to access the server, but once she did she was able to track the necessary directory down and get a copy to me.  This also included the XSLT file, which will help a lot.

I wrote a script to process all of the XML files, extracting titles, bylines, imprints, dates, copyright statements and splitting each file up into individual pages.  I then updated the API to create the endpoints necessary to browse the texts and navigate through the pages, for example the retrieval of summary data for all texts, or information about a specified text, or information about a specific page (including its XML).  I also began working on a front-end for the textbase, which is still very much in progress.  Currently it lists all texts with options to open a text at the first available page or select a page from a drop-down list of pages.  There are also links directly into the AND bibliography and DEAF where applicable, as the following screenshot demonstrates:

It is also possible to view a specific page, and I’ve completed work on the summary information about the text and a navbar through which it’s possible to navigate through the pages (or jump directly to a different page entirely).  What I haven’t yet tackled is the processing of the XML, which is going to be tricky and which I hope to delve into next week.  Below is a screenshot of the page view as it currently looks, with the raw XML displayed.

I also investigated and fixed an issue the editor Geert spotted, whereby the entire text of an entry was appearing in bold.  The issue was caused by an empty <link_form/> tag.  In the XSLT each <link_form> becomes a bold tag <b> with the content of the link form in the middle.  As there was no content it became a self-closed tag <b/>, which is valid in XML but not valid in HTML, where it was treated as an opening tag with no corresponding closing tag, resulting in the remainder of the page all being bold.  I got around this by placing the space that preceded the bold tag (" <b></b>") within the bold tag instead ("<b> </b>"), meaning the tag is no longer considered empty and the XSLT doesn’t self-close it.  Ideally, though, if there is no <link_form> content then the tag should just be omitted, which would also solve the problem.

I also looked into an issue with the proofreader that Heather encountered.  When she uploaded a ZIP file with around 50 entries in it, some of the entries wouldn’t appear in the output, but would just display their title.  The missing entries seemed random, with no clear reason why some were affected and others weren’t.  After some investigation I realised what the problem was: each time an XML file was processed for display the DTD referenced in the file was being checked.  When processing lots of files all at once this exceeded the maximum number of file requests the server allows from a specific client and temporarily blocked access to the DTD, causing the processing of some of the XML files to silently fail.  The maximum would be reached at a different point each time, meaning a different selection of entries would be blank.  To fix this I updated the proofreader script to remove the reference to the DTD from the XML files in the uploaded ZIP before they are processed for display.  The DTD isn’t actually needed for the display of an entry – all it does is specify the rules for editing it.  With the DTD reference removed it looks like all entries are now being displayed properly.
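
A minimal sketch of the kind of DTD-stripping involved, assuming the reference appears as a standard external DOCTYPE declaration at the top of each entry file (the actual proofreader script may handle this differently, and the filenames here are placeholders):

<?php
// Remove a simple external DOCTYPE/DTD reference so that displaying the entry
// doesn't trigger a request for the DTD file.
function stripDtdReference(string $xml): string {
    // Handles declarations like <!DOCTYPE entry SYSTEM "entry.dtd">;
    // internal subsets would need more careful treatment.
    return preg_replace('/<!DOCTYPE[^>]*>/i', '', $xml, 1);
}

$entry = file_get_contents('entry.xml');                 // placeholder filename
file_put_contents('entry.xml', stripDtdReference($entry));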

Also this week I gave some further advice to Luca Guariento about a proposal he’s working on, fixed a small display issue with the Historical Thesaurus and spoke to Craig Lamont about the proposal he’s putting together.  Other than that I spent a bit of time on the Dictionary of the Scots Language, creating four different mockups of how the new ‘About this entry’ box could look and investigating why some of the bibliographical links in entries in the new front-end were not working.  The problem was being caused by the reworking of cref contents that the front-end does in order to ensure only certain parts of the text become a link.  In the XML the bib ID is applied to the full cref (e.g. <cref refid="bib018594"><geo>Sc.</geo> <date>1775</date> <title>Weekly Mag.</title> (9 Mar.) 329: </cref>) but we wanted the link to only appear around titles and authors rather than the full text.  The issue with the missing links was cropping up where there is no author or title for the link to be wrapped around (e.g. <cit><cref refid="bib017755"><geo>Ayr.</geo><su>4</su> <date>1928</date>: </cref><q>The bag’s fu’ noo’ we’ll sadden’t.</q></cit>).  In such cases the link wasn’t appearing anywhere.  I’ve updated this now so that if no author or title is found then the link gets wrapped around the <geo> tag instead, and if there is no <geo> tag the link gets wrapped around the whole <cref>.
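
To make that fallback concrete, the decision is essentially: link the <title> or <author> if present, otherwise the <geo>, otherwise the whole <cref>.  Here is a much-simplified illustration of that logic (not the actual front-end code, which rewrites the cref markup rather than just reporting a choice):

<?php
// Simplified illustration of choosing which part of a <cref> should carry the
// bibliography link: title/author first, then geo, then the whole cref.
$cref = new SimpleXMLElement(
    '<cref refid="bib017755"><geo>Ayr.</geo><su>4</su> <date>1928</date>: </cref>'
);

if (isset($cref->title) || isset($cref->author)) {
    $target = isset($cref->title) ? 'title' : 'author';
} elseif (isset($cref->geo)) {
    $target = 'geo';
} else {
    $target = 'whole cref';
}
echo "Wrap the link around: $target\n"; // 'geo' in this example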

I also fixed a couple of advanced search issues that had been encountered with the new (and as yet not publicly available) site.  There was a 404 error that was being caused by a colon in the title.  The selected title gets added into the URL and colons are special characters in URLs, which was causing a problem.  However, I updated the scripts to allow colons to appear and the search now works.  It also turned out that the full-text searches were searching the contents of the <meta> tag in the entries, which is not something that we want.  I knew there was some other reason why I stripped the <meta> section out of the XML and this is it.  The contents of <meta> end up in the free-text search and are therefore both searchable and returned in the snippets.  To fix this I updated my script that generates the free-text search data to remove <meta> before the free-text search is generated.  This doesn’t remove it permanently, just in the context of the script executing.  I regenerated the free-text data and it no longer includes <meta>, and I then passed this on to Arts IT Support who have the access rights to update the Solr collection.  With this in place the advanced search no longer does anything with the <meta> section.
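
For reference, stripping <meta> before the free-text data is generated only needs a small amount of pre-processing per entry.  A rough sketch, assuming each entry is well-formed XML (the real generation script obviously does a great deal more than this):

<?php
// Remove any <meta> elements from an entry before its text is added to the
// free-text (Solr) data, so their contents can't be searched or shown in snippets.
function removeMetaSections(string $entryXml): string {
    $doc = new DOMDocument();
    $doc->loadXML($entryXml);
    // getElementsByTagName() returns a live list, so copy it before removing nodes.
    foreach (iterator_to_array($doc->getElementsByTagName('meta')) as $meta) {
        $meta->parentNode->removeChild($meta);
    }
    return $doc->saveXML();
}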