Week Beginning 14th June 2021

I divided my time this week primarily between three projects.  Firstly, I wrote a Data Management Plan for Craig Lamont’s proposal.  I can’t really say much about it at this stage, but it took about a day to write, including several email conversations with Craig.

Secondly, I made some updates to the Books and Borrowing CMS.  This took some time to get started on as my access to the Stirling VPN had been cancelled, and without it I couldn’t access the project’s server.  Thankfully, with the help of Stirling’s Information Services people my access was reinstated on Monday and I could start working on the updates.  After familiarising myself with the systems again I had some further questions about the updates suggested by Matt Sangster, resulting in an email conversation and a suggestion from him that he would discuss things further with the team next Monday.  In the meantime Gerry McKeever had suggested some further updates, and I worked on these.

The first issue was the ordering of the ‘Books’ tab when viewing a library.  This list of books (of which there can be thousands) is paginated at 200 books per page, with options to order the table by a variety of columns (e.g. book name and number of associated borrowings).  However, the ordering was only being applied to the 200 books on the current page rather than to the whole set.
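To illustrate the bug, here’s a minimal sketch, assuming a PDO connection and made-up table and column names rather than the project’s actual schema.  The key point is that the ORDER BY has to run in SQL before the LIMIT is applied; sorting the fetched page only reorders those 200 rows:

    // Broken: the LIMIT runs first, so only this page of 200 gets sorted.
    $page = $pdo->query("SELECT * FROM holdings WHERE library_id = 5
                         LIMIT 200 OFFSET 400")->fetchAll();
    usort($page, fn($a, $b) => strcmp($a['title'], $b['title']));

    // Correct: order the whole set in SQL, then take the page.
    $page = $pdo->query("SELECT * FROM holdings WHERE library_id = 5
                         ORDER BY title LIMIT 200 OFFSET 400")->fetchAll();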

I updated the page so that the complete dataset is reordered rather than just the 200 records that are displayed per page.  However, this came with a massive performance cost that wiped out the page loading speed increase gained from paginating the list in the first place.  To reorder the data the page first needs to load the entire dataset.  In the case of St Andrews this means that more than 7,200 book records need to be loaded, with multiple sub-queries required for each of these records to bring back the counts of borrowing records and information about book items, book editions and authors.
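The shape of the problem is the classic ‘N+1 queries’ pattern, roughly like the following (a sketch with hypothetical names, not the actual CMS code):

    // One query for the full set...
    $holdings = $pdo->query("SELECT * FROM holdings
                             WHERE library_id = 5")->fetchAll();
    foreach ($holdings as &$h) {
        // ...then several more per record: borrowing counts, items,
        // editions, authors.  With 7,200+ holdings that adds up to tens
        // of thousands of round trips to the database.
        $h['borrowings'] = $pdo->query("SELECT COUNT(*) FROM borrowings
                                        WHERE holding_id = {$h['id']}")->fetchColumn();
        $h['items'] = $pdo->query("SELECT * FROM items
                                   WHERE holding_id = {$h['id']}")->fetchAll();
    }
    unset($h);
    usort($holdings, fn($a, $b) => $b['borrowings'] <=> $a['borrowings']);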

With the previous paginated way of viewing the data the CMS was taking a couple of seconds to load the ‘Books’ page for St Andrews.  With the new update in place it was taking more than 1 minute and 20 seconds.  Running the exact same code and database on my local PC took 10 seconds, so presumably the spec of my local PC is considerably better than the server’s (either that or the server is handling a lot of other database requests at the same time, which is affecting performance).

I had considered storing the data in a session variable, which would mean that after the first horrendous load time the data would be ready and waiting in the server’s memory until you closed your browser.  However, as the data is continuously being worked on, this would mean the information displayed might not accurately reflect the current state of the data, which could be confusing.  What I am planning on doing when I develop the front-end is to create a cached version of the data, so counts of borrowing records and the like won’t need to be recalculated each time a user queries something, but such a cache wouldn’t really work whilst the data is still being worked on.  I could set the system up to refresh the cache every night, but that would again mean the CMS would not reflect the current state of the data, which isn’t good.  I also updated the ‘Borrowers’ page to allow full reordering of the data there too; this isn’t quite as slow as the books page.
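For reference, the session-variable idea I decided against would have looked something like the following sketch, where load_full_book_list() is a hypothetical stand-in for the expensive batch of queries described above:

    session_start();
    $key = 'books_library_5';
    if (!isset($_SESSION[$key])) {
        // Only the first request pays the horrendous load time.
        $_SESSION[$key] = load_full_book_list(5);
    }
    // Fast on every subsequent request, but potentially stale while
    // the team is still editing the data.
    $books = $_SESSION[$key];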

I spoke to the server admin people to see if they could think of a reason why the loading speed on the server was so much worse than on my local PC.  They reckoned it was because the database is stored on a different server to the code, and the sheer number of individual queries being sent meant that the small delays in connecting between the servers were mounting up.  I reworked the code to streamline the number of database queries that need to be made.  Only two of the columns can now be selected to order the data by: Book Holding title and number of borrowing records.  I’m hoping these are the most important ones anyway.  I have updated the queries so that the bulk of the data is only retrieved for the 200 records on the visible page (as used to be the case).  Across the full dataset only a single query of the holding table is now made, plus one further query per holding record to bring back a count of its borrowing records (for St Andrews that means one count query for each of the 7,391 books).  This has made a huge difference and has brought the page loading times back down to a more acceptable few seconds.
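In outline the reworked approach looks something like this (again a sketch with hypothetical names, assuming a PDO connection):

    // Cheap query across the full set: ids and titles only.
    $holdings = $pdo->query("SELECT id, title FROM holdings
                             WHERE library_id = 5")->fetchAll();
    // One lightweight count query per holding (7,391 for St Andrews).
    foreach ($holdings as &$h) {
        $h['borrowings'] = $pdo->query("SELECT COUNT(*) FROM borrowings
                                        WHERE holding_id = {$h['id']}")->fetchColumn();
    }
    unset($h);
    usort($holdings, fn($a, $b) => $b['borrowings'] <=> $a['borrowings']);

    // The expensive detail (items, editions, authors) is only fetched
    // for the 200 records on the visible page.
    $offset = 400; // e.g. page 3
    $visible = array_slice($holdings, $offset, 200);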

Gerry’s second request was that when the book list is limited to a specific register the counts of borrowings should be updated to reflect this.  I updated the code so that the counts of borrowing records on both the ‘Books’ and ‘Borrowers’ tabs are limited to just the selected register, and thankfully there was no performance hit associated with this update.
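In essence this just adds an extra condition to the count queries, roughly as follows (hypothetical schema again):

    $count = $pdo->prepare("SELECT COUNT(*) FROM borrowings
                            WHERE holding_id = :holding
                            AND register_id = :register");
    $count->execute(['holding' => $holdingId, 'register' => $registerId]);
    $borrowings = $count->fetchColumn();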

The third project of the week for me was the Anglo-Norman Dictionary.  As mentioned in last week’s lengthy post, I had discovered a fourth version of the texts for the textbase, which appear to be the ones that the old site actually used.  I spent most of Tuesday splitting this fourth version of the texts into individual pages and preparing them for display.  They had new issues that needed to be tackled: following the previous process resulted in about 2,000 fewer pages, and it turned out that this was caused by some page breaks in the fourth version not having ‘n’ numbers.  By the end of the day I’d managed to get the same number of pages as with my initial version, with the new pages available via the front-end, all working and with the spacing issues resolved.
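The fix for the missing pages boils down to not skipping page breaks that lack an ‘n’ number.  A rough sketch of the idea, assuming TEI-style <pb n="…"/> markers rather than the textbase’s actual markup:

    preg_match_all('/<pb\b[^>]*>/', $xml, $matches, PREG_OFFSET_CAPTURE);
    $pages = [];
    $counter = 0;
    foreach ($matches[0] as $i => [$tag, $offset]) {
        $counter++;
        // Fall back to a running counter when the break has no n="" number,
        // rather than silently dropping the page.
        $n = preg_match('/n="([^"]*)"/', $tag, $m) ? $m[1] : 'auto-' . $counter;
        $end = $matches[0][$i + 1][1] ?? strlen($xml);
        $pages[$n] = substr($xml, $offset, $end - $offset);
    }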

I discovered that the weird spacing issue that I had previously thought was a problem with the first version of the texts had actually been introduced by the ‘Tidy’ library I’d used to remove mismatched opening and closing tags from the sections of XML that I’d split into pages.  It’s really bizarre, but the library was inserting space characters and rearranging existing space characters between tags in a way that completely destroyed the integrity of the data.  After some Googling I came across this item about the issue: https://stackoverflow.com/questions/15147711/php-tidy-removes-whitespace-and-inserts-newlines.  A suggested way around it is to enclose the XML in a <pre> tag before passing it through the Tidy library, which means the library doesn’t mess about with the layout.  The placement of spaces in a text can be of vital importance, so it’s baffling that the library messes with spaces by default and doesn’t even provide an option to stop it doing so.  However, the <pre> hack worked, thankfully.
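For anyone who hits the same problem, the workaround looks roughly like this (a sketch based on the Stack Overflow suggestion; the exact Tidy configuration will vary):

    // Wrap the fragment in <pre> so Tidy treats it as preformatted
    // and leaves the spacing between tags alone.
    $wrapped = '<pre>' . $fragment . '</pre>';
    $config = [
        'show-body-only' => true, // return just the fragment
        'wrap'           => 0,    // never hard-wrap long lines
    ];
    $repaired = tidy_repair_string($wrapped, $config, 'utf8');
    // Strip the wrapper again before storing the result.
    $clean = preg_replace('#^<pre>|</pre>$#', '', $repaired);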

However, on Wednesday I received an email from the editor Geert to say that they had received approval for the AND to display each of the textbase texts in full on one page, rather than split up into individual pages.  This was great news, but it did mean that all my work on splitting up and reformatting the pages was for nothing.  Still, that’s the way it goes sometimes.  As the week drew to a close I began working on a new version of the textbase, and by the end of the week I had completed a preliminary version featuring the full content of each text on one long page.  I have to say it’s a lot easier to use now and is a massive improvement on having to navigate through hundreds of small individual pages.

The contents page is pretty much the same, and still includes a ‘jump to page’ feature, although this now takes you to the relevant section of the long page rather than to an individual page.  When you open a text, either by clicking on its title or selecting a page, the full text loads.
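Under the hood the ‘jump to page’ feature is just fragment navigation: each page division on the long page carries an id, and the links point at it.  A rough sketch with hypothetical markup:

    foreach ($pages as $n => $content) {
        echo '<hr><span class="page-number" id="page-' . htmlspecialchars($n) . '">'
           . htmlspecialchars($n) . '</span>';
        echo $content;
    }
    // A link such as <a href="#page-12">12</a> in the 'jump to page'
    // menu then scrolls straight to that section.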

I added the copyright statement to the top as well as the bottom of the text to make it more visible, and have given it a blue background for a similar reason.  There is a ‘jump to page’ feature on this page too, which takes you directly to the appropriate section of the text.  I also added an option to show / hide the explanatory notes, which can be used to declutter the page a bit.  The individual pages are divided by a horizontal line with the page number centred on it, and the notes appear in a grey section at the foot of each page.  There are still some things I need to work on, namely going through each text to check that the formatting is correct throughout and fixing the footnote numbering and ordering.  I think I have a plan for this, but will need to look into it next week.

Also this week I heard that a proposal involving Jane Stuart-Smith and Eleanor Lawson at QMU that I helped put together last year has been funded and is due to start in July, which is great news.  I also made a few further tweaks to the Dictionary of the Scots Language and had a chat about some new dictionaries that are going to be added to the site.