I’m down visiting my parents in Yorkshire for the first time in 18 months this week and next, working a total of four days over the two-week period. This week I mainly focussed on the Irish front-end for the Comparative Kingship place-names project, but I also added some updates to the Scotland system that I recently set up, such as making the Gaelic forms of the classification codes visible, adding options to browse Gaelic forms of place-names and historical forms to the ‘Browse’ facility, and ensuring the other place-name and historical form browses only bring back English forms.
The Irish system is mostly identical to the Scottish system, but I did need to make some changes that took a bit of time to implement. As the place-names covered appear to be much more geographically spread out, I’ve allowed the map to be zoomed out further. I’ve also had to remove the modern OS and historical OS map layers as they don’t cover Ireland, so currently there are only three map layers available (the default view, satellite view and satellite view with labels). The Ordnance Survey of Ireland provides access to some historical map layers here: https://geohive.maps.arcgis.com/apps/webappviewer/index.html?id=9def898f708b47f19a8d8b7088a100c4 but their terms and conditions make it clear that you can’t use the maps on another online resource. However, there are a couple of Irish maps on the NLS website, the Bartholomew Quarter-Inch 1940 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=13&b=1) and the GSGS One-Inch 1941-3 (https://maps.nls.uk/geo/explore/#zoom=9&lat=53.10286&lon=-7.34481&layers=14&b=1), and we could investigate integrating these, as the NLS maps people have always been very helpful.
I also updated the map pop-ups to include the new Irish data fields, such as baronies, townlands and the different map types. Both English and Gaelic forms of things like parishes, baronies and classification codes are displayed throughout the site, and on the Record page the ITM figures also appear. I updated the ‘Browse’ page so that it features baronies, and the element glossary should work too, but I haven’t tested it out as there is no data yet. The Advanced search features a selectable list of baronies and currently a simple textbox for townlands. I may change this to an autocomplete (whereby you start typing and townlands that include the letters appear in a selectable list), or I may leave it as it is, meaning multiple townlands can be searched for and wildcard characters can be used.
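If I do go down the autocomplete route, the lookup itself would be straightforward. Below is a minimal sketch of the sort of server-side endpoint it would need, assuming a hypothetical ‘townlands’ table with ‘townland_id’ and ‘townland_name’ columns; the real version would sit within the project’s existing API and database rather than being a standalone script like this.
<?php
// Hypothetical townland autocomplete endpoint: returns townlands containing the typed letters.
// The table and column names are assumptions, not the project's actual schema.
$pdo = new PDO('mysql:host=localhost;dbname=placenames;charset=utf8mb4', 'user', 'pass');
$term = isset($_GET['term']) ? trim($_GET['term']) : '';
if (mb_strlen($term) < 2) {
    echo json_encode([]); // wait until at least two characters have been typed
    exit;
}
$stmt = $pdo->prepare(
    'SELECT townland_id, townland_name FROM townlands
      WHERE townland_name LIKE :term ORDER BY townland_name LIMIT 20'
);
$stmt->execute([':term' => '%' . $term . '%']);
header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));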
I managed to locate downloadable files containing parish boundaries for Ireland here: https://www.townlands.ie/page/download/ and have added these to the data for the two parishes that currently contain data. I haven’t added in any other parish boundaries yet, as there are over 200 parishes in our database and I don’t want to have to manually add in the boundaries for all of these if it isn’t necessary. Also, on the Scotland maps the three-letter acronym appears in the middle of each parish in order to identify it, but the Irish parishes don’t have TLAs, so currently they don’t have any labels. The full name of each parish would clutter up the map too much if I used it, so I’m not sure what we could do to label the parishes.
Also this week I responded to some feedback about the Data Management Plan for Kirsteen McCue’s proposal and created a slightly revised version. I also had an email conversation with Eleanor Lawson about her new speech project and how the web presence for the project may function. Finally, I made some tweaks to the Dictionary of the Scots Language, updating the layout of the ‘Contact’ page and updating the bibliography page on the new website so that URLs that use the old style IDs will continue to work. I also had a chat with Rhona Alcorn about some new search options that we are going to add in to the new site before it goes live, although probably not until the autumn.
After a lovely week’s holiday in the East Neuk of Fife last week I returned to a full week of work. I spent Monday catching up with emails and making some updates to two project websites. Firstly, for the Anglo-Norman Dictionary I updated the Textbase to add in the additional PDF texts. As these are not part of the main Textbase I created a separate page that listed and linked to them, and added a reference to the page to the introductory paragraph of the main Textbase page. Secondly, I made some further updates to the content management system for the Books and Borrowing project. There was a bug in the ‘clear borrower’ feature that resulted in the normalised occupation fields not getting cleared. This meant that unless a researcher noticed and manually removed the selected occupations it would be very easy to end up with occupations assigned to the wrong borrower. I implemented a fix for this bug, so all is well now. I had also been alerted to an issue with the library’s ‘books’ tab. When limiting the listed books to only those mentioned in a specific register, the list of associated borrowing records that appears in a popup was not limiting the records to those in the specified register. I fixed this, and made a comparable fix to the ‘borrowers’ tab as well.
During the week I also had an email conversation with Kirsteen McCue about her ‘Singing the Nation’ AHRC proposal, and made a new version of the Data Management Plan for her. I also investigated some anomalies with the stats for the Dictionary of the Scots Language website for Rhona Alcorn. Usage figures were down compared to last year, but it looks like last year may have been a blip caused by Covid, as figures for this year match up pretty well with the figures for years before the dreaded 2020.
On Wednesday I was alerted to an issue with the Historical Thesaurus website, which appeared to be completely inaccessible. Further investigation revealed that other sites on the server were also all down. Rather strangely the Arts IT Support team could all access the sites without issue, and I realised that if I turned off wifi on my phone and used mobile data I could access the sites too. I had thought it was an issue with my ISP, but Marc Alexander reported that he used a different ISP and could also not access the sites. Marc pointed me in the direction of two very handy websites that are useful for checking whether websites are online or not: https://downforeveryoneorjustme.com checks the site and lets you know whether it’s working, while https://www.uptrends.com/tools/uptime is a little more in-depth and checks whether the site is available from various locations across the globe. I’ll need to remember these in future.
The sites were still inaccessible on Thursday morning and after some Googling I found an answer to someone with a similar issue here: https://webmasters.stackexchange.com/questions/104092/why-is-my-site-showing-as-being-down-for-some-places-and-not-others. I asked Arts IT Support to check with central IT Services to see whether any DNS settings had been changed recently or whether they knew what might be causing the issue, as it turned out to be more widespread than I had thought and was affecting sites on different servers too. A quick check of the sites linked to from this site showed that around 20 websites were inaccessible.
Thankfully by Thursday lunchtime the sites had begun to be accessible again, although not for everyone. I could access them, but Marc Alexander still couldn’t. By Friday morning all of the sites were fully accessible again from locations around the globe, and Arts IT Support got back to me with a cause for the issue. Apparently a server in the Boyd Orr that controls DNS records for the University had gone wrong and sent out garbled instructions to other DNS servers around the world, which knocked out access to our sites even though the sites themselves were all working perfectly.
I spent the rest of the week working on the front-end for the Scotland data for the Comparative Kingship project, a task that I’d begun before I went away on my holiday. I managed to complete an initial version of the Scotland front-end, which involved taking the front-end from one of the existing place-names websites (e.g. https://kcb-placenames.glasgow.ac.uk/) and adapting it. I had to make a number of adaptations, such as ensuring that two parallel interfaces and APIs could function on one site (one for Scotland, one for Ireland), updating a lot of the site text, creating a new, improved menu system and updating the maps so that they defaulted to the new area of research. I also needed to add in facilities to search, return data for and display new Gaelic fields, e.g. Gaelic versions of place-names and historical forms. This meant updating the advanced search to add in a new ‘language’ choice option, to enable a user to limit their search to just English or Gaelic place-name forms or historical forms. This in turn meant updating the API to add in this additional option.
An additional complication came when I attempted to grab the parish boundary data, which for previous projects I’d successfully exported from the Scottish Government’s Spatial Data website (https://www.spatialdata.gov.scot/geonetwork/srv/eng/catalog.search#/metadata/c1d34a5d-28a7-4944-9892-196ca6b3be0c) via a handy API (https://maps.gov.scot/server/rest/services/ScotGov/AgricultureEnvironment/MapServer/1/query). However, the parish boundary data was not getting returned with latitude / longitude pairs marking the parish shape, but used esriMeters instead. I found someone else who wanted to convert esriMeters into lat/lng (https://gis.stackexchange.com/questions/54534/how-can-i-convert-esrimeters-to-lat-lng) and one of the responses was that with an ArcGIS service (which the above API appears to be) you should be able to set the ‘output spatial reference’, with the code 4326 being used for WGS84, which would give lat/lng values. The API form does indeed have an ‘Output Spatial Reference’ field, but unfortunately it doesn’t seem to do anything. I did lots of further Googling and tried countless different ways of entering the code, but nothing changed the output.
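For reference, this is the kind of request I was experimenting with, made programmatically rather than via the form. The sketch below assumes the layer follows the standard ArcGIS REST ‘query’ conventions, where passing outSR=4326 should request WGS84 (lat/lng) output; as noted above, changing it made no difference for this particular service, so treat it as an illustration rather than a working recipe.
<?php
// Sketch of an ArcGIS REST 'query' request asking for WGS84 output via outSR=4326.
// In theory this should return lat/lng geometry; on this particular service it didn't.
$base = 'https://maps.gov.scot/server/rest/services/ScotGov/AgricultureEnvironment/MapServer/1/query';
$params = [
    'where'          => '1=1',   // return all parishes
    'outFields'      => '*',
    'returnGeometry' => 'true',
    'outSR'          => 4326,    // request WGS84 coordinates
    'f'              => 'json',
];
$response = file_get_contents($base . '?' . http_build_query($params));
$data = json_decode($response, true);
// Inspect the first parish's geometry to see which coordinate system actually came back.
print_r($data['features'][0]['geometry'] ?? null);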
Eventually I gave up and tried an alternative approach. The site also provides the parish data as an ESRI Shapefile (https://maps.gov.scot/ATOM/shapefiles/SG_AgriculturalParishes_2016.zip) and I wondered whether I could plug this into a desktop GIS package and use it to migrate the coordinates to lat/lng. I installed the free GIS package QGIS (https://www.qgis.org/en/site/forusers/download.html) and after opening it I went to the ‘Layer’ menu, selected ‘Add Layer’, then ‘Add Vector Layer’, then selected the zip file and pressed ‘Add’, at which point all of the parish data loaded in, allowing me to select a parish and view the details for it. What I then needed to do was to find a means of changing the spatial reference and saving a geoJSON file. After much trial and error I discovered that in the ‘Layer’ menu there is a ‘Save as’ option. This allowed me to specify the output format (geoJSON) and change the ‘CRS’, which is the ‘Coordinate Reference System’. In the drop-down list I located EPSG:4326 / WGS84 and selected it. I then specified a filename (the folder defaults to a Windows system folder and needs to be updated too) and pressed ‘OK’ and after a long wait the geoJSON file was generated, with latitude and longitude values for all parishes. Phew! It was quite a relief to get this working.
With access to a geoJSON file containing parishes with lat/lng pairings I could then find and import the parishes that we needed for the current project, of which there were 28. It took a bit of time to grab all of these, and I then needed to figure out where I wanted the three-letter acronyms for each parish to be displayed, which I worked out using the National Library of Scotland’s parish boundaries map (https://maps.nls.uk/geo/boundaries/), which helpfully displays lat/lng coordinates for your cursor position in the bottom right. With all of the parish boundary data in place the infrastructure for the Scotland front-end is now more or less complete and I await feedback from the project team. I will begin work on the Ireland section next, which will take quite some work as the data fields are quite different. I’m only going to be working a total of four days over the next two weeks (probably as half-days) so my reports for the next couple of weeks are likely to be a little shorter!
I divided my time this week primarily into three. Firstly, I wrote a Data Management Plan for Craig Lamont’s proposal. I can’t really say much about it at this stage, but it took about a day to write, including several email conversations with Craig.
Secondly, I made some updates to the Books and Borrowing CMS. This took some time to get started on as my access to the Stirling VPN had been cancelled, and without such access I couldn’t access the project’s server. Thankfully, with the help of Stirling’s Information Services people my access was reinstated on Monday and I could start working on the updates. After familiarising myself with the systems again I had some further questions about the updates suggested by Matt Sangster, resulting in an email conversation and a suggestion from him that he discuss things further with the team next Monday. Gerry McKeever had suggested some further updates, though, and I worked on these.
The first issue was the ordering of the ‘Books’ tab when viewing a library. This list of books (of which there can be thousands) is paginated with 200 books per page, with options to order the table by a variety of columns (e.g. book name and number of associated borrowings). However, the ordering was only ordering the subset of 200 books rather than the whole set.
I updated the page so that the complete dataset is reordered rather than just the 200 records that are displayed per page. However, this has a massive performance hit that wipes out the page loading speed increase that was gained from paginating the list in the first place. To reorder the data the page needs to load the entire dataset and then reorder it. In the case of St Andrews this means that more than 7,200 book records need to be loaded, with multiple sub-queries for each of these records required to bring back the counts of borrowing records and information about book items, book editions and authors.
With the previous paginated way of viewing the data the CMS was taking a couple of seconds to load the ‘Books’ page for St Andrews. With the new update in place it was taking more than 1 minute and 20 seconds for the page to load. When running the exact same code and database on my local PC it was taking 10 seconds to load, so presumably the spec of my local PC is considerably better than the server (either that or it’s having to handle a lot of other database requests at the same time, which is affecting performance).
I had considered storing the data in a session variable, which would mean that after the first horrendous load time the data would be ready and waiting in the server’s memory until you closed your browser. However, as the data is continuously being worked on, this would mean the information displayed might not accurately reflect the current state of the data, which could be confusing. What I am planning on doing when I develop the front-end is to create a cached version of the data, so counts of borrowing records etc. won’t need to be recalculated each time a user queries something, but creating such a cached version wouldn’t really work whilst the data is still being worked on. I could set the system up to refresh the cache every night, but that would mean the CMS would again not reflect the current state of the data, which isn’t good. I also updated the ‘Borrowers’ page to allow full reordering of the data here too. This isn’t quite as slow as the books page.
I spoke to the server admin people to see if they could think of a reason why the server loading speed was so much worse than on my local PC. They reckoned it was because the database is stored on a different server to the code, and the sheer number of individual queries being sent meant that small delays in connecting between servers were mounting up. I reworked the code somewhat to try to streamline the number of database queries that need to be made. Only two of the columns can now be selected to order the data by: Book Holding title and number of borrowing records. I’m hoping these are the most important anyway. I have updated the queries so that the bulk of the data is only retrieved for the 200 records on the visible page (as used to be the case); the only things now queried across the full dataset are the holding table itself and, for each relevant holding record, a count of its borrowing records (e.g. one count query for each of the 7,391 St Andrews books). This has made a huge difference and has brought the page loading times back down to a more acceptable few seconds.
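For illustration, those remaining per-holding count queries could in principle be collapsed further into a single grouped query; the sketch below shows that idea with hypothetical table and column names rather than the project’s actual schema, and isn’t the code now running in the CMS.
<?php
// Hypothetical alternative: fetch borrowing-record counts for every holding in a
// library in one grouped query, instead of issuing a separate count per holding.
$pdo = new PDO('mysql:host=localhost;dbname=borrowing;charset=utf8mb4', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT h.holding_id, h.title, COUNT(b.borrowing_id) AS borrowing_count
       FROM book_holdings h
  LEFT JOIN borrowings b ON b.holding_id = h.holding_id
      WHERE h.library_id = :library
   GROUP BY h.holding_id, h.title
   ORDER BY borrowing_count DESC'
);
$stmt->execute([':library' => 1]);
$ordered = $stmt->fetchAll(PDO::FETCH_ASSOC);
// Slice out the 200 rows for the current page and run the expensive joins
// (editions, authors, book items) only for that slice.
$page = array_slice($ordered, 0, 200);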
Gerry’s second request was that when the book list is limited to a specific register the counts of borrowings should update to reflect this. I updated the code so that counts of borrowing records on both the ‘Books’ and ‘Borrowers’ tabs get limited to just the selected register, and thankfully there was no performance hit associated with this update.
The third project of the week for me was the Anglo-Norman Dictionary. As mentioned in last week’s lengthy post, I had discovered a fourth version of the texts for the textbase that appear to be the ones that the old site actually used. I spent most of Tuesday splitting this fourth version of the texts into individual pages and preparing them for display. They had new issues that needed to be tackled (following the previous process resulted in about 2,000 fewer pages, which turned out to be caused by some page breaks in the fourth version not having ‘n’ numbers). By the end of the day I’d managed to get the same number of pages as with my initial version, with the new pages available via the front-end, all working and with the spacing issues resolved.
I discovered that the weird spacing issue that I had previously thought was a problem with the first version of the texts I was working with had actually been introduced via the ‘Tidy’ library I’d used to remove mismatched opening and closing tags from sections of the XML that I’d split into pages. It’s really bizarre, but the library was inserting space characters and rearranging existing space characters between tags in a way that completely destroyed the integrity of the data. After some Googling I came across this item about the issue: https://stackoverflow.com/questions/15147711/php-tidy-removes-whitespace-and-inserts-newlines and a suggested way around the issue is to enclose the XML in a <pre> tag before passing it through the Tidy library, which means the library doesn’t mess about with the layout. The placement of spaces in a text can be of vital importance, so why the library messes with spaces by default and doesn’t even provide an option to stop it doing so is baffling. However, the <pre> hack worked, thankfully.
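For anyone hitting the same problem, a rough sketch of the workaround is below. The essential trick is simply to wrap the fragment in <pre> before repairing and strip the wrapper off afterwards; the Tidy options shown are illustrative rather than the exact configuration I used.
<?php
// Rough sketch of the <pre> workaround: wrapping the fragment stops Tidy from
// rearranging the spacing between tags while it repairs mismatched tags.
// The Tidy options here are illustrative, not the exact configuration used.
function tidy_fragment_preserving_space(string $fragment): string
{
    $config = [
        'input-xml'  => true, // treat the input as XML rather than HTML
        'output-xml' => true,
        'wrap'       => 0,    // don't re-wrap long lines
    ];
    $wrapped  = '<pre>' . $fragment . '</pre>';
    $repaired = tidy_repair_string($wrapped, $config, 'utf8');
    // Remove the temporary <pre> wrapper again before using the result.
    return preg_replace('#^\s*<pre>|</pre>\s*$#', '', $repaired);
}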
However, on Wednesday I received an email from the editor Geert to say that they had received approval for the AND to display each of the textbase texts in full on one page, rather than being split up into individual pages. This was great news, but it did mean that all my work on splitting up and reformatting the pages was for nothing. Still, that’s the way it goes sometimes. As the week drew to a close I began working on a new version of the textbase, and by the end of the week I had completed a preliminary version featuring the full content of each text on one long page. I have to say it’s a lot easier to use now and is a massive improvement on having to navigate through hundreds of individual small pages.
The contents page is pretty much the same, and still includes a ‘jump to page’ feature, although this now takes you to the relevant section of the long page rather than an individual page. When you load a text, either by clicking on its title or selecting a page, the full text will load.
I added the copyright statement to the top as well as the bottom of the text to make it more visible, and have given it a blue background for a similar reason. There is a ‘jump to page’ feature on this page too, which takes you directly to the appropriate section of the text. I also added an option to show / hide notes so you can hide them to declutter the page a bit. The individual pages are divided by a horizontal line, with the page number centred on it. Explanatory notes appear in a grey section at the foot of each page. There are still some things I need to work on, namely going through each text to check that the formatting is correct throughout and fixing the footnote numbering and ordering. I think I have a plan for this, but will need to look into it next week.
Also this week I heard that a proposal involving Jane Stuart-Smith and Eleanor Lawson at QMU that I helped put together last year has been funded and is due to start in July, which is great news. I also made a few further tweaks to the Dictionary of the Scots Language and had a chat about some new dictionaries that are going to be added to the site.
This week I finished an initial version of the ‘Browse Textbase’ feature for the Anglo-Norman Dictionary. Processing the XML proved to be rather tricky, as I couldn’t just use the old XSLT file: it included a lot of stuff that wasn’t needed in the new site (e.g. formatting headers and footers) and gave errors when plugged directly into the new system. For these reasons I had to adapt the XSLT. Also, I’d split up the full XML files into chunks for each page, resulting in more than 12,700 chunks. However, the XML often included elements that extended across pages, and when the content was extracted on a per-page basis this led to an invalid XML structure, as some tags ended up missing their closing tags, or closed without featuring an opening tag. XSLT only works on valid XML files, so I needed to find a way to fix this tag issue. After some Googling I discovered that there is a PHP extension called Tidy that can take an invalid XML file and fix it. What this does is strip out any tag that is missing its corresponding opening or closing tag, which is exactly what I wanted. I wrote a little script that used the extension, tested it successfully on a few files and then ran all of the 12,700 pages through it.
With a full set of valid XML page files I then began work on the XSLT to display the documents as required. This has been a very laborious process as I needed to go through each of the 77 documents, check the layout for any issues, and fix these as they cropped up. With more than 12,700 pages I couldn’t look at each individually, so instead I generally looked at every page of the front matter and then a random selection of pages in the main body of the text, as the structure is generally more consistent there. I think this approach has worked well as most formatting issues were to be found in the front matter (e.g. some tables were split across multiple pages and needed table tags to be inserted at the top and bottom).
With regards to the main body of the texts the largest challenge has been getting the explanatory notes to appear correctly, as these had been tagged in at least nine different ways throughout the documents, sometimes with entirely different XML structures and content. One possible issue is that I dealt with new XML features as they cropped up as I worked through the books, but in dealing with these features I may have inadvertently messed up how things looked in earlier books. One example that I thankfully spotted is that I wanted <bibl> tags to start on a new line as this would make the bibliographies easier to read, but other texts have the <bibl> tag mid-sentence and my change resulted in lines breaking where they shouldn’t.
There are some other issues that have cropped up that we may still need to address. There are many spacing issues caused by whoever tagged the documents not leaving spaces between tags, or adding spaces between tags where there shouldn’t be spaces. It’s a bit of a strange issue as it doesn’t seem to exhibit itself on the old site, but it isn’t something that is dealt with by the scripts I have access to. I don’t know if perhaps the texts were ‘fixed’ at some point and I just don’t have access to the fixed versions. It’s not something that can be fixed automatically (at least not without coming up with a set of rules for fixing), as it’s not the case that a tag should always have (or not have) a space after it. Here are some examples, with the text as displayed before the colon and the XML after:
- ‘M cMoroug’: M <hi rend="sup">c</hi>Moroug
- ‘Lettres et pétitions( Legge’: <title lang="FR" rend="italic">Lettres et pétitions</title>( <editor>Legge</editor>
- ‘CDqui’: <title type="MS">CD</title>qui
- ‘( 17et 22)’: ( <ref target="D1396_17">17</ref>et <ref target="D1396_22">22</ref>)
- ‘n o2’: n <hi rend="sup">o</hi>2</ref>
- ‘Sire’: <hi rend="bold">S</hi>ire
- ‘T hepresent’: T <hi rend="sc">he</hi>present
- ‘Le xxx eiour ’: Le xxx <hi rend="sup">e</hi>iour
Another issue is that the speed of loading a page is erratic. Sometimes it’s instant; other times it takes several agonising seconds. It’s really frustrating, and it’s not caused by my code. I’m hoping that when we get the new server (which we now have a quote for) this issue will resolve itself. Also, some of the pages are split at different points in two of the texts. This must be due to the structure of the XML; despite this, all of the content is still included. In addition, a couple of texts in the old system were broken – either the navigation just did not work or page contents were displaying multiple times. I’m afraid I didn’t make a note of which these were, but they’re all sorted in the new system anyway.
There are currently some issues with footnote numbers due to all of the different ways these are tagged (sometimes with multiple ways being used on a single page). Some examples:
- If multiple ways of tagging are used in the same page this can result in footnotes appearing out of order. This can be because some notes are <note> and others are <app>. It is also causing some issues with the numbering (e.g. there are two  footnotes but the first listed should actually be ). This clearly needs some work, but I’m not sure how best to fix the issue. On the old site notes of different types are given letters, but I’m not sure which letters to use for what, or whether we want to continue using letters.
- In some places note numbers are being displayed where they weren’t previously being displayed. I’m not sure what should be done about this – I could for example add in an option to show / hide the notes.
- I’ve ensured all footnotes appear on a new line rather than having some that run on one line and others (sometimes in the same page) that have their own line.
- Sometimes an extended form of a footnote number appears where one didn’t previously (e.g. ‘[p2n5]’ rather than just ‘’).
- Sometimes multiple notes appear straight after each other, and currently in such cases the numbering appears correctly in the text, but in the footnotes the first number in the line is duplicated. For example  and  in the text appear as  and  in the footnotes.
After spending a lot of time over the past two weeks working through the XML texts and wondering why the old site doesn’t display the spacing errors found in the texts I had access to, I did some further investigation into this. It would appear that the old site uses different versions of the XML files to the ones I’ve been using. I’m not sure why there are multiple versions of the XML files, but I’ve discovered that there are XML files in the ‘reduce’ folder that Heather gave me access to a couple of weeks ago, and these are different to the ones I have been using, which must therefore have been stored somewhere else on the server.
For example, the file ‘kingscouncil.xml’ that I have been using exhibits the spacing issue; see for example ‘M <hi rend="sup">c</hi>Moroug’ and ‘xxx <hi rend="sup">e</hi>jour’ in this snippet:
<p> <hi lang="LA" rend="italic">indorsacio</hi>. Eient les supplians la garde et la mariage dedens cestes contenues, selonc la purport de ceste peticion, pour xx. s. vi. d. appaier en le Haneper pur le fyn, par les lettres patentes notre Seignour le Roy souz son grant seel en Irland en due fourme. Doune a Dyvelyn le xxx <hi rend="sup">e</hi>jour Doctobre, lan notre dit Seignour le Roy Richard Seconde seszisme. <anchor id="P4A1" type="note"/> <note place="foot" target="P4A1">The <date>30th of October 1392</date>. The regnal years of this king commenced on the <date>22nd of June</date>in each year. Here, and elsewhere throughout the Roll, the year of the present style is used, but no rectification of the day of the month has been attempted.</note>A tresreverent pere <anchor id="P4A2" type="note"/> <note place="foot" target="P4A2">As letters patent were to issue, Robert Archbishop of Dublin, Chancellor of Ireland, must have been the person here addressed. See enrolment No. 15, <hi rend="italic">infra</hi>.</note>&c., comme desus.</p> <div n="2"> <p> <note place="omargin"> <date>A.D. 1392</date> </note>A tresnobles Justice et Consel notre Seignour le Roy en Irland supplie Johan Creef de Ballaghmoun, que comme sa ville, sa mansion, ses blees et diverses autres benes furent arses, degastes et destruys par M <hi rend="sup">c</hi>Moroug et autres Irrois enemys notre Seignour le Roy, comme est comme est cognuz et notifie a vous, tresnobles</p> </div>
But in the ‘reduce’ folder there are two further versions of this (and every other) textbase file. One is named ‘kingscouncil.xml’ but is different to the one I’ve been using. It has different TEIHeader data and doesn’t exhibit the spacing issue; see for example:
<p><hi lang="LA" rend="italic">indorsacio</hi>. Eient les supplians la garde et la mariage dedens cestes contenues, selonc la purport de ceste peticion, pour xx. s. vi. d. appaier en le Haneper pur le fyn, par les lettres patentes notre Seignour le Roy souz son grant seel en Irland en due fourme. Doune a Dyvelyn le xxx<hi rend="sup">e</hi> jour Doctobre, lan notre dit Seignour le Roy Richard Seconde seszisme.<anchor id="P4A1" type="note"/><note place="foot" target="P4A1">The <date>30th of October 1392</date>. The regnal years of this king commenced on the <date>22nd of June</date> in each year. Here, and elsewhere throughout the Roll, the year of the present style is used, but no rectification of the day of the month has been attempted.</note> A tresreverent pere<anchor id="P4A2" type="note"/><note place="foot" target="P4A2">As letters patent were to issue, Robert Archbishop of Dublin, Chancellor of Ireland, must have been the person here addressed. See enrolment No. 15, <hi rend="italic">infra</hi>.</note> &c., comme desus.</p></div>
<div n="2"><p><note place="omargin"><date>A.D. 1392</date></note> A tresnobles Justice et Consel notre Seignour le Roy en Irland supplie Johan Creef de Ballaghmoun, que comme sa ville, sa mansion, ses blees et diverses autres benes furent arses, degastes et destruys par M<hi rend="sup">c</hi>Moroug et autres Irrois enemys notre Seignour le Roy, comme est comme est cognuz et notifie a vous, tresnobles
Finally, there is a further version named ‘kingscouncil-apps.xml’ that appears to be just the text (no TEIHeader), again doesn’t exhibit the spacing issue, but in addition seems to use different tags in places. See the tag around ‘indorsacio’, for example:
<p><term lang="LA" rend="i">Indorsacio</term>. Eient les supplians la garde et la mariage dedens cestes contenues, selonc la purport de ceste peticion, pour xx. s. vi. d. appaier en le Haneper pur le fyn, par les lettres patentes notre Seignour le Roy souz son grant seel en Irland en due fourme. Doune a Dyvelyn le xxx<hi rend="sup">e</hi> jour Doctobre, lan notre dit Seignour le Roy Richard Seconde seszisme.<anchor id="P4A1" type="note"/><note place="foot" target="P4A1">The <date>30th of October 1392</date>. The regnal years of this king commenced on the <date>22nd of June</date> in each year. Here, and elsewhere throughout the Roll, the year of the present style is used, but no rectification of the day of the month has been attempted.</note> A tresreverent pere<anchor id="P4A2" type="note"/><note place="foot" target="P4A2">As letters patent were to issue, Robert Archbishop of Dublin, Chancellor of Ireland, must have been the person here addressed. See enrolment No. 15, <hi rend="italic">infra</hi>.</note> &c., comme desus.</p></div>
<div n="2"><p><note place="omargin"><date>A.D. 1392</date></note> A tresnobles Justice et Consel notre Seignour le Roy en Irland supplie Johan Creef de Ballaghmoun, que comme sa ville, sa mansion, ses blees et diverses autres benes furent arses, degastes et destruys par M<hi rend="sup">c</hi>Moroug et autres Irrois enemys notre Seignour le Roy, comme est comme est cognuz et notifie a vous, tresnobles
So yet again the old site has me wanting to tear my hair out in exasperation at how badly organised, maintained and thought out it is. It’s looking like I’ll have to replace all of the content I’ve been working on over the past couple of weeks with different versions. But the question is which version? Should it be the ‘apps’ version or the other version? I realise now that the ‘apps’ version is referenced in the URLs used by the old site. However, what is confusing is that the ‘apps’ version doesn’t include the front matter, but this is included in the old site, meaning it can’t be purely using the ‘apps’ version of the XML. Even more strangely, the ‘kingscouncil.xml’ file in the ‘reduce’ folder has a different structure to the version published on the old site, which is in fact closer to the version of the XML I have been using. On the old site the first page begins:
“Whether the Roll…”
But the ‘reduce’ version of ‘kingscouncil.xml’ includes two previous pages:
<pb n="ix"/><div lang="EN" type="Introduction"><head>INTRODUCTION.</head>
<pb n="xxv"/><p>It may be mentioned here that the folios are all mounted on linen guards, and that no part of the parchment has been inserted into the back, and none cut away at the fore-edge, top, or bottom, of the volume.</p>
<pb n="xxvi"/><p>Whether the Roll…
Whereas the XML I’ve been using matches the published text:
<pb n="xxvi" ed="base"/><div lang="EN" type="Introduction"><head>INTRODUCTION.</head>
<p>Whether the Roll…
I had been intending to extract pages from the non-apps files in the ‘reduce’ folder and to present these alongside the existing pages in the front-end so the editors could look at them, but I’m encountering difficulties right from the start. The first XML file in the data I originally had is ‘albus.xml’, which I expected to find as ‘albus-apps.xml’, yet there is no such file in the ‘reduce’ folder, nor a non-app ‘albus.xml’ file. There are files called ‘libalbapp.xml’ and ‘libalbapp-apps.xml’, which would seem to correspond to the AND Source reference (Lib_Alb). However, the contents of these files in no way correspond to the contents of the ‘albus.xml’ file I have, nor do they correspond to the text that is displayed on the old site at the above URL.
I can only conclude that there is yet another version of the files stored in another location that the old site uses. It’s definitely not the same file as I have been using as the text on the old site has the spacing issue corrected. I have done a ‘find in files’ for certain strings found in the ‘Albus’ text across all files in the ‘reduce’ folder and the text is definitely not found there. It’s very confusing as the scripts suggest they are processing files only in this folder. The script ‘and-getloc’ uses the variable ‘filename’ from the URL and passes this to the script ‘and-fetcher’ in the ‘reduce’ folder. This in turn loads the file, finds and processes the required page.
As I was working through this I managed to figure things out. It looks like I was right – there is yet another version of the files stored somewhere else that the old system actually uses. Buried towards the end of the ‘and-fetcher’ script is this:
## TODO !!!!
## HARDCODED TEXTS LOCATION HERE!
## SHIFT THIS TO CONSTANTS SYSTEM!!!
my $textpath = "/and/reduce/ready1/$text";
So the texts that are used are in a folder called ‘ready1’ within the ‘reduce’ folder. However, there were no subfolders in the zip file of the ‘reduce’ folder that Heather sent me a couple of weeks ago. If we can somehow track down this fourth(!) version of the files then perhaps I’ll be able to make some progress. Heather managed to get access to the server again and located the additional folder, which did indeed include yet another version of the XML files. It looks like this fourth version is the correct one. These would appear to be the files that appear on the old website, with corrected spacing and all front matter included (despite all the files ending in ‘apps’, whereas the other ‘apps’ versions didn’t include the front matter). Looking at the files discussed above:
The file ‘albus-apps.xml’ is present and includes all the front matter, the same as both the file I was previously working with and the old site, but with the spacing issues fixed. The file ‘kingscouncil-apps’ also appears to be structurally identical to the ‘kingscouncil’ file I was originally working with (unlike the other two versions in ‘reduce’) and has the spacing issues fixed (e.g. M<hi rend="sup">c</hi>Moroug).
So now I’ll be able to begin again with the process I started a couple of weeks ago. It’s going to take some time again, although hopefully most of the XSLT issues will be the same as before and will already be sorted.
Also this week I read through the bib documentation for Craig Lamont’s project and had a chat with him about a data management plan, which I’ll have to work on next week. I also fixed a couple of issues on the SCOCO website for Matthew Creasy and spoke to Mike Black about the quote for a new server, which will hopefully be purchased soon. I gave some advice to Katie Halsey about file formats and data transfer options for a new digitisation unit that will be working with the Books and Borrowing project, and also spent some time trying to sort out access to the server at Stirling for this project as it turned out that my access privileges had been removed midway through last month.
I also fixed an issue with the bibliography search on the new DSL website. This was occurring when a search for ‘author or title’ was performed, which prefixes ‘Author: ’ or ‘Title: ’ to each entry in the autocomplete to help users differentiate between the two. Selecting from the autocomplete list ran the search fine, as this was based on the bibliographical ID hidden in the autocomplete, but if you pressed the ‘search’ button before the event was fired the search looked for the full contents of the box – i.e. for authors and titles that begin with ‘Author: ’ or ‘Title: ’. This was also happening if you pressed the browser’s back button from the results, as the textbox would still contain the full text. I fixed this issue. So it’s been a pretty busy week.
I had my first dose of the Covid vaccine on Tuesday morning this week (the AstraZeneca one), so I lost a bit of time whilst going to get that done. Unfortunately I had a bit of a bad reaction to it and ended up in bed all day Wednesday with a pretty nasty fever. I had Covid in October last year but only experienced mild symptoms and wasn’t even off work for a day with it, so in my case the cure has been much worse than the disease. However, I was feeling much better again by Thursday, so I guess I lost a total of about a day and a half of work, which is a small price to pay if it helps to ensure I don’t catch Covid again and (what would be worse) pass it on to anyone else.
In terms of work this week I continued to work on the Anglo-Norman Dictionary, beginning with a few tweaks to the data builder that I had completed last week. I’d forgotten to add a bit of processing to the MS date that was present in the Text Date section to handle fractions, so I added that in. I also updated the XML output so that ‘pref’ and ‘suff’ now only appear if they have content, as the empty attributes were causing issues in the XML editor.
I then began work on the largest outstanding task I still have to tackle for the project: the migration of the textbase texts to the new site. There are about 80 lengthy XML digital editions on the old site that can be searched and browsed, and I need to ensure these are also available on the new site. I managed to grab a copy of all of the source XML files and I tracked down a copy of the script that the old site used to process the files. At least I thought I had. It turned out that this file actually references another file that must do most of the processing, including the application of an XSLT file to transform the XML into HTML, which is the thing I really could do with getting access to. Unfortunately this file was not in the data from the server that I had been given access to, which somewhat limited what I could do. I still have access to the old site and whilst experimenting with the old textbase I managed to make it display an error message that gives the location of the file: [DEBUG: Empty String at /var/and/reduce/and-fetcher line 486. ]. With this location available I asked Heather, the editor who has access to the server, if she might be able to locate this file and others in the same directory. She had to travel to her University in order to be able to access the server, but once she did she was able to track the necessary directory down and get a copy to me. This also included the XSLT file, which will help a lot.
I wrote a script to process all of the XML files, extracting titles, bylines, imprints, dates and copyright statements and splitting each file up into individual pages. I then updated the API to create the endpoints necessary to browse the texts and navigate through the pages, for example the retrieval of summary data for all texts, or information about a specified text, or information about a specific page (including its XML). I also began working on a front-end for the textbase, which is still very much in progress. Currently it lists all texts with options to open a text at the first available page or select a page from a drop-down list of pages. There are also links directly into the AND bibliography and DEAF where applicable, as the following screenshot demonstrates:
It is also possible to view a specific page, and I’ve completed work on the summary information about the text and a navbar through which it’s possible to navigate through the pages (or jump directly to a different page entirely). What I haven’t yet tackled is the processing of the XML, which is going to be tricky and I hope to delve into next week. Below is a screenshot of the page view as it currently looks, with the raw XML displayed.
I also investigated and fixed an issue the editor Geert spotted, whereby the entire text of an entry was appearing in bold. The issue was caused by an empty <link_form/> tag. In the XSLT each <link_form> becomes a bold tag <b> with the content of the link form in the middle. As there was no content it became a self-closed tag <b/>, which is valid in XML but not in HTML, where it was treated as an opening tag with no corresponding closing tag, resulting in the remainder of the page all being bold. I got around this by placing the space that preceded the bold tag (“ <b></b>”) within the bold tag instead (“<b> </b>”), meaning the tag is no longer considered empty and the XSLT doesn’t self-close it. Ideally, though, if there is no <link_form> content the tag should just be omitted, which would also solve the problem.
I also looked into an issue with the proofreader that Heather encountered. When she uploaded a ZIP file with around 50 entries in it some of the entries wouldn’t appear in the output, but would just display their title. The missing entries would be random, without any clear reason as to why some were missing. After some investigation I realised what the problem was: each time an XML file was processed for display the DTD referenced in the file was checked. When processing lots of files all at once this exceeded the maximum number of file requests the server allows from a specific client and temporarily blocked access to the DTD, causing the processing of some of the XML files to silently fail. The maximum number would be reached at a different point each time, thus meaning a different selection of entries would be blank. To fix this I updated the proofreader script to remove the reference to the DTD from the XML files in the uploaded ZIP before they are processed for display. The DTD isn’t actually needed for the display of the entry – all it does is specify the rules for editing it. With the DTD reference removed it looks like all entries are getting properly displayed.
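A sketch of the kind of change involved is below; the regular expression and the surrounding code are illustrative rather than the proofreader’s actual code, and the file name is hypothetical.
<?php
// Strip the external DTD reference from an entry's XML before it is processed for
// display, so no request for the DTD is made at all. Illustrative only; the real
// proofreader works on the files extracted from the uploaded ZIP.
function strip_doctype(string $xml): string
{
    // Remove a <!DOCTYPE ...> declaration, with or without an internal subset.
    return preg_replace('/<!DOCTYPE[^>\[]*(\[[^\]]*\])?[^>]*>/s', '', $xml, 1);
}
$entryXml = strip_doctype(file_get_contents('entry.xml')); // file name is hypothetical
$doc = new DOMDocument();
$doc->loadXML($entryXml); // the entry can now be transformed without touching the DTD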
Also this week I gave some further advice to Luca Guariento about a proposal he’s working on, fixed a small display issue with the Historical Thesaurus and spoke to Craig Lamont about the proposal he’s putting together. Other than that I spent a bit of time on the Dictionary of the Scots Language, creating four different mockups of how the new ‘About this entry’ box could look and investigating why some of the bibliographical links in entries in the new front-end were not working. The problem was being caused by the reworking of cref contents that the front-end does in order to ensure only certain parts of the text become a link. In the XML the bib ID is applied to the full cref (e.g. <cref refid="bib018594"><geo>Sc.</geo> <date>1775</date> <title>Weekly Mag.</title> (9 Mar.) 329: </cref>) but we wanted the link to only appear around titles and authors rather than the full text. The issue with the missing links was cropping up where there is no author or title for the link to be wrapped around (e.g. <cit><cref refid="bib017755"><geo>Ayr.</geo><su>4</su> <date>1928</date>: </cref><q>The bag’s fu’ noo’ we’ll sadden’t.</q></cit>). In such cases the link wasn’t appearing anywhere. I’ve updated this now so that if no author or title is found then the link gets wrapped around the <geo> tag instead, and if there is no <geo> tag the link gets wrapped around the whole <cref>.
I also fixed a couple of advanced search issues that had been encountered with the new (and as yet not publicly available) site. There was a 404 error that was being caused by a colon in the title. The selected title gets added into the URL and colons are special characters in URLs, which was causing a problem. However, I updated the scripts to allow colons to appear and the search now works. It also turned out that the full-text searches were searching the contents of the <meta> tag in the entries, which is not something that we want. I knew there was some other reason why I stripped the <meta> section out of the XML and this is it. The contents of <meta> end up in the free-text search and are therefore both searchable and returned in the snippets. To fix this I updated my script that generates the free-text search data to remove <meta> before the free-text search is generated. This doesn’t remove it permanently, just in the context of the script executing. I regenerated the free-text data and it no longer includes <meta>, and I then passed this on to Arts IT Support who have the access rights to update the Solr collection. With this in place the advanced search no longer does anything with the <meta> section.
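A simplified sketch of that part of the free-text generation is below; the element handling is illustrative, and the real script works across the whole dataset rather than a single file.
<?php
// Drop the <meta> section from an entry's XML before building the free-text field for
// Solr, so its contents are neither searchable nor returned in snippets. The stored
// XML itself is left untouched; only the generated search data changes.
function remove_meta_section(string $entryXml): string
{
    $doc = new DOMDocument();
    $doc->loadXML($entryXml);
    // getElementsByTagName() returns a live list, so copy the nodes before removing them.
    foreach (iterator_to_array($doc->getElementsByTagName('meta')) as $meta) {
        $meta->parentNode->removeChild($meta);
    }
    return $doc->saveXML();
}
$entryXml = file_get_contents('entry.xml'); // file name is hypothetical
$freeText = trim(strip_tags(remove_meta_section($entryXml)));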
I continued to work on updates to the Anglo-Norman Dictionary for most of this week, looking at fixing the bad citation dates in entries that were causing the display of ‘earliest date’ to be incorrect. A number of the citation dates have a proper date in text form (e.g. s.xii/xiii) but have incorrect ‘post’ and ‘pre’ attributes (e.g. ‘00’ and ‘99’). The system uses these ‘post’ and ‘pre’ attributes for date searching and for deciding which is the earliest date for an entry, and if one of these bad dates was encountered it was considered to be the earliest date. Initially I thought there were only a few entries that had ended up with an incorrect earliest date, because I was searching the database for all earliest dates that were less than 1000. However, I then realised that the bulk of the entries with incorrect earliest dates had the earliest date field set to ‘null’, and in database queries ‘null’ is not considered less than 1000 but a separate thing entirely, so such entries were not being found. I managed to identify several hundred entries that needed their dates fixed and wrote a script to do so.
It was slightly more complicated than a simple ‘find and replace’ as the metadata about the entry needed to be regenerated too – e.g. the dates extracted from the citations that are used in the advanced search and the earliest date display for entries. I managed to batch correct several hundred entries using the script and also adapted it to look for other bad dates that needed fixing too.
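The NULL behaviour is worth spelling out, since it is an easy trap: in SQL a NULL earliest date is neither less than nor greater than 1000, so it has to be tested for explicitly. A simplified version of the kind of selection the fixing script starts from is below, with assumed table and column names rather than the dictionary’s actual schema.
<?php
// Find entries whose earliest date is missing or implausibly early so their citation
// dates can be regenerated. Table and column names are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=and;charset=utf8mb4', 'user', 'pass');
$stmt = $pdo->query(
    'SELECT entry_id, earliest_date FROM entries
      WHERE earliest_date IS NULL   -- NULL is never "< 1000", so test it explicitly
         OR earliest_date < 1000'
);
foreach ($stmt as $row) {
    // ... regenerate this entry's citation dates and earliest-date display here ...
}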
In addition, cogrefs now appear before variants and deviants, commentaries appear (as full text, not cut off), Xrefs at the bottom now have the ‘see also’ text above them as in the live site, editor initials now appear where they exist and numerals only appear where there is more than one sense in a POS.
Also this week I did some further work for the Dictionary of the Scots Language based on feedback after my upload of data from the DSL’s new editing system. There was a query about the ‘slug’ used for referencing an entry in a URL. When the new data is processed by the import script the ‘slug’ is generated from the first <url> entry in the XML. If this <url> begins ‘dost’ or ‘snd’ it means a headword is not present in the <url> and therefore the new system ID is taken as the new ‘slug’ instead. All <url> forms are also stored as alternative ‘slugs’ that can still be used to access the entry. I checked the new database and there are 3258 entries that have a ‘slug’ beginning with ‘dost’ or ‘snd’, i.e. they have the new ID as their ‘slug’ because they had an old ID as their first <url> in the XML. I checked a couple of these and they don’t seem to have the headword as a <url>, e.g. ‘beit’ (dost00052776) only has the old ID (twice) as URLs: <url>dost2543</url><url>dost2543</url>, ‘well-fired’ (snd00090568) only has the old ID (twice) as URLs: <url>sndns4098</url><url>sndns4098</url>. I’ve asked the editors what should be done about this.
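For clarity, the slug rule as it currently stands boils down to something like the following sketch; the xpath and variable names are illustrative rather than the import script’s actual code.
<?php
// Slug rule: use the first <url> in the entry's XML, but fall back to the new system
// ID when that <url> is empty or is just an old-style ID beginning 'dost' or 'snd'.
// All <url> forms are stored separately as alternative slugs in any case.
function choose_slug(SimpleXMLElement $entry, string $newId): string
{
    $urls = $entry->xpath('.//url');
    $firstUrl = isset($urls[0]) ? trim((string) $urls[0]) : '';
    if ($firstUrl === '' || preg_match('/^(dost|snd)/i', $firstUrl)) {
        return $newId; // no headword-based slug is available for this entry
    }
    return $firstUrl;
}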
Also this week I wrote a script to generate a flat CSV from the Historical Thesaurus’s relational database structure, joining the lexeme and category tables together and appending entries from the new ‘date’ table as additional columns as required. It took a little while to write the script and then a bit longer to run it, resulting in a 241MB CSV file.
I also gave some advice to Craig Lamont in Scottish Literature about a potential bid he’s putting together, and spoke to Luca about a project he’s been asked to write a DMP for. I also looked through some journals that Gerry Carruthers is hoping to host at Glasgow and gave him an estimate of the amount of time it would take to create a website based on the PDF contents of the old journal items.
It was a return to a full five-day week this week, after taking some days off to cover the Easter school holidays for the previous two weeks. The biggest task I tackled this week was to import the data from the Dictionary of the Scots Language’s new editing system into my online system. I’d received a sample of the data from the company responsible for the new editing system a couple of weeks ago, and we had agreed on a slightly updated structure after that. Last week I was sent the full dataset and I spent some time working with it this week. I set up a local version of the online system on my PC and tweaked the existing scripts I’d previously written to import the XML dataset generated by the old editing system. Thankfully the new XML was not massively different in structure to the old set, differing mostly in the addition of a few new attributes, such as ‘oldid’, which references the old ID of each entry, and ‘typeA’ and ‘typeB’, which contain numerical codes that denote which text should be displayed to note when the entry was published. With changes made to the database to store these attributes and updates to the import script to process them I was ready to go, and all 80,432 DOST and SND entries were successfully imported, including extracting all forms and URLs for use in the system.
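As a rough illustration of what the import now has to pick up, the sketch below reads the new attributes with SimpleXML; the placement of the attributes on the <entry> element, the export file name and the helper function are assumptions rather than the actual import code.
<?php
// Read the new attributes added by the editing system's export and pass them to the
// existing import routine. Attribute placement and helper names are assumptions.
$xml = simplexml_load_file('dsl_entries.xml'); // hypothetical export file
foreach ($xml->entry as $entry) {
    $oldId = (string) $entry['oldid'];  // old system ID, kept for cross-referencing
    $typeA = (int) $entry['typeA'];     // numeric code used to pick the 'about' text
    $typeB = (int) $entry['typeB'];
    // import_entry($entry, $oldId, $typeA, $typeB); // hypothetical call into the import script
}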
I had a conversation with the DSL team about whether my ‘browse order’ would still be required, as the entries now appear to be ordered nicely by their new IDs. Previously I ran a script to generate the dictionary order based on the alphanumeric characters in the headword and the ‘posnum’ that I generated based on the classification of parts of speech taken from a document written by Thomas Widmann when he worked for the DSL (e.g. all POS beginning ‘n.’ have a ‘posnum’ of 1, all POS beginning ‘ppl. adj.’ have a ‘posnum’ of 8). Although the new data is now nicely ordered by the new ID field I wanted to check whether I should still be generating and using my browse order columns or whether I should just order things by ID. I suggested that going forward it will not be possible to use the ID field as browse order, as whenever the editors add a new entry its ID will position it in the wrong place (unless the ID field is not static and is regenerated whenever a new entry is added). My assumption was correct and we agreed to continue using my generated browse order.
In a related matter my script extracts the headword of each entry from the XML and this is used in my system and also to generate the browse order. The headword is always taken to be the first <f> of type “form” within <meta> in the <entry>. However, I noticed that there are five entries that have no <f> of type “form” and are therefore missing a headword, and are appearing first in the ‘browseorder’ because of this. This is something that still needs to be addressed.
In our conversations, Ann Ferguson mentioned that my browse system wasn’t always getting the correct order where there were multiple identical headwords all within the same general part of speech. For example there are multiple noun ‘point’ entries in DOST – n. 1, n. 2 and n. 3. These were appearing in the ‘browse’ feature with n. 3 first. This is because (as per Thomas’s document) all entries with a POS starting with ‘n.’ are given a ‘posorder’ of 1. In cases such as ‘point’ where the headword is the same and there are several entries with a POS beginning ‘n.’ the order then depends on the ID, and ‘Point n.3’ has the lowest ID, so appears first. I therefore updated the script that generates the browse order so that in such cases entries are ordered alphabetically by POS instead.
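The resulting comparison is essentially a three-level sort, which can be sketched as follows; the field names and sample IDs are illustrative, not the real data.
<?php
// Browse-order comparison: headword first, then the part-of-speech group number
// ('posnum'), then the full POS string alphabetically, with the ID only as a tie-break.
$entries = [
    ['headword' => 'point', 'posnum' => 1, 'pos' => 'n. 3', 'id' => 101], // IDs are made up
    ['headword' => 'point', 'posnum' => 1, 'pos' => 'n. 1', 'id' => 305],
    ['headword' => 'point', 'posnum' => 1, 'pos' => 'n. 2', 'id' => 412],
];
usort($entries, function (array $a, array $b): int {
    return [$a['headword'], $a['posnum'], $a['pos'], $a['id']]
       <=> [$b['headword'], $b['posnum'], $b['pos'], $b['id']];
});
// After sorting, 'Point n. 1' comes first, then 'n. 2', then 'n. 3', regardless of ID.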
I also regenerated the data for the Solr full-text search, but I’ll need Arts IT Support to update this, and they haven’t got back to me yet. I then migrated all of the new data to the online server and also created a table for the ‘about’ text that will get displayed based on the ‘typeA’ and ‘typeB’ number in the entry. I then created a new version of the API that uses the new data and pulls in the necessary ‘about’ data. When I did this I noticed that some slugs (the identifier that will be used to reference an entry in a URL) were still coming out as old IDs because this is what is found in the <url> elements. So for example the entry ‘snd00087693’ had the slug ‘snds165’. After discussion we agreed that in such cases the slug should be the new ID, and I tweaked the import script and regenerated the data to make this the case. I then updated one of our test front-ends to use the new API, updating the XSLT to ensure that the <meta> tag that now appears in the XML is not displayed and updating bibliographical references and cross references to use the new ‘refid’ attribute. I also set up the entry page to display the ‘about’ text, although the actual placement and formatting of this text still needs to be decided upon. I then moved on to the bibliographical data, but this is going to take a bit longer to sort out, as the previous bib info was imported from a CSV.
Also this week I read through and gave feedback on a data management plan for a proposal Marc Alexander is involved with and created a new version of the DMP for the new metaphor proposal that Wendy Anderson is involved with. I also gave some advice to Gerry Carruthers about hosting some journal issues at Glasgow.
For the Books and Borrowing project I made some updates to the data of the 18th Century Borrowers pilot project, including fixing some issues with special characters, updating information relating to a few books and merging a couple of book records. I also continued to upload the page images of the Edinburgh registers, finishing the upload of 16 registers and then generating the page records for all of the pages in the content management system. I then started on the St Andrews registers.
I also participated in a Zoom call about GIS for the place-names of Iona project, where we discussed the sort of data and maps that would appear in the QGIS system and how this would relate to the online CMS, and also tweaked the Call for Papers page of the website.
Finally, I continued to make updates to the content management systems for the Comparative Kingship project, adding in Irish versions of the classifications and some of the labels, changing some parishes, adding in the languages that are needed for the Irish system and removing the unnecessary place-names that were imported from the GB1900 dataset. These are things like ‘F.P.’ for ‘footpath’. A total of 2,276 names, with their parish references, historical forms and links to the OS source were deleted by a little script I wrote for the purpose. I think I’m up to date with this project for the moment, so next week I intend to continue with the DSL bibliographical data import and to return to working on the Anglo-Norman Dictionary.
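For what it’s worth, the clean-up script mentioned above amounted to the pattern sketched below. The table and column names are entirely hypothetical and the list of GB1900 labels is illustrative (the post only mentions ‘F.P.’); the real script ran against the project’s CMS database.

```python
# Sketch of the kind of clean-up script described above. Table and column names
# are hypothetical and the GB1900 labels are illustrative only.
import sqlite3

def delete_gb1900_names(conn: sqlite3.Connection, labels: tuple) -> int:
    cur = conn.cursor()
    placeholders = ','.join('?' * len(labels))
    cur.execute(
        f"SELECT id FROM placenames WHERE source = 'GB1900' AND name IN ({placeholders})",
        labels,
    )
    ids = [(row[0],) for row in cur.fetchall()]
    # Remove dependent records first, then the place-names themselves.
    for table in ('placename_parishes', 'historical_forms', 'placename_sources'):
        cur.executemany(f"DELETE FROM {table} WHERE placename_id = ?", ids)
    cur.executemany("DELETE FROM placenames WHERE id = ?", ids)
    conn.commit()
    return len(ids)
```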
This was a four-day week due to Good Friday. I spent a couple of these days working on a new place-names project called Comparative Kingship that involves Aberdeen University. I had several email exchanges with members of the project team about how the website and content management systems for the project should be structured and set up the subdomain where everything will reside. This is a slightly different project as it will involve place-name surveys in Scotland and Ireland that will be recorded in separate systems. This is because slightly different data needs to be recorded for each survey, and Ireland has a different grid reference system to Scotland. For these reasons I’ll need to adapt my existing CMS that I’ve used on several other place-name projects, which will take a little time. I decided to take the opportunity to modernise the CMS whilst redeveloping it. I created the original version of the CMS back in 2016, with elements of the interface based on still older projects, and the interface now looks pretty dated and doesn’t work so well on touchscreens. I’m migrating the user interface to the Bootstrap user interface framework, which looks more modern and works a lot better on a variety of screen sizes. It is going to take some time to complete this migration, as I need to update all of the forms used in the CMS, but I made good progress this week and I’m probably about half-way through the process. After this I’ll still need to update the systems to reflect the differences in the Scottish and Irish data, which will probably take several more days, especially if I need to adapt the system of automatically generating latitude, longitude and altitude from a grid reference to work with Irish grid references.
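For the grid reference conversion, one possible route (not necessarily what the finished CMS will use) is pyproj, assuming the Irish data arrives as Irish Transverse Mercator eastings and northings; altitude would still need a separate elevation lookup, which this sketch doesn’t cover.

```python
# One possible approach to turning Irish grid references into latitude and
# longitude: pyproj, assuming Irish Transverse Mercator (ITM, EPSG:2157)
# eastings and northings. The older Irish Grid would be EPSG:29903 instead.
from pyproj import Transformer

itm_to_wgs84 = Transformer.from_crs("EPSG:2157", "EPSG:4326", always_xy=True)

def itm_to_latlon(easting: float, northing: float):
    lon, lat = itm_to_wgs84.transform(easting, northing)
    return lat, lon

# An example point in Ireland (easting, northing in metres)
print(itm_to_latlon(650000, 750000))
```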
I also continued with the development of the Dictionary Management System for the Anglo-Norman Dictionary, fixing some issues relating to how sense numbers are generated (but uncovering further issues that still need to be addressed) and fixing a bug whereby older ‘history’ entries were not getting associated with new versions of entries that were uploaded. I also created a simple XML preview facility, which allows the editor to paste their entry XML into a text area and have it rendered as it would appear in the live site. I also made a large change to how the ‘upload XML entries’ feature works. Previously editors could attach any number of individual XML files to the form (even thousands) and these would then get uploaded. However, I encountered an issue with the server rejecting so many file uploads in such a short period of time and blocking access to the PC that sent the files. To get around this I investigated allowing a ZIP file containing XML files to be uploaded instead. Upon upload my script would then extract the ZIP and process all of the XML files contained therein. It turns out that this approach worked very well – no more issues with the server rejecting files and the processing is much speedier as it all happens in a batch rather than the script being called each time a single file is uploaded. I tested the ZIP approach by zipping up all 3,179 XML files from the recent R data update and the ZIP file was uploaded and processed in a few seconds, with all entries making their way into the holding area. However, with this approach there is no feedback in the ‘Upload Log’ until the server-side script has finished processing all of the files in the ZIP, at which point all updates appear in the log at the same time, so there may be a wait of 20-30 seconds (if it’s a big ZIP file) before it looks like anything has happened. Despite this I’d say that with this update the DMS should now be able to handle full letter updates.
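In outline, the server-side part of the ZIP approach works like the following sketch (written in Python for illustration; the DMS itself may be implemented differently): unpack the uploaded ZIP and process every XML file it contains in one pass, which is also why the log only appears once the whole batch is done.

```python
# A minimal sketch of the batch approach described above: unpack the uploaded
# ZIP in memory and process every XML entry file it contains in a single pass.
import io
import zipfile
from lxml import etree

def process_zip(uploaded_bytes: bytes, log: list) -> None:
    with zipfile.ZipFile(io.BytesIO(uploaded_bytes)) as zf:
        for name in zf.namelist():
            if not name.lower().endswith('.xml'):
                continue
            try:
                entry = etree.fromstring(zf.read(name))
                # ... add missing attributes to 'entry', store it in the
                # holding area, etc.
                log.append(f"{name}: processed OK")
            except etree.XMLSyntaxError as err:
                log.append(f"{name}: invalid XML ({err})")
    # The log only becomes available once the whole batch has been processed,
    # which is why the Upload Log appears all at once after a short wait.
```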
Also this week I added a ‘name of the month’ feature to the homepage of the Iona place-names project (https://iona-placenames.glasgow.ac.uk/) and continued to process the register images for the Books and Borrowing project. I also spoke to Marc Alexander about Data Management Plans for a new project he’s involved with.
My son returned to school on Monday this week, marking an end to the home-schooling that began after the Christmas holidays. It’s quite a relief to no longer have to split my day between working and home-schooling after so long. This week I continued with some Data Management Plan related activities, completing a DMP for the metaphor project involving Duncan of Jordanstone College of Art and Design in Dundee and drafting a third version of the DMP for Kirsteen McCue’s proposal following a Zoom call with her on Wednesday.
I also spent some further time on the Books and Borrowing project, creating tilesets and page records for several new volumes. In fact, we ran out of space on the server. The project is digitising around 20,000 pages of library records from 1750-1830 and we’re approaching 5,000 pages so far. I’d originally suggested that we’d need about 60GB of server space for the images (3MB per image x 20,000). However, the JPEGs we’ve been receiving from the digitisation units have been generated at maximum quality / minimum compression and are around 9MB each, so my estimates were out. Dropping the JPEG quality setting down from 12 to 10 would result in 3MB files, so I could do this to save space if required. However, there is another issue. The tilesets I’m generating for each image, so that it can be zoomed and panned like a Google Map, are taking up as much as 18MB per image. So we may need a minimum of 540GB of space (possibly 600GB to be safe): 9MB x 20,000 for the JPEGs plus 18MB x 20,000 for the tilesets. This is an awful lot of space, and storing image tilesets wouldn’t actually be necessary these days if an IIIF server (https://iiif.io/about/) could be set up. IIIF is now well established as the best means of hosting images online and it would be hugely useful to use. Rather than generating and hosting thousands of tilesets at different zoom levels we could store just one image per page on the server, and the server would deliver the necessary subsection at the required zoom level based on the request from the client. The issue is that people in charge of servers don’t like having to support new software. I entered into discussions with Stirling’s IT people about the possibility of setting up an IIIF server, and these talks are currently ongoing, so in the meantime I still need to generate the tilesets.
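To illustrate why IIIF removes the need for tilesets: the viewer requests exactly the region and size it needs using the IIIF Image API URL pattern, so nothing has to be pre-generated. The base URL and image identifier below are hypothetical.

```python
# Illustration of the IIIF Image API URL pattern
# ({server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}):
# the viewer requests just the region and size it needs, so no tilesets have
# to be generated or stored in advance. Base URL and identifier are made up.
IIIF_BASE = "https://images.example.ac.uk/iiif"

def iiif_url(identifier: str, region: str = "full", size: str = "max",
             rotation: int = 0, quality: str = "default", fmt: str = "jpg") -> str:
    return f"{IIIF_BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A 1024x1024 pixel region from the top-left of a page image, scaled to 512px wide:
print(iiif_url("edinburgh-register-01-page-042", region="0,0,1024,1024", size="512,"))
```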
Also this week I discussed a couple of issues with the Thesaurus of Old English with Jane Roberts. A search was bringing back some word results but when loading the category browser no content was being displayed. Some investigations uncovered that these words were in subcategories of ’02.03.03.03.01’ but there was no main category with that number in the system. A subcategory needs a main category in order to display in the tree browser and as none was available nothing was displaying. Looking at the underlying database I discovered that while there was no ’02.03.03.03.01’ main category there were two ’02.03.03.03.01|01’ subcategories: ‘A native people’ and ‘Natives of a country’. I bumped the former up from subcategory to main category and the search results then worked.
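A simple sanity check along the following lines would catch any other subcategories whose main category is missing; the category numbers shown are just the examples from the case above, and in practice the numbers would be read from the category table in the database.

```python
# Sketch of a check for the issue described: find subcategory numbers (those
# containing '|') whose main category number is missing from the system, as
# these can never be reached through the tree browser.
catnums = {
    '02.03.03.03',
    '02.03.03.03.01|01',  # subcategory whose main category is absent
}

orphans = sorted(
    num for num in catnums
    if '|' in num and num.split('|')[0] not in catnums
)
for num in orphans:
    print(f"Subcategory {num} has no main category {num.split('|')[0]}")
# -> Subcategory 02.03.03.03.01|01 has no main category 02.03.03.03.01
```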
I spent the rest of the week continuing with the development of the Anglo-Norman Dictionary. I made the new bibliography pages live this week (https://anglo-norman.net/bibliography/), which also involved updating the ‘cited source’ popup in the entry page so that it displays all of the new information. For example, go to this page: https://anglo-norman.net/entry/abanduner and click on the ‘A-N Med’ link to see a record with multiple items in it. I also updated the advanced search for citations so that the ‘Citation siglum’ drop-down list uses the new data too.
After that I continued to update the Dictionary Management System. I updated the ‘View / Download Entry’ page so that the ‘Phase’ of the entry can be updated if necessary. In the ‘Phase’ section of the page all of the phases are now listed as radio buttons, with the entry’s phase checked. If you need to change the entry’s phase you can select a different radio button and press the ‘Update Phase’ button. I also added facilities to manage phase statements via the DMS. In the menu there’s now an ‘Add Phase’ button, through which you can add a new phase, and a ‘Browse Phases’ button which lists all of the active phases, the number of entries assigned to each, and an option to edit the phase statement. If there’s a phase statement that has no associated entries you’ll find an option to delete it here too.
I’m still working on the facilities to upload and manage XML entry files via the DMS. I’ve added in a new menu item labelled ‘Upload Entries’ which, when pressed, loads a page through which you can upload entry XML files. There’s a text box where you can supply the lead editor initials to be added to the batch of files you upload (any files that already have a ‘lead’ attribute will not be affected) and an option to select the phase statement that should be applied to the batch of files. Below this area is a section where you can either click to open a file browser and select files to upload or drag and drop files from Windows Explorer (or other file browser). When files are attached they will be processed, with the results shown in the ‘Update log’ section below the upload area. Uploaded files are kept entirely separate from the live dictionary until they’ve been reviewed and approved (I haven’t written these sections yet). The upload process will generate all of the missing attributes I mentioned last week – ‘lead’ initials, the various ID fields, POS, sense numbers etc. If any of these are present the system won’t overwrite them, so it should be able to handle various versions of files. The system does not validate the XML files – the editors will need to ensure that the XML is valid before it is uploaded. However, the ‘preview’ option (see below) will quickly let you know if your file is invalid as the entry won’t display properly. Note also that you can change the ‘lead’ and the phase statement between batches – you can drag and drop a set of files with one lead and statement selected, then change these and upload another batch. You can of course choose to upload a single file too.
When XML files are uploaded, the ‘update log’ will include links directly through to a preview of each entry, but you can also find all entries that have been uploaded but not yet published on the website in the ‘Holding Area’, which is linked to in the DMS menu. There are currently two test files in this. The holding area lists the information about the XML entries that have been uploaded but not yet published, such as the IDs, the slug, the phase statement etc. There is also an option to delete the holding entry. The last two columns in the table are links to any live entry. The first links to the entry as specified by the numerical ID in the XML filename, which will be present in the filename of all XML files exported via the DMS’s ‘Download Entry’ option; this is the ‘existing ID’ column in the table. The second linking column is based on the ‘slug’ of the holding entry (generated from the ‘lemma’ in the XML). The ‘slug’ is unique in the data, so if a holding entry has a link in this column it means it will overwrite that entry if it’s made live. For XML files exported via the DMS and then uploaded, both ‘live entry’ links should be the same, unless the editor has changed the lemma. For new entries both these columns should be blank.
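In sketch form, with hypothetical lookup structures and function names, the two ‘live entry’ columns could be derived roughly like this:

```python
# Sketch, with hypothetical lookup structures, of how the two 'live entry'
# columns could be derived: one match on the numeric ID taken from the XML
# filename, one on the slug generated from the lemma.
import re

def live_entry_links(filename: str, slug: str, live_by_id: dict, live_by_slug: dict):
    match = re.search(r'(\d+)', filename)
    by_id = live_by_id.get(int(match.group(1))) if match else None
    by_slug = live_by_slug.get(slug)
    # For an entry exported from the DMS and re-uploaded unchanged, both values
    # should point at the same live entry; for a brand new entry, both are None.
    return by_id, by_slug
```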
The ‘Review’ button opens up a preview of the uploaded holding entry in the interface of the live site. This allows the editors to proofread the new entry to ensure that the XML is valid and that everything looks right. You can return to the holding area from this page by pressing on the button in the left-hand column. Note that this is just a preview – it’s not ‘live’ and no-one else can see it.
There’s still a lot I need to do. I’ll be adding in an option to publish an entry in the holding area, at which point all of the data needed for searching will be generated and stored and the existing live entry (if there is one) will be moved to the ‘history’ table. I may also need to extract the earliest date information to display in the preview and in the holding area. This information is only extracted when the data for searching is generated, but I guess it would be good to see it in the holding area / preview too. I also need to add in a preview of cross reference entries as these don’t display yet. I should probably also add in an option to allow the editors to view / download the holding entry XML as they might want to check how the upload process has changed this. So still lots to tackle over the coming weeks.
It was another Data Management Plan heavy week this week. I created an initial version of a DMP for Kirsteen McCue’s project at the start of the week and then participated in a Zoom call with Kirsteen and other members of the proposed team on Thursday where the plan was discussed. I also continued to think through the technical aspects of the metaphor-related proposal involving Wendy and colleagues at Duncan of Jordanstone College of Art and Design in Dundee and reviewed another DMP that Katherine Forsyth in Celtic had asked me to look at.
Also this week I spent a bit of time working on the Books and Borrowing project, generating more page image tilesets and their corresponding pages for two more of the Edinburgh ledgers and adding an ‘Events’ page to the project website and giving more members of the project team permission to edit the site. I also had an email chat with Thomas Clancy about the Iona project and created a ‘Call for Papers’ page including submission form on the project website (it’s not live yet, though).
I spent the rest of my week continuing to work on the Anglo-Norman Dictionary. We received the excellent news this week that our AHRC application for funding to complete the remaining letters of the dictionary (and carry out more development work) was successful. This week I made some further tweaks to the new blog pages, adding in the first image in the blog post to the right of the blog snippet on the blog summary page. I also made the new blog pages live, and you can now access them here: https://anglo-norman.net/blog/.
I also made some updates to the bibliography system based on requests from the editors to separate out the display of links to the DEAF website from the actual URLs (previously just the URLs were displayed). I updated the database, the DMS and the new bibliography page to add in a new ‘DEAF link text’ field for both main source text records and items within source text records. I copied the contents of the DEAF field into this new field for all records, updated the DMS to add in the new fields when adding / editing sources, and updated the new bibliography page so that the text that gets displayed for the DEAF link uses the new field, whereas the actual link through to the DEAF website uses the original field.
The scripts I wrote when uploading the new ‘R’ dataset needed to make changes to the data to bring it into line with the data already in the system, as the ‘R’ data didn’t include some attributes that were necessary for the system to work with the XML files, namely:
- In the <main_entry> tag: the ‘lead’ attribute, which is used to display the editor’s initials in the front end (e.g. “gdw”), and the ‘id’ attribute, which, although not used to uniquely identify the entries in my new system, is still used in the XML for things like cross-references and therefore is required and must be unique.
- In the <sense> tag: the ‘n’ attribute, which increments from 1 within each part of speech and is used to identify senses in the front-end.
- In the <senseInfo> tag: the ID attribute, which is used in the citation and translation searches, and the POS attribute, which is used to generate the summary information at the top of each entry page.
- In the <attestation> tag: the ID attribute, which is used in the citation search.
We needed to decide how these will be handled in future – whether they will be manually added to the XML as the editors work on them or whether the upload script needs to add them in at the point of upload. We also needed to consider updates to existing entries. If an editor downloads an entry and then works on it (e.g. adding in a new sense or attestation) then the exported file will already include all of the above attributes, except for any new sections that are added. In such cases should the new sections have the attributes added manually, or do I need to ensure my script checks for the existence of the attributes and only adds the missing ones as required?
We decided that I’d set up the systems to automatically check for the existence of the attributes and add them in if they’re not already present. It will take more time to develop such a system but it will make it more robust and hopefully will result in fewer errors. I’ll also add an option to specify the ‘lead’ initials for the batch of files that are being uploaded, but this will not overwrite the ‘lead’ attribute for any XML files in the batch that already have the attribute specified.
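In outline, the ‘only fill in what is missing’ approach looks something like the sketch below (Python with lxml for illustration; the real DMS code and its ID-generation rules will differ, and the element and attribute names here are approximations of those listed above).

```python
# A sketch of the 'only fill in what is missing' rule: batch-level 'lead'
# initials and sequential sense numbers are added only where absent, and
# existing values are never overwritten.
from lxml import etree

def complete_attributes(xml_string: str, batch_lead: str) -> etree._Element:
    root = etree.fromstring(xml_string)  # assumed to be the <main_entry> element
    if not root.get('lead'):
        root.set('lead', batch_lead)  # never overwrites an existing 'lead'

    counters = {}  # sense numbering restarts at 1 for each part of speech
    for sense in root.iter('sense'):
        info = sense.find('senseInfo')
        pos = info.get('pos', '') if info is not None else ''
        counters[pos] = counters.get(pos, 0) + 1
        if not sense.get('n'):
            sense.set('n', str(counters[pos]))
    # Generating the unique IDs for <senseInfo> and <attestation> tags would
    # follow the same pattern: check first, add only if missing.
    return root
```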
I’ll hopefully get a chance to work on this next week. Thankfully this is the last week of home-schooling for us so I should have a bit more time from next week onwards.