I continued with the import of new data for the Dictionary of the Scots Language this week. Raymond at Arts IT Support set up a new collection and imported the full-text search data into the Solr server, and I tested this out via the new front-end I’d configured to work with the new data source. I then began working on the import of the bibliographical data, but noticed that the file exported from the DSL’s new editing system didn’t feature an attribute denoting which source dictionary each record is from. We need this as the bibliography search allows users to limit their search to DOST or SND, and the new IDs all start with ‘bib’ no matter what the source is. I had thought I could use the ‘oldid’ to extract the source (db = DOST, sb = SND), but I realised there are also composite records where the ‘oldid’ is something like ‘a200’. In such cases I don’t think I have any data that I can use to distinguish between DOST and SND records. The person in charge of exporting the data from the new editing system very helpfully agreed to add a ‘source dictionary’ attribute to all bibliographical records and sent me an updated version of the XML file. Whilst working with the data I realised that all of the composite records are DOST records anyway, so I didn’t strictly need the ‘sourceDict’ attribute, but I think it’s better to have this explicitly as an attribute, as differentiating between dictionaries is important.
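The prefix check I had in mind before the explicit attribute arrived can be sketched in Python (the function name and the None fallback are illustrative, not the real import script):

```python
def source_from_oldid(oldid):
    """Guess the source dictionary from a legacy DSL ID prefix.

    'db...' IDs came from DOST and 'sb...' IDs from SND; composite
    records (e.g. 'a200') don't follow the pattern, which is why an
    explicit sourceDict attribute was needed in the export."""
    if oldid.startswith("db"):
        return "DOST"
    if oldid.startswith("sb"):
        return "SND"
    return None  # composite or unrecognised: rely on sourceDict
```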
I imported all of the bibliographical records into the online system, including the composite ones as these are linked to from dictionary entries and are therefore needed, even though their individual parts are also found separately in the data. However, I decided to exclude the composite records from the search facilities, otherwise we’d end up with duplicates in the search results. I updated the API to use the new bibliography tables and I updated the new front-end so that bibliographical searches use the new data. One thing that needs some further work is the display of individual bibliographies. These are now generated from the bibliography XML via an XSLT whereas previously they were generated from a variety of different fields in the database. The display doesn’t completely match up with the display on the live and Sienna versions of the bibliography pages and I’m not sure exactly how the editors would like entries to be displayed. I’ll need further input from them on this matter, but the import of data from the new editing system has now been completed successfully. I’d been documenting the process as I worked through it and I sent the documentation and all scripts I wrote to handle the workflow to the editors to be stored for future use.
I also worked on the Books and Borrowing project this week. I received the last of the digitised images of borrowing registers from Edinburgh (other than one register which needs conservation work), and I uploaded these to the project’s content management system, creating all of the necessary page records. We have a total of 9,992 page images as JPEG files from Edinburgh, totalling 105GB. Thank goodness we managed to set up an IIIF server for the image files rather than having to generate and store image tilesets for each of these page images. Also this week I uploaded the images for 14 borrowing registers from St Andrews and generated page records for each of these.
I had a further conversation with GIS expert Piet Gerrits for the Iona project and made a couple of tweaks to the Comparative Kingship content management systems, but other than that I spent the remainder of the week returning to the Anglo-Norman Dictionary, which I hadn’t worked on since before Easter. To start with I went back through old emails and documents and wrote a new ‘to do’ list containing all of the outstanding tasks for the project, some 20 items of varying degrees of size and intricacy. After some communication with the editors I began tackling some of the issues, beginning with the apparent disappearance of <note> tags from certain entries.
In the original editor’s XML (the XML as structured before being uploaded into the old DMS) there were ‘edGloss’ notes tagged as ‘<note type="edgloss" place="inline">’ that were migrated to <edGloss> elements during whatever processing happened with the old DMS. However, there were also occasionally notes tagged as ‘<note place="inline">’ that didn’t get transformed and remained tagged as such.
I’m not entirely sure how or where, but at some point during my processing of the data these ‘<note place="inline">’ notes have been lost. It’s very strange, as the new DMS import script is based entirely on the scripts I wrote to process the old DMS XML entries, but I tested the DMS import by uploading the old DMS XML version of ‘poer_1’ to the new DMS and the ‘<note place="inline">’ tags were retained, yet in the live entry for ‘poer_1’ the <note> text is missing.
I searched the database for all entries where the DMS XML as exported from the old DMS system contains the text ‘<note place="inline">’ and there are 323 such entries, which I added to a spreadsheet and sent to the editors. It’s likely that the new XML for these entries will need to be manually corrected to reinstate the missing <note> elements. Some entries (as with ‘poer_1’) have several of these. I still have the old DMS XML for these, so it is at least possible to recover the missing tags. I wish I could identify exactly when and how the tags were removed, but that would quite likely require many hours of investigation; I already spent a couple of hours trying to get to the bottom of the issue without success.
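The scan for affected entries amounts to a fixed-string search over the stored XML; a minimal sketch (the real search was a database query, and the entries-as-dict shape here is just for illustration):

```python
import re

def entries_with_inline_notes(entries):
    """Return the IDs of entries whose stored old-DMS XML still
    contains a plain <note place="inline"> tag, i.e. the ones at
    risk of having lost the note text in the new data.
    `entries` maps entry ID -> XML string; a simple pattern match
    is enough since the tag form is fixed."""
    pattern = re.compile(r'<note\s+place="inline">')
    return [eid for eid, xml in entries.items() if pattern.search(xml)]
```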
Moving on to a different issue, I changed the upload scripts so that the ‘n’ numbers are always fully regenerated automatically when a file is uploaded, as previously there were issues when a mixture of senses with and without ‘n’ numbers were included in an entry. This means that any existing ‘n’ values are replaced, so it’s no longer possible to manually set the ‘n’ value. Instead ‘n’ values for senses within a POS will always increment from 1 depending on the order they appear in the file, with ‘n’ being reset to 1 whenever a new POS is encountered.
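The renumbering rule can be sketched as follows (senses simplified to dicts with a ‘pos’ key; the real script works on the entry XML):

```python
def assign_sense_numbers(senses):
    """Regenerate 'n' values on upload: senses are numbered from 1
    within each part of speech, in the order they appear in the file,
    with the counter resetting whenever the POS changes. Any existing
    'n' values are overwritten, as described in the post."""
    current_pos = None
    n = 0
    for sense in senses:
        if sense["pos"] != current_pos:
            current_pos = sense["pos"]
            n = 0
        n += 1
        sense["n"] = n
    return senses
```

This is also why a sense with a subtly different POS string gets an ‘n’ of 1: the comparison treats it as a new part of speech.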
Main senses in locutions were not being assigned an ‘n’ on upload, and I changed this so that they are assigned an ‘n’ in exactly the same way as regular main senses. I tested this with the ‘descendre’ entry and it worked, although I encountered an issue. The final locution main sense (‘to descend to (by way of inheritance)’) had a POS of ‘sbst._inf.’ in its <senseInfo> whereas it should have been (based on the POS of the previous two senses) ‘sbst. inf.’. The script was therefore considering this to be a new POS and gave the sense an ‘n’ of 1. In my test file I updated the POS and re-uploaded the file, and the sense was assigned the correct ‘n’ value of 3, but we’ll need to investigate why a different form of POS was recorded for this sense.
I also updated the front-end so that locution main senses with an ‘n’ now have the ‘n’ displayed (e.g. https://anglo-norman.net/entry/descendre) and wrote a script that will automatically add missing ‘n’ attributes to all locution main senses in the system. I haven’t run this on the live database yet as I need further feedback from the editors before I do. As the week drew to a close I worked on a method to hide sense numbers in the front-end in cases where there is only one sense in a part of speech, but I didn’t manage to get this completed and will continue with it next week.
It was a return to a full five-day week this week, after taking some days off to cover the Easter school holidays for the previous two weeks. The biggest task I tackled this week was to import the data from the Dictionary of the Scots Language’s new editing system into my online system. I’d received a sample of the data from the company responsible for the new editing system a couple of weeks ago, and we had agreed on a slightly updated structure after that. Last week I was sent the full dataset and I spent some time working with it this week. I set up a local version of the online system on my PC and tweaked the existing scripts I’d previously written to import the XML dataset generated by the old editing system. Thankfully the new XML was not massively different in structure to the old set, differing mostly in the addition of a few new attributes, such as ‘oldid’, which references the old ID of each entry, and ‘typeA’ and ‘typeB’, which contain numerical codes denoting which text should be displayed to note when the entry was published. With changes made to the database to store these attributes and updates to the import script to process them I was ready to go, and all 80,432 DOST and SND entries were successfully imported, including extracting all forms and URLs for use in the system.
I had a conversation with the DSL team about whether my ‘browse order’ would still be required, as the entries now appear to be ordered nicely by their new IDs. Previously I ran a script to generate the dictionary order based on the alphanumeric characters in the headword and the ‘posnum’ that I generated based on the classification of parts of speech taken from a document written by Thomas Widmann when he worked for the DSL (e.g. all POS beginning ‘n.’ have a ‘posnum’ of 1, all POS beginning ‘ppl. adj.’ have a ‘posnum’ of 8). Although the new data is now nicely ordered by the new ID field I wanted to check whether I should still be generating and using my browse order columns or whether I should just order things by ID. I suggested that going forward it will not be possible to use the ID field as browse order, as whenever the editors add a new entry its ID will position it in the wrong place (unless the ID field is not static and is regenerated whenever a new entry is added). My assumption was correct and we agreed to continue using my generated browse order.
In a related matter, my script extracts the headword of each entry from the XML, and this is used in my system and also to generate the browse order. The headword is always taken to be the first <f> of type "form" within <meta> in the <entry>. However, I noticed that there are five entries that have no <f> of type "form" and are therefore missing a headword, and these are appearing first in the browse order because of this. This is something that still needs to be addressed.
In our conversations, Ann Ferguson mentioned that my browse system wasn’t always getting the correct order where there were multiple identical headwords all within the same general part of speech. For example, there are multiple noun ‘point’ entries in DOST – n. 1, n. 2 and n. 3. These were appearing in the ‘browse’ feature with n. 3 first. This is because (as per Thomas’s document) all entries with a POS starting with ‘n.’ are given a ‘posorder’ of 1. In cases such as ‘point’, where the headword is the same and there are several entries with a POS beginning ‘n.’, the order then depended on the ID, and ‘Point n. 3’ has the lowest ID, so appeared first. I therefore updated the script that generates the browse order so that in such cases entries are ordered alphabetically by POS instead.
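The fix amounts to adding the POS string as a final tie-breaker in the sort key; a minimal sketch (field names are illustrative, and the real ordering is stored as a precomputed browse-order column rather than sorted on the fly):

```python
def browse_sort_key(entry):
    """Sort key for the browse order: headword first, then the
    numeric posnum bucket (all 'n.' POS share bucket 1), then the
    full POS string, so that identical headwords in the same bucket
    ('Point n. 1', 'n. 2', 'n. 3') come out alphabetically by POS
    rather than by database ID."""
    return (entry["headword"].lower(), entry["posnum"], entry["pos"])
```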
I also regenerated the data for the Solr full-text search, but I’ll need Arts IT Support to update this, and they haven’t got back to me yet. I then migrated all of the new data to the online server and also created a table for the ‘about’ text that will get displayed based on the ‘typeA’ and ‘typeB’ numbers in the entry. I then created a new version of the API that uses the new data and pulls in the necessary ‘about’ data. When I did this I noticed that some slugs (the identifier that will be used to reference an entry in a URL) are still coming out as old IDs because this is what is found in the <url> elements. So for example the entry ‘snd00087693’ had the slug ‘snds165’. After discussion we agreed that in such cases the slug should be the new ID, and I tweaked the import script and regenerated the data to make this the case. I then updated one of our test front-ends to use the new API, updating the XSLT to ensure that the <meta> tag that now appears in the XML is not displayed, and updating bibliographical references and cross references to use the new ‘refid’ attribute. I also set up the entry page to display the ‘about’ text, although the actual placement and formatting of this text still needs to be decided upon. I then moved on to the bibliographical data, but this is going to take a bit longer to sort out, as the previous bibliographical information was imported from a CSV.
Also this week I read through and gave feedback on a data management plan for a proposal Marc Alexander is involved with and created a new version of the DMP for the new metaphor proposal that Wendy Anderson is involved with. I also gave some advice to Gerry Carruthers about hosting some journal issues at Glasgow.
For the Books and Borrowing project I made some updates to the data of the 18th Century Borrowers pilot project, including fixing some issues with special characters, updating information relating to a few books and merging a couple of book records. I also continued to upload the page images of the Edinburgh registers, finishing the upload of 16 registers and then generating the page records for all of the pages in the content management system. I then started on the St Andrews registers.
I also participated in a Zoom call about GIS for the place-names of Iona project, where we discussed the sort of data and maps that would appear in the QGIS system and how this would relate to the online CMS, and also tweaked the Call for Papers page of the website.
Finally, I continued to make updates to the content management systems for the Comparative Kingship project, adding in Irish versions of the classifications and some of the labels, changing some parishes, adding in the languages that are needed for the Irish system and removing the unnecessary place-names that were imported from the GB1900 dataset. These are things like ‘F.P.’ for ‘footpath’. A total of 2,276 names, with their parish references, historical forms and links to the OS source were deleted by a little script I wrote for the purpose. I think I’m up to date with this project for the moment, so next week I intend to continue with the DSL bibliographical data import and to return to working on the Anglo-Norman Dictionary.
This was a four-day week due to Good Friday. I spent a couple of these days working on a new place-names project called Comparative Kingship that involves Aberdeen University. I had several email exchanges with members of the project team about how the website and content management systems for the project should be structured and set up the subdomain where everything will reside. This is a slightly different project as it will involve place-name surveys in Scotland and Ireland that will be recorded in separate systems. This is because slightly different data needs to be recorded for each survey, and Ireland has a different grid reference system to Scotland. For these reasons I’ll need to adapt my existing CMS that I’ve used on several other place-name projects, which will take a little time. I decided to take the opportunity to modernise the CMS whilst redeveloping it. I created the original version of the CMS back in 2016, with elements of the interface based on older projects than this, and the interface now looks pretty dated and doesn’t work so well on touchscreens. I’m migrating the user interface to the Bootstrap user interface framework, which looks more modern and works a lot better on a variety of screen sizes. It is going to take some time to complete this migration, as I need to update all of the forms used in the CMS, but I made good progress this week and I’m probably about half-way through the process. After this I’ll still need to update the systems to reflect the differences in the Scottish and Irish data, which will probably take several more days, especially if I need to adapt the system of automatically generating latitude, longitude and altitude from a grid reference to work with Irish grid references.
I also continued with the development of the Dictionary Management System for the Anglo-Norman Dictionary, fixing some issues relating to how sense numbers are generated (but uncovering further issues that still need to be addressed) and fixing a bug whereby older ‘history’ entries were not getting associated with new versions of entries that were uploaded. I also created a simple XML preview facility, which allows the editor to paste their entry XML into a text area and for this to then be rendered as it would appear in the live site. I also made a large change to how the ‘upload XML entries’ feature works. Previously editors could attach any number of individual XML files to the form (even thousands) and these would then get uploaded. However, I encountered an issue with the server rejecting so many file uploads in such a short period of time and blocking access to the PC that sent the files. To get around this I investigated allowing a ZIP file containing XML files to be uploaded instead. Upon upload my script would then extract the ZIP and process all of the XML files contained therein. It turns out that this approach worked very well – no more issues with the server rejecting files and the processing is much speedier as it all happens in a batch rather than the script being called each time a single file is uploaded. I tested the ZIP approach by zipping up all 3,179 XML files from the recent R data update and the Zip file was uploaded and processed in a few seconds, with all entries making their way into the holding area. However, with this approach there is no feedback in the ‘Upload Log’ until the server-side script has finished processing all of the files in the ZIP, at which point all updates appear in the log at the same time, so there may be a wait of maybe 20-30 seconds (if it’s a big ZIP file) before it looks like anything has happened. Despite this I’d say that with this update the DMS should now be able to handle full letter updates.
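The server-side batch processing can be sketched like this (a Python illustration of the approach only; the real DMS script differs, and handle_entry stands in for the existing per-file upload and validation logic):

```python
import zipfile

def process_zip(zip_file, handle_entry):
    """Extract every XML file from an uploaded ZIP and pass its
    contents to the single-entry handler, returning how many entries
    were processed. Processing in one batch avoids the server
    rejecting thousands of rapid individual file uploads."""
    count = 0
    with zipfile.ZipFile(zip_file) as zf:
        for name in zf.namelist():
            if name.lower().endswith(".xml"):
                handle_entry(name, zf.read(name).decode("utf-8"))
                count += 1
    return count
```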
Also this week I added a ‘name of the month’ feature to the homepage of the Iona place-names project (https://iona-placenames.glasgow.ac.uk/) and continued to process the register images for the Books and Borrowing project. I also spoke to Marc Alexander about Data Management Plans for a new project he’s involved with.
I continued to develop the ‘Dictionary Management System’ for the Anglo-Norman Dictionary this week, following on with the work I began last week to allow the editors to drag and drop sets of entry XML files into the system. I updated the form to add in another option underneath the selection of phase statement called ‘Phase Statements for existing records’. Here the editor can choose whether to retain existing statements or replace them. If ‘retain’ is selected then any XML entries attached to the form that either have an existing entry ID in their filename or have a slug that matches an existing entry in the system will retain whatever phase statement the existing entry has, no matter what phase statement is selected in the form. The phase statement selected in the form will still be applied to any XML entries attached to the form that don’t have an existing entry in the system. Selecting ‘replace existing statements’ will ignore all phase statements of existing entries and will overwrite them with whatever phase statement is selected in the form. I also updated the system so that it extracts the earliest date for an entry at the point of upload. I added two new columns to the holding area (for earliest date and the date that is displayed for this) and have ensured that the display date appears on the ‘review’ page too. In addition, I added in an option to download the XML of an entry in the holding area, if it needs further work.
I ran a large-scale upload test, comprising around 3,200 XML files from the ‘R’ data, to see how the system would cope with this, but unfortunately I ran into difficulties with the server rejecting too many requests in a short space of time and only about 600 of the files made it through. I asked Arts IT Support to see whether the server limits can be removed for this script, but haven’t heard anything back yet. I ran into a similar issue when processing files for the Mull and Ulva place-names project in January last year and Raymond was able to update the whitelist for the Apache module mod_evasive that was blocking such uploads, and I’m hoping he’ll be able to do something similar this time. Alternatively, I’ll need to try and throttle the speed of uploads in the browser.
In the meantime, I continued with the scripts for publishing entries that had been uploaded to the holding area, using a test version of the site that I set up on my local PC to avoid messing up the live database. I updated the ‘holding area’ page quite significantly. At the top of the page is a box for publishing selected items, and beneath this is the table containing the holding items. Each row now features a checkbox, and there is an option above the table to select / deselect all rows on the page (so currently up to 200 entries can be published in one batch as 200 is the page limit). The ‘preview’ button has been replaced with an ‘eye’ icon but the preview page works in the same way as before. I was intending to add the ‘publish’ options to this page but I’ve moved this to the holding area page instead to allow multiple entries to be selected for publication at any one time.
Once all of the selected items are published there is one final task that the page performs, which is to completely regenerate the cross references data. This is something that unfortunately needs to be done after each batch (even if it’s only one record) because cross references rely on database IDs and when a new version of an existing entry is published it receives a new ID. This means any existing cross references to that item will no longer work. The publication log will state that the regeneration is taking place and then after about 30 seconds another statement will say it is complete. I tested this process on my local PC, publishing single items, a few items and entire pages (200 items) at a time and all seemed to be working fine so I then copied the new scripts to the server.
Also this week I continued with the processing of library registers for the Books and Borrowing project. These are coming in rather quickly now and I’m getting a bit of a backlog. This is because I have to download the image files, then process them to generate tilesets, and then upload all of the images and their tilesets to the server. It’s the tilesets that are the real sticking point, as these consist of thousands of small files. I’m only getting an upload speed of about 70KB/s and I’m having to upload many gigabytes of data. I did a test where I zipped up some of the images and uploaded this zip file instead and was getting a speed of around 900KB/s, and as it looks like I can get command-line access to the server I’m going to investigate whether zipping up the files, then uploading and unzipping them, will be a quicker process. I also had to spend some time sorting out connection issues to the server as the Stirling VPN wasn’t letting me connect. It turned out that they had switched to multi-factor authentication and I needed to set this up before I could continue.
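The zip-then-upload idea is straightforward to script; a minimal sketch of the bundling step (directory layout and function name are my own):

```python
import os
import zipfile

def zip_tree(root, zip_path):
    """Bundle a tileset directory (thousands of tiny files) into one
    ZIP so it can be sent in a single transfer and unzipped on the
    server, avoiding the per-file overhead that throttles uploads.
    Paths inside the archive are kept relative to `root`."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _dirs, files in os.walk(root):
            for fname in files:
                full = os.path.join(dirpath, fname)
                zf.write(full, os.path.relpath(full, root))
```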
Also this week I wrote a summary of the work I’ve done so far for the Place-names of Iona project for a newsletter they’re putting together, spoke to people about the new ‘Comparative Kingship’ place-names project I’m going to be involved with, spoke to the Scots Language Policy people about setting up a mailing list for the project (it turns out that the University has software to handle this, available here: https://www.gla.ac.uk/myglasgow/it/emaillists/) and fixed an issue relating to the display of citations that have multiple dates for the DSL.
It was another Data Management Plan heavy week this week. I created an initial version of a DMP for Kirsteen McCue’s project at the start of the week and then participated in a Zoom call with Kirsteen and other members of the proposed team on Thursday where the plan was discussed. I also continued to think through the technical aspects of the metaphor-related proposal involving Wendy and colleagues at Duncan Jordanstone College of Art and Design at Dundee and reviewed another DMP that Katherine Forsyth in Celtic had asked me to look at.
Also this week I spent a bit of time working on the Books and Borrowing project, generating more page image tilesets and their corresponding pages for two more of the Edinburgh ledgers and adding an ‘Events’ page to the project website and giving more members of the project team permission to edit the site. I also had an email chat with Thomas Clancy about the Iona project and created a ‘Call for Papers’ page including submission form on the project website (it’s not live yet, though).
I spent the rest of my week continuing to work on the Anglo-Norman Dictionary. We received the excellent news this week that our AHRC application for funding to complete the remaining letters of the dictionary (and carry out more development work) was successful. This week I made some further tweaks to the new blog pages, adding in the first image in the blog post to the right of the blog snippet on the blog summary page. I also made the new blog pages live, and you can now access them here: https://anglo-norman.net/blog/.
I also made some updates to the bibliography system based on requests from the editors to separate out the display of links to the DEAF website from the actual URLs (previously just the URLs were displayed). I updated the database, the DMS and the new bibliography page to add in a new ‘DEAF link text’ field for both main source text records and items within source text records. I copied the contents of the DEAF field into this new field for all records, I updated the DMS to add in the new fields when adding / editing sources and I updated the new bibliography page so that the text that gets displayed for the DEAF link uses the new field, whereas the actual link through to the DEAF website uses the original field.
The scripts I wrote when uploading the new ‘R’ dataset needed to make changes to the data to bring it into line with the data already in the system, as the ‘R’ data didn’t include some attributes that are necessary for the system to work with the XML files, namely:
– In the <main_entry> tag: the ‘lead’ attribute, which is used to display the editor’s initials in the front end (e.g. “gdw”), and the ‘id’ attribute, which, although not used to uniquely identify the entries in my new system, is still used in the XML for things like cross-references and therefore is required and must be unique.
– In the <sense> tag: the ‘n’ attribute, which increments from 1 within each part of speech and is used to identify senses in the front-end.
– In the <senseInfo> tag: the ID attribute, which is used in the citation and translation searches, and the POS attribute, which is used to generate the summary information at the top of each entry page.
– In the <attestation> tag: the ID attribute, which is used in the citation search.
We needed to decide how these will be handled in future – whether they will be manually added to the XML as the editors work on them or whether the upload script needs to add them in at the point of upload. We also needed to consider updates to existing entries. If an editor downloads an entry and then works on it (e.g. adding in a new sense or attestation) then the exported file will already include all of the above attributes, except for any new sections that are added. In such cases should the new sections have the attributes added manually, or do I need to ensure my script checks for the existence of the attributes and only adds the missing ones as required?
We decided that I’d set up the systems to automatically check for the existence of the attributes and add them in if they’re not already present. It will take more time to develop such a system but it will make it more robust and hopefully will result in fewer errors. I’ll also add an option to specify the ‘lead’ initials for the batch of files that are being uploaded, but this will not overwrite the ‘lead’ attribute for any XML files in the batch that already have the attribute specified.
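The check-and-fill approach can be sketched as follows. This is a simplified illustration, not the real upload script: it only handles the ‘lead’ and sense ‘n’ attributes, and the ‘n’ numbering here doesn’t reset per part of speech as the real system requires:

```python
import xml.etree.ElementTree as ET

def ensure_attributes(xml_string, lead="gdw"):
    """Fill in attributes the system needs if an exported entry lacks
    them: 'lead' on <main_entry> (only when absent, so a batch-level
    setting never overwrites a file's own value) and 'n' on each
    <sense> that doesn't already have one. Existing attributes are
    left untouched, as agreed with the editors."""
    root = ET.fromstring(xml_string)
    if root.tag == "main_entry" and "lead" not in root.attrib:
        root.set("lead", lead)
    for n, sense in enumerate(root.iter("sense"), start=1):
        if "n" not in sense.attrib:
            sense.set("n", str(n))
    return ET.tostring(root, encoding="unicode")
```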
I’ll hopefully get a chance to work on this next week. Thankfully this is the last week of home-schooling for us so I should have a bit more time from next week onwards.
I had a couple of Zoom meetings this week, the first on Monday with the Historical Thesaurus team and members of the Oxford English Dictionary’s team to discuss how our two datasets will be aligned and updated in future. It was an interesting meeting, but there’s still a lot of uncertainty regarding how the datasets can be tracked and connected as future updates are made, at least some of which will probably only become apparent when we get new data to integrate.
My second Zoom meeting was on Tuesday with the Place-Names of Iona project to discuss how we will be working with the QGIS package that team members will be using to access some of the archaeological data and Lidar maps, and also to discuss the issue of 10 digit grid references and the potential change from the old OSGB-36 means of generating latitude and longitude from grid references to the new WGS84 method. It was a productive meeting and we decided that we would switch over to WGS84 and I would update the CMS to incorporate the new library for generating latitude and longitude from grid references.
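For context, the first step of generating latitude and longitude from a grid reference is expanding the reference into full metre easting/northing; the subsequent OSGB36 or WGS84 reprojection is done by a projection library and isn’t shown here. A sketch of that first step (function name is my own):

```python
def gridref_to_easting_northing(gridref):
    """Expand an OS grid reference (e.g. 'NM 28 24', near Iona) into
    full easting/northing in metres. The two grid letters select a
    100km square; the digits are split evenly between easting and
    northing and padded to metre precision."""
    ref = gridref.replace(" ", "").upper()
    l1, l2 = ord(ref[0]) - ord("A"), ord(ref[1]) - ord("A")
    if l1 > 7:
        l1 -= 1  # the letter I is skipped in the grid
    if l2 > 7:
        l2 -= 1
    e100 = ((l1 - 2) % 5) * 5 + (l2 % 5)     # 100km square easting
    n100 = (19 - (l1 // 5) * 5) - (l2 // 5)  # 100km square northing
    digits = ref[2:]
    half = len(digits) // 2
    easting = int(digits[:half].ljust(5, "0"))
    northing = int(digits[half:].ljust(5, "0"))
    return e100 * 100000 + easting, n100 * 100000 + northing
```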
Also this week I continued to work on the Books and Borrowing project, generating image tilesets for the scans of several volumes of ledgers from Edinburgh University Library and writing scripts to generate pages in the Content Management System, creating ‘next’ and ‘previous’ links as required and associating the relevant images. I also had an email correspondence about some of the querying methods we will develop for the data, such as collocation information.
I also gave some feedback on a data management plan for a project I’m involved with, had a chat with Wendy Anderson about a possible future project she’s trying to set up and spent some time making updates to the underlying data of the Interactive Map of Burns Suppers that launched last month. I didn’t have the time to do a huge amount of work on the Anglo-Norman Dictionary this week, but I still managed to migrate some of the project’s old blog posts to our new site over the course of the week.
Finally, I made some updates to the bibliography system for the Dictionary of the Scots Language, updating the new system so it works in a similar manner to the live site. I added ‘Author’ and ‘Title’ to the drop-down items when searching for both to help differentiate them and a search for an item when the user ignores the drop-down options and manually submits the search now works as it does in the live site. I also fixed the issue with selecting ‘Montgomerie, Norah & William’ resulting in a 404 error. This was caused by the ampersand. There were some issues with other non-alphanumeric characters that I’ve fixed too, including slashes and apostrophes.
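The underlying fix for the ampersand (and slashes and apostrophes) is percent-encoding the value before it goes into the URL; a minimal sketch, with an illustrative path rather than the live DSL route:

```python
from urllib.parse import quote

def bib_search_url(author):
    """Build a bibliography search URL for an author string,
    percent-encoding characters like '&', '/' and apostrophes that
    would otherwise break URL routing (the cause of the
    'Montgomerie, Norah & William' 404)."""
    return "/bibliography/author/" + quote(author, safe="")
```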
I spent quite a bit of time this week continuing to work on the Anglo-Norman Dictionary, creating a new ‘bibliography’ page that will replace the existing ‘source texts’ page and uses the new source text management scripts that I added to the new content management system recently. This required rather a lot of updates as I needed to update the API to use the new source texts table and also to incorporate source text items as required, which took some time. I then created the new ‘bibliography’ page which uses the new source text data. There is new introductory text and each item features the new fields as requested by the editors. ‘Dean’ references always appear, the title and author are in bold and ‘details’ and ‘notes’ appear when present. If a source text has one or more items these are listed in numeric order, in a slightly smaller font and indented. Brackets for page numbers are added in. I also had to change the way the source texts were ordered as previously the list was ordered by the ‘slug’ but with the updates to the data it sometimes happens that the ‘slug’ doesn’t begin with the same letter as the siglum text and this was messing up the order and the alphabetical buttons. Now the list is ordered by the siglum text stripped of any tags and all seems to be working fine. I will still need to update the links from dictionary items to the bibliography when the new page goes live, and update the search facilities too, but I’ll leave this until we’re ready to launch the new page.
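The new ordering amounts to sorting on the siglum text with its markup stripped; a minimal sketch (the real list is ordered in the database query, and the regex tag-stripping here is illustrative):

```python
import re

def siglum_sort_key(siglum_html):
    """Sort key for the bibliography: the visible siglum text with
    any tags removed and case folded, so that marked-up sigla sort
    under the right letter rather than wherever their slug happened
    to start."""
    return re.sub(r"<[^>]+>", "", siglum_html).strip().lower()
```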
I also had to change the way items within bibliographical entries were ordered. These were previously ordered on the ‘numeral’ field, which contained a Roman numeral. I’d written a bit of a hack to ensure that these were ordered correctly up to 20, but it turns out that there are some entries with more than 60 items, and some of them have non-standard numerals, such as ‘IXa’. I decided that it would be too complicated to use the ‘numeral’ field for ordering as the contents are likely to be too inconsistent for a computer to automatically order successfully. I therefore created a new ‘itemorder’ column in the database that holds a numerical value that decides the order of the items. I wrote a little script that populates this field for the items already in the system and for any bibliographical entry with 20 or fewer items the order should be correct without manual intervention. For the handful of entries with more than 20 items the editors will have to manually update the order. I updated the DMS so that the new ‘item order’ field appears when you add or edit items, and this will need to be used for each item to rearrange the items into the order they should be in. The new bibliography page uses the new itemorder field so updates are reflected on this page.
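To illustrate why the 'numeral' field was abandoned for ordering: well-formed Roman numerals parse mechanically, but non-standard values like 'IXa' don't, which is why a plain integer 'itemorder' column is the safer choice. This is an illustrative Python sketch, not the actual population script, which pre-fills the order and leaves unparseable values for manual correction:

```python
import re

ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100}

def roman_to_int(numeral):
    """Convert a well-formed Roman numeral to an integer; return None for
    non-standard values such as 'IXa' so they can be flagged for manual review."""
    if not re.fullmatch(r'[IVXLC]+', numeral):
        return None
    total = 0
    # subtractive notation: a smaller value before a larger one is negative
    for ch, nxt in zip(numeral, numeral[1:] + ' '):
        val = ROMAN[ch]
        total += -val if nxt != ' ' and ROMAN[nxt] > val else val
    return total

assert roman_to_int('XIV') == 14
assert roman_to_int('IXa') is None  # flagged for the editors to order by hand
```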
I also needed to update the system to correctly process multiple DEAF links, which I’d forgotten to do previously, made some changes to the ordering of items (e.g. so that entries with a number appear before entries with the same text but without a number) and added in an option to hide certain fields by adding a special character into the field. Also for the AND I updated the XML of an entry and continued to migrate blog posts from the old blog to our new system.
Beneath the XML section you can view all of the information that is extracted from the XML and used in the system for search and display purposes: forms, parts of speech, cross references, labels, citations and translations. This is to enable the editors to check that the data extracted and used by the system is correct. I could possibly add in options for you to edit this data, but any edits made would then be overwritten the next time an XML file is uploaded for the entry, so I'm not sure how useful this would be. I think it would be better to limit the editing of this information to a new XML file upload only.
However, we may want to make some of the information in this page directly editable, specifically some of the fields in the first table on the page. The editors may want to change the lemma or homonym number, or the slug or entry order. Similarly the editors may want to manually override the earliest date for the entry (although this would then be overwritten when a new XML version is uploaded) or change the ‘phase’ information.
The scripts to upload a new XML entry are going to take some time to get working, but at least for now you can view and download entries as required. Here’s a screenshot of how the facility works:
Also this week I dealt with a few queries about the Symposium for Seventeenth-Century Scottish Literature, which was taking place online this week and for which I had set up the website. I also spoke to Arts IT Support about getting a test server set up for the Historical Thesaurus. I spent a bit of time working for the Books and Borrowing project, processing images for a ledger from Edinburgh University Library, uploading these to the server and generating page records and links between pages for the ledger. I also gave some advice to the Scots Language Policy RA about how to use the University’s VPN, spoke to Jennifer Smith about her SCOSYA follow-on funding proposal and had a chat with Thomas Clancy about how we will use GIS systems in the Iona project.
I had two Zoom calls this week, the first on Wednesday with Kirsteen McCue to discuss a new, small project to publish a selection of musical settings to Burns poems and the second on Friday with Joanna Kopaczyk and her RA on the Scots Language Policy project to give a tutorial on how to use WordPress.
The majority of my week was divided between the Anglo-Norman Dictionary, the Dictionary of the Scots Language and the Place-names of Iona projects. For the AND I made a few tweaks to the static content of the site and migrated some more blog posts across to the new site (these are not live yet). I also added commentaries to more than 260 entries, which took some time to test. I also worked on the DTD file that the editors reference from their XML editing software to ensure that all of the elements and attributes found within commentaries are ‘allowed’ in the XML. Without doing this it was possible to add the tags in, but this would give errors in the editing software. I also batch updated all of the entries on the site to reference the new DTD and exported all of the files, zipped them up and sent them to the editors so they can work on them as required. I also began to think about migrating the TextBase from the old site to the new one, and managed to source the XML files that comprise this system. It looks like it may be quite tricky to work with these as there are more than 70 book-length XML files to deal with and so far I have not managed to locate the XSLT that was originally used to process these files.
For the DSL I completed work on the new bibliography search pages that use the new ‘V4’ data. These pages allow the authors and titles of bibliographical items to be searched, results to be viewed and individual items to be displayed. I also made some minor tweaks to the live site and had a discussion with Ann Fergusson about transferring the project’s data to the people who have set up a new editing interface for them, something I’m hoping to be able to tackle next week.
For the Place-names of Iona project I had a discussion about implementing a new ‘work of the month’ feature and spent quite a bit of time investigating using 10-digit OS grid references in the project’s CMS. The team need to use up to 10-digit grid references to get 1m accuracy for individual monuments, but the library I use in the CMS to automatically generate latitude and longitude from the supplied grid reference will only work with a 6-digit NGR. The automatically generated latitude and longitude are then automatically passed to Google Maps to ascertain the altitude of the location and all of this information is stored in the database whenever a new place-name record is created or an existing record is edited.
As the library currently in use will only accept 6-digit NGRs I had to do a bit of research into alternative libraries, and I managed to find one that can accept NGRs of 2,4,6,8 or 10 digits. Information about the library, including text boxes where you can enter an NGR and see the results can be found here: http://www.movable-type.co.uk/scripts/latlong-os-gridref.html along with an awful lot of description about the calculations and some pretty scary looking formulae.
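For anyone curious, the letter-pair arithmetic that turns an NGR of 2–10 digits into metre-precision easting/northing is fairly compact. The Python sketch below follows the scheme described on the movable-type page; the subsequent conversion to latitude/longitude (an inverse Transverse Mercator projection plus a datum transformation) is the hard part that the library itself handles:

```python
def parse_ngr(ref):
    """Parse a 2-10 digit OS National Grid reference (e.g. 'NM 28250 24000')
    into metre-precision (easting, northing)."""
    ref = ref.replace(' ', '').upper()
    l1, l2 = ord(ref[0]) - 65, ord(ref[1]) - 65
    # the letter 'I' is skipped in the grid lettering
    if l1 > 7: l1 -= 1
    if l2 > 7: l2 -= 1
    e100 = ((l1 - 2) % 5) * 5 + (l2 % 5)      # 100km-square easting index
    n100 = (19 - (l1 // 5) * 5) - (l2 // 5)   # 100km-square northing index
    digits = ref[2:]
    if len(digits) % 2 or len(digits) > 10:
        raise ValueError('NGR must have 2, 4, 6, 8 or 10 digits')
    half = len(digits) // 2
    e = int(digits[:half].ljust(5, '0'))  # pad to metre resolution (SW corner)
    n = int(digits[half:].ljust(5, '0'))
    return e100 * 100000 + e, n100 * 100000 + n

# A 10-digit reference on Iona resolves to 1m precision:
assert parse_ngr('NM 28250 24000') == (128250, 724000)
```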
This does mean the person filling out the form can see the generated latitude and longitude and also tweak it if required before submitting the form, which is a potentially useful thing. I may even be able to add a Google Map to the form so you can see (and possibly tweak) the point before submitting the form, but I’ll need to look into this further. I also still need to work on the format of the latitude and longitude as the new library generates them with a compass point (e.g. 6.420848° W) and we need to store them as a purely decimal value (e.g. -6.420848) with ‘W’ and ‘S’ figures being negatives.
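The compass-point conversion is simple enough to sketch (a Python illustration; in the CMS itself this will live in the form-handling code):

```python
import re

def to_signed_decimal(value):
    """Convert the library's '6.420848° W' style output to a signed decimal,
    with 'W' and 'S' values becoming negative."""
    m = re.match(r'([\d.]+)°?\s*([NSEW])', value.strip())
    number, point = float(m.group(1)), m.group(2)
    return -number if point in 'SW' else number

assert to_signed_decimal('6.420848° W') == -6.420848
```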
However, whilst researching this I discovered a potentially worrying thing that needs discussion with the wider team. The way the Ordnance Survey generates latitude and longitude from their grid references was changed in 2014. Information about this can be found in the page linked to above in the ‘Latitude/longitudes require a datum’ section. Previously the OS used ‘OSGB-36’ to generate latitude and longitude, but in 2014 this was changed to ‘WGS84’, which is used by GPS systems. The difference in the latitude / longitude figures generated by the two systems is about 100 metres, which is quite a lot if you’re intending to pinpoint individual monuments.
The new library has facilities to generate latitude and longitude using either the new or old systems, but defaults to the new system. I’ve checked the output of the library we currently use and it uses the old ‘OSGB-36’ system. This means all of the place-names in the system so far (and all those for the previous projects) have latitudes and longitudes generated using the now obsolete (since 2014) system. To give an example of the difference, the place-name A’ Mhachair in the CMS has this location: https://www.google.com/maps/place/56%C2%B019’33.2%22N+6%C2%B025’11.4%22Wfirstname.lastname@example.org,-6.422022,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325885!4d-6.419828 and with the newer ‘WGS84’ system it would have this location: https://www.google.com/maps/place/56%C2%B019’32.7%22N+6%C2%B025’15.1%22Wemail@example.com,-6.4230367,582m/data=!3m2!1e3!4b1!4m5!3m4!1s0x0:0x0!8m2!3d56.325744!4d-6.420848
So what we need to decide before I replace the old library with the new one in the CMS is whether we switch to using ‘WGS84’ or we keep using ‘OSGB-36’. As I say, this will need further discussion before I implement any changes.
Also this week I responded to a query from Cris Sarg of the Medical Humanities Network project, spoke to Fraser Dallachy about future updates to the HT’s data from the OED, made some tweaks to the structure of the SCOSYA website for Jennifer Smith, added a plugin to the Editing Burns site for Craig Lamont and had a chat with the Books and Borrowing people about cleaning the authors data, importing the Craigston data and how to deal with a lot of borrowers that were excluded from the Selkirk data that I previously imported.
Next week I’ll be on holiday from Monday to Wednesday to cover the school half term.
This was a four-day week for me as I had an optician's appointment on Tuesday, and as my optician is over the other side of the city (handy for work, not so handy for working from home) I decided I'd just take the day off. I spent most of Monday working on the interactive map of Burns Suppers I'm developing with Paul Malgrati in Scottish Literature. This week I needed to update my interface to incorporate the large number of filters that Paul wants added to the map. In doing so I had to change the way the existing 'period' filter works to fit in with the new filter options, as previously the filter was applied as soon as a period button was pressed. Now you need to press the 'apply filters' button to see the results, which allows you to select multiple options without everything reloading as you work. There is also a 'reset filters' button which turns everything back on. Currently the 'period', 'type of host' and 'toasts' filters are in place, all as series of checkboxes that allow you to combine any selections you want. Within a section (e.g. type of host) the selections are joined with an OR (e.g. 'Burns Society' OR 'Church'), while sections are joined with an AND (e.g. '2019-2020' AND ('Burns Society' OR 'Church')). For 'toasts' a location may have multiple toasts and the location will be returned if any of the selected toasts are associated with it; e.g. if a location has 'Address to a Haggis' but not 'Loyal Toast' it will still be returned if you've selected both of these toasts. I also made the introductory panel much wider, so we can accommodate some text and leave more room for the filters. Even with this it is going to be a bit tricky to squeeze all of the filters in, but I'll see what I can do next week.
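The filter logic described above (OR within a section, AND between sections) can be sketched as follows. This is an illustrative Python version with hypothetical field names; the actual map front-end builds the equivalent logic in JavaScript:

```python
def matches(location, selections):
    """A location passes if, for every filter section with selections,
    at least one selected value matches: OR within a section, AND between."""
    return all(
        any(v in location.get(section, []) for v in values)
        for section, values in selections.items() if values
    )

loc = {'period': ['2019-2020'], 'host': ['Burns Society'],
       'toasts': ['Address to a Haggis']}
# OR within 'host' and within 'toasts'; AND between the two sections:
assert matches(loc, {'host': ['Burns Society', 'Church'],
                     'toasts': ['Address to a Haggis', 'Loyal Toast']})
assert not matches(loc, {'period': ['pre-1900']})
```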
I then turned my attention to the Dictionary of the Scots Language. Ann had noticed last week that the full-text searches were 'accent sensitive' – i.e. entries containing accented characters would only be returned if your search term also contained the accented characters. This is not what we wanted and I spent a bit of time investigating how to make Solr ignore accents. Although I found a few articles dealing with this issue they were all for earlier versions of Solr and the process didn't seem all that easy to set up, especially as I don't have direct access to the server that Solr resides on, so tweaking settings is not an easy process. I decided instead to just strip out accents from the source data before it was ingested into Solr, which was a much more straightforward process and fitted in with another task I needed to tackle: regenerating the DSL data from the old editing system, deleting the duplicate child entries and setting up a new version of the API to use this updated dataset. This process took a while to go through, but I got there in the end and we now have a new Solr collection set up for the new data, with the accents and the duplicate child entries removed. I then updated one of our test front-ends to connect to this dataset so people can test it. However, whilst checking this I realised that stripping out the accents has introduced some unforeseen issues. The full-text search is still 'accent sensitive'; it's just that I've replaced all accented characters with their non-accented equivalents, which means an accented search will no longer find anything. We'll therefore need to ensure that any accents in a submitted search string are also stripped out. In addition, stripping accents out of the Solr text means that accents no longer appear in the snippets in the search results, so the snippets don't fully reflect the actual content of the entries. This may be a larger issue and will need further discussion.
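Stripping accents is a one-liner with Unicode normalisation, and the important point noted above is that the same folding must be applied to submitted search strings as to the indexed data. (This is roughly what Solr's own ASCIIFoldingFilterFactory does server-side, which is the configuration route I decided against.) A Python sketch:

```python
import unicodedata

def strip_accents(text):
    """Fold accented characters to their ASCII base characters by decomposing
    to NFD form and dropping the combining marks. Apply to source data before
    ingest AND to every search string, or accented queries will match nothing."""
    return ''.join(c for c in unicodedata.normalize('NFD', text)
                   if not unicodedata.combining(c))

assert strip_accents('café') == 'cafe'
```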
I also wrote a little script to generate a list of entry IDs that feature an <su> inside a <cref> excluding those that feature </geo><su> or </geo> <su>. These entries will need to be manually updated to ensure the <su> tags don’t get displayed.
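The check itself boils down to a couple of regular expressions. A Python sketch of the idea (the entry data structure and function name here are mine, not the actual script's):

```python
import re

def flag_su_in_cref(entries):
    """Return IDs of entries with an <su> inside a <cref>, ignoring <su> tags
    immediately preceded by </geo> (with or without an intervening space)."""
    flagged = []
    for entry_id, xml in entries.items():
        for cref in re.findall(r'<cref\b.*?</cref>', xml, re.S):
            # remove the allowed </geo><su> and </geo> <su> cases, then test
            remainder = re.sub(r'</geo>\s?<su>', '', cref)
            if '<su>' in remainder:
                flagged.append(entry_id)
                break
    return flagged

assert flag_su_in_cref({'a1': '<cref><su>1</su></cref>'}) == ['a1']
```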
I spent the remainder of the week continuing with the redevelopment of the Anglo-Norman Dictionary site. My 51-item ‘to do’ list that I compiled last week has now shot up to 65 items, but thankfully during this week I managed to tick 20 of the items off. I have now imported all of the corrected data that the editors were working on, including not just the new ‘R’ records but corrections to other letters too. I also ensured that the earliest dates for entries now appear in the ‘search results’ and ‘log’ tabs in the left-hand panel, and these dates also appear in the fixed header that gets displayed when you scroll down long entries. I also added the date to the title in the main page.
I added content to the ‘cite this entry’ popup, which now contains the same citation options as found on the Historical Thesaurus site, and I added help text to the ‘browse’, ‘search results’ and ‘entry log’ pop-ups. I made the ‘clear search’ button right aligned and removed the part of speech tooltips as parts of speech appear all over the place and I thought it would confuse things if they had tooltips in the search results but not elsewhere. I also added a ‘Try an advanced search’ button at the top of the quick search results page.
I set up WordPress accounts for the editors and added the IP addresses used by the Aberystwyth VPN to our whitelist to ensure that the editors will be able to access the WordPress part of the site and I added redirects that will work from old site URLs once the new site goes live. I also put redirects in for all static pages that I could find too. We also now have the old site accessible via a subdomain so we should be able to continue to access the old site when we switch the main domain over to the new site.
I figured out why the entry for 'ipocrite' was appearing in all bold letters. This was caused by an empty XML tag and I updated the XSLT to ensure these don't get displayed. I updated Variant/Deviant forms in the search results so they just have the title 'Forms' now, and I believe I've figured out why the sticky side panel wasn't sticking and have fixed it.
I updated my script that generates citation date text and regenerated all of the earliest date text to include the prefixes and suffixes, and investigated the issue of commentary boxes appearing multiple times. I checked all of the entries and there were about five like this, so I manually fixed them. I also ensured that when loading the advanced search after a quick search the 'headword' tab is now selected and the quick search text appears in the headword search box, and I updated the label search so that label descriptions appear in a tooltip.
Also this week I participated in the weekly Iona Placenames Zoom call and made some tweaks to the content management systems for both Iona and Mull to add in researcher notes fields (English and Gaelic) for the place-name elements. I also had a chat with Wendy Anderson about making some tweaks to the Mapping Metaphor website for REF and spoke to Carole Hough about her old Cogtop site that is going to expire soon.
I spent two days each working on updates to the Dictionary of the Scots Language and the redevelopment of the Anglo-Norman Dictionary this week, with the remaining day spent on tasks for a few other projects. For the DSL I fixed an issue with the way a selected item was remembered from the search results. If you performed a search, navigated to a page of results that wasn't the first, and then clicked on one of the results to view an entry, a 'current entry' variable was set. This was used when loading the results from the 'return to results' button on the entry page, to ensure the page of results featuring the entry you had been looking at would be displayed. However, if you then clicked the 'refine search' button to return to the advanced search page this 'current entry' value was retained, meaning that when you performed a new search the results would load at whatever page the 'current entry' was located on, if it appeared in the new results. As the 'current entry' would not necessarily be in the new results set, the issue only cropped up every now and then. Thankfully, having identified the issue it was easy to fix: whenever the search form loads as a result of a 'refine search' selection the 'current entry' variable is now cleared. It is still retained and used when you return to the search results from an entry page.
I also had a discussion with the editors about removing duplicate child entries from the data and grabbing an updated dataset from the old editing system one last time. Removing duplicate child entries will mean deleting several thousand entries and I was reluctant to do so on the test systems we already have in case anything goes wrong, so I've decided to set up a new test version, which will be available via a new version of the API and accessible via one of the existing test front-ends I've created. I'll be tackling this next week.
For the AND I focussed on importing more than 4,500 new or updated entries into the dictionary. In order to do this I needed to look at the new data and figure out how it differed structurally from the existing data, as previously the editors’ XML was passed through an online system that made changes to it before it was published on the old website. I discovered that there were five main differences:
- <main_entry> does not have an ID, or a 'lead' or a 'unikey' attribute. I'll need to add in the 'lead' attribute as this is what controls the editor's initials on the entry page, and I decided to add in an ID as a means of uniquely identifying the XML, although there is already a new unique ID for each entry in the database. 'unikey' doesn't seem to be used anywhere so I decided not to do anything about it.
- <sense> and <subsense> do not have IDs or ‘n’ numbers. The latter I already set up a script to generate so I could reuse that. The former aren’t used anywhere as each sense also has a <senseInfo> with another ID associated with it.
- <senseInfo> does not have IDs, ‘seq’ or ‘pos’. IDs here are important as they are used to identify senses / subsenses for the searches and I needed to develop a way of adding these in here. ‘seq’ does not appear to be used but ‘pos’ is. It’s used to generate the different part of speech sections between senses and in the ‘summary’ and this will need to be added in.
- <attestation> does not have IDs and these are needed as they are recorded in the translation data.
- <dateInfo> has an ID but the existing data from the DMS does not have IDs for <dateInfo>. I decided to retain these but they won’t be used anywhere.
I wrote a script that processed the XML to add in entry IDs, editor initials, sense numbers, senseInfo IDs, parts of speech and attestation IDs. This took a while to implement and test but it seemed to work successfully. After that I needed to work on a script that would import the data into my online system, which included regenerating all of the data used for search purposes, such as extracting cross references, forms, labels, citations and their dates, translations and earliest dates for entries. All seemed to work well and I made the new data available via the front-end for the editors to test out.
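In outline, that pre-processing pass looks something like the sketch below. This is a simplified Python illustration of the approach (element names follow the structure described above, but the entry ID and 'lead' values are hypothetical, and the real script also generates 'pos' values, sense numbers per part of speech and citation-date data):

```python
import xml.etree.ElementTree as ET

def add_ids(xml_string, entry_id, lead):
    """Add an ID and editor initials to <main_entry>, number the senses,
    and assign sequential IDs to <senseInfo> and <attestation> elements."""
    root = ET.fromstring(xml_string)
    root.set('id', str(entry_id))
    root.set('lead', lead)
    for n, sense in enumerate(root.iter('sense'), start=1):
        sense.set('n', str(n))
    next_id = 1
    for el in root.iter():
        if el.tag in ('senseInfo', 'attestation'):
            el.set('id', f'{entry_id}.{next_id}')  # IDs drive sense searches
            next_id += 1
    return ET.tostring(root, encoding='unicode')

xml = '<main_entry><sense><senseInfo/><attestation/></sense></main_entry>'
out = add_ids(xml, 42, 'AB')
assert 'lead="AB"' in out and 'n="1"' in out and 'id="42.2"' in out
```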
I also asked the editors about how to track different versions of data so we know which entries are new or updated as a result of the recent update. It turned out that there are six different statements that need to be displayed underneath entries depending on when the entries were published so I spent a bit of time applying these to entries and updating the entry page to display the notices.
After that I made a couple of tweaks to the new data (e.g. links to the MED were sometimes missing some information needed for the links to work) and discussed adding in commentaries with Geert. I then went through all of the emails to and from the editors in order to compile a big list of items that I still needed to tackle before we can launch the site. It’s a list that totals some 51 items, so I’m going to have my work cut out for me, especially as it is vital that the new site launches before Christmas.
The other projects I worked on this week included the interactive map of Burns Suppers for Paul Malgrati in Scottish Literature. Last week I'd managed to import the core fields from his gigantic spreadsheet and this week I made some corrections to the data and created the structure to hold all of the filters that are in the data.
I wrote a script that goes through all of the records in the spreadsheet and stores the filters in the database when required. Note that a filter is only stored for a record when it’s not set to ‘NA’, which cuts down on the clutter. There are a total of 24,046 filters now stored in the system. The next stage will be to update the front-end to add in the options to select any combination of filter and to build the queries necessary to process the selection and return the relevant locations, which is something I aim to tackle next week.
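The NA-skipping pass can be sketched like so (a Python illustration using a hypothetical CSV export and column names; the real script works against the spreadsheet and writes the triples to the database):

```python
import csv, io

def extract_filters(csv_text, filter_columns):
    """Yield (row_id, column, value) triples from the spreadsheet export,
    skipping cells set to 'NA' so no empty filters clutter the database."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        for col in filter_columns:
            value = (row.get(col) or 'NA').strip()
            if value != 'NA':
                yield row['id'], col, value

sample = "id,host,toast\n1,Burns Society,NA\n2,NA,Loyal Toast\n"
rows = list(extract_filters(sample, ['host', 'toast']))
assert rows == [('1', 'host', 'Burns Society'), ('2', 'toast', 'Loyal Toast')]
```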
Also this week I participated in the weekly Zoom call for the Iona place-names project and I updated the Iona CMS to strip out all of the non-Iona names and gave the team access to the project’s Content Management system. The project website also went live this week, although as of yet there is still not much content on it. It can be found here, though: https://iona-placenames.glasgow.ac.uk/
Also this week I helped to export some data for a member of the Berwickshire place-names project team, I responded to a query from Gerry McKeever about the St. Andrews data in the Books and Borrowing project’s database and I fixed an issue with Rob Maslen’s City of Lost Books site, which had somehow managed to lose its nice header font.