
Month: May 2021
Week Beginning 17th May 2021
I spent a lot of this week continuing with the Anglo-Norman Dictionary, including making some changes to the proofreader feature I created recently. I tweaked the output so that there is now a space between siglum and ‘MS’, ‘edgloss’ now has brackets, and there is now a blank paragraph before the ‘summary’ section and before the ‘cognate refs’ section to split things up a bit. I also added some characters (~~) before and after the ‘summary’ section to help divide it from the rest of the entry, and added extra spaces before and after sense numbers, plus square brackets around them (because background styles, which give the round black circles, are not carried over into Word when the content is copied). I also added more spaces around the labels, added an extra line break before locutions and made the locution phrase appear in bold.
I also spent some time investigating some issues with the data. For example, a meaning was not getting displayed in the summary section of https://anglo-norman.net/entry/chaucer_3 because the part of speech labels didn’t quite match up (one was ‘subst.’, the other ‘sbst.’). I also updated the entry display so that the ‘form section’ at the top of an entry gets displayed even if there is no ‘cognate refs’ section. My code repositions the ‘formSection’ so it appears before ‘cognateRefs’, and as it was not finding this section it wasn’t repositioning the forms anywhere – instead they just disappeared. I updated the code so that the forms are only repositioned if the ‘cognateRefs’ section is present, and this has fixed the matter.
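In outline, the repositioning guard now works something like this (a minimal jQuery sketch; the selectors are illustrative rather than the actual class names):

```javascript
// Only reposition the forms when the cognate refs section actually exists;
// selectors here are hypothetical stand-ins for the real entry markup.
var formSection = $('.formSection');
var cognateRefs = $('.cognateRefs');
if (cognateRefs.length > 0) {
    // Move the forms so they appear immediately before the cognate refs
    formSection.insertBefore(cognateRefs);
}
// If there is no cognate refs section the forms are simply left where they
// are, rather than being detached and lost.
```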
I also responded to a request for data from a researcher at Humboldt-Universität zu Berlin who wanted information on entries that featured specific grammatical labels. As yet the advanced search does not include a part of speech search, but I could generate the necessary data from the underlying database. I also ran a few queries to update further batches of bad dates in the system.
With all of this out of the way I moved on to a more substantial task – creating a new ‘date builder’ feature for the Dictionary Management System. The old DMS featured such a tool, which allowed the editor to fill in some text boxes and have an XML form of the date (either text, manuscript or both) generated, ready to be copied and pasted into their XML editor. The old feature used a mixture of Perl scripts and JavaScript to generate the XML, running to several thousand lines of code, but I wanted to handle it all in JavaScript in a (hopefully) more succinct way.
My initial version allowed an editor to add Text and MS dates using the input boxes; pressing the ‘Generate XML’ button populates the ‘XML’ box and also displays the date as it would appear on the site. I amalgamated the ‘proof’ and ‘Build XML’ options from the old DMS as it seemed more useful to just do both at the same time. There is also a ‘clear’ button that does what you’d expect it to do and a ‘log’ that displays feedback about the date. For example, if the date doesn’t conform to the expected pattern (yyyy / yyyy-yyyy / yyyy-yy / yyyy-y), if one of the characters isn’t a number, or if the date after the dash is earlier than the date before it, a warning will be displayed here. The XML area is editable so if need be the content can be manually tweaked. There is also a ‘Copy XML’ button to copy the contents of the XML area to the clipboard.
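The validation behind the log works roughly along these lines (a simplified sketch; the function name, messages and the range-expansion rule are illustrative):

```javascript
// Check a numeric date against the expected patterns and return any warnings.
function checkDate(dateStr) {
    var warnings = [];
    // Accept yyyy, yyyy-yyyy, yyyy-yy or yyyy-y only
    var match = dateStr.match(/^(\d{4})(?:-(\d{4}|\d{1,2}))?$/);
    if (match === null) {
        warnings.push('Date does not match yyyy / yyyy-yyyy / yyyy-yy / yyyy-y');
        return warnings;
    }
    if (match[2]) {
        // A short second date inherits the leading digits of the first,
        // so '1300-50' is checked as 1300-1350 (an assumed expansion rule)
        var end = parseInt(match[1].substring(0, 4 - match[2].length) + match[2], 10);
        if (end < parseInt(match[1], 10)) {
            warnings.push('The date after the dash is earlier than the date before it');
        }
    }
    return warnings;
}
```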
What I didn’t realise was that non-numerical dates also need to be processed using the date builder, for example ‘s.xiii’, ‘s.xivex’ and ‘sxii/xiii’. I needed to update the date builder to handle seven different centuries, which can be joined in a range either by a dash or a slash, and 16 different suffixes, each of which changes how the numerical date should be generated from the century – all this in addition to the three prefixes ‘a’, ‘b’ and ‘c’ that also change the generated date. Getting this to work was all very complicated, but by the end of the week I had a working version, all of which took up less than 500 lines of JavaScript.
[Screenshot: the date builder in action]
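To give a flavour of the century handling, here is a much-simplified sketch; the suffix behaviour and the numeric ranges shown are assumptions for illustration, not the actual values the tool generates:

```javascript
// Map a Roman-numeral century to its starting year (three of the seven shown).
var centuryStarts = { 'xii': 1100, 'xiii': 1200, 'xiv': 1300 };

// Derive 'post' and 'pre' attributes from a century plus an optional suffix.
function centuryToDates(century, suffix) {
    var post = centuryStarts[century];
    var pre = post + 99;
    // A suffix narrows the range within the century, e.g. 'in' (early)
    // and 'ex' (late); the widths used here are illustrative only.
    if (suffix === 'in') { pre = post + 25; }
    if (suffix === 'ex') { post = pre - 25; }
    return { post: post, pre: pre };
}

// A slashed form such as 's.xii/xiii' spans two centuries: take the 'post'
// of the first century and the 'pre' of the second.
console.log(centuryToDates('xiv', 'ex')); // { post: 1374, pre: 1399 }
```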
Also this week I set up some new user accounts for the Books and Borrowing project, I gave Luca Guariento some feedback about an AHRC proposal, I had to deal with the server and database going down a few times and I added a new publication to the SCOSYA website.
I also updated the DSL test site so that cross references in entries don’t use IDs (as found in the XML) but use ‘slugs’ (as we use on the site). This required me to write a new API endpoint to return slugs from IDs and to update the JavaScript to find and replace cross reference IDs when an entry is loaded. I also spoke to Rhona about the launch of the new DSL website, which is possibly going to be pushed back a bit now.
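The find-and-replace works roughly as follows (a sketch; the endpoint path, selector and data attribute are hypothetical):

```javascript
// On entry load, swap each cross-reference's XML ID for the slug the new
// API endpoint returns for it.
$('a.crossRef').each(function () {
    var link = $(this);
    var xmlId = link.data('refid');
    $.getJSON('/api/slug/' + encodeURIComponent(xmlId), function (data) {
        link.attr('href', '/entry/' + data.slug);
    });
});
```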
Finally, I made some further tweaks to the Comparative Kingship content management systems for Scottish and Irish place-names. When I set up the two systems I’d forgotten to add the x-refs section into the form. The code was all there to handle them, but the section wasn’t appearing, so I updated both the Scotland and Ireland systems so that x-refs now appear. I’d also noticed that some of the autogenerated lists that appear when you type into boxes in the Ireland site (e.g. xrefs) were pointing to the Scotland database and therefore bringing back the wrong data, and I fixed this too.
I also added all of the sources from the Kirkcudbrightshire system to the Scotland CMS and replaced the Scotland elements database with the one from KCB as well, which required me to check the elements already associated with names to ensure they still point to the same data. Thankfully all of them did except for the newly added name ‘Rhynie’, whose ID ended up referencing an entirely different element in the KCB database, but I fixed this. I also fixed a bug with the name and element deletion code that was preventing things from getting deleted.
Week Beginning 10th May 2021
I continued to work on updates to the Anglo-Norman Dictionary for most of this week, looking at fixing the bad citation dates in entries that were causing the display of ‘earliest date’ to be incorrect. A number of the citation dates have a proper date in text form (e.g. s.xii/xiii) but have incorrect ‘post’ and ‘pre’ attributes (e.g. ‘00’ and ‘99’). The system uses these ‘post’ and ‘pre’ attributes for date searching and for deciding which is the earliest date for an entry, and if one of these bad dates was encountered it was considered to be the earliest date. Initially I thought there were only a few entries that had ended up with an incorrect earliest date, because I was searching the database for all earliest dates that were less than 1000. However, I then realised that the bulk of the entries with incorrect earliest dates had the earliest date field set to NULL, and in SQL a comparison such as ‘less than 1000’ is never true for NULL, so such entries were not being found. I managed to identify several hundred entries that needed their dates fixed and wrote a script to do so.
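The crux of the problem can be seen in the query condition (a sketch with assumed table and column names):

```javascript
// "earliest_date < 1000" is never true when earliest_date is NULL, so the
// null entries have to be matched explicitly with an IS NULL clause.
var findBadDates =
    'SELECT id FROM entries ' +
    'WHERE earliest_date < 1000 OR earliest_date IS NULL';
```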
It was slightly more complicated than a simple ‘find and replace’ as the metadata about the entry needed to be regenerated too – e.g. the dates extracted from the citations that are used in the advanced search and the earliest date display for entries. I managed to batch correct several hundred entries using the script and also adapted it to look for other bad dates that needed fixing too.
I also created a new feature for the Dictionary Management System: an entry proofreader. It allows an editor to upload a ZIP file containing XML entries and it then displays all of these in a similar manner to the live site, only with all entries on one long page. The editor can then select all of the text, copy it and paste it into Word, and the major formatting elements will be retained (bold, italic, superscript etc.). I tested the feature by zipping up 3,178 XML entries and although it took a few minutes to process, the page displayed properly and I was able to copy the text to Word (resulting in a 1,029-page Word file). After finishing the initial version of the script I had to tweak it a bit, as I had written the HTML and JavaScript with the expectation that there would be one dictionary item on the page, and some aspects were not working when there were multiple items. I also ensured that links to sources in entries work. In the actual dictionary they open a pop-up, which clearly isn’t going to work in Word, so instead I made the link go to the relevant item in the bibliography page (e.g. https://anglo-norman.net/bibliography/B#bib-Best). Links to other dictionaries, labels and other AND entries also all now work from Word.
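The link rewriting works along these lines (a sketch; the selector and data attribute are hypothetical):

```javascript
// Rather than opening the usual pop-up, point each source link at its
// anchor on the bibliography page so the links survive pasting into Word.
$('a.sourceLink').each(function () {
    var siglum = $(this).data('siglum'); // e.g. 'Best'
    $(this).attr('href', 'https://anglo-norman.net/bibliography/'
        + siglum.charAt(0).toUpperCase() + '#bib-' + siglum);
});
```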
In addition, cogrefs appear before variants and deviants, commentaries appear (as full text, not cut off), xrefs at the bottom now have the ‘see also’ text above them as in the live site, editor initials now appear where they exist and numerals only appear where there is more than one sense in a POS.
Also this week I did some further work for the Dictionary of the Scots Language based on feedback after my upload of data from the DSL’s new editing system. There was a query about the ‘slug’ used for referencing an entry in a URL. When the new data is processed by the import script the ‘slug’ is generated from the first <url> entry in the XML. If this <url> begins with ‘dost’ or ‘snd’ it means a headword is not present in the <url> and therefore the new system ID is taken as the new ‘slug’ instead. All <url> forms are also stored as alternative ‘slugs’ that can still be used to access the entry. I checked the new database and there are 3,258 entries that have a ‘slug’ beginning with ‘dost’ or ‘snd’, i.e. they have the new ID as their ‘slug’ because they had an old ID as their first <url> in the XML. I checked a couple of these and they don’t seem to have the headword as a <url> at all, e.g. ‘beit’ (dost00052776) only has the old ID (twice) as URLs: <url>dost2543</url><url>dost2543</url>, and ‘well-fired’ (snd00090568) likewise: <url>sndns4098</url><url>sndns4098</url>. I’ve asked the editors what should be done about this.
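The slug rule boils down to something like this (a sketch; the function name is illustrative):

```javascript
// The first <url> becomes the slug unless it is an old ID beginning 'dost'
// or 'snd', in which case the new system ID is used instead; every <url>
// form is stored as an alternative slug regardless.
function chooseSlug(firstUrl, newId) {
    if (/^(dost|snd)/.test(firstUrl)) {
        return newId;    // e.g. 'dost00052776' for 'beit'
    }
    return firstUrl;     // headword-based slug
}
```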
Also this week I wrote a script to generate a flat CSV from the Historical Thesaurus’s relational database structure, joining the lexeme and category tables together and appending entries from the new ‘date’ table as additional columns as required. It took a little while to write the script and then a bit longer to run it, resulting in a 241MB CSV file.
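In outline, the flattening works something like this (a sketch; the table and column names are assumptions, as the real Historical Thesaurus schema differs):

```javascript
// Join each lexeme to its category in one query; the rows from the new
// 'date' table are then fetched per lexeme and written out as additional
// CSV columns as required.
var flattenQuery =
    'SELECT lexeme.*, category.* ' +
    'FROM lexeme ' +
    'JOIN category ON lexeme.catid = category.id';
```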
I also gave some advice to Craig Lamont in Scottish Literature about a potential bid he’s putting together, and spoke to Luca about a project he’s been asked to write a DMP for. I also looked through some journals that Gerry Carruthers is hoping to host at Glasgow and gave him an estimate of the amount of time it would take to create a website based on the PDF contents of the old journal items.
Week Beginning 3rd May 2021
This was a four-day week due to May Day on Monday, and it was a week I spent almost exclusively on work for the Anglo-Norman Dictionary. I started off by tackling the issue of sense numbers displaying when there was only one main sense in a part of speech for an entry, which meant that a ‘1’ appeared next to the sense unnecessarily. This required some additional processing of the entry in JavaScript before display, which took a bit of time to implement, as I needed to iterate through the ‘entrySense’ elements using a jQuery ‘each’ loop and then look at the following ‘entrySense’ element to get its ‘n’ value. I hadn’t used an ‘each’ loop in this way before and didn’t know quite how to reference the following element, but finally figured out that you can use the iterator index with an array in combination with the element selector; in this case var nxt = $('.dictionaryPanel > .entrySense')[i+1]; grabs the next ‘entrySense’ element. With the code in place entries now display numbers in their nice round black circles in all places except for parts of speech that only have one sense, as you can see with the ‘v.refl.’ and ‘p.pr. as a.’ sections of this page: https://anglo-norman.net/entry/descendre.
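The full loop looks something like this (a simplified sketch; how the ‘n’ value is read from the markup is assumed):

```javascript
// Each main sense peeks at the following sense: an 'n' of 1 that is not
// followed by an 'n' of 2 means its part of speech has only one sense,
// so the number can be hidden.
$('.dictionaryPanel > .entrySense').each(function (i) {
    var nxt = $('.dictionaryPanel > .entrySense')[i + 1];
    var thisN = parseInt($(this).attr('n'), 10);
    var nextN = nxt ? parseInt($(nxt).attr('n'), 10) : null;
    if (thisN === 1 && nextN !== 2) {
        $(this).find('.senseNum').hide();
    }
});
```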
I then spent some time investigating why the part of speech in the <senseInfo> element of senses sometimes used underscores and other times used spaces. This discrepancy was messing up the numbering of senses, as this depends on the POS, with the number resetting to 1 when a new POS is encountered. If the POS is sometimes recorded as ‘p.p._as_a.’ (for example) and other times as ‘p.p. as a.’ then the code thinks these are different parts of speech and resets the counter to 1. I looked at the DTD, which sets the rules for creating or editing the XML files, and it uses the underscore form of POS. However, this rule only applies to the ‘type’ attribute of the <pos> element and not to the ‘pos’ attribute of the <senseInfo> element. After investigating, it turned out that the ‘pos’ attributes that the numbering system relies on are not manually added by the editors, but are added by my scripts at the point of upload. The reason I set up my script to add these is that the old system also added them automatically during the conversion of the editors’ XML into the XML published via the old Dictionary Management System. However, the old system also refactored the POS, replacing underscores with spaces, and thus stored two different formats of POS within the XML. My upload scripts didn’t do this but instead kept things consistent, which meant that when an entry was edited to add a new sense the new sense was added with the underscore form of POS while the existing senses still had the space form.
There were two possible ways I could fix this: I could either write a script that regenerates the <senseInfo> pos for every sense and subsense in every entry, replacing all existing ‘pos’ values with the value of the preceding <pos type=""> (i.e. removing all old space forms of POS and ensuring all POS references were consistent); or I could adapt my upload script so that the assignment of <senseInfo> pos treats both ‘underscore’ and ‘space’ versions as the same. I decided on the former approach and wrote a script to first identify and then update all of the dictionary entries.
The script goes through each entry and finds all that have a <senseInfo> pos containing a space; there are 2,538 such entries. I then adapted the script so that for each <senseInfo> in an entry all spaces are changed to underscores and the result is compared with the preceding <pos> type. I set the script to output content whenever there was a mismatch between the <senseInfo> pos and the <pos> type, because when set to update, the script uses the value from <pos> type so as to ensure consistency. The script identified 41 entries where there was a mismatch between <senseInfo> pos and the preceding <pos> type. These were often due to a question mark being added to the <senseInfo> pos, e.g. ‘a. as s. ?’ vs ‘a._as_s._’, but there were also some where the POS was completely different, e.g. ‘sbst. inf.’ and ‘v.n.’. I spoke to the editor Geert about this and it turned out that these were due to a locution being moved in the XML without having its pos value updated. Geert fixed these and I ran the update to bring all of the POS references into alignment.
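The core of the check is a simple normalise-and-compare (a sketch; the real script walks the XML of each entry):

```javascript
// Convert spaces to underscores and compare against the preceding <pos>
// type, which the update treats as the authoritative value.
function posMismatch(senseInfoPos, posType) {
    return senseInfoPos.replace(/ /g, '_') !== posType;
}

console.log(posMismatch('p.p. as a.', 'p.p._as_a.')); // false - consistent
console.log(posMismatch('sbst. inf.', 'v.n.'));       // true - flagged for checking
```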
My final AND task was to look into some issues regarding the variant and deviant section of the entry (where alternative forms of the headword are listed). Legiturs in this section were not getting displayed, plus there were several formatting issues that needed to be addressed, such as brackets not appearing in the right place and line breaks not working as they should. This was a very difficult task to tackle as there is so much variety to the structure of this section, and the XML is not laid out in the most logical of manners; for example, references are not added as part of a <variant> or <deviant> tag but are added after the corresponding tag as a sibling <varref> element. This really complicates navigating through the variants and deviants as there may be any number of varrefs at the same level. However, I managed to address the issues with this section, ensuring the legiturs appear, repositioning semi-colons outside of the <deviant> brackets, ensuring line breaks always occur when a new POS is encountered and don’t occur anywhere else, ensuring multiple occurrences of the same POS label don’t get displayed and fixing the issue with double closing brackets sometimes appearing. It’s likely that there will be other issues with this section, as the content and formatting are so varied, but for now all known issues are sorted.
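Navigating the sibling structure boils down to collecting everything up to the next non-varref element (a plain DOM-walking sketch, simplified from what the display logic actually does):

```javascript
// Gather the <varref> siblings that follow a <variant> or <deviant>
// element; because the references are siblings rather than children,
// each form has to walk forward until the run of varrefs ends.
function collectVarrefs(formEl) {
    var refs = [];
    var node = formEl.nextElementSibling;
    while (node && node.tagName.toLowerCase() === 'varref') {
        refs.push(node);
        node = node.nextElementSibling;
    }
    return refs;
}
```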
The only other project I worked on this week was the Iona place-names project, for which I helped the RA Sofia with the formatting of this month’s ‘name of the month’ feature (https://iona-placenames.glasgow.ac.uk/names-of-the-month/). Next week I’ll continue with the outstanding AND tasks, of which there are still several.
Week Beginning 26th April 2021
I continued with the import of new data for the Dictionary of the Scots Language this week. Raymond at Arts IT Support had set up a new collection and imported the full-text search data into the Solr server, and I tested this out via the new front-end I’d configured to work with the new data source. I then began working on the import of the bibliographical data, but noticed that the file exported from the DSL’s new editing system didn’t feature an attribute denoting which source dictionary each record is from. We need this as the bibliography search allows users to limit their search to DOST or SND. The new IDs all start with ‘bib’ no matter what the source is. I had thought I could use the ‘oldid’ to extract the source (db = DOST, sb = SND) but I realised there are also composite records where the ‘oldid’ is something like ‘a200’. In such cases I don’t think I have any data that I can use to distinguish between DOST and SND records. The person in charge of exporting the data from the new editing system very helpfully agreed to add a ‘source dictionary’ attribute to all bibliographical records and sent me an updated version of the XML file. Whilst working with the data I realised that all of the composite records are DOST records anyway, so I didn’t strictly need the ‘sourceDict’ attribute, but I think it’s better to have this explicitly as an attribute as differentiating between dictionaries is important.
I imported all of the bibliographical records into the online system, including the composite ones as these are linked to from dictionary entries and are therefore needed, even though their individual parts are also found separately in the data. However, I decided to exclude the composite records from the search facilities, otherwise we’d end up with duplicates in the search results. I updated the API to use the new bibliography tables and I updated the new front-end so that bibliographical searches use the new data. One thing that needs some further work is the display of individual bibliographies. These are now generated from the bibliography XML via an XSLT whereas previously they were generated from a variety of different fields in the database. The display doesn’t completely match up with the display on the live and Sienna versions of the bibliography pages and I’m not sure exactly how the editors would like entries to be displayed. I’ll need further input from them on this matter, but the import of data from the new editing system has now been completed successfully. I’d been documenting the process as I worked through it and I sent the documentation and all scripts I wrote to handle the workflow to the editors to be stored for future use.
I also worked on the Books and Borrowing project this week. I received the last of the digitised images of borrowing registers from Edinburgh (other than one register which needs conservation work), and I uploaded these to the project’s content management system, creating all of the necessary page records. We have a total of 9,992 page images as JPEG files from Edinburgh, totalling 105GB. Thank goodness we managed to set up an IIIF server for the image files rather than having to generate and store image tilesets for each of these page images. Also this week I uploaded the images for 14 borrowing registers from St Andrews and generated page records for each of these.
I had a further conversation with GIS expert Piet Gerrits for the Iona project and made a couple of tweaks to the Comparative Kingship content management systems, but other than that I spent the remainder of the week returning to the Anglo-Norman Dictionary, which I hadn’t worked on since before Easter. To start with I went back through old emails and documents and wrote a new ‘to do’ list containing all of the outstanding tasks for the project, some 20 items of varying size and intricacy. After some communication with the editors I began tackling some of the issues, beginning with the apparent disappearance of <note> tags from certain entries.
In the original editor’s XML (the XML as structured before being uploaded into the old DMS) there were ‘edGloss’ notes tagged as ‘<note type="edgloss" place="inline">’ that were migrated to <edGloss> elements during whatever processing happened within the old DMS. However, there were also occasionally notes tagged as ‘<note place="inline">’ that didn’t get transformed and remained tagged as this.
I’m not entirely sure how or where, but at some point during my processing of the data these ‘<note place="inline">’ notes have been lost. It’s very strange, as the new DMS import script is based entirely on the scripts I wrote to process the old DMS XML entries, yet when I tested the DMS import by uploading the old DMS XML version of ‘poer_1’ to the new DMS the ‘<note place="inline">’ tags were retained, while in the live entry for ‘poer_1’ the <note> text is missing.
I searched the database for all entries where the DMS XML as exported from the old DMS system contains the text ‘<note place="inline">’ and there are 323 entries, which I added to a spreadsheet and sent to the editors. It’s likely that the new XML for these entries will need to be manually corrected to reinstate the missing <note> elements. Some entries (as with ‘poer_1’) have several of these. I still have the old DMS XML for these, so it is at least possible to recover the missing tags. I wish I could identify exactly when and how the tags were removed, but that would quite likely require many hours of investigation; I had already spent a couple of hours trying to get to the bottom of the issue without success.
Moving on to a different issue, I changed the upload scripts so that the ‘n’ numbers are always fully regenerated automatically when a file is uploaded, as previously there were issues when a mixture of senses with and without ‘n’ numbers was included in an entry. This means that any existing ‘n’ values are replaced, so it’s no longer possible to manually set the ‘n’ value. Instead, ‘n’ values for senses within a POS will always increment from 1 depending on the order in which they appear in the file, with ‘n’ being reset to 1 whenever a new POS is encountered.
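The renumbering rule amounts to the following (a sketch; the data shape is illustrative):

```javascript
// Discard any existing 'n' values; number the senses in file order and
// restart at 1 whenever the part of speech changes.
function renumberSenses(senses) {
    var n = 0;
    var currentPos = null;
    senses.forEach(function (sense) {
        if (sense.pos !== currentPos) {
            currentPos = sense.pos;
            n = 0; // new POS: restart numbering
        }
        sense.n = ++n;
    });
    return senses;
}
```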
Main senses in locutions were not being assigned an ‘n’ on upload, and I changed this so that they are assigned an ‘n’ in exactly the same way as regular main senses. I tested this with the ‘descendre’ entry and it worked, although I encountered an issue. The final locution main sense (‘to descend to (by way of inheritance)’) had a POS of ‘sbst._inf.’ in its <senseInfo> whereas it should have been (based on the POS of the previous two senses) ‘sbst. inf.’. The script was therefore considering this to be a new POS and gave the sense an ‘n’ of 1. In my test file I updated the POS and re-uploaded the file, and the sense was assigned the correct ‘n’ value of 3, but we’ll need to investigate why a different form of POS was recorded for this sense.
I also updated the front-end so that locution main senses with an ‘n’ now have the ‘n’ displayed (e.g. https://anglo-norman.net/entry/descendre) and wrote a script that will automatically add missing ‘n’ attributes to all locution main senses in the system. I haven’t run this on the live database yet as I need further feedback from the editors before I do. As the week drew to a close I worked on a method to hide sense numbers in the front-end in cases where there is only one sense in a part of speech, but I didn’t manage to get this completed and will continue with it next week.