Week 19 of Lockdown, and it was a short week for me as the Monday was the Glasgow Fair holiday. I spent a couple of days this week continuing to add features to the content management system for the Books and Borrowing project. I have now implemented the ‘normalised occupations’ part of the CMS. Originally occupations were just going to be a set of keywords, allowing one or more keywords to be associated with a borrower. However, we have been liaising with another project that has already produced a list of occupations and we have agreed to share their list. This is slightly different as it is hierarchical, with a top-level ‘parent’ containing multiple main occupations, e.g. ‘Religion and Clergy’ features ‘Bishop’. However, for our project we needed a third hierarchical level to differentiate types of minister/priest, so I’ve had to add this in too. I’ve achieved this by means of a parent occupation ID in the database, which is ‘null’ for top-level occupations and contains the ID of the parent category for all other occupations.
I completed work on the page to browse occupations, arranging the hierarchical occupations in a nested structure that features a count of the number of borrowers associated with the occupation to the right of the occupation name. These are all currently zero, but once some associations are made the numbers will go up and you’ll be able to click on the count to bring up a list of all associated borrowers, with links through to each borrower. If an occupation has any child occupations a ‘+’ icon appears beside it. Press on this to view the child occupations, which also have counts. The counts for ‘parent’ occupations tally up all of the totals for the child occupations, and clicking on one of these counts will display all borrowers assigned to all child occupations. If an occupation is empty there is a ‘delete’ button beside it. As the list of occupations is going to be fairly fixed I didn’t add in an ‘edit’ facility – if an occupation needs editing I can do it directly through the database, or it can be deleted and a new version created. Here’s a screenshot showing some of the occupations in the ‘browse’ page:
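The parent-ID scheme and the rolled-up counts can be sketched in a few lines of Python. This is a minimal illustration with made-up rows (the ‘Parish Minister’ entry and the counts are invented for the example); the real CMS works against the database:

```python
from collections import defaultdict

# Hypothetical flat rows: (id, name, parent_id, borrower_count).
# parent_id is None for top-level occupations, mirroring the 'null'
# parent occupation ID in the database.
rows = [
    (1, "Religion and Clergy", None, 0),
    (2, "Bishop", 1, 3),
    (3, "Minister", 1, 0),
    (4, "Parish Minister", 3, 2),
]

def build_children(rows):
    """Index the flat list by parent ID to get the nested structure."""
    children = defaultdict(list)
    for row in rows:
        children[row[2]].append(row)
    return children

def rolled_up_count(children, occ):
    """A parent's count tallies its own borrowers plus all descendants'."""
    return occ[3] + sum(rolled_up_count(children, child)
                        for child in children.get(occ[0], []))

children = build_children(rows)
print(rolled_up_count(children, rows[0]))  # 0 + 3 + (0 + 2) = 5
```

The same recursion handles both the two-level and three-level branches, which is the advantage of the single parent-ID column over fixed ‘category’ and ‘subcategory’ fields.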
I also created facilities to add new occupations. You can enter an occupation name and optionally specify a parent occupation from a drop-down list. Doing so will add the new occupation as a child of the selected category, either at the second level if a top level parent is selected (e.g. ‘Agriculture’) or at the third level if a second level parent is selected (e.g. ‘Farmer’). If you don’t include a parent the occupation will become a new top-level grouping. I used this feature to upload all of the occupations, and it worked very well.
I then updated the ‘Borrowers’ tab in the ‘Browse Libraries’ page to add ‘Normalised Occupation’ to the list of columns in the table. The ‘Add’ and ‘Edit’ borrower facilities also now feature ‘Normalised Occupation’, which replicates the nested structure from the ‘browse occupations’ page, but with checkboxes beside each occupation. You can select any number of occupations for a borrower and when you press the ‘Upload’ or ‘Edit’ button your choice will be saved. Deselecting all ticked checkboxes will clear all occupations for the borrower. If you edit a borrower who has one or more occupations selected, then in addition to the relevant checkboxes being ticked, the occupations with their full hierarchies also appear above the list of occupations, so you can easily see what is already selected. I also updated the ‘Add’ and ‘Edit’ borrowing record pages so that whenever a borrower appears in the forms the normalised occupations feature also appears.
I also added in the option to view page images. Currently the only ledgers that have page images are the three Glasgow ones, but more will be added in due course. When viewing a page in a ledger that includes a page image you will see the ‘Page Image’ button above the table of records. Press on this and a new browser tab will open. It includes a link through to the full-size image of the page if you want to open this in your browser or download it to open in a graphics package. It also features the ‘zoom and pan’ interface that allows you to look at the image in the same manner as you’d look at a Google Map. You can also view this full screen by pressing on the button in the top right of the image.
Also this week I made further tweaks to the script I’d written to update lexeme start and end dates in the Historical Thesaurus based on citation dates in the OED. I’d sent a sample output of 10,000 rows to Fraser last week and he got back to me with some suggestions and observations. I’m going to have to rerun the script I wrote to extract the more than 3 million citation dates from the OED as some of the data needs to be processed differently, but as this script will take several days to run and I’m on holiday next week this isn’t something I can do right now. However, I managed to change the way the date matching script runs to fix some bugs and make the various processes easier to track. I also generated a list of all of the distinct labels in the OED data, with counts of the number of times these appear. Labels are associated with specific citation dates, thankfully. Only a handful are actually used lots of times, and many of the others appear to be used as a ‘notes’ field rather than as a more general label.
In addition to the above I also had a further conversation with Heather Pagan about the data management plan for the AND’s new proposal, responded to a query from Kathryn Cooper about the website I set up for her at the end of last year, responded to a couple of separate requests from post-grad students in Scottish Literature, spoke to Thomas Clancy about the start date for his Place-Names of Iona project, which got funded recently, helped with some issues with Matthew Creasy’s Scottish Cosmopolitanism website and spoke to Carole Hough about making a few tweaks to the Berwickshire Place-names website for REF.
I’m going to be on holiday for the next two weeks, so there will be no further updates from me for a while.
This was week 18 of Lockdown, which is now definitely easing here. I’m still working from home, though, and will be for the foreseeable future. I took Friday off this week, so it was a four-day week for me. I spent about half of this time on the Books and Borrowing project, during which I returned to adding features to the content management system after spending recent weeks importing datasets. I added a number of indexes to the underlying database which should speed up the loading of certain pages considerably, e.g. the browse books, borrowers and authors pages. I then updated the ‘Books’ tab when viewing a library (i.e. the page that lists all of the book holdings in the library) so that it now lists the number of book holdings in the library above the table. The table itself now has separate columns for all additional fields that have been created for book holdings in the library and it is now possible to order the table by any of the headings (pressing on a heading a second time reverses the ordering). The count of ‘Borrowing records’ for each book in the table is now a button, and pressing on it brings up a popup listing all of the borrowing records that are associated with the book holding record; from this pop-up you can then follow a link to view the borrowing record you’re interested in. I then made similar changes to the ‘Borrowers’ tab when viewing a library (i.e. the page that lists all of the borrowers the library has). It also now displays the total number of borrowers at the top. This table already allowed reordering by any column, so that’s not new, but as above, the ‘Borrowing records’ count is now a link that when clicked on opens a list of all of the borrowing records the borrower is associated with.
The big new feature I implemented this week was borrower cross-references. These can be added via the ‘Borrowers’ tab within a library when adding or editing a borrower on this page. When adding or editing a borrower there is now a section of the form labelled ‘Cross-references to other borrowers’. If there are any existing cross-references these will appear here, with a checkbox beside each that you can tick if you want to delete the cross-reference (tick the box then press ‘Edit’ and the reference will be deleted). Any number of new cross-references can be added by pressing on the ‘Add a cross-reference’ button (multiple times, if required). Doing so adds two fields to the form: one for a ‘description’, which is the text that shows how the current borrower links to the referenced borrower, and one for ‘referenced borrower’, which is an auto-complete. Type in a name or part of a name and any borrower that matches in any library will be listed. The library appears in brackets after the borrower’s name to help differentiate records. Select a borrower and then when the ‘Add’ or ‘Edit’ button is pressed for the borrower the cross-reference will be made.
Cross-references work in both directions – if you add a cross reference from Borrower A to Borrower B you don’t then need to load up the record for Borrower B to add a reference back to Borrower A. The description text will sit between the borrower whose form you make the cross reference on and the referenced borrower you select, so if you’re on the edit form for Borrower A and link to Borrower B and the description is ‘is the son of’ then the cross reference will appear as ‘Borrower A is the son of Borrower B’. If you then view Borrower B the cross reference will still be written in this order. I also updated the table of borrowers to add in a new ‘X-Refs’ column that lists all cross-references for a borrower.
I spent the remainder of my working week completing smaller tasks for a variety of projects, such as updating the spreadsheet output of duplicate child entries for the DSL people, getting an output of the latest version of the Thesaurus of Old English data for Fraser, advising Eleanor Lawson on ‘.ac.uk’ domain names and having a chat with Simon Taylor about the pilot Place-names of Fife project that I worked on with him several years ago. I also wrote a Data Management Plan for a new AHRC proposal the Anglo-Norman Dictionary people are putting together, which involved a lengthy email correspondence with Heather Pagan at Aberystwyth.
Finally, I returned to the ongoing task of merging data from the Oxford English Dictionary with the Historical Thesaurus. We are currently attempting to extract citation dates from OED entries in order to update the dates of usage that we have in the HT. This process uses the new table I recently generated from the OED XML dataset which contains every citation date for every word in the OED (more than 3 million dates). Fraser had prepared a document listing how he and Marc would like the HT dates to be updated (e.g. if the first OED citation date is earlier than the HT start date by 140 years or more then use the OED citation date as the suggested change). Each rule was to be given its own type, so that we could check through each type individually to make sure the rules were working ok.
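As an illustration, the 140-year rule mentioned above might look like this in Python. This is a sketch only: the type code and return shape are placeholders, not the project’s actual numbering or data structure:

```python
def check_start_date(ht_start, oed_first_citation, threshold=140):
    """If the first OED citation predates the HT start date by the
    threshold (140 years) or more, suggest the OED date as the
    replacement. Returns None when the rule doesn't apply."""
    if ht_start - oed_first_citation >= threshold:
        return {"type": 1, "suggested_start": oed_first_citation}
    return None

print(check_start_date(1700, 1550))  # 150-year gap, so the rule fires
print(check_start_date(1700, 1650))  # only 50 years, so None
```

Each rule in the document would get its own function (or branch) of this shape, with the ‘type’ recorded so each rule’s output can be checked independently.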
It took about a day to write an initial version of the script, which I ran on the first 10,000 HT lexemes as a test. I didn’t split the output into different tables depending on the type, but instead exported everything to a spreadsheet so Marc and Fraser could look through it.
In the spreadsheet if there is no ‘type’ for a row it means it didn’t match any of the criteria, but I included these rows anyway so we can check whether there are any other criteria the rows should match. I also included all the OED citation dates (rather than just the first and last) for reference. I noted that Fraser’s document doesn’t seem to take labels into consideration. There are some labels in the data, and sometimes there’s a new label for an OED start or end date when nothing else is different, e.g. htid 1479 ‘Shore-going’: This row has no ‘type’ but does have new data from the OED.
Another issue I spotted is that as the same ‘type’ variable is set when a start date matches the criteria and then again when an end date matches the criteria, the ‘type’ set during the start date process is replaced with the ‘type’ for the end date. I think, therefore, that we might have to split the start and end processes up, or append the end process type to the start process type rather than replacing it (e.g. type 2-13 rather than type 2 being replaced by type 13). I also noticed that there are some lexemes where the HT has ‘current’ but the OED has a much earlier last citation date (e.g. htid 73 ‘temporal’ has 9999 in the HT but 1832 in the OED). Such cases are not currently considered.
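The appending approach could be as simple as joining the two codes with a hyphen (the code numbers here are just examples):

```python
def combine_types(start_type, end_type):
    """Join the start-date and end-date rule types rather than letting
    the end type overwrite the start type, e.g. 2 and 13 -> '2-13'.
    Either type may be None if that date matched no rule."""
    parts = [str(t) for t in (start_type, end_type) if t is not None]
    return "-".join(parts) if parts else None

print(combine_types(2, 13))     # '2-13'
print(combine_types(None, 13))  # '13'
```

A combined code like ‘2-13’ preserves both decisions, so each rule can still be audited individually in the spreadsheet.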
Finally, according to the document, antes and circas are only considered for update if the OED and HT date is the same, but there are many cases where the start / end OED date is picked to replace the HT date (because it’s different) and it has an ‘a’ or ‘c’ that would then be lost. Currently I’m including the ‘a’ or ‘c’ in such cases, but I can remove this if need be (e.g. HT 37 ‘orb’ has HT start date 1601 with no ‘a’ or ‘c’, but this is to be replaced with OED 1550, which has an ‘a’). Clearly the script will need to be tweaked based on feedback from Marc and Fraser, but I feel like we’re finally making some decent progress with this after all of the preparatory work that was required to get to this point.
Next Monday is the Glasgow Fair holiday, so I won’t be back to work until the Tuesday.
Week 16 of Lockdown and still working from home. I continued working on the data import for the Books and Borrowing project this week. I wrote a script to import data from Haddington, which took some time due to the large number of additional fields in the data (15 across Borrowers, Holdings and Borrowings), but executing it resulted in a further 5,163 borrowing records across 2 ledgers and 494 pages being added, including 1,399 book holding records and 717 borrowers.
I then moved onto the datasets from Leighton and Wigtown. Leighton was a much smaller dataset, with just 193 borrowing records over 18 pages in one ledger, involving 18 borrowers and 71 books. As before, I have just created book holding records for these (rather than project-wide edition records), although in this case there are authors for the books too, which I have also created. Wigtown was another smaller dataset. The spreadsheet has three sheets: the first is a list of borrowers, the second a list of borrowings and the third a list of books. However, no unique identifiers are used to connect the borrowers and books to the information in the borrowings sheet and there’s no other field that matches across the sheets to allow the data to be automatically connected up. For example, in the Books sheet there is the book ‘History of Edinburgh’ by author ‘Arnot, Hugo’, but in the borrowings sheet author surname and forename are split into different columns (so ‘Arnot’ and ‘Hugo’) and book titles don’t match (in this case the book appears as simply ‘Edinburgh’ in the borrowings). Therefore I’ve not been able to automatically pull in the information from the books sheet. However, as there are only 59 books in the books sheet it shouldn’t take too much time to manually add the necessary data when creating Edition records. It’s a similar issue with Borrowers in the first sheet – they appear with their name in one column (e.g. ‘Douglas, Andrew’) but in the Borrowings sheet the names are split into separate forename and surname columns. There are also instances of people with the same name (e.g. ‘Stewart, John’) but without unique identifiers there’s no way to differentiate these. There are only 110 people listed in the Borrowers sheet, and only 43 in the actual borrowing data, so again, it’s probably better if any details that are required are added in manually.
I imported a total of 898 borrowing records for Wigtown. As there is no page or ledger information in the data I just added these all to one page in a made-up ledger. It does however mean that the page can take quite a while to load in the CMS. There are 43 associated borrowers and 53 associated books, which again have been created as Holding records only and have associated authors. However, there are multiple Book Items created for many of these 53 books – there are actually 224 book items. This is because the spreadsheet contains a separate ‘Volume’ column and a book may be listed with the same title but a different volume. In such cases a Holding record is made for the book (e.g. ‘Decline and Fall of Rome’) and an Item is made for each Volume that appears (in this case 12 items for the listed volumes 1-12 across the dataset). With these datasets imported I have now processed all of the existing data I have access to, other than the Glasgow Professors borrowing records, but these are still being worked on.
I did some other tasks for the project this week as well, including reviewing the digitisation policy document for the project, which lists guidelines for the team to follow when they have to take photos of ledger pages themselves in libraries where no professional digitisation service is available. I also discussed how borrower occupations will be handled in the system with Katie.
In addition to the Books and Borrowing project I found time to work on a number of other projects this week too. I wrote a Data Management Plan for an AHRC Networking proposal that Carolyn Jess-Cooke in English Literature is putting together and I had an email conversation with Heather Pagan of the Anglo-Norman Dictionary about the Data Management Plan she wants me to write for a new AHRC proposal that Glasgow will be involved with. I responded to a query about a place-names project from Thomas Clancy, a query about App certification from Brian McKenna in IT Services and a query about domain name registration from Eleanor Lawson at QMU. Also (outside of work time) I’ve been helping my brother-in-law set up Beacon Genealogy, through which he offers genealogy and family history research services.
Also this week I worked with Jennifer Smith to make a number of changes to the content of the SCOSYA website (https://scotssyntaxatlas.ac.uk/) to provide more information about the project for REF purposes and I added a new dataset to the interactive map of Burns Suppers that I’m creating for Paul Malgrati in Scottish Literature. I also went through all of the WordPress sites I manage and upgraded them to the most recent version of WordPress.
Finally, I spent some time writing scripts for the DSL people to help identify child entries in the DOST and SND datasets that haven’t been properly merged with main entries when exported from their editing software. In such cases the child entries have been added to the main entries, but they haven’t then been removed as separate entries in the output data, meaning the child entries appear twice. When attempting to process the SND data I discovered there were some errors in the XML file (mismatched tags) that prevented my script from processing the file, so I had to spend some time tracking these down and fixing them. But once this had been done my script could go through the entire dataset, look for an ID that appeared as a URL in one entry and as the ID of another entry, and in such cases pull out the IDs and the full XML of each entry and export them into an HTML table. There were about 180 duplicate child entries in DOST but a lot more in SND (the DOST file is about 1.5MB, the SND one about 50MB). Hopefully once the DSL people have analysed the data we can then strip out the unnecessary child entries and have a better dataset to import into the new editing system the DSL is going to be using.
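The duplicate-child check can be sketched like this. The entry structure and the `url="…"` attribute shape are assumptions for illustration; the real script works over the full DSL XML:

```python
def find_duplicate_children(entries):
    """An entry is a suspected duplicate child if its ID also appears
    as a URL reference inside another entry's XML."""
    dupes = []
    for entry in entries:
        needle = 'url="{}"'.format(entry["id"])
        if any(needle in other["xml"]
               for other in entries if other["id"] != entry["id"]):
            dupes.append(entry["id"])
    return dupes

# Hypothetical data: snd002 is referenced from inside snd001, so it is
# flagged as a duplicate separate entry.
sample = [
    {"id": "snd001", "xml": '<entry><ref url="snd002"/></entry>'},
    {"id": "snd002", "xml": "<entry/>"},
]
print(find_duplicate_children(sample))  # ['snd002']
```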
This was week 15 of Lockdown, which I guess is sort of coming to an end now, although I will still be working from home for the foreseeable future and having to juggle work and childcare every day. I continued to work on the Books and Borrowing project for much of this week, this time focussing on importing some of the existing datasets from previous transcription projects. I had previously written scripts to import data from Glasgow University library and Innerpeffray library, which gave us 14,738 borrowing records. This week I began by focussing on the data from St Andrews University library.
The St Andrews data is pretty messy, reflecting the layout and language of the original documents, so I haven’t been able to fully extract everything and it will require a lot of manual correcting. However, I did manage to migrate all of the data to a test version of the database running on my local PC and then updated the online database to incorporate this data.
The data I’ve got are CSV and HTML representations of transcribed pages that come from an existing website, with pages that look like this: https://arts.st-andrews.ac.uk/transcribe/index.php?title=Page:UYLY205_2_Receipt_Book_1748-1753.djvu/100. The links in the pages (e.g. Locks Works) lead through to further pages with information about books or borrowers. Unfortunately the CSV version of the data doesn’t include the links or the linked-to data, and as I wanted to try and pull in the data found on the linked pages I therefore needed to process the HTML instead.
I wrote a script that pulled in all of the files in the ‘HTML’ directory and processed each in turn. From the filenames my script could ascertain the ledger volume, its dates and the page number. For example ‘Page_UYLY205_2_Receipt_Book_1748-1753.djvu_10.html’ is ledger 2 (1748-1753) page 10. The script creates ledgers and pages, and adds in the ‘next’ and ‘previous’ page links to join all the pages in a ledger together.
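The filename parsing can be sketched with a regular expression. This version is hard-coded to the ‘Receipt_Book’ naming seen in the example filename; the real ledgers presumably vary, so treat the pattern as illustrative:

```python
import re

def parse_page_filename(filename):
    """Pull ledger number, date range and page number out of names like
    'Page_UYLY205_2_Receipt_Book_1748-1753.djvu_10.html'."""
    m = re.match(
        r"Page_UYLY\d+_(\d+)_Receipt_Book_(\d{4})-(\d{4})\.djvu_(\d+)\.html$",
        filename)
    if not m:
        return None
    ledger, start, end, page = (int(g) for g in m.groups())
    return {"ledger": ledger, "start": start, "end": end, "page": page}

print(parse_page_filename("Page_UYLY205_2_Receipt_Book_1748-1753.djvu_10.html"))
# {'ledger': 2, 'start': 1748, 'end': 1753, 'page': 10}
```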
The actual data in the file posed further problems. As you can see from the linked page above, dates are just too messy to automatically extract into our strongly structured borrowed and returned date system. Often a record is split over multiple rows as well (e.g. the borrowing record for ‘Rollins belles Lettres’ is actually split over 3 rows). I could have just grabbed each row and inserted it as a separate borrowing record, which would then need to be manually merged, but I figured out a way to do this automatically. The first row of a record always appears to have a code (the shelf number) in the second column (e.g. J.5.2 for ‘Rollins’) whereas subsequent rows that appear to belong to the same record don’t (e.g. ‘on profr Shaws order by’ and ‘James Key’). I therefore set up my script to insert new borrowing records for rows that have codes, and to append any subsequent rows that don’t have codes to this record until a row with a code is reached again.
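The grouping logic can be sketched like so, with rows reduced to (code, text) pairs for illustration (the real rows have more columns):

```python
def group_rows(rows):
    """A row with a shelf code starts a new borrowing record; code-less
    rows are appended to the transcription of the record in progress."""
    records = []
    for code, text in rows:
        if code.strip():
            records.append({"code": code, "transcription": text})
        elif records:
            records[-1]["transcription"] += " " + text
    return records

# The 'Rollins' record from the example page, split over three rows,
# followed by a row with a new code (hypothetical) starting a new record.
sample = [
    ("J.5.2", "Rollins belles Lettres Vol 2d"),
    ("", "on profr Shaws order by"),
    ("", "James Key"),
    ("N.4.1", "Some other book"),
]
print(len(group_rows(sample)))  # 2 records
```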
I also used this approach to set up books and borrowers too. If you look at the page linked to above again you’ll see that the links through to things are not categorised – some are links to books and others to borrowers, with no obvious way to know which is which. However, it’s pretty much always the case that it’s a book that appears in the row with the code and it’s people that are linked to in the other rows. I could therefore create or link to existing book holding records for links in the row with a code and create or link to existing borrower records for links in rows without a code. There are bound to be situations where this system doesn’t quite work correctly, but I think the majority of rows do fit this pattern.
The next thing I needed to do was to figure out which data from the St Andrews files should be stored as what in our system. I created four new ‘Additional Fields’ for St Andrews as follows:
- Original Borrowed date: This contains the full text of the first column (e.g. Decr 16)
- Code: This contains the full text of the second column (e.g. J.5.2)
- Original Returned date: This contains the full text of the fourth column (e.g. Jan. 5)
- Original returned text: This contains the full text of the fifth column (e.g. ‘Rollins belles Lettres V. 2d’)
In the borrowing table the ‘transcription’ field is set to contain the full text of the ‘borrowed’ column, but without links. Where subsequent rows contain data in this column but no code, this data is then appended to the transcription. E.g. the complete transcription for the third item on the page linked to above is ‘Rollins belles Lettres Vol 2<sup>d</sup> on profr Shaws order by James Key’.
The contents of all pages linked to in the transcriptions are added to the ‘editors notes’ field for future use if required. Both the page URL and the page content are included, separated by a bar (|) and if there are multiple links these are separated by five dashes. E.g. for the above the notes field contains:
‘Rollins_belles_Lettres| <p>Possibly: De la maniere d’enseigner et d’etuder les belles-lettres, Par raport à l’esprit & au coeur, by Charles Rollin. (A Amsterdam : Chez Pierre Mortier, M. DCC. XLV. ) <a href="http://library.st-andrews.ac.uk/record=b2447402~S1">http://library.st-andrews.ac.uk/record=b2447402~S1</a></p>
----- profr_Shaws| <p><a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1409683484">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1409683484</a></p>
----- James_Key| <p>Possibly James Kay: <a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1389455860">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1389455860</a></p>
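Assembling that notes field is just a matter of joining the parts with the two separators (the content strings below are shortened stand-ins):

```python
def build_editors_notes(linked_pages):
    """Join each linked page's name and scraped HTML with a bar, and
    separate multiple links with five dashes."""
    return " ----- ".join("{}| {}".format(name, html)
                          for name, html in linked_pages)

notes = build_editors_notes([
    ("Rollins_belles_Lettres", "<p>Possibly: De la maniere ...</p>"),
    ("profr_Shaws", "<p>...</p>"),
])
print(notes)
```

Because neither separator is likely to occur in the scraped HTML, the field can be split back into its parts later if the notes ever need to be processed programmatically.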
As mentioned earlier, the script also generates book and borrower records based on the linked pages too. I’ve chosen to set up book holding rather than book edition records as the details are all very vague and specific to St Andrews. In the holdings table I’ve set the ‘standardised title’ to be the page link with underscores replaced with spaces (e.g. ‘Rollins belles Lettres’) and the page content is stored in the ‘editors notes’ field. One book item is created for each holding, to be used to link to the corresponding borrowing records.
For borrowers a similar process is followed, with the link added to the surname column (e.g. ‘Thos Duncan’) and the page content added to the ‘editors notes’ field (e.g. ‘<p>Possibly Thomas Duncan: <a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1377913372">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1377913372</a></p>’). All borrowers are linked to records as ‘Main’ borrowers.
During the processing I noticed that the fourth ledger had a slightly different structure to the others, with entire pages devoted to a particular borrower, whose name then appeared in a heading row in the table. I therefore updated my script to check for the existence of this heading row, and if it exists my script grabs the borrower name, creates the borrower record if it doesn’t already exist and then links this borrower to every borrowing item found on the page. After my script had finished running we had 11,147 borrowing records, 996 borrowers and 6,395 book holding records for St Andrews in the system.
I then moved onto looking at the data for Selkirk library. This data was more nicely structured than the St Andrews data, with separate spreadsheets for borrowings, borrowers and books and borrowers and books connected to borrowings via unique identifiers. Unfortunately the dates were still transcribed as they were written rather than being normalised in any way, which meant it was not possible to straightforwardly generate structured dates for the records and these will need to be manually generated. The script I wrote to import the data took about a day to write, and after running it we had a further 11,431 borrowing records across two registers and 415 pages entered into our database.
As with St Andrews, I created book records as Holding records only (i.e. associated specifically with the library rather than being project-wide ‘Edition’ records). There are 612 Holding records for Selkirk. I also processed the borrower records, resulting in 86 borrower records being added. I added the dates as originally transcribed to an additional field named ‘Original Borrowed Date’, and the only other additional field is ‘Subject’ in the Holding records, which will eventually be merged with our ‘Genre’ feature when this becomes available.
Also this week I advised Katie on a file naming convention for the digitised images of pages that will be created for the project. I recommended that the filenames shouldn’t have spaces in them as these can be troublesome on some operating systems and that we’d want a character to use as a delimiter between the parts of the filename that wouldn’t appear elsewhere in the filename so it’s easy to split up the filename. I suggested that the page number should be included in the filename and that it should reflect the page number as it will be written into the database – e.g. if we’re going to use ‘r’ and ‘v’ these would be included. Each page in the database will be automatically assigned an auto-incrementing ID, and the only means of linking a specific page record in the database with a specific image will be via the page number entered when the page is created, so if this is something like ‘23r’ then ideally this should be represented in the image filename.
Katie had wondered about using characters to denote ledgers and pages in the filename (e.g. ‘L’ and ‘P’) but if we’re using a specific delimiting character to separate parts of the filename then using these characters wouldn’t be necessary and I suggested it would be better to not use ‘L’ as a lower case ‘l’ is very easy to confuse with a ‘1’ or a capital ‘I’ which might confuse future human users.
I suggested using a ‘-’ instead of spaces and a ‘_’ as a delimiter, and pointed out that we should ensure that no other non-alphanumeric characters are ever used in the filename – no apostrophes, commas, colons, semi-colons, ampersands etc. We should also make sure the ‘-’ is really a minus sign and not one of the fancy dashes (–) that get created by MS Office. This shouldn’t be an issue when entering a filename, but might be if a list of filenames is created in Word and then pasted into the ‘save as’ box, for example.
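A quick validator for these rules might look like the following. The three-part shape (library_ledger_page) and the file extensions are illustrative assumptions, not the project’s final convention:

```python
import re

# Illustrative convention: all lower case, '-' in place of spaces,
# '_' as the only delimiter, and a page number (with an optional
# 'r'/'v') as the final part before the extension.
FILENAME_RE = re.compile(r"^[a-z0-9-]+_[a-z0-9-]+_[0-9]+[rv]?\.(jpg|tif)$")

def is_valid_filename(name):
    """Return True if the filename follows the agreed convention."""
    return bool(FILENAME_RE.match(name))

print(is_valid_filename("st-andrews_ledger-2_23r.jpg"))    # True
print(is_valid_filename("St Andrews Ledger 2, p23r.jpg"))  # False
```

Running a check like this over each batch of digitised images before ingest would catch spaces, stray punctuation and fancy dashes early, when they are cheap to fix.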
Finally, I suggested that it might be best to make the filenames entirely lower case, as some operating systems are case sensitive and if we don’t specify all lower case then there may be variation in the use of case. Following these guidelines the filenames would look something like this:
In addition to the Books and Borrowing project I worked on a number of other projects this week. I gave Matthew Creasy some further advice on using forums in his new project website, and the ‘Scottish Cosmopolitanism at the Fin de Siècle’ website is now available here: https://scoco.glasgow.ac.uk/.
I also worked a bit more on using dates from the OED data in the Historical Thesaurus. Fraser had sent me a ZIP file containing the entire OED dataset as 240 XML files and I began analysing these to figure out how we’d extract the dates so that we could use them to update the dates associated with the lexemes in the HT. I needed to extract the quotation dates as these have ‘ante’ and ‘circa’ notes, plus labels. I noted that in addition to ‘a’ and ‘c’ a question mark is also used, sometimes with an ‘a’ or ‘c’ and sometimes without. I decided to process things as follows:
- ?a will just be ‘a’
- ?c will just be ‘c’
- ? without an ‘a’ or ‘c’ will be ‘c’.
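These rules reduce to a tiny function, sketched here in Python:

```python
def normalise_prefix(prefix):
    """Normalise an OED date prefix: '?a' -> 'a', '?c' -> 'c', and a
    bare '?' -> 'c'; plain 'a', 'c' or an empty prefix pass through."""
    if prefix == "?":
        return "c"
    return prefix.lstrip("?")

print(normalise_prefix("?a"))  # 'a'
print(normalise_prefix("?"))   # 'c'
print(normalise_prefix("c"))   # 'c'
```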
I also noticed that a date may sometimes be a range (e.g. 1795-8) so I needed to include a second date column in my data structure to accommodate this. I also noted that there are sometimes multiple Old English dates, and the contents of the ‘date’ tag vary – sometimes the content is ‘OE’ and other times ‘lOE’ or ‘eOE’. I decided to process any OE dates for a lexeme as being 650 and to store only one OE date, so as to align with how OE dates are stored in the HT database (we don’t differentiate between dates for OE words).
While running my date extraction script over one of the XML files I also noticed that there were lexemes in the OED data that were not present in the OED data we had previously extracted. This presumably means the dataset Fraser sent me is more up to date than the dataset I used to populate our online OED data table, and we’ll no doubt need to update that table. As we link to the HT lexeme table using the OED catid, refentry, refid and lemmaid fields, replacing the online OED lexeme table with the data in these XML files should (hopefully) retain the connections from OED to HT lexemes without issue, but any matching processes we performed would need to be repeated for the new lexemes.
I set my extraction script running on the OED XML files on Wednesday and processing took a long time. The script didn’t complete until sometime during Friday night, but by the time it had finished it had processed 238,699 categories and 754,285 lexemes, generating 3,893,341 date rows. It also found 4,062 new words in the OED data that it couldn’t process because they don’t exist in our OED lexeme database.
I also spent a bit more time working on some scripts for Fraser’s Scots Thesaurus project. The scripts now ignore ‘additional’ entries and only include ‘n.’ entries that match an HT ‘n’ category. Variant spellings are also removed (these were all tagged with <form> and I removed all of these). I also created a new field to store only the ‘NN_’ tagged words and remove all others.
The scripts generated three datasets, which I saved as spreadsheets for Fraser. The first (postagged-monosemous-dost-no-adds-n-only) contains all of the content that matches the above criteria. The second (postagged-monosemous-dost-no-adds-n-only-catheading-match) lists those lexemes where a postagged word fully matches the HT category heading. The final (postagged-monosemous-dost-no-adds-n-only-catcontents-match) lists those lexemes where a postagged word fully matches a lexeme in the HT category. For this table I’ve also added in the full list of lexemes for each HT category too.
I also spent a bit of time working on the Data Management Plan for the new project for Jane Stuart-Smith and Eleanor Lawson at QMU and arranged for a PhD student to get access to the TextGrid files that were generated for the audio records for the SCOTS Corpus project.
Finally, I investigated the issue the DSL people are having with duplicate child entries appearing in their data. This was due to something not working quite right in a script Thomas Widmann had written to extract the data from the DSL’s editing system before he left last year, and Ann had sent me some examples of where the issue was cropping up.
I have the data that was extracted from Thomas’s script last July as two XML files (dost.xml and snd.xml) and I looked through these for the examples Ann had sent. The entry for snd13897 contains the following URLs:
The first is the ID for the main entry and the other two are child entries. If I search for the second one (snds3788) this is the only occurrence of the ID in the file, as the child entry has been successfully merged. But if I search for the third one (sndns2217) I find a separate entry with this ID (with more limited content). The pulling of data into a webpage in the V3 site uses URLs stored in a table linked to entry IDs. These were generated from the URLs in the entries in the XML file (see the <url> tags above). For the URL ‘sndns2217’ the query finds multiple IDs, one for the entry snd13897 and another for the entry sndns2217. But it finds snd13897 first, so it’s the content of this entry that is pulled into the page.
The entry for dost16606 contains the following URLs:
(in addition to headword URLs). Searching for the second one discovers a separate entry with the ID dost50272 (with more limited content). As with SND, searching the URL table for this URL finds two IDs, and as dost16606 appears first this is the entry that gets displayed.
What we need to do is remove the child entries that still exist as separate entries in the data. To do this I could write a script that would go through each entry in the dost.xml and snd.xml files. It would then pick out every <url> that is not the same as the entry ID and search the file to see if any entry exists with this ID. If it does then presumably this is a duplicate that should be deleted. I’m waiting to hear back from the DSL people to see how we should proceed with this.
As you can no doubt gather from the above, this was a very busy week but I do at least feel that I’m getting on top of things again.
This was week 14 of Lockdown and I spent most of it continuing to work on the Books and Borrowing project. Last week I’d planned to migrate the CMS from my test server at Glasgow to the official project server at Stirling, but during the process some discrepancies between PHP versions on the servers meant that the code which worked fine at Glasgow was giving errors at Stirling. As mentioned in last week’s post, on the Stirling server calling a function while passing fewer than the required number of arguments resulted in a fatal error, plus database ‘warnings’ (e.g. an empty string rather than a numeric zero being inserted into an integer field) were being treated as fatal errors too. It took most of Monday to go through my scripts and identify all the places such issues cropped up, but by the end of the day I had the CMS set up and fully usable at Stirling and had asked the team to start using it.
I then spent some further time working on the public website for the project, installing a theme, working with fonts and colour schemes, selecting header images, adding logos to the footer and other such matters. I made six different versions of the interface and emailed screenshots to the team for comment. We all agreed on the interface and I then made some further tweaks to it, during which time team member Kit Baston was adding content to the pages. On Thursday the website went live and you can access it here: https://borrowing.stir.ac.uk/. Here’s a screenshot too:
I also continued to make improvements to the CMS this week, adding new functionality to the pages for browsing book editions, book works and authors. The table of Book Works now includes a column listing the number of Holdings each Work is associated with and now includes the option of ordering the listed Works by any of the columns in the table. When a book work row is expanded and its associated editions load in, this table also now features the number of holdings each edition is associated with and allows the table to be ordered by any of the columns. I then made the number of holdings and records listed for each Work and Edition a link (so long as the number is greater than 0). Pressing on the link brings up a popup that lists the holdings and records. Each item in the list features an ‘eye’ icon and pressing on this will take you to the record in question (either in the library’s list of holdings or the page that the borrowing record appears on), with the page opening at the item in question.
On Friday I had a Zoom call with Project PI Katie Halsey and Co-I Matt Sangster to discuss my work on the project and to decide where I should focus my attention next. We agreed that it would be good to get all of the sample data into the system now, so that the team can see what’s already there and begin the process of merging records and rationalising the data. Therefore I’ll be spending a lot of next week writing import scripts for the remaining datasets.
I worked on a number of additional projects this week as well. On Tuesday I had a Zoom call with Jane Stuart-Smith, Eleanor Lawson of QMU and Joanne Cleland of Strathclyde to discuss a new project that they’re putting together. I can’t say too much about it at this stage, but I’ll probably be doing the technical work for the project, if it gets funding. I also spoke with Thomas Clancy about another place-names project that has been funded, for which I’ll need to adapt my existing place-names system. This will probably be starting in September and involves a part of East Ayrshire. I also added some forum software to Matthew Creasy’s new project website that I recently put together for him. He’s hoping to launch this next week and will probably add in a link to it then.
I also managed to spend some time this week looking into the Historical Thesaurus’s new dates system. My scripts to generate the new HT date structure completed over the weekend and I then had to manually fix the 60 or so label errors that Fraser had previously identified in his spreadsheet. I then wrote a further script to check that the original fulldate, the new fulldate and a fulldate generated on the fly from the new date table all matched for each lexeme. This brought up about a thousand lexemes where the match wasn’t identical. Most of these were due to ‘b’ dates not being recorded in a consistent manner in the original data (sometimes two digits e.g. 1781/86 and sometimes one digit e.g. 1781/6). There were some other issues with dates that had both labels and slashes as connectors, whereby the label ended up associated with both dates rather than just one. There were also some issues with bracketed dates sometimes being recorded with the brackets and sometimes not, plus a few that had a dash before the date instead. I went through the 1000 or so rows and fixed the ones that actually needed fixing (maybe about 50). I then imported the new lexeme_dates table into the online database. There are 1,381,772 rows in it. I also attempted to import the updated lexeme database (which includes a new fulldate column plus new firstdate and lastdate fields). Unfortunately the file contains too much data to be uploaded and the process timed out. I contacted Arts IT Support and they managed to increase the execution time on the server and I was then able to get this second table uploaded too.
Fraser had sent around a document listing the next steps in the data update process and I read through this and began to think things through. Fraser noted that the unique date types list didn’t appear to include ‘a’ and ‘c’ for firstdates. I checked my script that generated the date types (way back in April last year) and spotted an error – the script was looking for a column called ‘oefirstdac’ where it should have been looking for ‘firstdac’. What this means is any lexeme that has an ‘a’ or ‘c’ with its first date has been rolled into the count for regular first dates, but it turns out that this is what Fraser wanted to happen anyway, so no harm was done there.
Before I can make a start on taking all HT lexemes that are XXXX-XXXX, OE-XXXX or XXXX-Current and are matched to an OED lexeme, and grabbing the OED date information for them, I’ll need a way to actually get at the new OED dates. Fraser noted that we can’t just use the OED ‘sortdate’ and ‘enddate’ fields but instead need to use the first and last citation dates, as these have ‘a’ and ‘c’ markers. I’m going to need access to the most recent version of all of the OED XML files and to write a script that goes through all of the quotations data, such as:
<quotations><q year="1200"><date>?c1200</date></q><q year="1392"><date>a1393</date></q><q year="1450"><date>c1450</date></q><q year="1481"><date>1481</date></q><q year="1520"><date>?1520</date></q><q year="1530"><date>1530</date></q><q year="1556"><date>1556</date></q><q year="1608"><date>1608</date></q><q year="1647"><date>1647</date></q><q year="1690"><date>1690</date></q><q year="1709"><date>1709</date></q><q year="1728"><date>1728</date></q><q year="1755"><date>1755</date></q><q year="1804"><date>1804</date></q><q year="1882"><date>1882</date></q><q year="1967"><date>1967</date></q><q year="2007"><date>2007</date></q></quotations>
And then picks out the first date and the last date, plus any ‘a’, ‘c’ and ‘?’ values. This is going to be another long process, but I can’t begin it until I can get my hands on the full OED dataset, which I don’t have with me at home.
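The picking-out step itself is fairly mechanical. Here is a minimal Python sketch of the idea, assuming quotation markup of the shape shown above (with the attribute quotes normalised to straight quotes):

```python
import re
import xml.etree.ElementTree as ET

def first_last_citation(quotations_xml: str):
    """Return ((marker, year), (marker, year)) for the first and last
    citation dates in a <quotations> block, where marker is any 'a',
    'c' or '?' prefix attached to the printed date."""
    root = ET.fromstring(quotations_xml)
    dates = [q.findtext("date", "").strip() for q in root.iter("q")]
    def split(d):
        # Separate the ?/a/c prefix from the four-digit year
        m = re.match(r"^([?ac]*)(\d{4})$", d)
        return (m.group(1), int(m.group(2))) if m else ("", None)
    return split(dates[0]), split(dates[-1])
```

For the example block above this gives ‘?c’ 1200 as the first citation and a plain 2007 as the last. The slow part isn’t this logic but running it over every sense in 240 large XML files.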
This was week 13 of Lockdown, with still no end in sight. I spent most of my time on the Books and Borrowing project, as there is still a huge amount to do to get the project’s systems set up. Last week I’d imported several thousand records into the database and had given the team access to the Content Management System to test things out. One thing that cropped up was that the autocomplete that is used for selecting existing books, borrowers and authors was sometimes not working, or if it did work on selection of an item the script that then populates all of the fields about the book, borrower or author was not working. I’d realised that this was because there were invisible line break characters (\n or \r) in the imported data and the data is passed to the autocomplete via a JSON file. Line break characters are not allowed in a JSON file and therefore the autocomplete couldn’t access the data. I spent some time writing a script that would clean the data of all offending characters and after running this the autocomplete and pre-population scripts worked fine. However, a further issue cropped up with the text editors in the various forms in the CMS. These use the TinyMCE widget to allow formatting to be added to the text area, which works great. However, whenever a new line is created this adds in HTML paragraphs (‘<p></p>’, which is good) but the editor also adds a hidden line break character (‘\r’ or ‘\n’, which is bad). When this field is then used to populate a form via the selection of an autocomplete value the line break makes the data invalid and the form fails to populate. After identifying this issue I ensured all such characters are stripped out of any uploaded data, and that fixed the issue.
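The fix boils down to cleaning line-break characters out of every value before it is serialised. The CMS itself is written in PHP, so the following is just a Python illustration of the idea:

```python
import json

def clean_for_json(record: dict) -> str:
    """Remove hidden line-break characters from string values before
    encoding, so the generated JSON file is always valid."""
    cleaned = {
        k: v.replace("\r", "").replace("\n", " ") if isinstance(v, str) else v
        for k, v in record.items()
    }
    return json.dumps(cleaned)
```

Doing the stripping at upload time, as described above, is better still, since it means the bad characters never reach the database in the first place.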
I had to spend some time fixing a few more bugs that the team had uncovered during the week. The ‘delete borrower’ option was not appearing, even when a borrower was associated with no records, and I fixed this. There was also an issue with autocompletes not working in certain situations (e.g. when trying to add an existing borrower to a borrowing record that was initially created without a borrower). I tracked down and fixed these. Another issue involved the record page order incrementing whenever the record was edited, even when this had not been manually changed, while another involved book edition data not getting saved in some cases when a borrowing record was created. I tracked down and fixed these issues too.
With these fixes in place I then moved on to adding new features to the CMS, specifically facilities to add and browse the book works, editions and authors that are used across the project. Pressing on the ‘Add Book’ menu item now loads a page through which you can choose to add a Book Work or a Book Edition (with associated Work, if required). You can also associate authors with the Works and Editions too. Pressing on the ‘Browse Books’ option now loads a page that lists all of the Book Works in a table, with counts of the number of editions and borrowing records associated with each. There’s also a row for all editions that don’t currently have a work. There are currently 1925 such editions so most of the data appears in this section, but this will change.
Through the page you can edit a work (including associating authors) by pressing on the ‘edit’ button. You can delete a work so long as it isn’t associated with an Edition. You can bring up a list of all editions in the work by pressing on the eye icon. Once loaded, the editions are displayed in a table. I may need to change this as there are so many fields relating to editions that the table is very wide. It’s usable if I make my browser take up the full width of my widescreen monitor, but for people using a smaller screen it’s probably going to be a bit unwieldy. From the list of editions you can press the ‘edit’ button to edit one of them – for example assigning one of the ‘no work’ editions to a work (existing or newly created via the edit form). You can also delete an edition if it’s not associated with anything. The Edition table includes a list of borrowing records, but I’ll also need to find a way to add in an option to display a list of all of the associated records for each, as I imagine this will be useful.
Pressing on the ‘Add Author’ menu item brings up a form allowing a new author to be added, which will then be available to associate with books throughout the CMS, while pressing on the ‘Browse Authors’ menu item brings up a list of authors. At the moment this table (and the book tables) can’t be reordered by their various columns. This is something else I still need to implement. You can delete an author if it’s not associated with anything and also edit the author details. As with the book tables I also need to add in a facility to bring up a list of all records the author is associated with, in addition to just displaying counts. I also noticed that there seems to be a bug somewhere that is resulting in blank authors occasionally being generated, and I’ll need to look into this.
I then spent some time setting up the project’s server, which is hosted at Stirling University. I was given access details by Stirling’s IT Support people and managed to sign into the Stirling VPN and get access to the server and the database. There was an issue getting write access to the server, but after that was resolved I was able to upload all of the CMS files, set up the WordPress instance that will be the main project website and migrate the database.
I was hoping I’d be able to get the CMS up and running on the new server without issue, but unfortunately this did not prove to be the case. It turns out that the Stirling server uses a different (and newer) version of the PHP scripting language than the Glasgow server and some of the functionality is different. For example, on the Glasgow server you can call a function with fewer parameters than it is set up to require (e.g. addAuthor(1) when the function is set up to take two parameters, e.g. addAuthor(1,2)). The version on the Stirling server doesn’t allow this and instead the script breaks and a blank page is displayed. It took a bit of time to figure out what was going on, and now I know what the issue is I’m going to have to go through every script and check how every function is called, and this is going to be my priority next week.
I also spent a bit of time finalising the website for the project’s pilot project, which deals with borrowing records at Glasgow. This was managed by Matt Sangster, and he’d sent me a list of things we wanted to sort; I spent a few hours going through this, and we’re just about at the point where the website can be made publicly available.
I had intended to spend Friday working on the new way of managing dates for the Historical Thesaurus. The script I’d created to generate the dates for all 790,000-odd lexemes completed during last Friday night and over the weekend I wrote another script that would then shift the connectors up one (so a dash would be associated with the date before the dash rather than the one after it, for example). This script then took many hours to run. Unfortunately I didn’t get a chance to look further into this until Thursday, when I found a bit of time to analyse the output, at which point I realised that while the generation of the new fulldate field had worked successfully, the insertion of bracketed dates into the new dates table had failed, as the column was set as an integer and I’d forgotten to strip out the brackets. Due to this problem I had to set my scripts running all over again. The first one completed at lunchtime on Friday, but the second didn’t complete until Saturday so I didn’t manage to work on the HT this week. However, this did mean that I was able to return to a Scots Thesaurus data processing task that Fraser asked me to look into at the start of May, so it’s not all bad news.
Fraser’s task required me to set up the Stanford Part of Speech tagger on my computer, which meant configuring Java and other such tasks that took a bit of time. I then wrote a script that took the output of a script I’d written over a year ago containing monosemous headwords in the DOST data, ran their definitions through the Part of Speech tagger and then outputted the results to a new table. This may sound straightforward, but it took quite some time to get everything working, and then another couple of hours for the script to process around 3,000 definitions. But I was able to send the output to Fraser on Friday evening.
Also this week I gave advice to a few members of staff, such as speaking to Matthew Creasy about his new Scottish Cosmopolitanism project, Jane Stuart-Smith about a new project that she’s putting together with QMU, Heather Pagan of the Anglo-Norman Dictionary about a proposal she’s putting together, Rhona Alcorn about the Scots School Dictionary app and Gerry McKeever about publicising his interactive map.
This was week 12 of Lockdown and on Monday I arranged to get access to my office at work in order to copy some files from my work PC. There were some scripts that I needed for the Historical Thesaurus, Fraser’s Scots Thesaurus and the Books and Borrowing projects so I reckoned it was about time to get access. It all went pretty smoothly, thankfully. My train into Central was very quiet – I think there were only about five people in my carriage, and none of them were near me. I walked to the West End and called security to let them know I’d arrived, then got into my office and spent about an hour and a half copying files and doing some work tasks. It was a bit strange to be back in my office after so long, with my calendar still showing March. Once the files were all copied I left the building, checked out with security and walked back through a still deserted town to Central. My train carriage was completely empty on the way back home.
I spent most of the rest of the week continuing with my work on the Books and Borrowing project. My main task was importing sample data into the content management system. Matt had sent me the latest copy of the Glasgow Student data over the weekend, and once I had the data processing scripts from the PC at work I could then process his spreadsheet and upload it to the pilot project database. Processing the Glasgow Student data was not entirely straightforward as the transcriber had used Microsoft Office formatting in the spreadsheet cells to replicate features such as superscript text and strikethroughs. It is a bit of a pain to export an Excel spreadsheet as plain text while retaining such formatting, but thankfully I’d solved that issue previously and my script was able to take an Excel file that had been saved as HTML and then pick out the formatting to keep whilst ditching all of the horrible HTML formatting that Microsoft adds in to Office files that are saved in that format.
Once the Glasgow Student data had been uploaded to the pilot project website I could then migrate it to the Books and Borrowing data structure. It took the best part of a day to write a script that processed the data, dealing with issues like multiple book levels, additional fields and generating ledgers and pages. After the migration there were 3 ledgers, 403 pages and 8191 borrowing records, with associations to 832 borrowers and 1080 books. With this in place I then began to import sample data from a previous study of Innerpeffray library. This was also in a spreadsheet, but was structured very differently and I needed to write a separate data import script to process it. There were some additional complications due to the character encoding the spreadsheet uses, which resulted in lots of hidden special characters being embedded in the text when the spreadsheet was converted to a plain text file for upload. This really messed up the upload process and took some time to get to the bottom of. Also, there is variation in page numbering (e.g. sometimes ‘3r’, sometimes ‘3 r’) and this resulted in multiple pages being created for each variation before I spotted the issue. Also, the spreadsheet is not always listed in page order – there were records from earlier pages added in amongst later pages. This also messed up the upload process before I spotted the issue and updated my script to take it into consideration. There were also some issues of data failing to upload when it contained accented characters, but I think I got to the bottom of that.
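The page-numbering problem is the sort of thing a one-line normalisation step guards against. As a sketch (Python here for illustration; the actual import scripts are PHP):

```python
import re

def normalise_page(label: str) -> str:
    """Collapse whitespace and case variants (e.g. '3 r', '3R', '3r')
    so each ledger page is only created once during import."""
    return re.sub(r"\s+", "", label).lower()
```

Looking pages up by their normalised label, rather than the raw transcribed one, means ‘3r’ and ‘3 r’ resolve to the same page record.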
As with the Glasgow data, I created editions from holdings. I did add in a check to see whether any of the Glasgow editions matched the titles of the Innerpeffray titles, and used the existing Glasgow edition if this situation arose, but due to the differences in transcription I don’t think any existing editions have been used. This will need some manual correction at some point. Similarly, there may be some existing Glasgow authors that might be used rather than repeating the same information from Innerpeffray, but due to differences in transcription I don’t think this will have happened either. As before, author data has for now just been uploaded into the ‘surname’ field and will need to be manually split up further, and some Glasgow and Innerpeffray authors will need to be merged. For example, in the Glasgow data we have ‘Cave, William, 1637-1713.’ whereas in Innerpeffray we have ‘Cave, William, 1637-1713’. Because of the full stop at the end of the Glasgow author these have ended up being inserted as separate authors. After the upload process was complete there were 6550 borrowing records for Innerpeffray, split over 340 pages in one ledger. A total of 1017 unique borrowers and 840 unique book holdings were added to the library.
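A trivial normalisation key would catch the full-stop case above, even if the deeper transcription differences still need manual merging. A minimal sketch (illustrative Python, not the project's actual matching logic):

```python
def author_key(name: str) -> str:
    """Normalise an author string for matching so trivial transcription
    differences (trailing full stops, stray spaces, case) don't create
    duplicate author records."""
    return " ".join(name.split()).rstrip(".").lower()
```

Matching on this key at import time would have merged the two versions of William Cave automatically; anything beyond punctuation and spacing differences would still fall through to manual review.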
I created user accounts for the rest of the team to access the CMS and test things out once the sample data for these two libraries was in place. The project PI, Katie Halsey, spotted an issue with the autocomplete for selecting an existing edition not working, so I spent some time investigating this. It turns out that there are more character encoding issues with the data that are resulting in the JSON file that is generated for use in the autocomplete failing to be valid. This is also happening with the AJAX script that populates the fields once an autocomplete option is selected. I only investigated this on Friday afternoon and didn’t have time to fix it, but I’m hoping that if I fix the character encoding issues next week and ensure all line break characters are removed from the data then things will be ok.
Other than the Books and Borrowing project, I spoke to Rhona Alcorn of the DSL this week to discuss timescales for DSL developments. I also fixed an issue with the Android version of the Scots School Dictionary app. I gave some advice to Cris Sarg, who is managing the data for the Glasgow Medical Humanities project, and I made some further tweaks to the ‘export data for publication’ facilities for Carole Hough’s REELS project.
I rounded off the week by working on sorting out the new way of storing dates for the Historical Thesaurus. Although we’d previously decided on a structure for the new dates system (which is much more rational and will allow labels to be associated with specific dates rather than the lexeme as a whole) I hadn’t generated the actual new date data. My earlier script (which I retrieved from my office on Monday) instead iterated through each lexeme, generated the new date information and only outputted data if the generated full date did not match the original full date. I’d saved this output as a spreadsheet and Fraser had gone through the rows and had identified any that needed fixing, updating the spreadsheet as required. I then wrote a script to fix the date columns that needed fixing in order for the new fulldate to be properly generated.
With that in place I then wrote a script to generate the new date information for each of the more than 700,000 lexemes in the system. I tried running this on the server initially, but it quickly timed out, meaning I had to run the script locally; I’ll then import the resulting table into the online database. The script took about 20 hours to run, but seems to have worked successfully, with almost 1.4 million date rows generated for the lexemes. Hopefully next week I’ll find the time to work on this some more.
I spent week 9 of Lockdown continuing to implement the content management system for the Books and Borrowing project. I was originally hoping to have completed an initial version of the system by the end of this week, but this was unfortunately not possible due to having to juggle work and home-schooling, commitments to other projects and the complexity of the project’s data. It took several days to complete the scripts for uploading a new borrowing record due to the interrelated nature of the data structure. A borrowing record can be associated with one or more borrowers, and each of these may be new borrower records or existing ones, meaning data needs to be pulled in via an autocomplete to prepopulate the section of the form. Books can also be new or existing records but can also have one or more new or existing book item records (as a book may have multiple volumes) and may be linked to one or more project-wide book edition records which may already exist or may need to be created as part of the upload process, and each of these may be associated with a new or existing top-level book work record. Therefore the script for uploading a new borrowing record needs to incorporate the ‘add’ and ‘edit’ functionality for a lot of associated data as well. However, as I have implemented all of these aspects of the system now, it will make it quicker and easier to develop the dedicated pages for adding and editing borrowers and the various book levels once I move onto this. I still haven’t worked on the facilities to add in book authors, genres or borrower occupations, which I intend to move onto once the main parts of the system are in place.
After completing the scripts for processing the display of the ‘add borrowing’ form and the storing of all of the uploaded data I moved onto the script for viewing all of the borrowing records on a page. Due to the huge number of potential fields I’ve had to experiment with various layouts, but I think I’ve got one that works pretty well, which displays all of the data about each record in a table split into four main columns (Borrowing, Borrower, Book Holding / Items, Book Edition / Works). I’ve also added in a facility to delete a record from the page. I then moved on to the facility to edit a borrowing record, which I’ve added to the ‘view’ page rather than linking out to a separate page. When the ‘edit’ button is pressed for a record its row in the table is replaced with the ‘edit’ form, which is identical in style and functionality to the ‘add’ form, but is prepopulated with all of the record’s data. As with the ‘add’ form, it’s possible to associate multiple borrowers, book items and editions, and also to manage the existing associations using this script. The processing of the form uses the same logic as the ‘add’ script so thankfully didn’t require much time to implement.
What I still need to do is add authors and borrower occupations to the ‘view page’, ‘add record’ and ‘edit record’ facilities, add the options to view / edit / add / delete a library’s book holdings and borrowers independently of the borrowing records, plus facilities to manage book editions / works, authors, genres and occupations at the top level as opposed to when working on a record. I also still need to add in the facilities to view / zoom / pan a page image and add in facilities to manage borrower cross-references. This is clearly quite a lot, but the core facilities of adding, editing and deleting borrowing, borrower and book records are now in place, which I’m happy about. Next week I’ll continue to work on the system ahead of the project’s official start date at the beginning of June.
Also this week I made a few tweaks to the interface for the Place-names of Mull and Ulva project, spoke to Matthew Creasy some more about the website for his new project, spoke to Jennifer Smith about the follow-on funding proposal for the SCOSYA project and investigated an issue that was affecting the server that hosts several project websites (basically it turned out that the server had run out of disk space).
I also spent some time working on scripts to process data from the OED for the Historical Thesaurus. Fraser is working on incorporating new dates from the OED and needs to work out which dates in the HT data we want to replace and which should be retained. The script groups the OED data by distinct lexeme. If a group has two or more lexemes it then checks that at least one of them is revised. It then makes subgroups of all of the lexemes that have the same date (so for example all the ‘Strike’ words with the same ‘sortdate’ and ‘lastdate’ are grouped together). If one word in the whole group is ‘revised’ and at least two words have the same date then the words with the same dates are displayed in the table. The script also checks for matches in the HT lexemes (based on catid, refentry, refid and lemmaid fields). If there is a match this data is also displayed. I then further refined the output based on feedback from Fraser, firstly highlighting in green those rows where at least two of the HT dates match, and secondly splitting the table into three separate tables, one with the green rows, one containing all other OED lexemes that have a matching HT lexeme and a third containing OED lexemes that (as yet) do not have a matching HT lexeme.
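The grouping logic described above can be sketched roughly as follows. This is a minimal illustration, not the actual script: the record layout and field names (‘lemma’, ‘revised’) are assumptions, with only ‘sortdate’ and ‘lastdate’ taken from the description.

```python
from collections import defaultdict

# Hypothetical OED lexeme records; the field names here are assumptions.
oed = [
    {"lemma": "strike", "sortdate": 1200, "lastdate": 1500, "revised": True},
    {"lemma": "strike", "sortdate": 1200, "lastdate": 1500, "revised": False},
    {"lemma": "strike", "sortdate": 1300, "lastdate": 1600, "revised": False},
    {"lemma": "smite", "sortdate": 1000, "lastdate": 1400, "revised": False},
]

# Group all lexemes by distinct lemma.
groups = defaultdict(list)
for lex in oed:
    groups[lex["lemma"]].append(lex)

candidates = []
for lemma, lexemes in groups.items():
    # Only consider groups of two or more with at least one revised member.
    if len(lexemes) < 2 or not any(l["revised"] for l in lexemes):
        continue
    # Subgroup lexemes sharing the same (sortdate, lastdate) pair.
    by_date = defaultdict(list)
    for l in lexemes:
        by_date[(l["sortdate"], l["lastdate"])].append(l)
    for dates, sub in by_date.items():
        if len(sub) >= 2:  # at least two words share the same dates
            candidates.append((lemma, dates, sub))
```

With the sample data above, only the two ‘strike’ words sharing 1200–1500 would be flagged for display.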
This was week 8 of Lockdown and I spent the majority of it working on the content management system for the Books and Borrowing project. The project is due to begin at the start of June and I’m hoping to have the CMS completed and ready to use by the project team by then, although there is an awful lot to try and get into place. I can’t really go into too much detail about the CMS, but I have completed the pages to add a library and to browse a list of libraries with the option of deleting a library if it doesn’t have any ledgers. I’ve also done quite a lot with the ‘View library’ page. It’s possible to edit a library record, add a ledger and add / edit / delete additional fields for a library. You can also list all of the ledgers in a library with options to edit the ledger, delete it (if it contains no pages) and add a new page to it. You can also display a list of pages in a ledger, with options to edit the page or delete it (if it contains no records). You can also open a page in the ledger and browse through the next and previous pages.
At the moment I’m in the middle of creating the facility to add a new borrowing record to the page. This is the most complex part of the system as a record may have multiple borrowers, each of which may have multiple occupations, and multiple books, each of which may be associated with higher level book records. Plus the additional fields for the library need to be taken into consideration too. By the end of the week I was at the point of adding in an auto-complete to select an existing borrower record and I’ll continue with this on Monday.
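The relationships that make this part complex (one borrowing record, many borrowers, many books) boil down to link tables in the database. Here is a minimal sketch of that shape using SQLite; the table and column names are my own illustrations, not the project’s actual schema.

```python
import sqlite3

# In-memory database purely for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE borrowing (id INTEGER PRIMARY KEY, page_id INTEGER);
CREATE TABLE borrower  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE book_item (id INTEGER PRIMARY KEY, title TEXT);
-- Link tables let one borrowing record reference many borrowers and books.
CREATE TABLE borrowing_borrower (borrowing_id INTEGER, borrower_id INTEGER);
CREATE TABLE borrowing_book    (borrowing_id INTEGER, book_item_id INTEGER);
""")

# One borrowing record shared by two borrowers.
db.execute("INSERT INTO borrowing VALUES (1, 10)")
db.execute("INSERT INTO borrower VALUES (1, 'A. Smith')")
db.execute("INSERT INTO borrower VALUES (2, 'J. Boswell')")
db.executemany("INSERT INTO borrowing_borrower VALUES (?, ?)", [(1, 1), (1, 2)])

# All borrowers attached to borrowing record 1.
rows = db.execute("""
SELECT b.name FROM borrower b
JOIN borrowing_borrower bb ON bb.borrower_id = b.id
WHERE bb.borrowing_id = 1 ORDER BY b.id
""").fetchall()
```

The same pattern extends to borrower occupations and to the book item / edition hierarchy, with one link table per many-to-many relationship.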
In addition to the B&B project I did some work for other projects as well. For Thomas Clancy’s Place-names of Kirkcudbrightshire project (now renamed Place-names of the Galloway Glens) I had a few tweaks and updates to put in place before Thomas launched the site on Tuesday. I added a ‘Search place-names’ box to the right-hand column of every non-place-names page which takes you to the quick search results page and I added a ‘Place-names’ menu item to the site menu, so users can access the place-names part of the site. Every place-names page now features a sub-menu with access to the place-names pages (Browse, element glossary, advanced search, API, quick search). To return to the place-name introductory page you can press on the ‘Place-names’ link in the main menu bar. I had unfortunately introduced a bug to the ‘edit place-name’ page in the CMS when I changed the ordering of parishes to make KCB parishes appear first. This was preventing any place-names in BMC from having their cross references, feature type and parishes saved when the form was submitted. This has now been fixed. I also added Google Analytics to the site. The virtual launch on Tuesday went well and the site can now be accessed here: https://kcb-placenames.glasgow.ac.uk/.
I also added in links to the DSL’s email and Instagram accounts to the footer of the DSL site and added some new fields to the database and CMS of the Place-names of Mull and Ulva site. I also created a new version of the Burns Supper map for Paul Malgrati that included more data and a new field for video dimensions that the video overlay now uses. I also replied to Matthew Creasy about a query regarding the website for his new Scottish Cosmopolitanism project and a query from Jane Roberts about the Thesaurus of Old English and made a small tweak to the data of Gerry McKeever’s interactive map for Regional Romanticism.
This was the fifth week of lockdown and my first full week back after the Easter holidays, which as with previous weeks I needed to split between working and home-schooling my son. There was some issue with the database on the server that powers many of the project websites this week, meaning all of those websites stopped working. I had to spend some time liaising with Arts IT Support to get the issue sorted (as I don’t have the necessary server-level access to fix such matters) and replying to the many emails from the PIs of projects who understandably wanted to know why their website was offline. The server was unstable for about 24 hours, but has thankfully been working without issue since then.
Alison Wiggins got in touch with me this week to discuss the content management system for the Mary, Queen of Scots Letters sub-project, which I set up for her last year as part of her Archives and Writing Lives project. There is now lots of data in the system and Alison is editing it and wanted me to make some changes to the interface to make the process a little swifter. I changed how sorting works on the ‘browse documents’ and ‘browse parties’ pages. The pages are paginated and previously the sort only affected the subset of records found on the current page rather than sorting the whole dataset. I updated this so that sorting now reorganises everything, and I also updated the ‘date sorting’ column so that it now uses the ‘sort_date’ field rather than the ‘display_date’ field. Alison had also noticed that the ‘edit party’ page wasn’t working and I discovered that there was a bug on this page that was preventing updates being saved in the database, which I fixed. I also created a new ‘Browse Collections’ page and added it to the top menu. This is a pretty simple page that lists the distinct collections alphabetically and for each lists their associated archives and documents, each with a link through to the relevant ‘edit’ page. Finally, I gave Alison some advice on editing the free-text fields, which use the TinyMCE editor, to strip out unwanted HTML that has been pasted into them and thought about how we might present this data to the public.
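The pagination fix boils down to ordering the full dataset before slicing out the current page, rather than sorting only the slice. A minimal sketch of the corrected approach (the field names and page size here are illustrative):

```python
# Illustrative document records: 'sort_date' is used for ordering,
# 'display_date' only for presentation.
docs = [
    {"title": "Letter C", "sort_date": "1565-03-01", "display_date": "March 1565"},
    {"title": "Letter A", "sort_date": "1560-01-15", "display_date": "Jan 1560"},
    {"title": "Letter B", "sort_date": "1562-07-20", "display_date": "July 1562"},
]

PAGE_SIZE = 2

def page_of(docs, page):
    # Sort the whole dataset first, then slice out the requested page.
    # Sorting only the current page's slice gives the buggy behaviour
    # described above, where each page is ordered internally but the
    # dataset as a whole is not.
    ordered = sorted(docs, key=lambda d: d["sort_date"])
    start = (page - 1) * PAGE_SIZE
    return ordered[start:start + PAGE_SIZE]
```

Here page 1 returns Letters A and B and page 2 returns Letter C, regardless of the order the records were stored in.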
I also responded to a query from Matthew Creasy about the website for a new project he is working on. I set up a placeholder website for this project a couple of months ago and Matthew is now getting close to the point where he wants the website to go live. Gerry McKeever also got in touch with me to ask whether I would write some sections about the technology behind the interactive map I created for his Regional Romanticism project for a paper he is putting together. We talked a little about the structure of this and I’ll write the required sections when he needs them.
Other than these issues I spent the bulk of the week working on the Books and Borrowing project. Katie and Matt got back with some feedback on the data description document that I completed and sent to them before Easter and I spent some time going through this feedback and making an updated version of the document. After sending the document off I started working on a description of the content management system. This required a lot of thought and planning as I needed to consider how all of the data as defined in the document would be added, edited and deleted in the most efficient and easy to use manner. By the end of the week I’d written some 2,500 words about the various features of the CMS, but there is still a lot to do. I’m hoping to have a version completed and sent off to Katie and Matt early next week.
My other big task of the week was to work with the data for the Anglo-Norman Dictionary again. As mentioned previously, this project’s data and systems are in a real mess and I’m trying to get it all sorted along with the project’s Editor, Heather Pagan. Previously we’d figured out that there was a version of the AND data in a file called ‘all.xml’, but that it did not contain the updates to the online data from the project’s data management system and we instead needed to somehow extract the data relating to entries from the online database.
Back in February when looking through the documentation again I discovered where the entry data was located in a file called ‘entry_hash’ within the directory ‘/var/data’. I stated that the data was stored in a Berkeley DB and gave Heather a link to a place where the database files could be downloaded (https://www.oracle.com/database/technologies/related/berkeleydb-downloads.html).
I spent some time this week trying to access the data. It turns out that the download is not for a nice, standalone database program but is instead a collection of files that only seem to do anything when they are called from your own program written in something like C or Java. There was a command called ‘db_dump’ that would supposedly take the binary hash file and export it as plain text. This did work, in that the file could then be read in a text editor, but it was unfortunately still a hash file – just a series of massively long lines of numbers.
What I needed was just some way to view the contents of the file, and thankfully I came across this answer: https://stackoverflow.com/a/19793412 which suggests using the Python programming language to export the contents from a hash file. However, Python dropped support for the ‘dbhash’ function years ago so I had to track down and install a version of Python from 2010 for this to work. Also, I’ve not really used Python before so that took some getting used to. Thankfully with a bit of tweaking I was able to write a short Python script that appears to read through the ‘entry_hash’ file and output each line as plain text. I’m including the script here for future reference:
import dbhash  # legacy module, available in Python 2 only
f = open("testoutput.txt", "w+")
for k, v in dbhash.open("entry_hash").iteritems():
    f.write(v + "\n")  # each value holds an entry's XML as plain text
The resulting file includes XML entries and is 3,969,350 lines long. I sent this to Heather for her to have a look at, but Heather reckoned that some updates to the data were not present in this output. I wrote a little script that counts the number of <entry> elements in each of the XML files (all.xml and entry_hash.xml) and the former has 50426 entries while the latter has 53945. So the data extracted from the ‘entry_hash’ file is definitely not the same. The former file is about 111Mb in size while the latter is 133Mb so there’s definitely a lot more content, which I think is encouraging. Further investigation showed that the ‘all.xml’ file was actually generated in 2015 while the data I’ve exported is the ‘live’ data as it is now, which is good news. However, it would appear that data in the data management system that has not yet been published is stored somewhere else. As this represents two years of work it is data that we really need to track down.
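A count like this can be done with a streaming parse so that 100Mb+ files never need to be held in memory at once. Here is a sketch using Python’s standard library; the actual counting script may well have worked differently.

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def count_entries(source):
    # Stream through the XML, counting <entry> elements as their end
    # tags are reached, and discard each element once counted so the
    # whole tree is never built in memory.
    count = 0
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "entry":
            count += 1
        elem.clear()  # free the element's children
    return count

# Tiny stand-in for a file like all.xml or entry_hash.xml.
sample = BytesIO(b"<all><entry/><entry/><entry/></all>")
```

Running `count_entries` over each of the two exported files would give the 50426 / 53945 comparison mentioned above.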
I went back through the documentation of the old system, which really is pretty horrible to read and unnecessarily complicated. Plus there are multiple versions of the documentation without any version control stating which is the most up to date version. I have a version in Word and a PDF containing images of scanned pages of a printout. Both differ massively without it being clear which superseded which. It turns out the scanned version is likely to be the most up to date, but of course being badly scanned images that are all wonky it’s not possible to search the text and attempting OCR didn’t work. After a lengthy email conversation with Heather we realised we would need to get back into the server to try and figure out where the data for the DMS was located. Heather needed to be in her office at work to do this and on Friday she managed to get access and via a Zoom call I was able to see the server and discuss the potential data locations with her. It looks like all of the data that has been worked on in the DMS but has yet to be integrated with the public site is located in a directory called ‘commit-out’. This contains more than 13,000 XML files, which I now have access to. If we can combine this with the data from ‘entry_hash’ and the data Heather and her colleague Geert have been working on for the letters R and S then we should have the complete dataset. Of course it’s not quite so simple as whilst looking through the server we realised that there are many different locations where a file called ‘entry_hash’ is found and no clear way of knowing which is the current version of the public data and which is just some old version that is not in use. What a mess. Anyway, progress has been made and our next step is to check that the ‘commit-out’ files do actually represent all of the changes made in the DMS and that the version of ‘entry_hash’ that I have so far extracted is the most up to date version.