This was my last week before the Christmas holidays, and it was a four-day week as I’d taken Friday off to use up some unspent holidays. Despite only being four days long it was a very hectic week, as I had lots of loose ends to tie up before the launch of the new Anglo-Norman Dictionary website on Wednesday. This included tweaking the appearance of ‘Edgloss’ tags to ensure they always have brackets (even if they don’t in the XML), updating the forms to add line breaks between parts of speech and updating the source texts pop-ups and source texts page to move the information about the DEAF website.
I also added in a lot of the ancillary page data, including the help text, various essays, the ‘history’ page, copyright and privacy pages, the memorial lectures and the multi-section ‘introduction to the AND’. I didn’t quite manage to get all of the links working in the latter and I’ll need to return to this next year. I also overhauled the homepage and footer, adding in the project’s Twitter feed, a new introduction and adding links to Twitter and Facebook to the footer.
I also identified and fixed an error with the label translations, which were sometimes displaying the wrong translation. My script that extracted the labels was failing to grab the sense ID for subsenses. This ID is only used to pull out the appropriate translation, but because of the failure the ID of the last main sense was being used instead. I therefore had to update my script and regenerate the translation data. I also updated the label search to add in citations as well as translations. This means the search results page can get very long as both labels and translations are applied at sense level, so we end up with every citation in a matching sense listed, but apparently this is what’s wanted.
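The subsense bug described above can be illustrated with a small sketch. The element and attribute names below are invented for illustration and are not the real AND schema; the point is that each sense or subsense must supply its own ID rather than inheriting the ID of the last main sense processed.

```python
import xml.etree.ElementTree as ET

# Hypothetical entry structure, purely illustrative of the fix.
xml = """
<main_entry>
  <sense><senseInfo id="s1"/><label>agriculture</label></sense>
  <sense><senseInfo id="s2"/>
    <subsense><senseInfo id="s2.1"/><label>law</label></subsense>
  </sense>
</main_entry>
"""

root = ET.fromstring(xml)

labels = []
for el in root.iter():
    if el.tag in ("sense", "subsense"):
        # The fix: read the ID from this element's own <senseInfo> child
        # instead of carrying over the ID of the last main sense.
        info = el.find("senseInfo")
        label = el.find("label")
        if info is not None and label is not None:
            labels.append((info.get("id"), label.text))

print(labels)
```

With the buggy version, the 'law' label would have been stored against 's2' rather than 's2.1', pulling out the wrong translation.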
I also fixed the display of ‘YBB’ sources, which for some unknown reason are handled differently to all other sources in the system and fixed the issue with deviant forms and their references and parts of speech.
On Wednesday we made the site live, replacing the old site with the new one, which you can now access here: https://anglo-norman.net/. It wasn’t entirely straightforward to get the DNS update working, but we got there in the end, and after making some tweaks to paths and adding in Google Analytics the site was ready to use, which is quite a relief. There is still a lot of work to do on the site, but I’m very happy with the progress I’ve made since I began the redevelopment in October.
Also this week I set up a new website for phase two of the ‘Editing Burns for the 21st Century’ project and upgraded all of the WordPress sites I manage to the most recent version. I also arranged a meeting with Jane Stuart-Smith to discuss a new project in the New Year, replied to Kirsteen McCue about a proposal she’s finishing off, replied to Simon Taylor about a new place-name project he wants me to be involved with and replied to Carolyn Jess-Cooke about a project of hers that will be starting next year.
That’s all for 2020. Here’s hoping 2021 is not going to be quite so crazy!
I took Friday off again this week as I needed to go and collect a new pair of glasses from my opticians in the West End, which is quite a trek from my house. Although I’d taken the day off I ended up working for about three hours as on Thursday Fraser Dallachy emailed me to ask about the location of the semantically tagged EEBO dataset that we’d worked on a couple of years ago. I didn’t have this at home but I was fairly certain I had it on a computer in my office so I decided to take the opportunity to pop in and locate the data. I managed to find a 10GB tar.gz file containing the data on my desktop PC, along with the unzipped contents (more than 25,000 files) in another folder. I’d taken an empty external hard drive with me and began the process of copying the data, which took hours.

I’d also remembered that I’d developed a website where the tagged data could be searched and that this was on the old Historical Thesaurus server, but unfortunately it no longer seemed to be accessible. I also couldn’t seem to find the code or data for it on my desktop PC, but I remembered that previously I’d set up one of the four old desktop PCs I have sitting in my office as a server and the system was running on this. It took me a while to get the old PC connected and working, but I managed to get it to boot up. It didn’t have a GUI installed so everything needed to be done at the command line, but I located the code and the database. I had planned to copy this to a USB stick, but the server wasn’t recognising USB drives (in either NTFS or FAT format) so I couldn’t actually get the data off the machine. I decided therefore to install Ubuntu Linux on a bootable USB stick and to get the old machine to boot into this rather than run the operating system on the hard drive. Thankfully this worked and I could then access the PC’s hard drive from the GUI that ran from the USB stick.
I was able to locate the code and the data and copy them onto the external hard drive, which I then left somewhere that Fraser would be able to access it. Not a bad bit of work for a supposed holiday.
As with previous weeks, I split my time mostly between the Anglo-Norman Dictionary and the Dictionary of the Scots Language. For the AND I finally updated the user interface. I added in the AND logo and updated the colour schemes to reflect the colours used in the logo. I’m afraid the colours used in the logo seem to be straight out of a late 1990s website so unfortunately the new interface has that sort of feel about it too. The header area now has a white background as the logo needs a white background to work. The ‘quick search’ option is now better positioned and there is a new bar for the navigation buttons. The active navigation button and other site buttons are now the ‘A’ red, panels are generally the ‘N’ blue and the footer is the ‘D’ green. The main body is now slightly grey so that the entry area stands out from it. I replaced the header font (Cinzel) with Cormorant Garamond as this more closely resembles the font used in the logo.
The left-hand panel has been reworked so that entries are smaller and their dates are right-aligned. I also added stripes to make it easier to keep your eye on an entry and its date. The fixed header that appears when you scroll down a longer entry now features the AND logo. The ‘Top’ button that appears when you scroll down a long entry now appears to the right so it doesn’t interfere with the left-hand panel. The footer now only features the logos for Aberystwyth and AHRC and these appear on the right, with links to some pages on the left.
I have also updated the appearance of the ‘Try an Advanced Search’ button so it only appears on the ‘quick search’ results page (which is what should have happened originally). I also removed from display the semantic tags that had been pulled in from the XML but still need to be edited out of it. I have also ticked a few more things off my ‘to do’ list, including replacing underscores with spaces in parts of speech and language tags and replacing ‘v.a.’ and ‘v.n.’ as requested. I also updated the autocomplete styles (when you type into the quick search box) so they fit in with the site a bit better.
I then began looking into reordering the citations in the entries so they appear in date order within their senses, but I remembered that Geert wanted some dates to be batch processed and realised that this should be attempted first. I had a conversation with Geert about this, but the information he sent wasn’t well structured enough to be used and it looks like the batch updating of dates will need to wait until after the launch. Instead I moved on to updating the source text pop-ups in the entry. These feature links to the DEAF website and a link to search the AND entries for all others that feature the source.
On the old site the DEAF links linked through to another page on the old site that included the DEAF text and then linked through to the DEAF website. I figured it would be better to cut out this middle stage and link directly through to DEAF. This meant figuring out which DEAF page should be linked to and formatting the link so their page jumps to the right place. I also added in a note about the link under it.
This was pretty straightforward but the ‘AND Citations’ link was not. On the old site clicking on this link ran a search that displayed the citations. We had nothing comparable to this developed for the new site, so I needed to update the citation search to allow the user to search based on the sigla (source text). This in turn meant updating my citations table to add a field for holding the citation siglum, regenerating the citations and citation search words, and then updating the API to allow a citation search to be limited by a siglum ID. I then updated the ‘Citations’ tab of the ‘Advanced Search’ page to add a new box for ‘citation siglum’. This is an autocomplete box – you type some text and a list of matching sigla is displayed, from which you can select one. This in turn meant updating the API to allow the sigla to be queried for this autocomplete. For example, type ‘a-n’ into the box and a list of all sigla containing this text is displayed. Select ‘A-N Falconry’ and you can then find all entries where this siglum appears. You can also combine this with citation text and date (although the latter won’t be much use).
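The autocomplete lookup itself amounts to a case-insensitive substring match over the sigla. A minimal sketch, with an invented in-memory list standing in for the database query the API actually performs:

```python
# Invented sample sigla; the real list comes from the citations table.
SIGLA = ["A-N Falconry", "A-N Med", "Rot Parl", "YBB Ed I"]

def match_sigla(term: str, limit: int = 10) -> list:
    """Return sigla containing the typed text, case-insensitively."""
    term = term.lower()
    return [s for s in SIGLA if term in s.lower()][:limit]

print(match_sigla("a-n"))
```

Typing ‘a-n’ returns both ‘A-N’ sigla, from which the user picks one to restrict the citation search.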
I’ve also tweaked the search results tab on the entry page so that the up and down buttons don’t appear if you’re at the top or bottom of the results, and I’ve ensured that if you’re looking at an entry towards the end of the results, a sufficient number of the preceding results appear. I’ve also ensured that the entry lemma and hom appear in the <title> of the web page (in the browser tab) so you can easily tell which tab contains which entry. Here’s a screenshot of the new interface:
For the DSL I spent some time answering emails about a variety of issues. I also completed my work on the issue of accents in the search, updating the search forms so that any accented characters that a user adds in are converted to their non-accented version before the search runs, ensuring someone searching for ‘Privacé’ will find all ‘privace’ in the full text. I also tweaked the wording of the search results to remove the ‘supplementary’ text from it as all supplementary items have now either been amalgamated or turned into main entries. I also put in redirects from all of the URLs for the deleted child entries to their corresponding main entries. This was rather time consuming to do as I needed to go through each deleted child entry, get each of its URLs, then get the main URL of the corresponding main entry, add these to a new database table and then add a new endpoint to the V4 API that accepts a child URL, then checks the database for any main URL and returns this. Then I needed to update the entry page so that the URL is passed to this new redirect checking API endpoint and if it matches a deleted item the page needs to redirect to the proper URL.
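The accent conversion can be done with Unicode decomposition: decompose each character, then drop the combining marks. This is a sketch of the idea rather than the site’s actual implementation (which runs in the search forms):

```python
import unicodedata

def fold_accents(text: str) -> str:
    """Strip combining accents so 'Privacé' matches 'privace' entries."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(fold_accents("Privacé").lower())
```

Run on a submitted search string before querying, this makes ‘Privacé’ and ‘privace’ equivalent, matching the accent-stripped data in the index.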
Also this week I had a conversation with Wendy Anderson about updates to the Mapping Metaphor website. I had thought these would just be some simple tweaks to the text of existing pages, but instead the site structure needs to be updated, which might prove to be tricky. I’m hoping to be able to find the time to do this next week.
Finally, I continued to work on the Burns Supper map, adding in the remaining filters. I also fixed a few dates, added in the introductory text and a ‘favicon’. I still need to work on the layout a bit, which I’ll hopefully do next week, but the bulk of the work for the map is now complete.
This was a four-day week for me as I had an optician’s appointment on Tuesday, and as my optician is over the other side of the city (handy for work, not so handy for working from home) I decided I’d just take the day off. I spent most of Monday working on the interactive map of Burns Suppers I’m developing with Paul Malgrati in Scottish Literature. This week I needed to update my interface to incorporate the large number of filters that Paul wants added to the map. In doing so I had to change the way the existing ‘period’ filter works to fit in with the new filter options, as previously the filter was being applied as soon as a period button was pressed. Now you need to press on the ‘apply filters’ button to see the results displayed, and this allows you to select multiple options without everything reloading as you work. There is also a ‘reset filters’ button which turns everything back on. Currently the ‘period’, ‘type of host’ and ‘toasts’ filters are in place, all as a series of checkboxes that allow you to combine any selections you want. Within a section (e.g. type of host) the selections are joined with an OR (e.g. ‘Burns Society’ OR ‘Church’). Selections in different sections are joined with an AND (e.g. ‘2019-2020’ AND (‘Burns Society’ OR ‘Church’)). For ‘toasts’ a location may have multiple toasts and the location will be returned if any of the selected toasts are associated with the location. E.g. if a location has ‘Address to a Haggis’ but not ‘Loyal Toast’ it will still be returned if you’ve selected both of these toasts. I also updated the introductory panel to make it much wider, so we can accommodate some text and also so there is more room for the filters. Even with this it is going to be a bit tricky to squeeze all of the filters in, but I’ll see what I can do next week.
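The OR-within-a-section, AND-between-sections logic can be sketched as follows. The locations and filter values here are entirely made up; the real map builds an equivalent query against its database:

```python
# Invented sample data illustrating the filter rules.
locations = [
    {"name": "Glasgow", "period": "2019-2020", "host": "Burns Society",
     "toasts": {"Address to a Haggis"}},
    {"name": "Sydney", "period": "2010-2018", "host": "Church",
     "toasts": {"Loyal Toast"}},
]

def apply_filters(locations, selections):
    """selections maps a section name to the set of ticked values."""
    results = []
    for loc in locations:
        ok = True
        for section, chosen in selections.items():
            if not chosen:
                continue  # nothing ticked in this section
            value = loc[section]
            if isinstance(value, set):
                # 'toasts': match if ANY selected toast is associated
                # with the location.
                ok = bool(value & chosen)
            else:
                ok = value in chosen  # OR within the section
            if not ok:
                break  # sections combine with AND
        if ok:
            results.append(loc["name"])
    return results

print(apply_filters(locations,
                    {"period": {"2019-2020"},
                     "host": {"Burns Society", "Church"}}))
```

Selecting both toasts returns both locations, since each has at least one of the selected toasts.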
I then turned my attention to the Dictionary of the Scots Language. Ann had noticed last week that the full text searches were ‘accent sensitive’ – i.e. entries containing accented characters would only be returned if your search term also contained the accented characters. This is not what we wanted and I spent a bit of time investigating how to make Solr ignore accents. Although I found a few articles dealing with this issue they were all for earlier versions of Solr and the process didn’t seem all that easy to set up, especially as I don’t have direct access to the server that Solr resides on so tweaking settings is not an easy process. I decided instead to just strip out accents from the source data before it was ingested into Solr, which was a much more straightforward process and fitted in with another task I needed to tackle, which was to regenerate the DSL data from the old editing system, delete the duplicate child entries and set up a new version of the API to use this updated dataset. This process took a while to go through, but I got there in the end and we now have a new Solr collection set up for the new data, which has the accents and the duplicate child entries removed. I then updated one of our test front-ends to connect to this dataset so people can test it. However, whilst checking this I realised that stripping out the accents had introduced some unforeseen issues. The fulltext search is still ‘accent sensitive’, it’s just that I’ve replaced all accented characters with their non-accented equivalents. This means an accented search will no longer find anything. Therefore we’ll need to ensure that any accents in a submitted search string are also stripped out. In addition, stripping accents out of the Solr text also means that accents no longer appear in the snippets in the search results, meaning the snippets don’t fully reflect the actual content of the entries. This may be a larger issue and will need further discussion.
I also wrote a little script to generate a list of entry IDs that feature an <su> inside a <cref> excluding those that feature </geo><su> or </geo> <su>. These entries will need to be manually updated to ensure the <su> tags don’t get displayed.
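A rough sketch of that check, working over the raw XML text with a regex. The entries and their structure are invented; the rule is the one described above – flag an <su> inside a <cref> unless it directly follows a closing </geo> (with or without a space between):

```python
import re

# Invented sample entries keyed by entry ID.
entries = {
    1: "<cref><geo>Kent</geo><su>1</su></cref>",    # excluded
    2: "<cref><geo>Kent</geo> <su>1</su></cref>",   # excluded (space)
    3: "<cref>Rot Parl<su>2</su></cref>",           # flagged
}

def needs_fixing(xml: str) -> bool:
    for cref in re.findall(r"<cref>.*?</cref>", xml, flags=re.S):
        for m in re.finditer(r"<su>", cref):
            # Flag any <su> not immediately preceded by </geo>,
            # allowing optional whitespace between the two tags.
            before = cref[:m.start()].rstrip()
            if not before.endswith("</geo>"):
                return True
    return False

flagged = [eid for eid, xml in entries.items() if needs_fixing(xml)]
print(flagged)
```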
I spent the remainder of the week continuing with the redevelopment of the Anglo-Norman Dictionary site. My 51-item ‘to do’ list that I compiled last week has now shot up to 65 items, but thankfully during this week I managed to tick 20 of the items off. I have now imported all of the corrected data that the editors were working on, including not just the new ‘R’ records but corrections to other letters too. I also ensured that the earliest dates for entries now appear in the ‘search results’ and ‘log’ tabs in the left-hand panel, and these dates also appear in the fixed header that gets displayed when you scroll down long entries. I also added the date to the title in the main page.
I added content to the ‘cite this entry’ popup, which now contains the same citation options as found on the Historical Thesaurus site, and I added help text to the ‘browse’, ‘search results’ and ‘entry log’ pop-ups. I made the ‘clear search’ button right aligned and removed the part of speech tooltips as parts of speech appear all over the place and I thought it would confuse things if they had tooltips in the search results but not elsewhere. I also added a ‘Try an advanced search’ button at the top of the quick search results page.
I set up WordPress accounts for the editors and added the IP addresses used by the Aberystwyth VPN to our whitelist to ensure that the editors will be able to access the WordPress part of the site, and I added redirects that will work from old site URLs once the new site goes live. I also put in redirects for all of the static pages I could find. We also now have the old site accessible via a subdomain so we should be able to continue to access the old site when we switch the main domain over to the new site.
I figured out why the entry for ‘ipocrite’ was appearing in all bold letters. This was caused by an empty XML tag and I updated the XSLT to ensure these don’t get displayed. I updated Variant/Deviant forms in the search results so they just have the title ‘Forms’ now, and I think I’ve figured out why the sticky side panel wasn’t sticking and fixed it.
I updated my script that generates citation date text and regenerated all of the earliest date text to include the prefixes and suffixes, and investigated the issue of commentary boxes appearing multiple times. I checked all of the entries and there were about 5 that were like this, so I manually fixed them. I also ensured that when loading the advanced search after a quick search the ‘headword’ tab is now selected and the quick search text appears in the headword search box and I updated the label search so that label descriptions appear in a tooltip.
Also this week I participated in the weekly Iona Placenames Zoom call and made some tweaks to the content management systems for both Iona and Mull to add in researcher notes fields (English and Gaelic) to the place-name elements. I also had a chat with Wendy Anderson about making some tweaks to the Mapping Metaphor website for REF and spoke to Carole Hough about her old Cogtop site that is going to expire soon.
I spent two days each working on updates to the Dictionary of the Scots Language and the redevelopment of the Anglo-Norman Dictionary this week, with the remaining day spent on tasks for a few projects. For the DSL I fixed an issue with the way a selected item was being remembered from the search results. If you performed a search, navigated to a page of results that wasn’t the first page, then clicked on one of the results to view an entry, this set a ‘current entry’ variable. This was used when loading the results from the ‘return to results’ button on the entry page to ensure that the page of the results that featured the entry you were looking at would be displayed. However, if you then clicked the ‘refine search’ button to return to the advanced search page this ‘current entry’ value would be retained, meaning that when you clicked the search button to perform a new search the results would load at whatever page the ‘current entry’ was located on, if it appeared in the new search results. As the ‘current entry’ would not necessarily be in the new results set, the issue only cropped up every now and then. Thankfully having identified the issue it was easy to fix – whenever the search form loads as a result of a ‘refine search’ selection the ‘current entry’ variable is now cleared. It is still retained and used when you return to the search results from an entry page.
I also had a discussion with the editors about removing duplicate child entries from the data and grabbing an updated dataset from the old editing system one last time. Removing duplicate child entries will mean deleting several thousand entries and I was reluctant to do so on the test systems we already have in case anything goes wrong, so I’ve decided to set up a new test version, which will be available via a new version of the API and will be accessible via one of the existing test front-ends I’ve created. I’ll be tackling this next week.
For the AND I focussed on importing more than 4,500 new or updated entries into the dictionary. In order to do this I needed to look at the new data and figure out how it differed structurally from the existing data, as previously the editors’ XML was passed through an online system that made changes to it before it was published on the old website. I discovered that there were five main differences:
- <main_entry> does not have an ID, or a ‘lead’ or a ‘unikey’. I’ll need to add in the ‘lead’ attribute as this is what controls the editor’s initials on the entry page, and I decided to add in an ID as a means of uniquely identifying the XML, although there is already a new unique ID for each entry in the database. ‘unikey’ doesn’t seem to be used anywhere so I decided not to do anything about this.
- <sense> and <subsense> do not have IDs or ‘n’ numbers. The latter I already set up a script to generate so I could reuse that. The former aren’t used anywhere as each sense also has a <senseInfo> with another ID associated with it.
- <senseInfo> does not have IDs, ‘seq’ or ‘pos’. IDs here are important as they are used to identify senses / subsenses for the searches and I needed to develop a way of adding these in here. ‘seq’ does not appear to be used but ‘pos’ is. It’s used to generate the different part of speech sections between senses and in the ‘summary’ and this will need to be added in.
- <attestation> does not have IDs and these are needed as they are recorded in the translation data.
- <dateInfo> has an ID but the existing data from the DMS does not have IDs for <dateInfo>. I decided to retain these but they won’t be used anywhere.
I wrote a script that processed the XML to add in entry IDs, editor initials, sense numbers, senseInfo IDs, parts of speech and attestation IDs. This took a while to implement and test but it seemed to work successfully. After that I needed to work on a script that would import the data into my online system, which included regenerating all of the data used for search purposes, such as extracting cross references, forms, labels, citations and their dates, translations and earliest dates for entries. All seemed to work well and I made the new data available via the front-end for the editors to test out.
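A simplified sketch of that augmentation pass. The tag names follow the differences listed above, but the ID formats, the initials and the overall structure here are invented for illustration:

```python
import itertools
import xml.etree.ElementTree as ET

# Stripped-down stand-in for an editor's entry.
xml = """<main_entry>
  <sense><senseInfo/></sense>
  <sense><senseInfo/><attestation/></sense>
</main_entry>"""

root = ET.fromstring(xml)
ids = itertools.count(1)

root.set("id", "AND-00001")   # new unique entry ID (format invented)
root.set("lead", "gdw")       # editor's initials, shown on the entry page

for n, sense in enumerate(root.findall("sense"), start=1):
    sense.set("n", str(n))                    # sense number
for info in root.iter("senseInfo"):
    info.set("id", f"sense-{next(ids)}")      # used by the searches
for att in root.iter("attestation"):
    att.set("id", f"att-{next(ids)}")         # referenced by translations

print(ET.tostring(root, encoding="unicode"))
```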
I also asked the editors about how to track different versions of data so we know which entries are new or updated as a result of the recent update. It turned out that there are six different statements that need to be displayed underneath entries depending on when the entries were published so I spent a bit of time applying these to entries and updating the entry page to display the notices.
After that I made a couple of tweaks to the new data (e.g. links to the MED were sometimes missing some information needed for the links to work) and discussed adding in commentaries with Geert. I then went through all of the emails to and from the editors in order to compile a big list of items that I still needed to tackle before we can launch the site. It’s a list that totals some 51 items, so I’m going to have my work cut out for me, especially as it is vital that the new site launches before Christmas.
The other projects I worked on this week included the interactive map of Burns Suppers for Paul Malgrati in Scottish Literature. Last week I’d managed to import the core fields from his gigantic spreadsheet and this week I made some corrections to the data and created the structure to hold all of the filters that are in the data.
I wrote a script that goes through all of the records in the spreadsheet and stores the filters in the database when required. Note that a filter is only stored for a record when it’s not set to ‘NA’, which cuts down on the clutter. There are a total of 24,046 filters now stored in the system. The next stage will be to update the front-end to add in the options to select any combination of filter and to build the queries necessary to process the selection and return the relevant locations, which is something I aim to tackle next week.
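The import pass boils down to one stored filter per non-‘NA’ cell per record. A sketch with invented column names and data:

```python
# Invented sample rows; the real data comes from the spreadsheet.
rows = [
    {"id": 1, "host": "Burns Society", "toast_loyal": "NA"},
    {"id": 2, "host": "NA", "toast_loyal": "Yes"},
]

filters = []
for row in rows:
    for field, value in row.items():
        if field == "id" or value == "NA":
            continue  # skip the key column and unset filters
        filters.append((row["id"], field, value))

print(len(filters), filters)
```

Skipping the ‘NA’ cells is what keeps the stored filter count manageable despite the size of the spreadsheet.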
Also this week I participated in the weekly Zoom call for the Iona place-names project and I updated the Iona CMS to strip out all of the non-Iona names and gave the team access to the project’s Content Management system. The project website also went live this week, although as of yet there is still not much content on it. It can be found here, though: https://iona-placenames.glasgow.ac.uk/
Also this week I helped to export some data for a member of the Berwickshire place-names project team, I responded to a query from Gerry McKeever about the St. Andrews data in the Books and Borrowing project’s database and I fixed an issue with Rob Maslen’s City of Lost Books site, which had somehow managed to lose its nice header font.
I spent a lot of this week continuing to work on the redevelopment of the Anglo-Norman Dictionary website, focussing on the search facilities. I made some tweaks to the citation search that I’d developed last week, ensuring that the intermediate ‘pick a form’ screen appears even if only one search word is returned and updating the search results to omit forms and labels but to include the citation dates and siglums, the latter opening up pop-ups as with the entry pages. I also needed to regenerate the search terms as I’d realised that due to a typo in my script a number of punctuation marks that should have been stripped out were remaining, meaning some duplicate forms were being listed, sometimes with a punctuation mark such as a colon and other times ‘clean’.
I also realised that I needed to update the way apostrophes were being handled. In my script these were just being stripped out, but this wasn’t working very well as forms like ‘s’oreille’ were then becoming ‘soreille’ when really it’s the ‘oreille’ part that’s important. However, I couldn’t just split words up on an apostrophe and use the part on the right as apostrophes appear elsewhere in the citations too. I managed to write a script that successfully split words on apostrophes and retained the sections on both sides as individual search word forms (if they are alphanumeric). Whilst writing this script I also fixed an issue with how the data stripped of XML tags is processed. Occasionally there are no spaces between a word and a tag that contains data, and when my script removed tags to generate the plain text required for extracting the search words this led to a word and the contents of the following tag being squashed together, resulting in forms such as ‘apresentDsxiii1’. By adding spaces between tags I managed to get around this problem.
With these tweaks in place I then moved onto the next advanced search option: the English translations. I extracted the translations from the XML and generated the unique words found in each (with a count of their occurrences), also storing the Sense IDs for the senses in which the translations were found so that I could connect the translations up to the citations found within the senses in order to enable a date search (i.e. limiting a search to only those translations that are in a sense that has a citation in a particular year or range of years). The search works in a similar way to the citation search, in that you can enter a search term (e.g. ‘bread’) and this will lead you to an intermediary page that lists all words in translations that match ‘bread’. You can then select one to view all of the entries with their translation that feature the word, with it highlighted. If you supply a year or a range of years then the search connects to the citations and only returns translations for senses that have a citation date in the specified year or range. This connects citations and translations via the ‘senseid’ in the XML. So for example, if you only want to find translations containing ‘bread’ that have a citation between 1350 and 1400 you can do so. There are still some tweaks that need to be done. For example, one inconsistency we might need to address is that the number in brackets in the intermediary page refers to the number of translations / citations the word is found in, but when you click through to the full results the ‘matched results’ number will likely be different because this refers to matched entries, and an entry may contain more than one matching translation / citation.
I then moved onto the final advanced search option, the label search. This proved to be a pretty tricky undertaking, especially when citation dates also have to be taken into consideration. I didn’t manage to get the search working this week, but I did get the form where you can build your label query up and running on the advanced search page. If you select the ‘Semantic & Usage Labels’ tab you should see a page with a ‘citation date’ box, a section on the left that lists the labels and a section on the right where your selection gets added. I considered using tooltips for the semantic label descriptions, but decided against it as tooltips don’t work so well on touchscreens and I thought the information would be pretty important to see. Instead the description (where available) appears in a smaller font underneath the label, with all labels appearing in a scrollable area. The number on the right is the number of senses (not entries) that have the label applied to them, as you can see in the following screenshot:
As mentioned above, things are seriously complicated by the inclusion of citation dates. Unlike with other search options, choosing a date or a range here affects the search options that are available. E.g. if you select the years 1405-1410 then the labels used in this period and the number of times they are used differs markedly from the full dataset. For this reason the ‘citation date’ field appears above the label section, and when you update the ‘citation date’ the label section automatically updates to only display labels and counts that are relevant to the years you have selected. Removing everything from the ‘citation date’ resets the display of labels.
When you find labels you want to search for pressing on the label area adds it to the ‘selected labels’ section on the right. Pressing on it a second time deselects the label and removes it from the ‘selected labels’ section. If you select more than one label then a Boolean selector appears between the selected label and the one before, allowing you to choose AND, OR, or NOT, as you can see in the above screenshot.
I made a start on actually processing the search, but it’s not complete yet and I’ll have to return to this next week. However, building complex queries is going to be tricky as without a formal querying language like SQL there are ambiguities that can’t automatically be sorted out by the interface I’m creating. E.g. how should ‘X AND Y NOT Z OR B’ be interpreted? Is it ‘(X AND Y) NOT (Z OR B)’ or ‘((X AND Y) NOT Z) OR B’ or ‘(X AND (Y NOT Z)) OR B’ etc. Each would give markedly different results. Adding more than two or possibly three labels is likely to lead to confusing results for people.
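The ambiguity is easy to demonstrate with sets of (invented) sense IDs standing in for the senses each label is applied to – the same selection ‘X AND Y NOT Z OR B’ yields different results depending on how it is grouped:

```python
# Invented sense-ID sets for four labels.
X, Y, Z, B = {1, 2, 3}, {2, 3, 4}, {3}, {1, 5}

a = (X & Y) - (Z | B)    # (X AND Y) NOT (Z OR B)
b = ((X & Y) - Z) | B    # ((X AND Y) NOT Z) OR B
c = X & ((Y - Z) | B)    # (X AND ((Y NOT Z) OR B))

print(sorted(a), sorted(b), sorted(c))
```

The three groupings return three different result sets, which is exactly why stacking up more than two or three labels without explicit brackets risks confusing users.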
Other than working on the AND I spent some time this week working on the Place-names of Iona project. We had a team meeting on Friday morning and after that I began work on the interface for the website. This involved the usual tasks: installing a theme, customising fonts, selecting colour schemes, adding in logos, and creating menus and an initial site structure. As with the Mull site, the Iona site is going to be bilingual (English and Gaelic) so I needed to set this up too. I also worked on the banner image, combining a lovely photo of Iona from Shutterstock with a map image from the NLS. It’s almost all in place now, but I’ll need to make a few further tweaks next week. I also set up the CMS for the project, as we have decided to not just share the Mull CMS. I migrated the CMS and all of its data across and then worked on a script that would pick out only those place-names from the Mull dataset that are of relevance to the Iona project. I did this by drawing a box around the island using this handy online interface: https://geoman.io/geojson-editor and then grabbing the coordinates. I needed to reverse the latitude and longitude of these due to GeoJSON using them the other way around to other systems, and then I plugged these into a nice little algorithm I discovered for working out which coordinates are within a polygon (see https://assemblysys.com/php-point-in-polygon-algorithm/). This resulted in about 130 names being identified, but I’ll need to tweak this next week to see if my polygon area needs to be increased.
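For reference, the ray-casting point-in-polygon test behind that algorithm looks roughly like this in Python. The bounding box below is a made-up rectangle, not the actual polygon drawn around the island:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test; polygon is a list of (x, y) vertices."""
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        # Toggle 'inside' each time a horizontal ray from the point
        # crosses an edge of the polygon.
        if ((yi > y) != (yj > y)) and \
                x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# GeoJSON stores coordinates as (longitude, latitude), hence the swap
# mentioned above: x is longitude, y is latitude.
iona_box = [(-6.45, 56.30), (-6.35, 56.30), (-6.35, 56.36), (-6.45, 56.36)]
print(point_in_polygon(-6.40, 56.33, iona_box))  # inside the box
print(point_in_polygon(-6.40, 56.50, iona_box))  # outside
```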
For the remainder of the week I upgraded all of the WordPress sites I manage to the most recent version (I manage 39 such sites so this took a little while). I also helped Simon Taylor to access the Berwickshire and Kirkcudbrightshire place-names systems again and fixed an access issue with the Books and Borrowing CMS. I also looked into an issue with the DSL test sites as the advanced searches on each of these had stopped working. This was caused by an issue with the Solr indexing server that thankfully Arts IT Support were able to address.
Next week I’ll continue with the AND redevelopment and also return to working on the DSL for the first time in quite a while.
This was something of an odd week as I tested positive for Covid. I’m not entirely sure how I managed to get it, but I’d noticed on Friday last week that I’d lost my sense of taste and thought it would be sensible to get tested, and the result came back positive. I’d been feeling a bit under the weather last week and this continued throughout this week too, but thankfully the virus never affected my chest or throat and I managed to more or less work all week. However, with our household in full-on isolation our son was off school all week, and will be all next week, which did impact on the work I could do.
My biggest task of the week was to complete the work in preparation for the launch of the second edition of the Historical Thesaurus. This included fixing the full-size timelines to ensure that words that have been updated to have post-1945 end dates display properly. As we had changed the way these were stored to record the actual end date rather than ‘9999’, the end points of the dates on the timeline were stopping short and not having a pointy end to signify ‘current’. New words that only had post-1999 dates were also not displaying properly. Thankfully I managed to get these issues sorted. I also updated the search terms to fix some of the unusual characters that had not migrated over properly but had been replaced by question marks. I then updated the advanced search options to provide two checkboxes to allow a user to limit their search to new words or updated words (or both), which is quite handy, as it means you can find out all of the new words in a particular decade, for example all of the new words that have a first date some time in the 1980s:
I also tweaked the text that appears beside the links to the OED and added the Thematic Heading codes to the drop-down section of the main category. We also had to do some last-minute renumbering of categories, which affected several hundred categories and subcategories in ‘01.02’, and manually moved a couple of other categories to new locations, and after that we were all set for the launch. The new second edition is now fully available, as you can see from the above link.
Other than that, I worked on a few other projects this week. I helped to migrate a WordPress site for Bryony Randall’s Imprints of New Modernist Editing project, which is now available here: https://imprintsarteditingmodernism.glasgow.ac.uk/ and responded to a query from Lisa Kelly in TFTS about purchased software.
I spent the rest of the week continuing with the redevelopment of the Anglo-Norman Dictionary website. I updated my script that extracts citations and their dates, which I’d started to work on last week. I figured out why my script was not extracting all citations (it was only picking out the citations from the first sense and subsense in each entry rather than all senses) and managed to get all citations out. With dates extracted for each entry I was then able to store the earliest date for each entry and update the ‘browse’ facility to display this date alongside the headword.
With this in place I moved on to looking at the advanced search options. I created the tab-based interface for the various advanced search options and implemented searches for headwords and citations. The headword search works in a very similar way to the quick search – you can enter a term and use wildcards or double quotes for an exact search. You can also combine this with a date search. This allows you to limit your results to only those entries that have a citation in the year or range of years you specify. I would imagine entering a range of years would be more useful than a single year. You can also omit the headword and just specify a citation year to find all entries with a citation in the year or range, e.g. all entries with a citation in 1210.
The citation search is also in place and this works rather differently. As mentioned in the overview document, this works in a similar (but not identical) way to the old ‘concordance search of citations’. You can search for a word or a part of a word using the same wildcards as for the headword and limiting your search to particular citation dates. When you submit the search this then loads an intermediary page that lists all of the word forms in citations that your search matches, plus a count of the number of citations each form is in. From this page you can then select a specific form and view the results. So, for example, a search for words beginning with ‘tre’ with a citation date between 1200 and 1250 lists 331 matching forms in citations; you can then choose a specific form, e.g. ‘tref’, to see the results. The citation results include all of the citations for an entry that include the word, with the word highlighted in yellow. I still need to think about how this might work better, as currently there is no quick way to get back to the intermediary list of forms. But progress is being made.
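The intermediary step can be sketched very simply (illustrative data only, not the site’s code): gather every matching word form from the citations, count how many citations each form appears in, and present that list for the user to pick from.

```python
from collections import Counter

# Made-up forms pulled from matching citations.
citation_forms = ['tref', 'tres', 'tref', 'trespas', 'tres', 'tref']

# Count how many citations each form appears in.
form_counts = Counter(citation_forms)
for form, count in sorted(form_counts.items()):
    print(form, count)
# tref 3
# tres 2
# trespas 1
```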
I was back at work this week after having a lovely holiday the previous week. It was a pretty busy week, mostly spent continuing to work on the preparations for the second edition of the Historical Thesaurus, which needs to be launched before the end of the month. I updated the OED date extraction script that formats all of the OED dates as we need them in the HT, including making full entries in the HT dates table, generating the ‘full date’ text string that gets displayed on the website and generating cached first and last dates that are used for searching. I’d somehow managed to get the plus and dash connectors the wrong way round in my previous version of the script (a plus should be used where there is a gap of more than 150 years, otherwise it’s a dash) so I fixed this. I also stripped out dates that were within a 150-year time span, which really helped to make the full date text more readable. I also updated the category browser so that the category’s thematic heading is displayed in the drop-down section.
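The connector rule can be sketched like this (my reconstruction of the logic, not the actual HT script; the function name and the use of an en-dash are assumptions):

```python
def full_date(years):
    """Join a sorted list of attestation years into a 'full date' string:
    a plus where the gap between consecutive years is more than 150 years,
    otherwise a dash."""
    parts = [str(years[0])]
    for prev, curr in zip(years, years[1:]):
        connector = '+' if curr - prev > 150 else '–'
        parts.append(connector + str(curr))
    return ''.join(parts)

print(full_date([1420, 1500]))  # gap of 80  -> '1420–1500'
print(full_date([1420, 1600]))  # gap of 180 -> '1420+1600'
```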
Fraser had made some suggested changes to the script I’d written to figure out whether an OED lexeme was new or already in the system so I made some changes to this and regenerated the output. I also made further tweaks to the date extraction script so that we record the actual final date in the system rather than converting it to ‘9999’ and losing this information that will no doubt be useful in future. I then worked on the post-1999 lexemes, which followed a similar set of processes.
With this all in place I could then run a script that would actually import the new lexemes and their associated dates into the HT database. This included changelog codes, new search terms and new dates (cached firstdate and lastdate, fulldate and individual entries in the dates table). A total of 11,116 new words were added, although I subsequently noticed there were a few duplicates that had slipped through the net. With these stripped out we had a total of 804,830 lexemes in the HT, and it’s great to have broken through the 800,000 mark. Next week I need to fix a few things (e.g. the full-size timelines aren’t set up to cope with post-1945 dates that don’t end in ‘9999’ if they’re current) but we’re mostly now good to launch the second edition.
Also this week I worked on setting up a website for the ‘Symposium for Seventeenth-Century Scottish Literature’ for Roslyn Potter in Scottish Literature and set up a subdomain for an art catalogue website for Bryony Randall’s ‘Imprints of the New Modernist Editing’ project. I also helped Megan Coyer out with an issue she was having in transcribing multi-line brackets in Word and travelled to the University to collect a new, higher-resolution monitor and some other peripherals to make working from home more pleasant. I also fixed a couple of bugs in the Books and Borrowing CMS, including one that was resulting in BC dates of birth and death for authors being lost when data was edited. I also spent some time thinking about the structure for the Burns Correspondence system for Pauline Mackay, resulting in a long email with a proposed database structure. I met with Thomas Clancy and Alasdair Whyte to discuss the CMS for the Iona place-names project (it now looks like this is going to have to be a completely separate system from Alasdair’s existing Mull / Ulva system) and replied to Simon Taylor about a query he had regarding the Place-names of Fife data.
I also found some time to continue with the redevelopment of the Anglo-Norman Dictionary website. I updated the way cognate references were processed to enable multiple links to be displayed for each dictionary. I also added in a ‘Cite this entry’ button, which now appears in the top right of the entry and, when clicked on, opens a pop-up where citation styles will appear (they’re not there yet). I updated the left-hand panel to make it ‘sticky’: if you scroll down a long entry the panel stays visible on screen (unless you’re viewing on a narrow screen like a mobile phone, in which case the left-hand panel appears full-width before the entry). I also added in a top bar that appears when you scroll down the screen that contains the site title, the entry headword and the ‘cite’ button. I then began working on extracting the citations, including their dates and text, which will be used for search purposes. I ran an extraction script that extracted about 60,000 citations, but I realised that this was not extracting all of the citations and further work will be required to get this right next week.
This was a pretty busy week, involving lots of different projects. I set up the systems for a new place-name project focusing on Ayrshire this week, based on the system that I initially developed for the Berwickshire project and has subsequently been used for Kirkcudbrightshire and Mull. It didn’t take too long to port the system over, but the PI also wanted the system to be populated with data from the GB1900 crowdsourcing project. This project has transcribed every place-name on the GB1900 Ordnance Survey maps across the whole of the UK and is an amazing collection of data totalling some 2.5 million names. I had previously extracted a subset of names for the Mull and Ulva project so thankfully had all of the scripts needed to get the information for Ayrshire. Unfortunately what I didn’t have was the data in a database, as I’d previously extracted it to my PC at work. This meant that I had to run the extraction script again on my home PC, which took about three days to work through all of the rows in the monstrous CSV file. Once this was complete I could then extract the names found in the Ayrshire parishes that the project will be dealing with, resulting in almost 4,000 place-names. However, this wasn’t the end of the process as while the extracted place-names had latitude and longitude they didn’t have grid references or altitude. My place-names system is set up to automatically generate these values and I could customise the scripts to automatically apply the generated data to each of the 4000 places. Generating the grid reference was pretty straightforward but grabbing the altitude was less so, as it involved submitting a query to Google Maps and then inserting the returned value into my system using an AJAX call. 
I ran into difficulties with my script exceeding the allowed number of Google Map queries and also the maximum number of page requests on our server, resulting in my PC getting blocked by the server and a ‘Forbidden’ error being displayed instead, but with some tweaking I managed to get everything working within the allowed limits.
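The throttling idea can be sketched generically (the real script was PHP making Google Maps queries and AJAX calls into my system; `fetch_altitude` below is a placeholder stand-in, and the delay and batch figures are illustrative, not the actual limits):

```python
import time

def fetch_altitude(lat, lng):
    """Placeholder for the real elevation lookup (e.g. a Google Maps query)."""
    return 42.0

def throttled_lookups(places, delay=1.0, batch_size=50, batch_pause=30.0):
    """Look up altitude for each (id, lat, lng) tuple, pausing between
    requests and between batches so neither the API quota nor the
    server's page-request limit is exceeded."""
    results = {}
    for i, (pid, lat, lng) in enumerate(places):
        results[pid] = fetch_altitude(lat, lng)
        time.sleep(delay)  # stay under the per-second request limit
        if (i + 1) % batch_size == 0:
            time.sleep(batch_pause)  # longer pause between batches
    return results
```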
I also continued to work on the Second Edition of the Historical Thesaurus. I set up a new version of the website that we will work on for the Second Edition, and created new versions of the database tables that this new site connects to. I also spent some time thinking about how we will implement some kind of changelog or ‘history’ feature to track changes to the lexemes, their dates and corresponding categories. I had a Zoom call with Marc and Fraser on Wednesday to discuss the developments and we realised that the date matching spreadsheets I’d generated last week could do with some additional columns from the OED data, namely links through to the entries on the OED website and also a note to say whether the definition contains ‘(a)’ or ‘(also’ as these would suggest the entry has multiple senses that may need a closer analysis of the dates.
I then started to update the new front-end to use the new date structure that we will use for the Second Edition (with dates stored in a separate date table rather than split across almost 20 different date fields in the lexeme table). I updated the timeline visualisations (mini and full) to use this new date table, and although this took quite some time to get my head around the resulting code is MUCH less complicated than the horrible code I had to write to deal with the old 20-odd date columns. For example, the code to generate the data for the mini timelines is about 70 lines long now as opposed to over 400 previously.
The timelines use the new data tables in the category browse and the search results. I also spotted some dates weren’t working properly with the old system but are working properly now. I then updated the ‘label’ autocomplete in the advanced search to use the labels in the new date table. What I still need to do is update the search to actually search for the new labels and also to search the new date tables for both ‘simple’ and ‘complex’ year searches. This might be a little tricky, and I will continue on this next week.
Also this week I gave Gerry McKeever some advice about preserving the data of his Regional Romanticism project, spoke to the DSL people about the wording of the search results page, gave feedback on and wrote some sections for Matthew Creasy’s Chancellor’s Fund proposal, gave feedback to Craig Lamont regarding the structure of a spreadsheet for holding data about the correspondence of Robert Burns and gave some advice to Rob Maslen about the stats for his ‘City of Lost Books’ blog. I also made a couple of tweaks to the content management system for the Books and Borrowers project based on feedback from the team.
I spent the remainder of the week working on the redevelopment of the Anglo-Norman dictionary. I updated the search results page to style the parts of speech to make it clearer where one ends and the next begins. I also reworked the ‘forms’ section to add in a cut-off point for entries that have a huge number of forms. In such cases the long list is cut off and an ellipsis is added in, together with an ‘expand’ button. Pressing on this scrolls down the full list of forms and the button is replaced with a ‘collapse’ button. I also updated the search so that it no longer includes cross references (these are to be used for the ‘Browse’ list only) and the quick search now defaults to an exact match search whether you select an item from the auto-complete or not. Previously it performed an exact match if you selected an item but defaulted to a partial match if you didn’t. Now if you search for ‘mes’ (for example) and press enter or the search button your results are for “mes” (exactly). I suspect most people will select ‘mes’ from the list of options, which already did this, though. It is also still possible to use the question mark wildcard with an ‘exact’ search, e.g. “m?s” will find 14 entries that have three letter forms beginning with ‘m’ and ending in ‘s’.
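The ‘?’ wildcard behaviour can be sketched as follows (assumed behaviour based on the description above, not the site’s actual code): ‘?’ matches exactly one character, everything else is matched literally, and the whole pattern is anchored at both ends so the match is exact.

```python
import re

def exact_pattern(term):
    """Translate a search term with '?' single-character wildcards into an
    anchored regular expression for exact matching."""
    return '^' + ''.join('.' if ch == '?' else re.escape(ch) for ch in term) + '$'

pat = re.compile(exact_pattern('m?s'))
print(bool(pat.match('mes')))     # True
print(bool(pat.match('mois')))    # False: '?' matches only one character
print(bool(pat.match('mesure')))  # False: the pattern is anchored
```

In a database-backed search the same translation could instead target SQL, where ‘?’ maps naturally onto the LIKE operator’s single-character wildcard ‘_’.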
I also updated the display of the parts of speech so that they are in order of appearance in the XML rather than alphabetically and I’ve updated the ‘v.a.’ and ‘v.n.’ labels as the editor requested. I also updated the ‘entry’ page to make the ‘results’ tab load by default when reaching an entry from the search results page or when choosing a different entry in the search results tab. In addition, the search result navigation buttons no longer appear in the search tab if all the results fit on the page and the ‘clear search’ button now works properly. Also, on the search results page the pagination options now only appear if there is more than one page of results.
On Friday I began to process the entry XML for display on the entry page, which was pretty slow going, wading through the XSLT file that is used to transform the XML to HTML for display. Unfortunately I can’t just use the existing XSLT file from the old site because we’re using the editor’s version of the XML and not the system version, and the two are structurally very different in places.
So far I’ve been dealing with forms and have managed to get the forms listed, with grammatical labels displayed where available and commas separating forms and semi-colons separating groups of forms. Deviant forms are surrounded by brackets. Where there are lots of forms the area is cut off as with the search results. I still need to add in references where these appear, which is what I’ll tackle next week. Hopefully now I’ve started to get my head around the XML a bit progress with the rest of the page will be a little speedier, but there will undoubtedly be many more complexities that will need to be dealt with.
I was back at work this week after spending two weeks on holiday, during which time we went to Skye, Crinan and Peebles. It was really great to see some different places after being cooped up at home for 19 weeks and I feel much better for having been away. Unfortunately during this week I developed a pretty severe toothache and had to make an emergency appointment with the dentist on Thursday morning. It turns out I need root canal surgery and am now booked in to have this next Tuesday, but until then I need to try and just cope with the pain, which has been almost unbearable at times, despite regular doses of both ibuprofen and paracetamol. This did affect my ability to work a bit on Thursday afternoon and Friday, but I managed to struggle through.
On my return to work from my holiday on Monday I spent some time catching up with emails that had accumulated whilst I was away, including replying to Roslyn Potter in Scottish Literature about a project website, replying to Jennifer Smith about giving a group of students access to the SCOSYA data and making changes to the Berwickshire Place-names website to make it more attractive to the REF reviewers based on feedback passed on by Jeremy Smith. I also created a series of high-resolution screenshots of the resource for Carole Hough for a publication, and had an email chat with Luca Guariento about linked open data.
I also fixed some issues with the Galloway Glens projects that Thomas Clancy had spotted, including an issue with the place-name element page which was not ordering accented characters properly – all accented characters were being listed at the end rather than with their non-accented versions. It turned out that while the underlying database orders accented characters correctly, for the elements list I need to get a list of elements used in place-names and a list of elements used in historical forms and then I have to combine these lists and reorder the resulting single list. This part of the process was not dealing with all accented characters, only a limited set that I’d created for Berwickshire that also dealt with ashes and thorns. Instead I added in a function taken from WordPress that converts all accented characters to their unaccented equivalent for the purposes of ordering and this ensured the order of the elements list was correct.
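A Python equivalent of that approach looks like this (an assumption on my part: the PHP function borrowed from WordPress does essentially the same folding, and additionally maps ash and thorn, which Unicode decomposition alone does not handle). NFKD normalisation splits accented letters into a base letter plus combining accents, which are then dropped, so the stripped string can serve as a sort key:

```python
import unicodedata

def unaccented(s):
    """Return s with accents removed, for use as an ordering key."""
    decomposed = unicodedata.normalize('NFKD', s)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

# Hypothetical place-name elements for illustration.
elements = ['beinn', 'àird', 'cnoc', 'abhainn']
print(sorted(elements))                  # naive: 'àird' sorts after 'cnoc'
print(sorted(elements, key=unaccented))  # 'àird' sorts in with the a's
```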
The rest of my week was divided between three projects, the first of which was the Books and Borrowing project. For this I spent some time working with some of the digitised images of the register pages. We now have access to the images from Westerkirk library, and in these the records appear in a table that spreads across both recto and verso pages, but we only have images of the individual pages. The project RA who is transcribing the records is treating both recto and verso as a single ‘page’ in the system, which makes sense. We therefore need to stitch the recto and verso images together into one single image to be associated with this ‘page’. I downloaded all of the images and have found a way to automatically join two page images together. However, there is rather a lot of overlap in the images, meaning the book appears to have two joins and some columns are repeated. I could possibly try to automatically crop the images before joining them, but there is quite a bit of variation in the size of the overlap so this is never going to be perfect and may result in some information getting lost. The other alternative would be to manually crop and join the images, which I did some experimentation with. It’s still not perfect due to the angle of the page changing between shots, but it’s a lot better. The downside with this approach is that someone would have to do the task. There are about 230 images, so about 115 joins, each one taking 2-3 minutes to create, so maybe about 5 or so hours of effort. I’ve left it with the PI and Co-I to decide what to do about this. I also downloaded the images for Volume 1 of the register for Innerpeffray library and created tilesets for these that will allow the images to be zoomed and panned. I also fixed a bug relating to adding new book items to a record and responded to some feedback about the CMS.
My second major project of the week was the Anglo-Norman Dictionary. This week I began writing a high-level requirements document for the new AND website that I will be developing. This meant going through the existing site in detail and considering which features will be retained, how things might be handled better, and how I might develop the site. I made good progress with the document, and by the end of the week I’d covered the main site. Next week I need to consider the new API for accessing the data and the staff pages for uploading and publishing new or newly edited entries. I also responded to a few questions from Heather Pagan of the AND about the searches and read through and gave feedback on a completed draft of the AHRC proposal that the team are hoping to submit next month.
My final major project of the week was the Historical Thesaurus, for which I updated and re-executed my OED date extraction script based on feedback from Fraser and Marc. It was a long and complicated process to update the script as there are literally millions of dates and some issues only appear a handful of times, so tracking them down and testing things is tricky. However, I made the following changes: I added a ‘sortdate_new’ column to the main OED lexeme table that holds the sortdate value from the new XML files, which may differ from the original value. I’ve done some testing and rather strangely there are many occasions where the new sortdate differs from the old, but the ‘revised’ flag is not set to ‘true’. I also updated the new OED date table to include a new column where the full date text is contained, as I thought this would be useful for tracing back issues. E.g. if the OED date is ‘?c1225’ this is stored here. The actual numeric year in my table now comes from the ‘year’ attribute in the XML instead. This always contains the numeric value in the OED date, e.g. <q year="1330"><date>c1330</date></q>. New lexemes in the data are now getting added into the OED lexeme table and are also having their dates processed. I’ve added a new column called ‘newaugust2020’ to track these new lexemes. We’ll possibly have to try and match them up with existing HT lexemes at some point, unless we can consider them all to be ‘new’, meaning they’ll have no matches. The script also now stores all of the various OE dates, rather than one single OE date of 650 being added for all. I set the script running on Thursday and by Sunday it had finished executing, resulting in 3,912,109 dates being added and 4,061 new words.
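The distinction between the printed date and the numeric year can be illustrated with the example from the text (a sketch only; the real script is separate from the HT codebase, and the variable names here are mine):

```python
import xml.etree.ElementTree as ET

# The example citation from the OED XML described above.
q = ET.fromstring('<q year="1330"><date>c1330</date></q>')

full_text = q.find('date').text  # 'c1330' - kept for tracing back issues
year = int(q.get('year'))        # 1330   - the numeric value used for dates

print(full_text, year)  # c1330 1330
```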
Week 19 of Lockdown, and it was a short week for me as the Monday was the Glasgow Fair holiday. I spent a couple of days this week continuing to add features to the content management system for the Books and Borrowing project. I have now implemented the ‘normalised occupations’ part of the CMS. Originally occupations were just going to be a set of keywords, allowing one or more keyword to be associated with a borrower. However, we have been liaising with another project that has already produced a list of occupations and we have agreed to share their list. This is slightly different as it is hierarchical, with a top-level ‘parent’ containing multiple main occupations. E.g. ‘Religion and Clergy’ features ‘Bishop’. However, for our project we needed a third hierarchical level to differentiate types of minister/priest, so I’ve had to add this in too. I’ve achieved this by means of a parent occupation ID in the database, which is ‘null’ for top-level occupations and contains the ID of the parent category for all other occupations.
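The parent-ID scheme can be sketched like this (hypothetical rows; the real data lives in a MySQL-style table, but the structure is the same): a parent_id of None marks a top-level grouping, and every other occupation points at its parent, which gives the three-level hierarchy.

```python
# Hypothetical occupation rows for illustration.
occupations = [
    {'id': 1, 'name': 'Religion and Clergy',  'parent_id': None},
    {'id': 2, 'name': 'Bishop',               'parent_id': 1},
    {'id': 3, 'name': 'Minister/Priest',      'parent_id': 1},
    {'id': 4, 'name': 'Minister of Religion', 'parent_id': 3},
]

def children(parent_id):
    """Return the occupations directly under the given parent."""
    return [o for o in occupations if o['parent_id'] == parent_id]

def print_tree(parent_id=None, depth=0):
    """Print the hierarchy as a nested, indented list."""
    for occ in children(parent_id):
        print('  ' * depth + occ['name'])
        print_tree(occ['id'], depth + 1)

print_tree()
```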
I completed work on the page to browse occupations, arranging the hierarchical occupations in a nested structure that features a count of the number of borrowers associated with the occupation to the right of the occupation name. These are all currently zero, but once some associations are made the numbers will go up and you’ll be able to click on the count to bring up a list of all associated borrowers, with links through to each borrower. If an occupation has any child occupations a ‘+’ icon appears beside it. Press on this to view the child occupations, which also have counts. The counts for ‘parent’ occupations tally up all of the totals for the child occupations, and clicking on one of these counts will display all borrowers assigned to all child occupations. If an occupation is empty there is a ‘delete’ button beside it. As the list of occupations is going to be fairly fixed I didn’t add in an ‘edit’ facility – if an occupation needs editing I can do it directly through the database, or it can be deleted and a new version created. Here’s a screenshot showing some of the occupations in the ‘browse’ page:
I also created facilities to add new occupations. You can enter an occupation name and optionally specify a parent occupation from a drop-down list. Doing so will add the new occupation as a child of the selected category, either at the second level if a top level parent is selected (e.g. ‘Agriculture’) or at the third level if a second level parent is selected (e.g. ‘Farmer’). If you don’t include a parent the occupation will become a new top-level grouping. I used this feature to upload all of the occupations, and it worked very well.
I then updated the ‘Borrowers’ tab in the ‘Browse Libraries’ page to add ‘Normalised Occupation’ to the list of columns in the table. The ‘Add’ and ‘Edit’ borrower facilities also now feature ‘Normalised Occupation’, which replicates the nested structure from the ‘browse occupations’ page, but with checkboxes beside each occupation. You can select any number of occupations for a borrower and when you press the ‘Upload’ or ‘Edit’ button your choice will be saved. Deselecting all ticked checkboxes will clear all occupations for the borrower. If you edit a borrower who has one or more occupations selected, in addition to the relevant checkboxes being ticked, the occupations with their full hierarchies also appear above the list of occupations, so you can easily see what is already selected. I also updated the ‘Add’ and ‘Edit’ borrowing record pages so that whenever a borrower appears in the forms the normalised occupations feature also appears.
I also added in the option to view page images. Currently the only ledgers that have page images are the three Glasgow ones, but more will be added in due course. When viewing a page in a ledger that includes a page image you will see the ‘Page Image’ button above the table of records. Press on this and a new browser tab will open. It includes a link through to the full-size image of the page if you want to open this in your browser or download it to open in a graphics package. It also features the ‘zoom and pan’ interface that allows you to look at the image in the same manner as you’d look at a Google Map. You can also view this full screen by pressing on the button in the top right of the image.
Also this week I made further tweaks to the script I’d written to update lexeme start and end dates in the Historical Thesaurus based on citation dates in the OED. I’d sent a sample output of 10,000 rows to Fraser last week and he got back to me with some suggestions and observations. I’m going to have to rerun the script I wrote to extract the more than 3 million citation dates from the OED as some of the data needs to be processed differently, but as this script will take several days to run and I’m on holiday next week this isn’t something I can do right now. However, I managed to change the way the date matching script runs to fix some bugs and make the various processes easier to track. I also generated a list of all of the distinct labels in the OED data, with counts of the number of times these appear. Labels are associated with specific citation dates, thankfully. Only a handful are actually used lots of times, and many of the others appear to be used as a ‘notes’ field rather than as a more general label.
In addition to the above I also had a further conversation with Heather Pagan about the data management plan for the AND’s new proposal, responded to a query from Kathryn Cooper about the website I set up for her at the end of last year, responded to a couple of separate requests from post-grad students in Scottish Literature, spoke to Thomas Clancy about the start date for his Place-Names of Iona project, which got funded recently, helped with some issues with Matthew Creasy’s Scottish Cosmopolitanism website and spoke to Carole Hough about making a few tweaks to the Berwickshire Place-names website for REF.
I’m going to be on holiday for the next two weeks, so there will be no further updates from me for a while.