Week Beginning 16th November 2020

I spent two days each working on updates to the Dictionary of the Scots Language and the redevelopment of the Anglo-Norman Dictionary this week, with the remaining day spent on tasks for a few other projects.  For the DSL I fixed an issue with the way a selected item was being remembered from the search results.  If you performed a search, navigated to a page of results other than the first, and then clicked on one of the results to view an entry, a ‘current entry’ variable was set.  This was used when loading the results from the ‘return to results’ button on the entry page, to ensure that the page of results featuring the entry you were looking at would be displayed.  However, if you then clicked the ‘refine search’ button to return to the advanced search page this ‘current entry’ value was retained, meaning that when you clicked the search button to perform a new search the results would load at whatever page the ‘current entry’ was located on, if it appeared in the new search results.  As the ‘current entry’ would not necessarily be in the new results set, the issue only cropped up every now and then.  Thankfully, having identified the issue it was easy to fix: whenever the search form loads as a result of a ‘refine search’ selection the ‘current entry’ variable is now cleared, but it is still retained and used when you return to the search results from an entry page.  A minimal sketch of the logic appears below.
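This sketch assumes the variable lives in the PHP session; the key and parameter names here are hypothetical rather than the site’s actual ones.

```php
<?php
// Hypothetical sketch: clear the remembered entry when the search form is
// loaded via 'refine search', but leave it alone otherwise.
session_start();

if (isset($_GET['refine']) && $_GET['refine'] === 'true') {
    // A new search should start at page one of the results, so forget
    // the previously viewed entry.
    unset($_SESSION['currentEntry']);
}
// When returning to the results from an entry page the value is kept, so
// the results still open at the page containing that entry.
```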

I then investigated an issue with citation numbers that will need further input from the editors before moving on to implementing an option that allows you to show or hide the ‘browse’ panel on the right of entries.  I removed the animations when you show and hide either the ‘search’ or ‘browse’ columns, as these were proving to be a bit clunky with so much content being shown or hidden.  I also had to rework how the hide button in the ‘search’ column functions in order to get the new options working, but hopefully this hasn’t introduced any issues.  Additionally, I had to make quite a few updates to the stylesheet and the JavaScript to get this to work, as the width of the ‘entry’ column now has double the number of possible values, and I needed to ensure that the entry width with and without the ‘browse’ column worked at all screen dimensions, as different styles are called at different screen widths.  Currently the choice of which columns are visible resets every time the entry page loads, but I may make the system remember your choice during a session (one possible approach is sketched below), meaning that if you hide the browse column once it will stay hidden.  The change isn’t live yet and currently only works on one of our test sites, but once I get feedback from the editors I’ll apply it to the live site.
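If I do make the column choice persist, one session-based approach might look like the following; the parameter, session key and CSS class names are all invented for illustration.

```php
<?php
// Hypothetical sketch of remembering the 'browse' column choice for the
// duration of a session.
session_start();

// The hide/show buttons would pass their state back to the server.
if (isset($_GET['hideBrowse'])) {
    $_SESSION['hideBrowse'] = ($_GET['hideBrowse'] === 'true');
}
$hideBrowse = $_SESSION['hideBrowse'] ?? false;

// The entry column takes a wider class when the browse column is hidden;
// the stylesheet then handles the different widths at each breakpoint.
$entryClass = $hideBrowse ? 'entry-wide' : 'entry-standard';
echo '<div class="' . $entryClass . '">';
```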

I also had a discussion with the editors about removing duplicate child entries from the data and grabbing an updated dataset from the old editing system one last time.  Removing duplicate child entries will mean deleting several thousand entries and I was reluctant to do so on the test systems we already have in case anything goes wrong, so I’ve decided to set up a new test version, which will be available via a new version of the API and will be accessible via one of the existing test front-ends I’ve created.  I’ll be tackling this next week.

For the AND I focussed on importing more than 4,500 new or updated entries into the dictionary.  In order to do this I needed to look at the new data and figure out how it differed structurally from the existing data, as previously the editors’ XML was passed through an online system that made changes to it before it was published on the old website.  I discovered that there were five main differences:

  1. <main_entry> does not have an ID, a ‘lead’ or a ‘unikey’. I’ll need to add in the ‘lead’ attribute as this is what controls the editor’s initials on the entry page, and I decided to add in an ID as a means of uniquely identifying the XML, although there is already a new unique ID for each entry in the database.  ‘unikey’ doesn’t seem to be used anywhere so I decided not to do anything about it.
  2. <sense> and <subsense> do not have IDs or ‘n’ numbers. I had already set up a script to generate the latter, so I could reuse that.  The former aren’t used anywhere, as each sense also has a <senseInfo> with another ID associated with it.
  3. <senseInfo> does not have IDs, ‘seq’ or ‘pos’. IDs here are important as they are used to identify senses and subsenses for the searches, so I needed to develop a way of adding these in.  ‘seq’ does not appear to be used, but ‘pos’ is: it’s used to generate the different part of speech sections between senses and in the ‘summary’, so this will need to be added in.
  4. <attestation> does not have IDs and these are needed as they are recorded in the translation data.
  5. <dateInfo> has an ID but the existing data from the DMS does not have IDs for <dateInfo>. I decided to retain these but they won’t be used anywhere.

I wrote a script that processed the XML to add in entry IDs, editor initials, sense numbers, senseInfo IDs, parts of speech and attestation IDs (a simplified sketch appears below).  This took a while to implement and test, but it seemed to work successfully.  After that I needed to work on a script that would import the data into my online system, which included regenerating all of the data used for search purposes, such as extracting cross references, forms, labels, citations and their dates, translations and earliest dates for entries.  All seemed to work well and I made the new data available via the front-end for the editors to test out.
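This sketch uses the element names from the list above, but the attribute values, ID scheme and editor’s initials are illustrative rather than the real ones.

```php
<?php
// Add entry IDs, editor initials, sense numbers, senseInfo IDs and
// attestation IDs to an entry file. Illustrative scheme only.
$doc = new DOMDocument();
$doc->load('entry.xml');
$xpath = new DOMXPath($doc);

foreach ($xpath->query('//main_entry') as $entry) {
    $entryId = 'AND-' . uniqid();
    $entry->setAttribute('id', $entryId);
    $entry->setAttribute('lead', 'xy'); // editor's initials, supplied per batch

    // Number senses and subsenses sequentially and give each senseInfo an
    // ID derived from the entry's, since the searches identify senses by it.
    $n = 0;
    foreach ($xpath->query('.//sense | .//subsense', $entry) as $sense) {
        $sense->setAttribute('n', (string)++$n);
        foreach ($xpath->query('.//senseInfo', $sense) as $info) {
            $info->setAttribute('id', $entryId . '-s' . $n);
        }
    }

    // Attestation IDs are needed because the translation data records them.
    $a = 0;
    foreach ($xpath->query('.//attestation', $entry) as $att) {
        $att->setAttribute('id', $entryId . '-a' . ++$a);
    }
}
$doc->save('entry-augmented.xml');
```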

I also asked the editors about how to track different versions of data so we know which entries are new or updated as a result of the recent update.  It turned out that there are six different statements that need to be displayed underneath entries depending on when the entries were published so I spent a bit of time applying these to entries and updating the entry page to display the notices.

After that I made a couple of tweaks to the new data (e.g. links to the MED were sometimes missing some information needed for the links to work) and discussed adding in commentaries with Geert.  I then went through all of the emails to and from the editors in order to compile a big list of items that I still needed to tackle before we can launch the site.  It’s a list that totals some 51 items, so I’m going to have my work cut out for me, especially as it is vital that the new site launches before Christmas.

The other projects I worked on this week included the interactive map of Burns Suppers for Paul Malgrati in Scottish Literature.  Last week I’d managed to import the core fields from his gigantic spreadsheet and this week I made some corrections to the data and created the structure to hold all of the filters that are in the data.

I wrote a script that goes through all of the records in the spreadsheet and stores the filters in the database where required: a filter is only stored for a record when its value is not ‘NA’, which cuts down on the clutter.  There are a total of 24,046 filters now stored in the system (a sketch of the approach appears below).  The next stage will be to update the front-end to add in the options to select any combination of filters and to build the queries necessary to process the selection and return the relevant locations, which is something I aim to tackle next week.
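This sketch assumes the spreadsheet has been exported to CSV with a header row; the database credentials, table and column names are all placeholders.

```php
<?php
// Store one row per non-'NA' filter value against each record.
$pdo = new PDO('mysql:host=localhost;dbname=burns', 'user', 'pass');
$insert = $pdo->prepare(
    'INSERT INTO filters (record_id, filter_name, filter_value) VALUES (?, ?, ?)'
);

$fh = fopen('suppers.csv', 'r');
$headers = fgetcsv($fh);

while (($row = fgetcsv($fh)) !== false) {
    $recordId = $row[0]; // first column assumed to hold the record ID
    foreach ($headers as $i => $name) {
        if ($i === 0) {
            continue;
        }
        // Only store a filter when it has a real value: skipping 'NA'
        // keeps the stored filters free of clutter.
        if ($row[$i] !== 'NA' && $row[$i] !== '') {
            $insert->execute([$recordId, $name, $row[$i]]);
        }
    }
}
fclose($fh);
```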

Also this week I participated in the weekly Zoom call for the Iona place-names project and I updated the Iona CMS to strip out all of the non-Iona names and gave the team access to the project’s Content Management system.  The project website also went live this week, although as of yet there is still not much content on it.  It can be found here, though: https://iona-placenames.glasgow.ac.uk/

Also this week I helped to export some data for a member of the Berwickshire place-names project team, I responded to a query from Gerry McKeever about the St. Andrews data in the Books and Borrowing project’s database and I fixed an issue with Rob Maslen’s City of Lost Books site, which had somehow managed to lose its nice header font.

Week Beginning 9th November 2020

I took Friday off this week as I had a dentist’s appointment across town in the West End and I decided to take the opportunity to do some Christmas shopping whilst all the shops in Glasgow are still open (there’s some talk of greater Covid restrictions being imposed in the next week or so).  I spent a couple of days this week working on the Dictionary of the Scots Language, a project I’ve been meaning to return to for many months but have been too busy with other work to really focus on.  Thankfully, with the launch of the second edition of the Historical Thesaurus out of the way, I have a bit of time in November to get back into the outstanding DSL issues.

Rhona Alcorn had sent a list of outstanding tasks a while back and I spent some time going through this and commenting on each item.  I then began to work through each item, starting with fixing cross references in our ‘V3’ test site (which features data that the editors have been working on in recent years).  Cross references appear differently in the XML for this version so I needed to update the XSLT in order to make them work correctly.  I then updated the full-text extraction script that prepares data for inclusion in the Solr search engine.  Previously this was stripping out all of the XML tags in order to leave the plain text, but unfortunately there were occasions where an entry contained words separated by tags but not by spaces, meaning that when the tags were removed the words ended up joined together.  I fixed this by adding a space character before every XML tag before the tags were stripped out (sketched below).  This resulted in plain text that often contained multiple spaces between words, but thankfully Solr ignores these when it indexes the text.  I asked Raymond of Arts IT Support to upload the new text to the server and tested things out, and all worked perfectly.
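A minimal sketch of the fix; the final collapsing of repeated spaces is optional, as Solr ignores them when indexing anyway.

```php
<?php
// Pad every tag with a leading space before stripping, so that words
// separated only by tags don't get joined together:
// '<w>mak</w><w>siccar</w>' becomes 'mak siccar', not 'maksiccar'.
function xmlToPlainText(string $xml): string {
    $spaced = str_replace('<', ' <', $xml);
    $plain = strip_tags($spaced);
    return trim(preg_replace('/\s+/', ' ', $plain));
}

echo xmlToPlainText('<w>mak</w><w>siccar</w>'); // "mak siccar"
```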

After this I moved on to creating a new ordering for the ‘browse’ feature.  This new ordering takes into consideration parts of speech and ensures that supplemental entries appear below main entries.  It also correctly positions entries beginning with a yogh.  I’d created a script to generate the new browse order many months ago, so I could just tweak this and then use it to update the database.  After that I needed to make some updates to the V2 and V3 front-ends to use the new ordering fields, which took a little time, but it seems to have worked successfully.  I may need to tweak the ordering further, but will await feedback before I make any changes.

I then moved on to investigating searches for accented characters, which were apparently not working correctly.  I noticed that the htaccess script was not set up to accept accented characters so I updated this.  However, the advanced headword search itself was already finding forms with accented characters in them when the non-accented version was passed.  The ‘privace’ example was redirecting to the entry page as only one result was matched, but if you perform a search for ‘*vace’ it finds and displays the accented headword in both V2 and V3, though not on the live site.  Therefore I think this issue is now sorted.  However, we should perhaps strip out accents from any submitted search terms, as allowing accented characters to be submitted (e.g. for *vacé) gives the impression that we allow accented characters to be searched for distinctly from their unaccented versions, and results including both accented and unaccented forms might confuse people.
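If we do go down that route, here is a sketch of one way to fold accents using PHP’s intl extension; this is a suggestion rather than something the site currently does.

```php
<?php
// Decompose each character (e.g. 'é' -> 'e' + combining acute accent),
// then strip the combining marks. Requires the intl extension.
function stripAccents(string $term): string {
    $decomposed = Normalizer::normalize($term, Normalizer::FORM_D);
    return preg_replace('/\p{Mn}/u', '', $decomposed);
}

echo stripAccents('*vacé'); // "*vace"
```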

The last DSL issue I looked at involved hiding superscript characters in certain circumstances (after ‘geo’ tags in ‘cref’ tags).  There are 3093 SND entries that include the text ‘</geo><su>’ or ‘</geo> <su>’ and I updated the XSLT file that transforms the XML into HTML to deal with these.  Previously it transformed the <su> tag into the HTML superscript tag <sup>.  I’ve updated it so that it now checks to see what the tag’s preceding sibling is.  If it’s a <geo> tag it now adds the class ‘noSup’ to the generated <sup>.  Currently I’ve set <sup> elements with this class to have a pink background so the editors can check to see how the match is performing, and once they’re happy with it I can update the CSS to hide ‘noSup’ elements.
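The actual change lives in the XSLT, but the test it performs can be expressed with PHP’s DOM along these lines, for illustration (the sample markup is invented).

```php
<?php
// Find the nearest preceding element sibling of a node, skipping the
// whitespace-only text nodes found in '</geo> <su>'; any other text
// between the tags means the elements aren't adjacent.
function precedingElement(DOMNode $node): ?DOMElement {
    for ($prev = $node->previousSibling; $prev !== null; $prev = $prev->previousSibling) {
        if ($prev instanceof DOMElement) {
            return $prev;
        }
        if ($prev instanceof DOMText && trim($prev->textContent) === '') {
            continue;
        }
        return null;
    }
    return null;
}

$doc = new DOMDocument();
$doc->loadXML('<cref><geo>Abd.</geo><su>2</su></cref>');
foreach ($doc->getElementsByTagName('su') as $su) {
    $before = precedingElement($su);
    // A <sup> with class 'noSup' can then be highlighted (or later hidden)
    // in the CSS.
    $class = ($before !== null && $before->tagName === 'geo') ? ' class="noSup"' : '';
    echo '<sup' . $class . '>' . $su->textContent . '</sup>';
}
```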

Other than DSL work I also spent some time continuing to work on the redevelopment of the Anglo-Norman Dictionary and completed an initial version of the label search that I began working on last week.  The search form as discussed last week hasn’t changed, but it’s now possible to submit the search, navigate through the search results, return to the search form to make changes to your selection and view entries.  I needed to overhaul how the search page works to accommodate the label search, which required some pretty major changes behind the scenes, but hopefully none of the other searches have been affected by this.  You can select a single label and search for that, e.g. ‘archit.’, and if you then refine your search you will see that the label is ‘remembered’ in the form so you can add to it or remove it, for example if you’re interested in all of the entries that are labelled ‘archit.’ and ‘mil.’.  As mentioned last week, adding or changing a citation year resets the boxes, as different labels are displayed depending on the years chosen.  The chosen year is remembered by the form if you choose to refine your search, and the available labels, your selected labels and the Booleans between them are pulled in alongside the remembered year.  So, for example, if you want to find entries that feature a sense labelled ‘agricultural’ or ‘bot.’ that have a citation between 1400 and 1410 you can do this.  On the entry page both semantic and usage labels are now links that lead through to the search results for the label in question.  I’ve currently given both label types a somewhat garish pink colour, but this can be changed, or we could use two different colours for the two types.

Other than these projects, I fixed an issue with the 18th century Glasgow borrowers site (https://18c-borrowing.glasgow.ac.uk/) and made some tweaks to the place-names of Iona site, fixing the banner and creating Gaelic versions of the pages and menu items.  The site is not live yet, but I’m pretty happy with how it’s looking.  Here’s an image of the banner I created:

Also this week I spoke to Kirsteen McCue about the project she’s currently preparing a proposal for and I created a new version of the Burns Suppers map for Paul Malgrati.  This was rather tricky as his data is contained in a spreadsheet that has more than 2,500 rows and more than 90 columns, and it took some time to process this in a way that worked, especially as some fields contained carriage returns, which resulted in lines being split where they shouldn’t have been when the data was exported.  However, I got there in the end, and next week I hope to develop the filters for the data.

Week Beginning 2nd November 2020

I spent a lot of this week continuing to work on the redevelopment of the Anglo-Norman Dictionary website, focussing on the search facilities.  I made some tweaks to the citation search that I’d developed last week, ensuring that the intermediate ‘pick a form’ screen appears even if only one search word is returned and updating the search results to omit forms and labels but to include the citation dates and siglums, the latter opening up pop-ups as on the entry pages.  I also needed to regenerate the search terms, as I’d realised that due to a typo in my script a number of punctuation marks that should have been stripped out were remaining, meaning some duplicate forms were being listed, sometimes with a punctuation mark such as a colon and other times ‘clean’.

I also realised that I needed to update the way apostrophes were being handled.  In my script these were just being stripped out, but this wasn’t working very well as forms like ‘s’oreille’ were then becoming ‘soreille’ when really it’s the ‘oreille’ part that’s important.  However, I couldn’t just split words up on an apostrophe and use the part on the right, as apostrophes appear elsewhere in the citations too.  I managed to write a script that successfully split words on apostrophes and retained the sections on both sides as individual search word forms (if they are alphanumeric); a sketch appears below.  Whilst writing this script I also fixed an issue with how the data stripped of XML tags is processed.  Occasionally there are no spaces between a word and a tag that contains data, and when my script removed tags to generate the plain text required for extracting the search words, this led to a word and the contents of the following tag being squashed together, resulting in forms such as ‘apresentDsxiii1’.  By adding spaces between tags I managed to get around this problem.
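A sketch of the apostrophe handling, simplified to unaccented forms (ctype_alnum rejects accented characters, which the real script has to allow for).

```php
<?php
// Split a citation word on apostrophes and keep each alphanumeric part as
// its own search form: "s'oreille" yields "s" and "oreille", not "soreille".
function apostropheForms(string $word): array {
    if (strpos($word, "'") === false) {
        return [$word];
    }
    $forms = [];
    foreach (explode("'", $word) as $part) {
        if ($part !== '' && ctype_alnum($part)) {
            $forms[] = $part;
        }
    }
    return $forms;
}

print_r(apostropheForms("s'oreille")); // ["s", "oreille"]
```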

With these tweaks in place I then moved onto the next advanced search option: the English translations.  I extracted the translations from the XML and generated the unique words found in each (with a count of their occurrences), also storing the Sense IDs for the senses in which the translations were found so that I could connect the translations up to the citations found within the senses in order to enable a date search (i.e. limiting a search to only those translations that are in a sense that has a citation in a particular year or range of years); a sketch of this extraction appears below.  The search works in a similar way to the citation search, in that you can enter a search term (e.g. ‘bread’) and this will lead you to an intermediary page that lists all words in translations that match ‘bread’.  You can then select one to view all of the entries with their translations that feature the word, with it highlighted.  If you supply a year or a range of years then the search connects to the citations and only returns translations for senses that have a citation date in the specified year or range.  This connects citations and translations via the ‘senseid’ in the XML.  So, for example, if you only want to find translations containing ‘bread’ that have a citation between 1350 and 1400 you can do so.  There are still some tweaks that need to be done.  For example, one inconsistency we might need to address is that the number in brackets on the intermediary page refers to the number of translations / citations the word is found in, but when you click through to the full results the ‘matched results’ number will likely be different, because this refers to matched entries, and an entry may contain more than one matching translation / citation.
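This sketch assumes the translations sit in <translation> elements carrying the ‘senseid’ attribute mentioned above; the real element names may differ.

```php
<?php
// Build a word => [occurrence count, sense IDs] index from the translations.
$doc = new DOMDocument();
$doc->load('entries.xml');
$xpath = new DOMXPath($doc);

$words = [];
foreach ($xpath->query('//translation') as $tr) {
    $senseId = $tr->getAttribute('senseid');
    preg_match_all('/[a-z]+/', strtolower($tr->textContent), $m);
    foreach (array_unique($m[0]) as $w) {
        $words[$w]['count'] = ($words[$w]['count'] ?? 0) + 1;
        $words[$w]['senses'][] = $senseId;
    }
}
// Each word now has a count of the translations it occurs in, plus the
// sense IDs needed to join against citation dates for the year filter.
```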

I then moved onto the final advanced search option, the label search.  This proved to be a pretty tricky undertaking, especially when citation dates also have to be taken into consideration.  I didn’t manage to get the search working this week, but I did get the form for building your label query up and running on the advanced search page.  If you select the ‘Semantic & Usage Labels’ tab you should see a page with a ‘citation date’ box, a section on the left that lists the labels and a section on the right where your selection gets added.  I considered using tooltips for the semantic label descriptions, but decided against it as tooltips don’t work so well on touchscreens and I thought the information would be pretty important to see.  Instead the description (where available) appears in a smaller font underneath the label, with all labels appearing in a scrollable area.  The number on the right is the number of senses (not entries) that have the label applied to them, as you can see in the following screenshot:

As mentioned above, things are seriously complicated by the inclusion of citation dates.  Unlike with other search options, choosing a date or a range here affects the search options that are available.  E.g. if you select the years 1405-1410 then the labels used in this period and the number of times they are used differs markedly from the full dataset.  For this reason the ‘citation date’ field appears above the label section, and when you update the ‘citation date’ the label section automatically updates to only display labels and counts that are relevant to the years you have selected.  Removing everything from the ‘citation date’ resets the display of labels.

When you find a label you want to search for, pressing on the label area adds it to the ‘selected labels’ section on the right.  Pressing on it a second time deselects the label and removes it from the ‘selected labels’ section.  If you select more than one label then a Boolean selector appears between the selected label and the one before, allowing you to choose AND, OR, or NOT, as you can see in the above screenshot.

I made a start on actually processing the search, but it’s not complete yet and I’ll have to return to this next week.  However, building complex queries is going to be tricky, as without a formal querying language like SQL there are ambiguities that can’t automatically be sorted out by the interface I’m creating.  E.g. how should ‘X AND Y NOT Z OR B’ be interpreted?  Is it ‘(X AND Y) NOT (Z OR B)’, ‘((X AND Y) NOT Z) OR B’ or ‘(X AND (Y NOT Z)) OR B’?  Each would give markedly different results.  Adding more than two or possibly three labels is likely to lead to confusing results for people.
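One way to make the interpretation deterministic would be to evaluate strictly left to right over sets of sense IDs, so that ‘X AND Y NOT Z OR B’ always means ‘((X AND Y) NOT Z) OR B’.  A sketch of that idea, not the final implementation:

```php
<?php
// Combine per-label sense-ID sets strictly left to right.
function evaluateLabelQuery(array $labels, array $ops, array $senseSets): array {
    $result = $senseSets[$labels[0]];
    for ($i = 1; $i < count($labels); $i++) {
        $next = $senseSets[$labels[$i]];
        switch ($ops[$i - 1]) {
            case 'AND': $result = array_intersect($result, $next); break;
            case 'OR':  $result = array_unique(array_merge($result, $next)); break;
            case 'NOT': $result = array_diff($result, $next); break;
        }
    }
    return $result;
}

// Illustrative sense IDs only.
$senseSets = [
    'agricultural' => [1, 2, 3],
    'bot.'         => [3, 4],
    'mil.'         => [2, 5],
];
// (agricultural OR bot.) NOT mil. => senses 1, 3 and 4.
print_r(evaluateLabelQuery(['agricultural', 'bot.', 'mil.'], ['OR', 'NOT'], $senseSets));
```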

Other than working on the AND I spent some time this week working on the Place-names of Iona project.  We had a team meeting on Friday morning and after that I began work on the interface for the website.  This involved the usual tasks: installing a theme, customising fonts, selecting colour schemes, adding in logos, and creating menus and an initial site structure.  As with the Mull site, the Iona site is going to be bilingual (English and Gaelic) so I needed to set this up too.  I also worked on the banner image, combining a lovely photo of Iona from Shutterstock with a map image from the NLS.  It’s almost all in place now, but I’ll need to make a few further tweaks next week.  I also set up the CMS for the project, as we have decided not to just share the Mull CMS.  I migrated the CMS and all of its data across and then worked on a script that would pick out only those place-names from the Mull dataset that are of relevance to the Iona project.  I did this by drawing a box around the island using this handy online interface: https://geoman.io/geojson-editor and then grabbing the coordinates.  I needed to reverse the latitude and longitude of these, as GeoJSON stores coordinates longitude-first rather than latitude-first like most other systems, and then I plugged them into a nice little algorithm I discovered for working out which coordinates are within a polygon (see https://assemblysys.com/php-point-in-polygon-algorithm/); a compact version of the test appears below.  This resulted in about 130 names being identified, but I’ll need to tweak this next week to see if my polygon area needs to be increased.
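For reference, here is a compact ray-casting test along the same lines as the linked algorithm; the Iona polygon is illustrative only, not the one I actually drew.

```php
<?php
// Count how many polygon edges a horizontal ray from the point crosses;
// an odd count means the point is inside. Coordinates are [lng, lat]
// pairs, matching GeoJSON's longitude-first ordering.
function pointInPolygon(array $point, array $polygon): bool {
    [$x, $y] = $point;
    $inside = false;
    $n = count($polygon);
    for ($i = 0, $j = $n - 1; $i < $n; $j = $i++) {
        [$xi, $yi] = $polygon[$i];
        [$xj, $yj] = $polygon[$j];
        if ((($yi > $y) !== ($yj > $y))
            && ($x < ($xj - $xi) * ($y - $yi) / ($yj - $yi) + $xi)) {
            $inside = !$inside;
        }
    }
    return $inside;
}

// A rough box around Iona (illustrative coordinates, not the real polygon).
$iona = [[-6.45, 56.30], [-6.45, 56.36], [-6.36, 56.36], [-6.36, 56.30]];
var_dump(pointInPolygon([-6.39, 56.33], $iona)); // bool(true)
```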

For the remainder of the week I upgraded all of the WordPress sites I manage to the most recent version (I manage 39 such sites so this took a little while).  I also helped Simon Taylor to access the Berwickshire and Kirkcudbrightshire place-names systems again and fixed an access issue with the Books and Borrowing CMS.  I also looked into an issue with the DSL test sites as the advanced searches on each of these had stopped working.  This was caused by an issue with the Solr indexing server that thankfully Arts IT Support were able to address.

Next week I’ll continue with the AND redevelopment and also return to working on the DSL for the first time in quite a while.

Week Beginning 26th October 2020

This was something of an odd week as I tested positive for Covid.  I’m not entirely sure how I managed to get it, but I’d noticed on Friday last week that I’d lost my sense of taste and thought it would be sensible to get tested, and the result came back positive.  I’d been feeling a bit under the weather last week and this continued throughout this week too, but thankfully the virus never affected my chest or throat and I managed to more or less work all week.  However, with our household in full-on isolation our son was off school all week, and will be all next week, which did impact on the work I could do.

My biggest task of the week was to complete the work in preparation for the launch of the second edition of the Historical Thesaurus.  This included fixing the full-size timelines to ensure that words that have been updated to have post-1945 end dates display properly.  As we had changed the way these were stored to record the actual end date rather than ‘9999’, the end points of the dates on the timeline were stopping short and not having a pointy end to signify ‘current’.  New words that only had post-1999 dates were also not displaying properly.  Thankfully I managed to get these issues sorted.  I also updated the search terms to fix some of the unusual characters that had not migrated over properly but had been replaced by question marks.  I then updated the advanced search options to provide two checkboxes that allow a user to limit their search to new words or updated words (or both), which is quite handy, as it means you can find all of the new words in a particular decade, for example all of the new words that have a first date some time in the 1980s:

https://ht.ac.uk/category-selection/?word=&label=&category=&year=&startf=1980&endf=1989&startl=&endl=&twoEdNew=Y

I also tweaked the text that appears beside the links to the OED and added the Thematic Heading codes to the drop-down section of the main category.  We also had to do some last-minute renumbering of categories, which affected several hundred categories and subcategories in ’01.02’, and we manually moved a couple of other categories to new locations; after that we were all set for the launch.  The new second edition is now fully available, as you can see from the above link.

Other than that, I worked on a few other projects this week.  I helped to migrate a WordPress site for Bryony Randall’s Imprints of New Modernist Editing project, which is now available here: https://imprintsarteditingmodernism.glasgow.ac.uk/, and I responded to a query from Lisa Kelly in TFTS about software purchases.

I spent the rest of the week continuing with the redevelopment of the Anglo-Norman Dictionary website.  I updated my script that extracts citations and their dates, which I’d started to work on last week.  I figured out why my script was not extracting all citations (it was only picking out the citations from the first sense and subsense in each entry rather than all senses) and managed to get all citations out, as sketched below.  With dates extracted for each entry I was then able to store the earliest date for each entry and update the ‘browse’ facility to display this date alongside the headword.
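This sketch assumes citations are held in <attestation> elements with <dateInfo> children, matching the element names described earlier in these notes; the date markup is simplified here.

```php
<?php
// Query attestations across ALL senses and subsenses (the '//' axis),
// rather than just the first, and record the earliest citation year.
$doc = new DOMDocument();
$doc->load('entry.xml');
$xpath = new DOMXPath($doc);

$earliest = null;
foreach ($xpath->query('//attestation') as $att) {
    foreach ($xpath->query('.//dateInfo', $att) as $dateInfo) {
        $year = (int)$dateInfo->textContent;
        if ($year > 0 && ($earliest === null || $year < $earliest)) {
            $earliest = $year;
        }
    }
}
// $earliest can now be stored against the entry and shown in the browse list.
```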

With this in place I moved on to looking at the advanced search options. I created the tab-based interface for the various advanced search options and implemented searches for headwords and citations.  The headword search works in a very similar way to the quick search – you can enter a term and use wildcards or double quotes for an exact search.  You can also combine this with a date search.  This allows you to limit your results to only those entries that have a citation in the year or range of years you specify.  I would imagine entering a range of years would be more useful than a single year.  You can also omit the headword and just specify a citation year to find all entries with a citation in the year or range, e.g. all entries with a citation in 1210.

The citation search is also in place and this works rather differently.  As mentioned in the overview document, this works in a similar (but not identical) way to the old ‘concordance search of citations’.  You can search for a word or a part of a word using the same wildcards as for the headword search, limiting your search to particular citation dates.  When you submit the search this loads an intermediary page that lists all of the word forms in citations that your search matches, plus a count of the number of citations each form appears in.  From this page you can then select a specific form and view the results.  So, for example, a search for words beginning with ‘tre’ with a citation date between 1200 and 1250 lists 331 matching forms, and you can then choose a specific one, e.g. ‘tref’, to see the results.  The citation results include all of the citations for an entry that include the word, with the word highlighted in yellow.  I still need to think about how this might work better, as currently there is no quick way to get back to the intermediary list of forms.  But progress is being made.