This was a four-day week for me as I had an optician’s appointment on Tuesday, and as my optician is over the other side of the city (handy for work, not so handy for working from home) I decided I’d just take the day off. I spent most of Monday working on the interactive map of Burns Suppers I’m developing with Paul Malgrati in Scottish Literature. This week I needed to update my interface to incorporate the large number of filters that Paul wants added to the map. In doing so I had to change the way the existing ‘period’ filter works to fit in with the new filter options, as previously the filter was being applied as soon as a period button was pressed. Now you need to press on the ‘apply filters’ button to see the results displayed, and this allows you to select multiple options without everything reloading as you work. There is also a ‘reset filters’ button which turns everything back on. Currently the ‘period’, ‘type of host’ and ‘toasts’ filters are in place, all as a series of checkboxes that allow you to combine any selections you want. Within a section (e.g. type of host) the selections are joined with an OR (e.g. ‘Burns Society’ OR ‘Church’). Selections between sections are joined with an AND (e.g. ‘2019-2020’ AND (‘Burns Society’ OR ‘Church’)). For ‘toasts’ a location may have multiple toasts, and the location will be returned if any of the selected toasts are associated with it. E.g. if a location has ‘Address to a Haggis’ but not ‘Loyal Toast’ it will still be returned if you’ve selected both of these toasts. I also updated the introductory panel to make it much wider, so we can accommodate some text and also so there is more room for the filters. Even with this it is going to be a bit tricky to squeeze all of the filters in, but I’ll see what I can do next week.
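As a rough sketch of how this combination logic translates into a query – the table and column names here are illustrative, not the map’s actual schema – each section becomes an AND-ed condition whose ticked values are OR-ed via an IN clause:

```php
<?php
// Sketch only: 'locations' and 'location_filters' are illustrative names.
$selected = [
    'period'       => ['2019-2020'],
    'type_of_host' => ['Burns Society', 'Church'],
];

$sql    = 'SELECT l.id, l.name FROM locations l WHERE 1=1';
$params = [];
foreach ($selected as $group => $values) {
    if (empty($values)) {
        continue; // a section with nothing ticked applies no constraint
    }
    $in = implode(',', array_fill(0, count($values), '?'));
    // Any one matching value satisfies the section (OR); each section
    // contributes its own AND-ed condition.
    $sql .= " AND l.id IN (SELECT lf.location_id FROM location_filters lf"
          . " WHERE lf.filter_group = ? AND lf.filter_value IN ($in))";
    $params[] = $group;
    $params   = array_merge($params, $values);
}
// $sql and $params can now be passed to PDO::prepare() / execute().
```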
I then turned my attention to the Dictionary of the Scots Language. Ann had noticed last week that the full text searches were ‘accent sensitive’ – i.e. entries containing accented characters would only be returned if your search term also contained the accented characters. This is not what we wanted and I spent a bit of time investigating how to make Solr ignore accents. Although I found a few articles dealing with this issue they were all for earlier versions of Solr and the process didn’t seem all that easy to set up, especially as I don’t have direct access to the server that Solr resides on, so tweaking settings is not an easy process. I decided instead to just strip out accents from the source data before it was ingested into Solr, which was a much more straightforward process and fitted in with another task I needed to tackle: regenerating the DSL data from the old editing system, deleting the duplicate child entries and setting up a new version of the API to use this updated dataset. This process took a while to go through, but I got there in the end and we now have a new Solr collection set up for the new data, which has the accents and the duplicate child entries removed. I then updated one of our test front-ends to connect to this dataset so people can test it. However, whilst checking this I realised that stripping out the accents has introduced some unforeseen issues. The full text search is still ‘accent sensitive’; it’s just that I’ve replaced all accented characters with their non-accented equivalents, which means an accented search will no longer find anything. Therefore we’ll need to ensure that any accents in a submitted search string are also stripped out. In addition, stripping accents out of the Solr text also means that accents no longer appear in the snippets in the search results, meaning the snippets don’t fully reflect the actual content of the entries. This may be a larger issue and will need further discussion. I also wrote a little script to generate a list of entry IDs that feature an <su> inside a <cref>, excluding those that feature </geo><su> or </geo> <su>. These entries will need to be manually updated to ensure the <su> tags don’t get displayed.
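The same stripping logic will be needed on both sides. As a minimal sketch (assuming PHP’s intl extension is available), decomposing the text and deleting the combining marks covers both the ingest step and, eventually, submitted search strings:

```php
<?php
// Decompose accented characters into base letter + combining mark,
// then delete the combining marks. Requires the intl extension.
function stripAccents(string $text): string
{
    $decomposed = Normalizer::normalize($text, Normalizer::FORM_D);
    return preg_replace('/\p{Mn}+/u', '', $decomposed);
}

// Applied both at ingest time and to search input, 'vacé' and 'vace' match.
echo stripAccents('vacé'); // vace
```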
I spent the remainder of the week continuing with the redevelopment of the Anglo-Norman Dictionary site. My 51-item ‘to do’ list that I compiled last week has now shot up to 65 items, but thankfully during this week I managed to tick 20 of the items off. I have now imported all of the corrected data that the editors were working on, including not just the new ‘R’ records but corrections to other letters too. I also ensured that the earliest dates for entries now appear in the ‘search results’ and ‘log’ tabs in the left-hand panel, and these dates also appear in the fixed header that gets displayed when you scroll down long entries. I also added the date to the title on the main page.
I added content to the ‘cite this entry’ popup, which now contains the same citation options as found on the Historical Thesaurus site, and I added help text to the ‘browse’, ‘search results’ and ‘entry log’ pop-ups. I made the ‘clear search’ button right aligned and removed the part of speech tooltips as parts of speech appear all over the place and I thought it would confuse things if they had tooltips in the search results but not elsewhere. I also added a ‘Try an advanced search’ button at the top of the quick search results page.
I set up WordPress accounts for the editors and added the IP addresses used by the Aberystwyth VPN to our whitelist to ensure that the editors will be able to access the WordPress part of the site, and I added redirects that will work from old site URLs once the new site goes live, including redirects for all of the static pages that I could find. We also now have the old site accessible via a subdomain, so we should be able to continue to access the old site when we switch the main domain over to the new site.
I figured out why the entry for ‘ipocrite’ was appearing in all bold letters. This was caused by an empty XML tag and I updated the XSLT to ensure these don’t get displayed. I updated Variant/Deviant forms in the search results so they just have the title ‘Forms’ now, and I believe I’ve figured out why the sticky side panel wasn’t sticking and have fixed it.
I updated my script that generates citation date text, regenerated all of the earliest date text to include the prefixes and suffixes, and investigated the issue of commentary boxes appearing multiple times. I checked all of the entries and there were about five that were like this, so I manually fixed them. I also ensured that when loading the advanced search after a quick search the ‘headword’ tab is now selected and the quick search text appears in the headword search box, and I updated the label search so that label descriptions appear in a tooltip.
Also this week I participated in the weekly Iona Placenames Zoom call and made some tweaks to the content management systems for both Iona and Mull to add in researcher notes fields (English and Gaelic) for the place-name elements. I also had a chat with Wendy Anderson about making some tweaks to the Mapping Metaphor website for REF and spoke to Carole Hough about her old Cogtop site that is going to expire soon.
I spent two days each working on updates to the Dictionary of the Scots Language and the redevelopment of the Anglo-Norman Dictionary this week, with the remaining day spent on tasks for a few projects. For the DSL I fixed an issue with the way a selected item was being remembered from the search results. If you performed a search, navigated to a page of results that wasn’t the first page and then clicked on one of the results to view an entry, a ‘current entry’ variable was set. This was used when loading the results from the ‘return to results’ button on the entry page to ensure that the page of the results that featured the entry you were looking at would be displayed. However, if you then clicked the ‘refine search’ button to return to the advanced search page this ‘current entry’ value would be retained, meaning that when you clicked the search button to perform a new search the results would load at whatever page the ‘current entry’ was located on, if it appeared in the new search results. As the ‘current entry’ would not necessarily be in the new results set, the issue only cropped up every now and then. Thankfully, having identified the issue it was easy to fix – whenever the search form loads as a result of a ‘refine search’ selection the ‘current entry’ variable is now cleared. It is still retained and used when you return to the search results from an entry page.
I also had a discussion with the editors about removing duplicate child entries from the data and grabbing an updated dataset from the old editing system one last time. Removing duplicate child entries will mean deleting several thousand entries and I was reluctant to do so on the test systems we already have in case anything goes wrong, so I’ve decided to set up a new test version, which will be available via a new version of the API and will be accessible via one of the existing test front-ends I’ve created. I’ll be tackling this next week.
For the AND I focussed on importing more than 4,500 new or updated entries into the dictionary. In order to do this I needed to look at the new data and figure out how it differed structurally from the existing data, as previously the editors’ XML was passed through an online system that made changes to it before it was published on the old website. I discovered that there were five main differences:
- <main_entry> does not have an ID, a ‘lead’ or a ‘unikey’. I’ll need to add in the ‘lead’ attribute as this is what controls the editor’s initials on the entry page, and I decided to add in an ID as a means of uniquely identifying the XML, although there is already a new unique ID for each entry in the database. ‘unikey’ doesn’t seem to be used anywhere so I decided not to do anything about it.
- <sense> and <subsense> do not have IDs or ‘n’ numbers. I had already set up a script to generate the latter, so I could reuse that. The former aren’t used anywhere, as each sense also has a <senseInfo> with another ID associated with it.
- <senseInfo> does not have IDs, ‘seq’ or ‘pos’. IDs here are important as they are used to identify senses / subsenses for the searches, so I needed to develop a way of adding these in. ‘seq’ does not appear to be used, but ‘pos’ is: it’s used to generate the different part of speech sections between senses and in the ‘summary’, so it will need to be added in.
- <attestation> does not have IDs and these are needed as they are recorded in the translation data.
- <dateInfo> has an ID but the existing data from the DMS does not have IDs for <dateInfo>. I decided to retain these but they won’t be used anywhere.
I wrote a script that processed the XML to add in entry IDs, editor initials, sense numbers, senseInfo IDs, parts of speech and attestation IDs. This took a while to implement and test but it seemed to work successfully. After that I needed to work on a script that would import the data into my online system, which included regenerating all of the data used for search purposes, such as extracting cross references, forms, labels, citations and their dates, translations and earliest dates for entries. All seemed to work well and I made the new data available via the front-end for the editors to test out.
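In outline the script does something like the following sketch, using PHP’s DOM extension (the ID formats shown are placeholders rather than the real AND identifiers, and subsense handling and the per-part-of-speech numbering reset are omitted):

```php
<?php
// Sketch of the kind of processing involved; ID schemes are placeholders.
$doc = new DOMDocument();
$doc->loadXML(file_get_contents('entry.xml'));

// Add an entry ID and the editor's initials ('lead') to <main_entry>.
$main = $doc->getElementsByTagName('main_entry')->item(0);
$main->setAttribute('id', 'AND-' . uniqid()); // placeholder ID scheme
$main->setAttribute('lead', 'XY');            // placeholder initials

// Number the senses sequentially via the 'n' attribute.
$n = 1;
foreach ($doc->getElementsByTagName('sense') as $sense) {
    $sense->setAttribute('n', (string)$n++);
}

// Give every senseInfo and attestation an ID so that searches and the
// translation data can reference them.
foreach (['senseInfo', 'attestation'] as $tag) {
    foreach ($doc->getElementsByTagName($tag) as $el) {
        if (!$el->hasAttribute('id')) {
            $el->setAttribute('id', strtoupper($tag) . '-' . uniqid());
        }
    }
}

$doc->save('entry-processed.xml');
```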
I also asked the editors about how to track different versions of data so we know which entries are new or updated as a result of the recent update. It turned out that there are six different statements that need to be displayed underneath entries depending on when the entries were published so I spent a bit of time applying these to entries and updating the entry page to display the notices.
After that I made a couple of tweaks to the new data (e.g. links to the MED were sometimes missing some information needed for the links to work) and discussed adding in commentaries with Geert. I then went through all of the emails to and from the editors in order to compile a big list of items that I still needed to tackle before we can launch the site. It’s a list that totals some 51 items, so I’m going to have my work cut out for me, especially as it is vital that the new site launches before Christmas.
The other projects I worked on this week included the interactive map of Burns Suppers for Paul Malgrati in Scottish Literature. Last week I’d managed to import the core fields from his gigantic spreadsheet and this week I made some corrections to the data and created the structure to hold all of the filters that are in the data.
I wrote a script that goes through all of the records in the spreadsheet and stores the filters in the database when required. Note that a filter is only stored for a record when it’s not set to ‘NA’, which cuts down on the clutter. There are a total of 24,046 filters now stored in the system. The next stage will be to update the front-end to add in the options to select any combination of filter and to build the queries necessary to process the selection and return the relevant locations, which is something I aim to tackle next week.
Also this week I participated in the weekly Zoom call for the Iona place-names project and I updated the Iona CMS to strip out all of the non-Iona names and gave the team access to the project’s Content Management system. The project website also went live this week, although as of yet there is still not much content on it. It can be found here, though: https://iona-placenames.glasgow.ac.uk/
Also this week I helped to export some data for a member of the Berwickshire place-names project team, I responded to a query from Gerry McKeever about the St. Andrews data in the Books and Borrowing project’s database and I fixed an issue with Rob Maslen’s City of Lost Books site, which had somehow managed to lose its nice header font.
I took Friday off this week as I had a dentist’s appointment across town in the West End and I decided to take the opportunity to do some Christmas shopping whilst all the shops in Glasgow are still open (there’s some talk of us having greater Covid restrictions imposed in the next week or so). I spent a couple of days this week working on the Dictionary of the Scots Language, a project I’ve been meaning to return to for many months but have been too busy with other work to really focus on. Thankfully in November with the launch of the second edition of the Historical Thesaurus out of the way I have a bit of time to get back into the outstanding DSL issues.
Rhona Alcorn had sent a list of outstanding tasks a while back and I spent some time going through this and commenting on each item. I then began to work through each item, starting with fixing cross references in our ‘V3’ test site (which features data that the editors have been working on in recent years). Cross references appear differently in the XML for this version so I needed to update the XSLT in order to make them work correctly. I then updated the full-text extraction script that prepares data for inclusion in the Solr search engine. Previously this was stripping out all of the XML tags in order to leave the plain text, but unfortunately there were occasions where the entries contained words separated by tags but no spaces, meaning that when the tags were removed the words ended up joined together. I fixed this by adding a space character before every XML tag before the tags were stripped out. This resulted in plain text that often contained multiple spaces between words, but thankfully Solr ignores these when it indexes the text. I asked Raymond of Arts IT Support to upload the new text to the server and tested things out and all worked perfectly.
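As a minimal sketch of the fix:

```php
<?php
// Pad every tag with a leading space before stripping, so words separated
// only by markup don't get fused together.
function xmlToPlainText(string $xml): string
{
    $spaced = str_replace('<', ' <', $xml); // space before every tag
    $plain  = strip_tags($spaced);
    // Collapsing the resulting runs of whitespace is optional -- Solr
    // ignores them anyway -- but keeps the stored text tidy.
    return trim(preg_replace('/\s+/u', ' ', $plain));
}

// '<w>bread</w><w>knife</w>' would otherwise become 'breadknife'.
echo xmlToPlainText('<entry><w>bread</w><w>knife</w></entry>'); // bread knife
```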
After this I moved on to creating a new ordering for the ‘browse’ feature. This new ordering takes into consideration parts of speech and ensures that supplemental entries appear below main entries. It also correctly positions entries beginning with a yogh. I’d created a script to generate the new browse order many months ago, so I could just tweak this and then use it to update the database. After that I needed to make some updates to the V2 and V3 front-ends to use the new ordering fields, which took a little time, but it seems to have worked successfully. I may need to tweak the ordering further, but will await feedback before I make any changes.
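As an illustration of the general approach (the ranks and the placement of yogh below are placeholder choices, not the DSL’s actual rules), the new ordering amounts to generating a sortable key per entry and storing it in a dedicated field that the front-ends can simply ORDER BY:

```php
<?php
// Illustrative sketch: a normalised headword, a supplement flag and a
// part-of-speech rank combined into one sortable key.
function browseKey(string $headword, int $posRank, bool $isSupplement): string
{
    $norm = mb_strtolower($headword, 'UTF-8');
    // Place yogh just after 'y' for sorting purposes (purely illustrative).
    $norm = str_replace('ȝ', 'y~', $norm);
    // The supplement flag sorts supplemental entries below main entries
    // with the same headword; the pos rank breaks any remaining ties.
    return sprintf('%s|%d|%02d', $norm, $isSupplement ? 1 : 0, $posRank);
}

echo browseKey('Ȝere', 3, false); // y~ere|0|03
```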
I then moved on to investigating searches for accented characters, which were apparently not working correctly. I noticed that the htaccess script was not set up to accept accented characters so I updated this. However, the advanced headword search itself was finding forms with accented characters in them if the non-accented version was passed. The ‘privace’ example was redirecting to the entry page as only one result was matched, but if you perform a search for ‘*vace’ it finds and displays the accented headword in both V2 and V3, though not on the live site. Therefore I think this issue is now sorted. However, we should perhaps strip accents out of any submitted search terms, as allowing accented characters to be submitted (e.g. for *vacé) gives the impression that we allow accented characters to be searched for distinctly from their unaccented versions, and results including both accented and unaccented forms might confuse people.
The last DSL issue I looked at involved hiding superscript characters in certain circumstances (after ‘geo’ tags in ‘cref’ tags). There are 3093 SND entries that include the text ‘</geo><su>’ or ‘</geo> <su>’ and I updated the XSLT file that transforms the XML into HTML to deal with these. Previously it transformed the <su> tag into the HTML superscript tag <sup>. I’ve updated it so that it now checks to see what the tag’s preceding sibling is. If it’s a <geo> tag it now adds the class ‘noSup’ to the generated <sup>. Currently I’ve set <sup> elements with this class to have a pink background so the editors can check to see how the match is performing, and once they’re happy with it I can update the CSS to hide ‘noSup’ elements.
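The fix itself lives in the XSLT, but the check it performs – ‘is the element immediately before this <su> a <geo>, ignoring any whitespace between them?’ – can be sketched with PHP’s DOM like this:

```php
<?php
// Does this <su> immediately follow a <geo>, ignoring whitespace-only
// text nodes (which covers the '</geo> <su>' case)?
function followsGeo(DOMElement $su): bool
{
    $prev = $su->previousSibling;
    while ($prev instanceof DOMText && trim($prev->textContent) === '') {
        $prev = $prev->previousSibling;
    }
    return $prev instanceof DOMElement && $prev->nodeName === 'geo';
}

$doc = new DOMDocument();
$doc->loadXML('<cref><geo>Sh.</geo> <su>1</su></cref>');
foreach ($doc->getElementsByTagName('su') as $su) {
    var_dump(followsGeo($su)); // bool(true): this one would get 'noSup'
}
```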
Other than DSL work I also spent some time continuing to work on the redevelopment of the Anglo-Norman Dictionary and completed an initial version of the label search that I began working on last week. The search form as discussed last week hasn’t changed, but it’s now possible to submit the search, navigate through the search results, return to the search form to make changes to your selection and view entries. I have needed to overhaul how the search page works to accommodate the label search, which required some pretty major changes behind the scenes, but hopefully none of the other searches will have been affected by this. You can select a single label and search for that, e.g. ‘archit.’ and if you then refine your search you will see that the label is ‘remembered’ in the form so you can add to it or remove it, for example if you’re interested in all of the entries that are labelled ‘archit.’ and ‘mil’. As mentioned last week, adding or changing a citation year resets the boxes as different labels are displayed depending on the years chosen. The chosen year is remembered by the form if you choose to refine your search and the labels and selected labels and Booleans are pulled in alongside the remembered year. So for example if you want to find entries that feature a sense labelled ‘agricultural’ or ‘bot.’ that have a citation between 1400 and 1410 you can do this. On the entry page both semantic and usage labels are now links that lead through to the search results for the label in question. I’ve currently given both label types a somewhat garish pink colour, but this can be changed, or we could use two different colours for the two types.
Other than these projects, I fixed an issue with the 18th century Glasgow borrowers site (https://18c-borrowing.glasgow.ac.uk/) and made some tweaks to the place-names of Iona site, fixing the banner and creating Gaelic versions of the pages and menu items. The site is not live yet, but I’m pretty happy with how it’s looking. Here’s an image of the banner I created:
Also this week I spoke to Kirsteen McCue about the project she’s currently preparing a proposal for and I created a new version of the Burns Suppers map for Paul Malgrati. This was rather tricky as his data is contained in a spreadsheet that has more than 2,500 rows and more than 90 columns, and it took some time to process this in a way that worked, especially as some fields contained carriage returns which resulted in lines being split where they shouldn’t be when the data was exported. However, I got there in the end, and next week I hope to develop the filters for the data.
I spent a lot of this week continuing to work on the redevelopment of the Anglo-Norman Dictionary website, focussing on the search facilities. I made some tweaks to the citation search that I’d developed last week, ensuring that the intermediate ‘pick a form’ screen appears even if only one search word is returned and updating the search results to omit forms and labels but to include the citation dates and siglums, the latter opening up pop-ups as with the entry pages. I also needed to regenerate the search terms as I’d realised that due to a typo in my script a number of punctuation marks that should have been stripped out were remaining, meaning some duplicate forms were being listed, sometimes with a punctuation mark such as a colon and other times ‘clean’.
I also realised that I needed to update the way apostrophes were being handled. In my script these were just being stripped out, but this wasn’t working very well as forms like ‘s’oreille’ were then becoming ‘soreille’ when really it’s the ‘oreille’ part that’s important. However, I couldn’t just split words up on an apostrophe and use the part on the right as apostrophes appear elsewhere in the citations too. I managed to write a script that successfully split words on apostrophes and retained the sections on both sides as individual search word forms (if they are alphanumeric). Whilst writing this script I also fixed an issue with how the data stripped of XML tags is processed. Occasionally there are no spaces between a word and a tag that contains data, and when my script removed tags to generate the plain text required for extracting the search words this led to a word and the contents of the following tag being squashed together, resulting in forms such as ‘apresentDsxiii1’. By adding spaces between tags I managed to get around this problem.
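In outline the apostrophe handling works something like this simplified sketch of the real script:

```php
<?php
// Split a citation word on apostrophes and keep each alphanumeric part
// as a search form in its own right.
function searchForms(string $word): array
{
    $forms = [];
    foreach (explode("'", $word) as $part) {
        // Only keep alphanumeric fragments ('s' and 'oreille' both
        // survive; stray punctuation does not).
        if ($part !== '' && preg_match('/^[\p{L}\p{N}]+$/u', $part)) {
            $forms[] = $part;
        }
    }
    return $forms;
}

print_r(searchForms("s'oreille")); // ['s', 'oreille']
```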
With these tweaks in place I then moved onto the next advanced search option: the English translations. I extracted the translations from the XML and generated the unique words found in each (with a count of their occurrences), also storing the Sense IDs for the senses in which the translations were found so that I could connect the translations up to the citations found within the senses in order to enable a date search (i.e. limiting a search to only those translations that are in a sense that has a citation in a particular year or range of years). The search works in a similar way to the citation search, in that you can enter a search term (e.g. ‘bread’) and this will lead you to an intermediary page that lists all words in translations that match ‘bread’. You can then select one to view all of the entries with their translation that feature the word, with it highlighted. If you supply a year or a range of years then the search connects to the citations and only returns translations for senses that have a citation date in the specified year or range. This connects citations and translations via the ‘senseid’ in the XML. So for example, if you only want to find translations containing ‘bread’ that have a citation between 1350 and 1400 you can do so. There are still some tweaks that need to be done. For example, one inconsistency we might need to address is that the number in brackets in the intermediary page refers to the number of translations / citations the word is found in, but when you click through to the full results the ‘matched results’ number will likely be different because this refers to matched entries, and an entry may contain more than one matching translation / citation.
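The date limit then becomes a join through the shared sense ID. A hedged sketch of the kind of query involved – the table and column names here are hypothetical, not the site’s actual schema:

```php
<?php
// Date-limited translation search: translations join to citations
// through the shared sense ID. Schema names are hypothetical.
$sql = "
    SELECT DISTINCT e.id, e.headword, t.translation
    FROM translations t
    JOIN entries   e ON e.id      = t.entry_id
    JOIN citations c ON c.senseid = t.senseid
    WHERE t.word = :word
      AND c.cite_year BETWEEN :from AND :to";

// Run with bound parameters, e.g. via PDO:
// $stmt = $pdo->prepare($sql);
// $stmt->execute([':word' => 'bread', ':from' => 1350, ':to' => 1400]);
echo $sql;
```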
I then moved onto the final advanced search option, the label search. This proved to be a pretty tricky undertaking, especially when citation dates also have to be taken into consideration. I didn’t manage to get the search working this week, but I did get the form where you can build your label query up and running on the advanced search page. If you select the ‘Semantic & Usage Labels’ tab you should see a page with a ‘citation date’ box, a section on the left that lists the labels and a section on the right where your selection gets added. I considered using tooltips for the semantic label descriptions, but decided against it as tooltips don’t work so well on touchscreens and I thought the information would be pretty important to see. Instead the description (where available) appears in a smaller font underneath the label, with all labels appearing in a scrollable area. The number on the right is the number of senses (not entries) that have the label applied to them, as you can see in the following screenshot:
As mentioned above, things are seriously complicated by the inclusion of citation dates. Unlike with other search options, choosing a date or a range here affects the search options that are available. E.g. if you select the years 1405-1410 then the labels used in this period and the number of times they are used differs markedly from the full dataset. For this reason the ‘citation date’ field appears above the label section, and when you update the ‘citation date’ the label section automatically updates to only display labels and counts that are relevant to the years you have selected. Removing everything from the ‘citation date’ resets the display of labels.
When you find a label you want to search for, pressing on the label area adds it to the ‘selected labels’ section on the right. Pressing on it a second time deselects the label and removes it from the ‘selected labels’ section. If you select more than one label then a Boolean selector appears between the selected label and the one before, allowing you to choose AND, OR, or NOT, as you can see in the above screenshot.
I made a start on actually processing the search, but it’s not complete yet and I’ll have to return to this next week. However, building complex queries is going to be tricky as without a formal querying language like SQL there are ambiguities that can’t automatically be sorted out by the interface I’m creating. E.g. how should ‘X AND Y NOT Z OR B’ be interpreted? Is it ‘(X AND Y) NOT (Z OR B)’ or ‘((X AND Y) NOT Z) OR B’ or ‘(X AND (Y NOT Z)) OR B’ etc. Each would give markedly different results. Adding more than two or possibly three labels is likely to lead to confusing results for people.
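One pragmatic option would be to evaluate strictly left to right, so that ‘X AND Y NOT Z OR B’ always means ‘((X AND Y) NOT Z) OR B’. A minimal sketch over sets of sense IDs:

```php
<?php
// Evaluate a chain of label selections strictly left to right. Each label
// is represented here by the set of sense IDs that carry it.
function evaluate(array $sets, array $ops): array
{
    $result = array_shift($sets);
    foreach ($sets as $i => $set) {
        $result = match ($ops[$i]) {
            'AND' => array_intersect($result, $set),
            'OR'  => array_unique(array_merge($result, $set)),
            'NOT' => array_diff($result, $set),
        };
    }
    return $result;
}

// 'X AND Y NOT Z OR B' over toy sense-ID sets:
print_r(evaluate(
    [[1, 2, 3], [2, 3, 4], [3], [9]],
    ['AND', 'NOT', 'OR']
)); // 2 and 9, i.e. ((X AND Y) NOT Z) OR B
```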
Other than working on the AND I spent some time this week working on the Place-names of Iona project. We had a team meeting on Friday morning and after that I began work on the interface for the website. This involved the usual tasks: installing a theme, customising fonts, selecting colour schemes, adding in logos, and creating menus and an initial site structure. As with the Mull site, the Iona site is going to be bilingual (English and Gaelic) so I needed to set this up too. I also worked on the banner image, combining a lovely photo of Iona from Shutterstock with a map image from the NLS. It’s almost all in place now, but I’ll need to make a few further tweaks next week. I also set up the CMS for the project, as we have decided not to just share the Mull CMS. I migrated the CMS and all of its data across and then worked on a script that would pick out only those place-names from the Mull dataset that are of relevance to the Iona project. I did this by drawing a box around the island using this handy online interface: https://geoman.io/geojson-editor and then grabbing the coordinates. I needed to reverse the latitude and longitude of these, due to GeoJSON using them the other way around from other systems, and then I plugged these into a nice little algorithm I discovered for working out which coordinates are within a polygon (see https://assemblysys.com/php-point-in-polygon-algorithm/). This resulted in about 130 names being identified, but I’ll need to tweak this next week to see if my polygon area needs to be increased.
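The linked algorithm is a variant of the classic even-odd ray-casting test; a compact sketch of the same idea, taking the ring in GeoJSON’s [longitude, latitude] order and doing the swap as each vertex is unpacked:

```php
<?php
// Even-odd (ray casting) point-in-polygon test. $ring is a GeoJSON ring,
// i.e. a list of [longitude, latitude] pairs, hence the swapped order below.
function pointInPolygon(float $lat, float $lng, array $ring): bool
{
    $inside = false;
    $n = count($ring);
    for ($i = 0, $j = $n - 1; $i < $n; $j = $i++) {
        [$lngI, $latI] = $ring[$i];
        [$lngJ, $latJ] = $ring[$j];
        // Cast a horizontal ray from the point; toggle on each edge crossing.
        if ((($latI > $lat) !== ($latJ > $lat)) &&
            ($lng < ($lngJ - $lngI) * ($lat - $latI) / ($latJ - $latI) + $lngI)) {
            $inside = !$inside;
        }
    }
    return $inside;
}

// Toy square around the origin, in GeoJSON [lng, lat] order:
$ring = [[-1, -1], [1, -1], [1, 1], [-1, 1]];
var_dump(pointInPolygon(0.5, 0.5, $ring)); // bool(true)
```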
For the remainder of the week I upgraded all of the WordPress sites I manage to the most recent version (I manage 39 such sites so this took a little while). I also helped Simon Taylor to access the Berwickshire and Kirkcudbrightshire place-names systems again and fixed an access issue with the Books and Borrowing CMS. I also looked into an issue with the DSL test sites as the advanced searches on each of these had stopped working. This was caused by an issue with the Solr indexing server that thankfully Arts IT Support were able to address.
Next week I’ll continue with the AND redevelopment and also return to working on the DSL for the first time in quite a while.
This was something of an odd week as I tested positive for Covid. I’m not entirely sure how I managed to get it, but I’d noticed on Friday last week that I’d lost my sense of taste and thought it would be sensible to get tested, and the result came back positive. I’d been feeling a bit under the weather last week and this continued throughout this week too, but thankfully the virus never affected my chest or throat and I managed to more or less work all week. However, with our household in full-on isolation our son was off school all week, and will be all next week, which did impact on the work I could do.
My biggest task of the week was to complete the work in preparation for the launch of the second edition of the Historical Thesaurus. This included fixing the full-size timelines to ensure that words that have been updated to have post-1945 end dates display properly. As we had changed the way these were stored to record the actual end date rather than ‘9999’, the end points of the dates on the timeline were stopping short and not having a pointy end to signify ‘current’. New words that only had post-1999 dates were also not displaying properly. Thankfully I managed to get these issues sorted. I also updated the search terms to fix some of the unusual characters that had not migrated over properly but had been replaced by question marks. I then updated the advanced search options to provide two checkboxes that allow a user to limit their search to new words or words that have been updated (or both), which is quite handy, as it means you can find all of the new words in a particular decade, for example all of the new words that have a first date some time in the 1980s:
I also tweaked the text that appears beside the links to the OED and added the Thematic Heading codes to the drop-down section of the main category. We also had to do some last-minute renumbering of categories, which affected several hundred categories and subcategories in ’01.02’, and we manually moved a couple of other categories to new locations. After that we were all set for the launch, and the new second edition is now fully available.
Other than that I worked on a few other projects this week. I helped to migrate a WordPress site for Bryony Randall’s Imprints of New Modernist Editing project, which is now available here: https://imprintsarteditingmodernism.glasgow.ac.uk/ and responded to a query about software purchased from Lisa Kelly in TFTS.
I spent the rest of the week continuing with the redevelopment of the Anglo-Norman Dictionary website. I updated my script that extracts citations and their dates, which I’d started to work on last week. I figured out why my script was not extracting all citations (it was only picking out the citations from the first sense and subsense in each entry rather than all senses) and managed to get all citations out. With dates extracted for each entry I was then able to store the earliest date for each entry and update the ‘browse’ facility to display this date alongside the headword.
With this in place I moved on to looking at the advanced search options. I created the tab-based interface for the various advanced search options and implemented searches for headwords and citations. The headword search works in a very similar way to the quick search – you can enter a term and use wildcards or double quotes for an exact search. You can also combine this with a date search. This allows you to limit your results to only those entries that have a citation in the year or range of years you specify. I would imagine entering a range of years would be more useful than a single year. You can also omit the headword and just specify a citation year to find all entries with a citation in the year or range, e.g. all entries with a citation in 1210.
The citation search is also in place and this works rather differently. As mentioned in the overview document, this works in a similar (but not identical) way to the old ‘concordance search of citations’. You can search for a word or a part of a word using the same wildcards as for the headword, and limit your search to particular citation dates. When you submit the search this loads an intermediary page that lists all of the word forms in citations that your search matches, plus a count of the number of citations each form appears in. From this page you can then select a specific form and view the results. So, for example, a search for words beginning with ‘tre’ with a citation date between 1200 and 1250 lists 331 forms in citations, and you can then choose a specific form, e.g. ‘tref’, to see the results. The citation results include all of the citations for an entry that include the word, with the word highlighted in yellow. I still need to think about how this might work better, as currently there is no quick way to get back to the intermediary list of forms. But progress is being made.
I was back at work this week after having a lovely holiday the previous week. It was a pretty busy week, mostly spent continuing to work on the preparations for the second edition of the Historical Thesaurus, which needs to be launched before the end of the month. I updated the OED date extraction script that formats all of the OED dates as we need them in the HT, including making full entries in the HT dates table, generating the ‘full date’ text string that gets displayed on the website and generating cached first and last dates that are used for searching. I’d somehow managed to get the plus and dash connectors the wrong way round in my previous version of the script (a plus should be used where there is a gap of more than 150 years, otherwise it’s a dash) so I fixed this. I also stripped out dates that were within a 150-year time span, which really helped to make the full date text more readable. I also updated the category browser so that the category’s thematic heading is displayed in the drop-down section.
Fraser had made some suggested changes to the script I’d written to figure out whether an OED lexeme was new or already in the system so I made some changes to this and regenerated the output. I also made further tweaks to the date extraction script so that we record the actual final date in the system rather than converting it to ‘9999’ and losing this information that will no doubt be useful in future. I then worked on the post-1999 lexemes, which followed a similar set of processes.
With this all in place I could then run a script that would actually import the new lexemes and their associated dates into the HT database. This included changelog codes, new search terms and new dates (cached firstdate and lastdate, fulldate and individual entries in the dates table). A total of 11,116 new words were added, although I subsequently noticed there were a few duplicates that had slipped through the net. With these stripped out we had a total of 804,830 lexemes in the HT, and it’s great to have broken through the 800,000 mark. Next week I need to fix a few things (e.g. the full-size timelines aren’t set up to cope with post-1945 dates that don’t end in ‘9999’ if they’re current) but we’re mostly now good to launch the second edition.
Also this week I worked on setting up a website for the ‘Symposium for Seventeenth-Century Scottish Literature’ for Roslyn Potter in Scottish Literature and set up a subdomain for an art catalogue website for Bryony Randall’s ‘Imprints of the New Modernist Editing’ project. I also helped Megan Coyer out with an issue she was having in transcribing multi-line brackets in Word and travelled to the University to collect a new, higher-resolution monitor and some other peripherals to make working from home more pleasant. I also fixed a couple of bugs in the Books and Borrowing CMS, including one that was resulting in BC dates of birth and death for authors being lost when data was edited. I also spent some time thinking about the structure for the Burns Correspondence system for Pauline Mackay, resulting in a long email with a proposed database structure. I met with Thomas Clancy and Alasdair Whyte to discuss the CMS for the Iona place-names project (it now looks like this is going to have to be a completely separate system from Alasdair’s existing Mull / Ulva system) and replied to Simon Taylor about a query he had regarding the Place-names of Fife data.
I also found some time to continue with the redevelopment of the Anglo-Norman Dictionary website. I updated the way cognate references were processed to enable multiple links to be displayed for each dictionary. I also added in a ‘Cite this entry’ button, which now appears in the top right of the entry and, when clicked on, opens a pop-up where citation styles will appear (they’re not there yet). I updated the left-hand panel to make it ‘sticky’: if you scroll down a long entry the panel stays visible on screen (unless you’re viewing on a narrow screen like a mobile phone, in which case the left-hand panel appears full-width before the entry). I also added in a top bar that appears when you scroll down the screen that contains the site title, the entry headword and the ‘cite’ button. I then began working on extracting the citations, including their dates and text, which will be used for search purposes. I ran an extraction script that extracted about 60,000 citations, but I realised that this was not extracting all of the citations and further work will be required to get this right next week.
This was a four-day week for me as I’d taken Friday off as it was an in-service day at my son’s school before next week’s half-term, which I’ve also taken off. I had rather a lot to try and get done before my holiday so it was a pretty intense week, split mostly between the Historical Thesaurus and the Anglo-Norman Dictionary.
For the Historical Thesaurus I continued with the preparations for the second edition, starting off by creating a little statistics page that lists all of the words and categories that have been updated for the second edition and the changelog codes that have been applied to them. Marc had sent a list of all of the category number sequences that we have updated so I then spent a bit of time updating the database to apply the changelog codes to all of these categories. It turns out that almost 200,000 categories have been revised and relocated (out of about 235,000) so it’s pretty much everything. At our meeting last week we had proposed updating the ‘new OED words’ script I’d written last week to separate out some potential candidates into an eighth spreadsheet (these are words that have a slash in them, which now get split up on the slash, with each part compared against the HT’s search words table to see whether it already exists). Whilst working through some of the other tasks I realised that I hadn’t included the unique identifiers for OED lexemes in the output, which was going to make it a bit difficult to work with the files programmatically, especially since there are some occasions where the OED has two identical lexemes in a category. I therefore updated my script and regenerated the output to include the lexeme ID, making it possible to differentiate identical lexemes and also making it easier to grab dates for the lexeme in question.
The issue of there being multiple identical lexemes in an OED category was a strange one. For example, one category had two ‘Amber pudding’ lexemes. I wrote a script that extracted all of these duplicates and there are possibly a hundred or so of them, and also other OED lexemes that appear to have no associated dates. I passed these over to Marc and Fraser for them to have a look at. After that I worked on a script to go through each of the almost 12,000 lexemes that we have identified as OED lexemes that are definitely not present in the HT data, extract their OED dates and then format these as HT dates.
The script generates date entries as they would be added to the HT lexeme dates table (used for timelines), the HT fulldate field (used for display) and the HT firstdate and lastdate fields (used for searching). Dates earlier than 1150 are stored as their actual values in the dates table, but are stored as ‘650’ in the ‘firstdate’ field and are displayed as ‘OE’ in the ‘fulldate’. Dates after 1945 are stored as ‘9999’ in both the dates table and the ‘lastdate’ field. Where there is a ‘yearend’ in the OED date (i.e. the date is a range) this is stored as the ‘year_b’ in the HT date and appears after a slash in the ‘fulldate’, following the rules for slashes. If the date is the last date then the ‘year_b’ is used as the HT lastdate. If the ‘year_b’ is after 1945 but the ‘year’ isn’t then ‘9999’ is used. So for example ‘maiden-skate’ has a last date of ‘1880/1884’, which appears in the ‘fulldate’ as ‘1880/4’, and the ‘lastdate’ is ‘1884’. Where there is a gap of more than 150 years between dates the connector between dates is a dash, and where the gap is less than this it is a plus. One thing that needed further work was how we handle multiple post-1945 dates. In my initial script if there are multiple post-1945 dates then only one of these is carried over as an HT date, and it’s set to ‘9999’. This is because all post-1945 dates are stored as ‘9999’ and having several of these didn’t seem to make sense and confused the generation of the fulldate. There was also an issue with some OED lexemes only having dates after 1945. In my first version of the script these ended up with only one HT date entry of 9999, with 9999 as both firstdate and lastdate and a fulldate consisting of just a dash, which was not right. After further discussion with Marc I updated the script so that in such cases the date information that is carried over is the first date (even if it’s after 1945) and a dash to show that it is current. For example, ‘ecoregion’ previously had a ‘full date’ of ‘-’, one HT date of ‘9999’ and a start date of ‘9999’, and in the updated output has a full date of ‘1962-’, two HT dates and a start date of 1962. Where a lexeme has a single date this also now has a specific end date rather than it being ‘9999’. I passed the output of the script over to Marc and Fraser for them to work with whilst I was on holiday.
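Condensed into code, the per-date rules look something like the following sketch (field names are illustrative, the slash truncation is simplified to the final digit, and the dash/plus connector between spans is handled separately):

```php
<?php
// Convert one OED date into the three HT representations described above.
// A simplified sketch, not the full script.
function oedDateToHt(int $year, ?int $yearEnd = null): array
{
    // Pre-1150: real year in the dates table, 650 for searching, 'OE' on screen.
    if ($year < 1150) {
        return ['table' => $year, 'search' => 650, 'display' => 'OE'];
    }
    // Post-1945: stored as 9999 ('current').
    if ($year > 1945) {
        return ['table' => 9999, 'search' => 9999, 'display' => ''];
    }
    $display = (string)$year;
    $search  = $year;
    if ($yearEnd !== null) {
        // 'yearend' becomes year_b, shown after a slash ('1880/1884' ->
        // '1880/4'); one-digit truncation here simplifies the slash rules.
        $display .= '/' . substr((string)$yearEnd, -1);
        $search   = $yearEnd > 1945 ? 9999 : $yearEnd;
    }
    return ['table' => $year, 'search' => $search, 'display' => $display];
}

print_r(oedDateToHt(1880, 1884)); // display '1880/4', search date 1884
```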
For the Anglo-Norman Dictionary I continued to work on the entry page. I added in the cognate references (i.e. references to other dictionaries), which proved to be rather tricky due to the way they have been structured in the Editors’ XML files (in the current live site the cognate references are stored in a separate hash file and are somehow injected into the entry page when it is generated, but we wanted to rationalise this so that the data that appears on the site is all contained in the Editors’ XML where possible). The main issue was with how the links to other online dictionaries were stored, as it was not entirely clear how to generate actual links to specific pages in these resources from them. This was especially true for links to FEW (I have no idea what FEW stands for as the acronym doesn’t appear to be expanded anywhere, even on the FEW website).
They appear in the Editors’ XML like this:
<FEW_refs siglum="FEW" linkable="yes"><link_form>A</link_form><link_loc>24,1a</link_loc></FEW_refs>
Which ends up linking to here: https://apps.atilf.fr/lecteurFEW/lire/volume/240/page/1. Another example:
<FEW_refs siglum="FEW" linkable="yes"><link_form>posse</link_form><link_loc>9,231b</link_loc></FEW_refs>
Which ends up linking to here: https://apps.atilf.fr/lecteurFEW/lire/volume/90/page/231
Based on this my script for generating links needed to:
- Store the base URL https://apps.atilf.fr/lecteurFEW/lire/volume
- Split the <link_loc> on the comma
- Multiply the part before the comma by 10 (so 24 becomes 240, 9 becomes 90 etc)
- Strip out any non-numeric characters from the part after the comma (i.e. getting rid of ‘a’ and ‘b’)
- Generate the full URL, such as https://apps.atilf.fr/lecteurFEW/lire/volume/240/page/1, using these two values
After discussion with Heather and Geert at the AND it turned out to be even more complicated than this, as some of the references are further split into subvolumes using a slash and a Roman numeral, so we have things like ‘15/i,108b’ which then needs to link to https://apps.atilf.fr/lecteurFEW/lire/volume/151/page/108. It took some time to write a script that could cope with all of these quirks, but I got there in the end.
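Putting it all together, the link generation ends up looking something like this sketch (the Roman-numeral mapping is an assumption based on the ‘15/i’ example):

```php
<?php
// Build a FEW page URL from a <link_loc> value such as '24,1a' or '15/i,108b'.
function fewUrl(string $linkLoc): string
{
    [$vol, $page] = explode(',', $linkLoc, 2);

    if (strpos($vol, '/') !== false) {
        // Subvolumes: '15/i' appends the subvolume number to the volume
        // (i -> 1, ii -> 2, ...) instead of multiplying by ten.
        [$vol, $sub] = explode('/', $vol);
        $romans = ['i' => 1, 'ii' => 2, 'iii' => 3];
        $volNum = $vol . $romans[strtolower($sub)];
    } else {
        $volNum = ((int)$vol) * 10;
    }

    $pageNum = preg_replace('/[^0-9]/', '', $page); // drop 'a' / 'b'
    return "https://apps.atilf.fr/lecteurFEW/lire/volume/$volNum/page/$pageNum";
}

echo fewUrl('24,1a'), "\n";     // .../volume/240/page/1
echo fewUrl('15/i,108b'), "\n"; // .../volume/151/page/108
```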
Also this week I updated the citation dates so they now display their full information with ‘MS:’ where required and superscript text. I then finished work on the commentaries, adding in all of the required formatting (bold, italic, superscript etc.) and links to other AND entries and out to other dictionaries. Where the commentaries are longer than a few lines they are cut off and an ‘expand’ button is shown. I also updated the ‘Results’ tab so it shows you the number of results in the tab header, and added in the ‘entry log’ feature that tracks which entries you have looked at in a session. The number of these also appears in the tab header and I’m personally finding it a very useful feature as I navigate around the entries for test purposes. The log entries appear in the order you opened them and there is no scrolling of entries, as I would imagine most people are unlikely to have more than 20 or so listed. You can always clear the log by pressing on the ‘Clear’ button. I also updated the entry page so that the cross references in the ‘browse’ now work. If the entry has a single cross reference then this is automatically displayed when you click on its headword in the ‘browse’, with a note at the top of the page stating it’s a cross reference. If the entry has multiple cross references these are not all displayed; instead links to each entry are displayed. There are two reasons for this: firstly, displaying multiple entries can result in long and complicated pages that may be hard to navigate; secondly, the entry page as it currently stands was designed to display one entry, and uses HTML IDs to identify certain elements. An HTML ID must be unique on a page, so if multiple entries were displayed things would break. There is still a lot of work to do on the site, but the entry page is at least nearing completion. Below is a screenshot showing the entry log, the cognate references and the commentary complete with formatting and the ‘Expand’ option:
I also worked on some other projects this week. For Books and Borrowing I set up a user account for a volunteer and talked her through getting access to the system. For the Mull / Ulva site I automatically generated historical forms for all of the place-names that had come from the GB1900 crowdsourced data. These are now associated with the ‘OS 6 inch 2nd edn’ source and about 1670 names have been updated, although many of these are abbreviations like ‘F.P.’. I also updated the database and the CMS to incorporate a new field for deciding which ‘front-end’ the place-name will be displayed on. This is a drop-down list that can be selected when adding or editing a place-name, allowing you to choose from ‘Both’, ‘Mull / Ulva only’ and ‘Iona only’. There is still a further option for stating whether the place-name appears on the website or not (‘on website: Y/N’) so it will be possible to state that a place-name is associated with one project but shouldn’t appear on that project’s website. I also updated the search option on the ‘Browse placenames’ page to allow a user to limit the displayed placenames to those that have ‘front-end display’ set to one of the options. Currently all place-names are set to ‘Mull / Ulva only’. With this all in place I then created user accounts for all of the members of the Iona project team who will be using the CMS to work with the data. I also made a few further tweaks to the search results page of the DSL. After all of this I was very glad to get away for a holiday.
I was off on Monday this week for the September Weekend holiday. My four working days were split across many different projects, but the main ones were the Historical Thesaurus and the Anglo-Norman Dictionary.
For the HT I continued with the preparations for the second edition. I updated the front-end so that multiple changelog items are now checked for and displayed (these are the little tooltips that say whether a lexeme’s dates have been updated in the second edition). Previously only one changelog item was being displayed, but this approach wasn’t sufficient as a lexeme may have both a changed start date and a changed end date. I also fixed a bug in the assigning of the ‘end date verified as after 1945’ code, which was being applied to some lexemes with much earlier end dates. My script had set the type to 3 in all cases where the last HT date was 9999; what it needed to do was only set type 3 if the last HT date was 9999 and the last OED date was after 1945. I wrote a little script to fix this, which affected about 7,400 lexemes.
I also wrote a script to check off a bunch of HT and OED categories that had been manually matched by an RA. I needed to make a few tweaks to the script after testing it out, but after running it on the data we had a further 846 categories matched up, which is great. Fraser had previously worked on a document listing a set of criteria for working out whether an OED lexeme was ‘new’ or not (i.e. unlinked to an HT lexeme). This was a pretty complicated document with many different stages, and the output of the various stages needed to be written out to seven different spreadsheets, so it took quite a long time to write and test a script that would handle all of these stages. However, I managed to complete work on it and after a while it finished executing and resulted in the seven CSV files, one for each code mentioned in the document. I was very glad that I had my new PC as I’m not sure my old one could have coped with it – for the Levenshtein tests, for example, every word in the HT had to be stored in memory throughout the script’s execution. On Friday I had a meeting with Marc and Fraser where we discussed the progress we’d been making, and further tweaks to the script were proposed that I’ll need to implement next week.
For the Anglo-Norman Dictionary I continued to work on the ‘Entry’ page, implementing a mixture of major features and minor tweaks. I updated the way the editor’s initials were being displayed as previously these were the initials of the editor who made the most recent update in the changelog where what was needed were the initials of the person who created the record, contained in the ‘lead’ attribute of the main entry. I also attempted to fix an issue with references in the entry that were set to ‘YBB’. Unlike other references, these were not in the data I had as they were handled differently. I thought I’d managed to fix this, but it looks like ‘YBB’ is used to refer to many different sources so can’t be trusted to be a unique identifier. This is going to need further work.
Minor tweaks included changing the font colour of labels, making the ‘See Also’ header bigger and clearer, removing the final semi-colon from lists of items, adding in line breaks between parts of speech in the summary and other such things. I then spent quite a while integrating the commentaries. These were another thing that weren’t properly integrated with the entries but were added in as some sort of hack. I decided it would be better to have them as part of the editors’ XML rather than attempting to inject them into the entries when they were requested for display. I managed to find the commentaries in another hash file and thankfully managed to extract the XML from this using the Python script I’d previously written for the main entry hash file. I then wrote a script that identified which entry the commentary referred to, retrieved the entry and then inserted the commentary XML into the middle of it (underneath the closing </head> element).
It took somewhat longer than I expected to integrate the data as some of the commentaries contained Greek, and the underlying database was not set up to handle multi-byte UTF-8 characters (which Greek are), meaning these commentaries could not be added to the database. I needed to change the structure of the database and re-import all of the data as simply changing the character encoding of the columns gave errors. I managed to complete this process and import the commentaries and then begin the process of making them appear in the front-end. I still haven’t completely finished this (no formatting or links in the commentaries are working yet) and I’ll need to continue with this next week.
Also this week I added numbers to the senses. This also involved updating the editor’s XML to add a new ‘n’ attribute to the <sense> tag, e.g. <sense id="AND-201-47B626E6-486659E6-805E33CE-A914EB1F-S001" n="1">. As with the current site, the senses reset to 1 when a new part of speech begins. I also ensured that [sic] now appears, as does the language tag, with a question mark if the ‘cert’ attribute is present and not 100. Uncertain parts of speech are also now visible too (again if ‘cert’ is present and not 100), I increased the font size of the variant forms and citation dates are now visible. There is still a huge amount of work to do, but progress is definitely being made.
Also this week I reviewed the transcriptions from a private library that we are hoping to incorporate into the Books and Borrowing project and tweaked the way ‘additional fields’ are stored to enable the RAs to enter HTML characters into them. I also created a spreadsheet template for recording the correspondence of Robert Burns for Craig Lamont and spoke to Eila Williamson about the design of the new Names Studies website. I updated the text on the homepage of this site, which Lorna Hughes sent me, and gave some advice to Luis Gomes about a data management plan he is preparing. I also updated the wording on the search results page for ‘V3’ of the DSL to bring it into line with ‘V2’ and participated in a Zoom call for the Iona project where we discussed the new website and images that might be used in the design.
I’d taken Thursday and Friday off this week as it was the Glasgow September Weekend holiday, meaning this was a three-day week for me. It was a week where focussing on any development tasks was rather tricky, as I had four Zoom calls and a dentist’s appointment on the other side of the city during my three working days.
On Monday I had a call with the Historical Thesaurus people to discuss the ongoing task of integrating content from the OED for the second edition. There’s still rather a lot to be done for this, and we’re needing to get it all complete during October, so things are a little stressful. After the meeting I made some further updates to the display of icons signifying a second edition update. I updated the database and front-end to allow categories / subcats to have a changelog (in addition to words). These appear in a transparent circle with a white border and a white number, right aligned. I also updated the display of the icon for words. These also appear as a transparent circle, right aligned, but have the teal colour for a border and the number. I also realised I hadn’t added in the icons for words in subcats, so put these in place too.
After that I set about updating the dates of HT lexemes based on some rules that Fraser had developed. I created and ran scripts that updated the start dates of 91,364 lexemes based on OED dates and then ran a further script that updated the end dates of 157,156 lexemes. These took quite a while to run (the latter I was dealing with during my time off) but it’s good that progress is being made.
My second Zoom call of the week was for the Books and Borrowing project, and was with the project PI and Co-I and someone who is transcribing library records from a private library that we’re now intending to incorporate into the project’s system. We discussed the data and the library and made a plan for how we’re going to work with the data in future. My third and fourth Zoom calls were for the new Place-names of Iona project that is just starting up. It was a good opportunity to meet the rest of the project team (other than the RA who has yet to be appointed) and discuss how and when tasks will be completed. We’ve decided that we’ll use the same content management system as the one I already set up for the Mull and Ulva project, as this already includes Iona data from the GB1900 project. I’ll need to update the system so that we can differentiate place-names that should only appear on the Iona front-end, the Mull and Ulva front-end, or both. This is because for Iona we are going to be going into much more detail, down to individual monuments and other ‘microtoponyms’, whereas the names in the Mull and Ulva project are much more high level.
For the rest of my available time this week I made some further updates to the script I wrote last week for Fraser’s Scots Thesaurus project, ordering the results by part of speech and ensuring that hyphenated words are properly searched for (as opposed to being split into separate words joined by an ‘or’). I also spent some time working for the DSL people, firstly updating the text on the search results page and secondly tracking down the certificate for the Android version of the School Dictionary app. This was located on my PC at work, so I had arranged to get access to my office whilst I was already in the West End for my dentist’s appointment. Unfortunately what I thought was the right file turned out to be the certificate for an earlier version of the app, meaning I had to travel all the way back to my office again later in the week (when I was on holiday) to find the correct file.
I also found a little time to continue working on the new Anglo-Norman Dictionary site, focusing again on the display of the ‘entry’ page. I updated my XSLT to ensure that ‘parglosses’ are visible and that cross-reference links now appear. Explanatory labels are also now in place. These currently appear with a grey background, but eventually they will be links to the label search results page. Semantic labels are also in place and likewise currently have a grey background but will become links through to search results. However, the System XML notes whether certain semantic labels should be shown or not, so, for example, <label type="sem" show="no">med.</label> doesn’t get shown. Unfortunately there is nothing comparable in the Editors’ XML (it’s just <semantic value="med."/>), so I can’t hide such labels. Finally, the initials of the editor who made the last update now appear in square brackets to the right of the end of the entry.
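For the System XML at least, suppressing such labels in XSLT only needs an empty template, which produces no output for anything it matches:

<!-- suppress semantic labels that the System XML flags as hidden -->
<xsl:template match="label[@type='sem'][@show='no']"/>

The problem described above is precisely that the Editors’ XML offers no attribute for such a template to match on.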
Also, my new PC was delivered on Thursday and I spent a lot of time over the weekend transferring all of my data and programs across from my old PC.
This was another busy week involving lots of projects. For the Books and Borrowing project I wrote an import script to process the Glasgow Professors borrowing records, comprising more than 7,000 rows in a spreadsheet. It was tricky to integrate this with the rest of the project’s data and it took about a day to write the necessary processing scripts. I can only run the scripts on the real data in the evening, as I need to take the CMS offline to do so, otherwise changes made to the database whilst I’m integrating the data would be lost, and unfortunately it took three attempts to get the import to work properly. There are a few reasons why this data has been particularly tricky. Firstly, it needed to be integrated with existing Glasgow data, rather than being a ‘fresh’ upload to a new library. This caused some problems, as my scripts that match up borrowing records and borrowers were getting confused with the existing Student borrowers. Secondly, the spreadsheet was not in page order for each register – the order appears to have been ‘10r’, ‘10v’, then ‘11r’ etc., and then after ‘19v’ came ‘1r’. This is presumably to do with Excel ordering the numbers as text. I tried reordering on the ‘sort order’ column but this also ordered things oddly (all the numbers beginning with 1, then all the numbers beginning with 2, etc.). I tried changing the data type of this field to a number rather than text, but that just resulted in Excel giving errors in all of the fields. What this meant was that I needed to sort the data in my own script before I could use it (otherwise the ‘next’ and ‘previous’ page links would all have been wrong), and it took time to implement this. However, I got there in the end.
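The fix amounts to a two-part sort key: the numeric part of the folio compared as a number, then the recto/verso letter. The actual import script isn’t XSLT, but the idea is easy to express in the same notation as the other sketches in these posts (the ‘rows’/‘row’/‘folio’ names are assumed):

<xsl:for-each select="rows/row">
  <!-- first key: the numeric part of the folio ('10r' becomes 10), compared as a number -->
  <xsl:sort select="number(translate(@folio, 'rv', ''))" data-type="number"/>
  <!-- second key: the remaining letter, so 'r' (recto) sorts before 'v' (verso) -->
  <xsl:sort select="translate(@folio, '0123456789', '')"/>
  <xsl:copy-of select="."/>
</xsl:for-each>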
I also continued working on the Historical Thesaurus database and front-end to allow us to use the new date fields and to keep track of which lexemes and categories have been updated in the new Second Edition. I have now fully migrated my second edition test site to the new date system, including the advanced search for labels and both the ‘simple’ and ‘advanced’ date search. I have also now created the database structure for dealing with second edition updates. As we agreed at the last team meeting, the lexeme and category tables have each been given two new fields: ‘last_updated’, which holds a human-readable date (YYYY-MM-DD) that will be automatically populated when rows are updated, and ‘changelogcode’, which holds the ID of the row in the new ‘changelog’ table that applies to the lexeme or category. This new table consists of an ID, a ‘type’ (lexeme or category) and the text of the changelog. I’ve created two changelogs for test purposes: ‘This word was antedated in the second edition’ and ‘This word was postdated in the second edition’. I’ve realised that this structure means only one changelog can be associated with a lexeme, with a new one overwriting the old one. A more robust system would record all of the changelogs that have been applied to a lexeme or category and the dates these were applied, and depending on what Marc and Fraser think I may update the system with an extra joining table that would allow this paper trail to be recorded.
For now I’ve updated two lexemes in category 1 to use the two changelogs for test purposes. I’ve updated the category browser in the front end to add in a ‘2’ in a circle where ‘second edition’ changelog IDs are present. These have tooltips that display the changelog text when hovered over, as the following screenshot demonstrates:
[screenshot: the category browser showing the second edition icons and a tooltip]
I haven’t added these circles to the search results yet or the full timeline visualisations, but it is likely that they will need to appear there too.
I also spent some time working on a new script for Fraser’s Scots Thesaurus project. This script allows a user to select an HT category to bring back all of the words contained in it. It then queries the DSL for each of these words and returns a list of those entries that contain at least two of the category’s words somewhere in the entry text. The script outputs the name of the category that was searched for, a list of returned HT words so you can see exactly what is being searched for, and the DSL entries that feature at least two of the words in a table that contains fields such as source dictionary, parts of speech, a link through to the DSL entry, headword etc. I may have to tweak this further next week, but it seems to be working pretty well.
I spent most of the rest of the week working on the redevelopment of the Anglo-Norman Dictionary. We had a bit of a shock at the start of the week because the entire old site was offline and inaccessible. It turned out that the domain name subscription had expired, and thankfully it was possible to renew it and the site became available again. I spent a lot of time this week continuing to work on the entry page, trying to untangle the existing XSLT script and work out how to apply the necessary rules to the editors’ version of the XML, which differs from the system version of the XML that was generated by an incomprehensible and undocumented series of processes in the old system.
I started off with the references located within the variant forms. In the existing site these link through to source texts, with information appearing in a pop-up when the reference is clicked on. To get these working I needed to figure out where the list of source texts was being stored and also how to make the references appear properly. The Editors’ XML and the System XML differ in structure, and only the latter actually contains the text that appears as the link. So, for example, while the System XML has:
<cit> <bibl siglum="Secr_waterford1" loc="94.787"><i>Secr</i> <sc>waterford</sc><sup>1</sup> 94.787</bibl> </cit>
the Editors’ XML only has:
<varref> <reference><source siglum="Secr_waterford1" target=""><loc>94.787</loc></source></reference> </varref>
This meant that the text to display and its formatting (<i>Secr</i> <sc>waterford</sc><sup>1</sup>) were not available to me. Thankfully I managed to track down an XML file containing the list of texts, which includes this formatting and also all of the information that should appear in the pop-up that is opened when the link is clicked on, e.g.:
<item id="Secr_waterford1" cits="552">
<siglum><i>Secr</i> <span class="sc">WATERFORD</span><sup>1</sup></siglum>
<bibl>Yela Schauwecker, <i>Die Diätetik nach dem ‘Secretum secretorum’ in der Version
von Jofroi de Waterford: Teiledition und lexikalische Untersuchung</i>, Würzburger medizinhistorische Forschungen 92, Würzburg, 2007
<date>c.1300 (text and MS)</date>
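Once that file is available to the stylesheet, resolving a reference becomes a standard key-plus-document() lookup. A minimal sketch, with the filename ‘sigla.xml’ assumed:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- index the external list of texts by the 'id' attribute of each item -->
  <xsl:key name="source-text" match="item" use="@id"/>

  <xsl:template match="source">
    <xsl:variable name="sig" select="@siglum"/>
    <!-- switch context into the sigla file so key() resolves against it -->
    <xsl:for-each select="document('sigla.xml')">
      <!-- copy the formatted siglum (e.g. <i>Secr</i> ...) to use as the link text -->
      <xsl:copy-of select="key('source-text', $sig)/siglum/node()"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>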
I then turned my attention to the cognate references section, where there were also some issues with the Editors’ XML not including information that is in the System XML. The structure of the cognate references in the System XML is like this:
<xr_group type="cognate" linkable="yes"> <xr><ref siglum="FEW" target="90/page/231" loc="9,231b">posse</ref></xr> </xr_group>
Note that there is a ‘target’ attribute that provides a link. The Editors’ XML does not include this information – here’s the same reference:
<FEW_refs siglum="FEW" linkable="yes"> <link_form>posse</link_form><link_loc>9,231b</link_loc> </FEW_refs>
There’s nothing in there that I can use to ascertain the correct link to add in. However, I found a ‘hash’ file called ‘cognate_hash’ which, when extracted, contains a list of cognate references and targets. These don’t include entry identifiers, so I’m not sure how they were connected to entries, but by combining the ‘siglum’ and the ‘loc’ it looks like it might be possible to find the target, e.g.:
<xr_group type="cognate" linkable="yes">
<ref siglum="FEW" target="90/page/231" loc="*9,231b">posse</ref>
I’m not sure why there’s an asterisk, though. I also found another hash file called ‘commentary_hash’, which I assume contains the commentaries that appear in some entries but not in their XML. We’ll probably need to figure out whether we want to properly integrate these with the Editors’ XML as well.
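If the extracted hash contents were wrapped up as an XML lookup file, the same key() approach used for the sigla would work here, combining ‘siglum’ and ‘loc’ and stripping the stray asterisk. A sketch, with the filename ‘cognates.xml’ assumed:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- index the hash file's refs on siglum plus loc, ignoring any asterisks -->
  <xsl:key name="cognate" match="ref" use="concat(@siglum, '|', translate(@loc, '*', ''))"/>

  <xsl:template match="FEW_refs">
    <!-- build the same composite key from the Editors' XML -->
    <xsl:variable name="k" select="concat(@siglum, '|', link_loc)"/>
    <xsl:for-each select="document('cognates.xml')">
      <!-- @target holds the link fragment, e.g. '90/page/231' -->
      <xsl:value-of select="key('cognate', $k)/@target"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>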
I completed work on the ‘cognate references’ section, omitting the links out for now (I’ll add these in later), and then moved on to the ‘summary’ box that contains links through to lower sections of the entry. Unfortunately the ‘sense’ numbers are something else that is not present in any form in the Editors’ XML. In the System XML each sense has a number, e.g. ‘<sense n="1">’, but in the Editors’ XML there is no such number. I spent quite a bit of time trying to increment a counter in XSLT and apply it to each sense, but it turns out you can’t: XSLT variables can’t be reassigned, so the sort of incrementing counter that a ‘for’ loop makes trivial in other languages simply isn’t available.
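For what it’s worth, the usual XSLT substitute is to derive the number from the node’s position rather than from a counter. A minimal sketch, assuming the senses sit as siblings inside a part-of-speech wrapper (so the count naturally starts again for each part of speech):

<xsl:template match="sense">
  <!-- number the sense by counting the sense elements before it at the same level -->
  <xsl:variable name="n" select="count(preceding-sibling::sense) + 1"/>
  <span class="sense-num"><xsl:value-of select="$n"/></span>
  <xsl:apply-templates/>
</xsl:template>

Whether that assumption holds for the real entry structure is another question.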
I still need to add in the non-locution xrefs, labels and some other things, but overall I’m very happy with the progress I’ve made this week. Below is an example of an entry in the old site, followed by the same entry as it currently looks in the new test site I’m working on (be aware that the new interface is only a placeholder). Before: