Week Beginning 18th October 2021

I was back at work this week after having a lovely holiday in Northumberland last week.  I spent quite a bit of time in the early part of the week catching up with emails that had come in whilst I’d been off.  I fixed an issue with Bryony Randall’s https://imprintsarteditingmodernism.glasgow.ac.uk/ site, which was put together by an external contractor but which I have now inherited.  The site menu would not update via the WordPress admin interface, and after a bit of digging around in the source files for the theme it would appear that the theme doesn’t use the WordPress menu system at all: the menu that is editable from the WordPress admin interface is not the menu that’s visible on the public site.  The visible menu is generated in a file called ‘header.php’ and only pulls in pages / posts that have been given one of three specific categories: Commissioned Artworks, Commissioned Text or Contributed Text (which appear as ‘Blogs’).  Any post / page that is given one of these categories automatically appears in the menu; any post / page that is assigned to a different category, or has no assigned category, doesn’t appear.  I added a new category to the ‘header’ file and the missing posts all automatically appeared in the menu.
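
For reference, the relevant part of ‘header.php’ boils down to a category-driven query along these lines. This is only a minimal sketch – the category slugs and markup below are assumptions, and the real theme code differs:

```php
<?php
// Minimal sketch of a category-driven menu like the one in the theme's header.php.
// The category slugs and markup are assumptions, not the theme's actual code.
$menuQuery = new WP_Query(array(
    'category_name'  => 'commissioned-artworks,commissioned-text,contributed-text',
    'posts_per_page' => -1,
));
echo '<ul class="site-menu">';
while ($menuQuery->have_posts()) {
    $menuQuery->the_post();
    echo '<li><a href="' . esc_url(get_permalink()) . '">' . esc_html(get_the_title()) . '</a></li>';
}
echo '</ul>';
wp_reset_postdata();
```

In terms of a sketch like this, adding the missing posts was simply a matter of adding one more slug to that category list.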

I also updated the introductory texts in the mockups for the STAR websites and replied to a query from a student at Newcastle about making a place-names website.  I spoke to Simon Taylor about a talk he’s giving about the place-name database and gave him some information on the database and systems I’d created for the projects I’ve been involved with.  I also spoke to the Iona Place-names people about their conference and getting the website ready for this.

I also had a chat with Luca Guariento about a new project involving the team from the Curious Travellers project.  As this is based in Critical Studies Luca wondered whether I’d write the Data Management Plan for the project and I said I would.  I spent quite a bit of time during the rest of the week reading through the bid documentation, writing lists of questions to ask the PI, emailing the PI, experimenting with different technologies that the project might use and beginning to write the Plan, which I aim to complete next week.

The project is planning on running some pre-digitised images of printed books through an OCR package and I investigated this.  Google maintains a program called Tesseract that it uses to run OCR for Google Books and Google Docs, and it’s freely available (https://opensource.google/projects/tesseract).  It’s also part of Google Docs – if you upload an image of text into Google Drive and then open it in Google Docs the image will be automatically OCRed.  I took a screenshot of one of the Welsh tour pages (https://viewer.library.wales/4690846#?c=&m=&s=&cv=32&manifest=https%3A%2F%2Fdamsssl.llgc.org.uk%2Fiiif%2F2.0%2F4690846%2Fmanifest.json&xywh=-691%2C151%2C4725%2C3632) and cropped the text, then opened it in Google Docs, and even on this relatively low-resolution image the OCR results are pretty decent.  It managed to cope with most (but not all) long ‘s’ characters and there are surprisingly few errors – ‘Englija’ and ‘Lotty’ are a couple of examples, caused by issues with the original print quality.  I’d say using Tesseract is going to be suitable for the project.
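
For batch use outside Google Docs the command-line tool can be scripted; a minimal sketch, assuming the tesseract binary is installed and using a hypothetical filename:

```php
<?php
// Minimal sketch: run the Tesseract CLI over a page image and read back the recognised text.
// Assumes the tesseract binary is installed and on the PATH; the filename is hypothetical.
$image = 'welsh-tour-page-32.png';
shell_exec('tesseract ' . escapeshellarg($image) . ' ocr-output -l eng');
$text = file_get_contents('ocr-output.txt');   // tesseract writes <outputbase>.txt
echo $text;
```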

I spent a bit of time working on the Speak For Yersel project.  We had a team meeting on Thursday to go through in detail how one of the interactive exercises will work.  This one will allow people to listen to a sound clip and then relisten to it, clicking whenever they hear something that identifies the speaker as coming from a particular location.  Before the meeting I’d prepared a document giving an overview of the technical specification of the feature and we had a really useful session discussing the feature and exactly how it should function.  I’m hoping to make a start on a mockup of the feature next week.

Also for the project I’d enquired with Arts IT Support as to whether the University held a license for ArcGIS Online, which can be used to publish maps online.  It turns out that there is a University-wide license for this which is managed by the Geography department and a very helpful guy called Craig MacDonell arranged for me and the other team members to be set up with accounts for it.  I spent a bit of time experimenting with the interface and managed to publish a test heatmap based on data from SCOSYA.  I can’t get it to work directly with the SCOSYA API as it stands, but after exporting and tweaking one of the sets of rating data as a CSV I pretty quickly managed to make a heatmap based on the ratings and publish it: https://glasgow-uni.maps.arcgis.com/apps/instant/interactivelegend/index.html?appid=9e61be6879ec4e3f829417c12b9bfe51 This is just a really simple test, but we’d be able to embed such a map in our website and have it pull in data dynamically from CSVs generated in real-time and hosted on our server.
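
The ‘CSVs generated in real-time’ part could be as simple as a small endpoint on our server that ArcGIS Online points at; a minimal sketch only, with hypothetical database credentials and table / column names:

```php
<?php
// Minimal sketch of an endpoint serving rating data as CSV for ArcGIS Online to consume.
// The database name, credentials and table/column names are all hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=scosya;charset=utf8mb4', 'user', 'password');
header('Content-Type: text/csv');
$out = fopen('php://output', 'w');
fputcsv($out, array('location', 'latitude', 'longitude', 'rating'));
$stmt = $pdo->query('SELECT location, latitude, longitude, rating FROM ratings');
while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
    fputcsv($out, $row);
}
fclose($out);
```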

Also this week I had discussions with the Dictionaries of the Scots Language people about how dates will be handled.  Citation dates are being automatically processed to add in dates as attributes that can then be used for search purposes.  Where there are prefixes such as ‘a’ and ‘c’ the dates are going to be given ranges based on set values for these prefixes.  We had a meeting to discuss the best way to handle this.  Marc had suggested that having a separate prefix attribute rather than hard-coding the resulting ranges would be best.  I agreed with Marc that having a ‘prefix’ attribute would be a good idea, not only because it means we can easily tweak the resulting date ranges at a later point rather than having them hard-coded, but also because it gives us an easy way to identify ‘a’, ‘c’ and ‘?’ dates if we ever want to do this.  If we only have the date ranges as attributes then picking out all ‘c’ dates (e.g. show me all citations that have a date between 1500 and 1600 that are ‘c’) would require looking at the contents of each date tag for the ‘c’ character, which is messier.

A concern was raised that not having the exact dates as attributes would require a lot more computational work for the search function, but I would envisage generating and caching the full date ranges when the data is imported into the API so this wouldn’t be an issue.  However, there is a potential disadvantage to not including the full date range as attributes in the XML, and this is that if you ever want to use the XML files in another system and search the dates through it the full ranges would not be present in the XML so would require processing before they could be used.  But whether the date range is included in the XML or not I’d say it’s important to have the ‘prefix’ as an attribute, unless you’re absolutely sure that being able to easily identify dates that have a particular prefix isn’t important.

We decided that prefixes would be stored as attributes and that the date ranges for dates with a prefix would be generated whenever the data is exported from the DSL’s editing system, meaning editors wouldn’t have to deal with noting the date ranges and all the data would be fully usable without further processing as soon as it’s exported.
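
As a rough illustration of what that export step might do – the offsets per prefix below are purely hypothetical placeholders, not the values that will be agreed with the editors:

```php
<?php
// Minimal sketch of expanding a prefixed citation date into a range at export time.
// The offsets per prefix are hypothetical placeholders, not the DSL's agreed values.
function dateRange(int $year, string $prefix = ''): array {
    switch ($prefix) {
        case 'a':                       // 'ante' – some span before the given year
            return array($year - 50, $year);
        case 'c':                       // 'circa' – a span either side of the year
        case '?':
            return array($year - 25, $year + 25);
        default:
            return array($year, $year); // no prefix – exact year
    }
}
list($from, $to) = dateRange(1510, 'c'); // 1485–1535 with these placeholder offsets
```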

Also this week I was given access to a large number of images of registers from the Advocates Library that had been digitised by the NLS.  I downloaded these, batch processed them to add in the register numbers as a prefix to the filenames, uploaded the images to our server, created register records for each register and page records for each page.  The registers, pages and associated images can all now be accessed via our CMS.
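
The batch renaming itself was straightforward; a minimal sketch of the idea, with a hypothetical directory layout and register number:

```php
<?php
// Minimal sketch: prefix each downloaded image with its register number before upload.
// The directory layout and register number are hypothetical.
$register = 'FR1234';
foreach (glob('downloads/' . $register . '/*.jpg') as $path) {
    rename($path, 'processed/' . $register . '_' . basename($path));
}
```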

My final task of the week was to continue work on the Anglo-Norman Dictionary.  I completed work on the script that identifies which citations have varlists and which may need to have their citation date updated based on one of the forms in the varlist.  The script retrieves all entries that have a <varlist> somewhere in them and grabs all of the forms in the <head> of each entry.  It then goes through every attestation (main sense and subsense plus locution sense and subsense) and picks out each one that has a <varlist> in it.

For each of these it then extracts the <aform> if there is one, or if there’s not then it extracts the final word before the <varlist>.  It runs a Levenshtein test on this ‘aform’ to ascertain how different it is from each of the <head> forms, logging the closest match (0 = exact match of one form, 1 = one character different from one of the forms etc).  It then picks out each <ms_form> in the <varlist> and runs the same Levenshtein test on each of these against all forms in the <head>.

If the score for the ‘aform’ is lower than or equal to the lowest score for an <ms_form> then the output is added to the ‘varlist-aform-ok’ spreadsheet.  If the score for one of the <ms_form> words is lower than the ‘aform’ score the output is added to the ‘varlist-vform-check’ spreadsheet.
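
A minimal sketch of that scoring and routing logic – the variable names are hypothetical stand-ins for values the real script first extracts from the entry XML:

```php
<?php
// Minimal sketch of the scoring step. $headForms, $aform and $msForms are hypothetical
// stand-ins for values the real script extracts from the entry XML.
function bestScore(string $word, array $headForms): int {
    $best = PHP_INT_MAX;
    foreach ($headForms as $form) {
        $best = min($best, levenshtein($word, $form)); // 0 = exact match with a head form
    }
    return $best;
}

$aformScore = bestScore($aform, $headForms);
$msScores   = array();
foreach ($msForms as $msForm) {
    $msScores[$msForm] = bestScore($msForm, $headForms);
}

if ($aformScore <= min($msScores)) {
    // row goes to the 'varlist-aform-ok' spreadsheet
} else {
    // row goes to the 'varlist-vform-check' spreadsheet, listing each var form and its score
}
```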

My hope is that by using the scores we can quickly ascertain which are ok and which need to be looked at by ordering the rows by score and dealing with the lowest scores first.  In the first spreadsheet there are 2187 rows that have a score of 0, meaning the ‘aform’ exactly matches one of the <head> forms; I would imagine that these can safely be ignored.  There are a further 872 that have a score of 1, and we might want to have a quick glance through these to check they can be ignored, but I suspect most will be fine.  The higher the score the greater the likelihood that the ‘aform’ is not the form that should be used for dating purposes and that one of the <varlist> forms should be used instead.  These would need to be checked and potentially updated.

The other spreadsheet contains rows where a <varlist> form has a lower score than the ‘aform’ – i.e. one of the <varlist> forms is closer to one of the <head> forms than the ‘aform’ is.  These are the ones that are more likely to have a date that needs updated. The ‘Var forms’ column lists each var form and its corresponding score.  It is likely that the var form with the lowest score is the form that we would need to pick the date out for.

In terms of what the editors could do with the spreadsheets: my plan was that we’d add an extra column – maybe called ‘update’ – to note whether a row needs updated or not, left blank for rows that they think look ok as they are and containing a ‘Y’ for rows that need to be updated.  For such rows they could manually update the XML column to add in the necessary date attributes.  Then I could process the spreadsheet in order to replace the quotation XML for any attestations that need updating.

For the ‘vform-check’ spreadsheet I could update my script to automatically extract the dates for the lowest-scoring form and attempt to automatically add in the required XML attributes for further manual checking, but I think this task will require quite a lot of manual checking from the outset so it may be best to just manually edit the spreadsheet here too.

Week Beginning 4th October 2021

I spent a fair amount of time on the new ‘Speak for Yersel’ project this week, reading through materials produced by similar projects, looking into ArcGIS Online as a possible tool to use to create the map-based interface and thinking through some of the technical challenges the project will face.  I also participated in a project Zoom call on Thursday where we discussed the approaches we might take and clarified the sorts of outputs the project intends to produce.

I also had further discussions with Sofia from the Iona place-names project about their upcoming conference in December and how the logistics for this might work, as it’s going to be an online-only conference.  I had a Zoom call with Sofia on Thursday to go through these details, which really helped us to shape up a plan.  I also dealt with a request from another project that wants to set up a top-level ‘ac.uk’ domain, which makes three over the past couple of weeks, and made a couple of tweaks to the text of the Decadence and Translation website.

I had a chat with Mike Black about the new server that Arts IT Support are currently setting up for the Anglo-Norman Dictionary and had a chat with Eleanor Lawson about adding around 100 or so Gaelic videos to the Seeing Speech resource on a new dedicated page.

For the Books and Borrowing project I was sent a batch of images of a register from Dumfries Presbytery Library and I needed to batch process them in order to fix the lighting levels and rename them prior to upload.  It took me a little time to figure out how to run a batch process in the ancient version of Photoshop I have.  After much hopeless Googling I found some pages from ‘Photoshop CS2 For Dummies’ on Google Books that discussed Photoshop Actions (see https://books.google.co.uk/books?id=RLOmw2omLwgC&lpg=PA374&dq=&pg=PA332#v=onepage&q&f=false), which made me realise that ‘Actions’, which I’d failed to find in any of the menus, are available via the tabs on the right of the screen, and that I could ‘record’ an action there.  After running the images through the batch I uploaded them to the server and generated the page records for each corresponding page in the register.

I spent the rest of the week working on the Anglo-Norman Dictionary, considering how we might be able to automatically fix entries with erroneous citation dates caused by a varlist being present in the citation with a different date that should be used instead of the main citation date.  I had been wondering whether we could use a Levenshtein test (https://en.wikipedia.org/wiki/Levenshtein_distance) to automatically ascertain which citations may need manual editing, or even as a means of automatically adding in the new tags after testing.  I can already identify all entries that feature a varlist, so I can create a script that can iterate through all citations that have a varlist in each of these entries. If we can assume that the potential form in the main citation always appears as the word directly before the varlist then my script can extract this form and then each <ms_form> in the <varlist>.  I can also extract all forms listed in the <head> of the XML.

So for example for https://anglo-norman.net/entry/babeder my script would extract the term ‘gabez’ from the citation as it is the last word before <varlist>.  It would then extract ‘babedez’ and ‘bauboiez’ from the <varlist>.  There is only one form for this entry: <lemma>babeder</lemma> so this would get extracted too.  The script would then run a Levenshtein test on each possible option, comparing them to the form ‘babeder’, the results of which would be:

gabez: 4

babedez: 1

bauboiez: 4

The script would then pick out ‘babedez’ as the form to use (only one character different to the form ‘babeder’) and would then update the XML to note that the date from this <ms_form> is the one that needs to be used.
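
PHP’s built-in levenshtein() function reproduces these scores, so the comparison step itself is trivial:

```php
<?php
// The babeder example from above, using PHP's built-in levenshtein() function.
foreach (array('gabez', 'babedez', 'bauboiez') as $candidate) {
    echo $candidate . ': ' . levenshtein($candidate, 'babeder') . "\n";
}
// Prints gabez: 4, babedez: 1, bauboiez: 4 – so 'babedez' would be selected.
```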

With a more complicated example such as https://anglo-norman.net/entry/bochet_1, which has multiple forms in <head>, the test would be run against each form and the lowest score for each variant would be used.  So for example for the citation where ‘buchez’ is the last word before the <varlist>, the two <ms_form> words would be extracted (huchez and buistez) and these plus ‘buchez’ would be compared against every form in <head>, with the overall lowest Levenshtein score getting logged.  The overall calculations in this case would be:

buchez: bochet = 2, boket = 4, bouchet = 2, bouket = 4, bucet = 2, buchet = 1, buket = 3, bokés = 5, boketes = 5, bochésç = 6, buchees = 2

huchez: bochet = 3, boket = 5, bouchet = 3, bouket = 5, bucet = 3, buchet = 2, buket = 4, bokés = 6, boketes = 6, bochésç = 7, buchees = 3

buistez: bochet = 5, boket = 5, bouchet = 5, bouket = 5, bucet = 4, buchet = 4, buket = 4, bokés = 6, boketes = 4, bochésç = 8, buchees = 4

This means ‘buchez’ would win with a score of 1, and in this case no <varlist> form would therefore be marked.  If the main citation form and a varlist form both have the same lowest score then I guess we’d set it so that the main citation form ‘wins’, although in such cases the citation could be flagged for manual checking.  However, this algorithm entirely depends on the main citation form being the word before the <varlist> tag, and the editor confirmed that this is not always the case.  Despite this I think the algorithm could correctly identify the majority of cases, and if the output was placed in a CSV it would then be possible for someone to quickly check through each citation and tick off those that should be automatically updated and manually fix the rest.  I made a start on the script that would work through all of the entries and output the CSV during the remainder of the week, but didn’t have the time to finish it.  I’m going to be on holiday next week but will continue with this when I return.

 

Week Beginning 27th September 2021

I had two Zoom calls on Monday this week.  The first was with the Burns people to discuss the launch of the website for the ‘letters and poems’ part of ‘Editing Burns’, to complement the existing ‘Prose and song’ website (https://burnsc21.glasgow.ac.uk/).  The new website will launch in January with some video content and blogs, plus I will be working on a content management system for managing the network of Burns’ letter correspondents, which I will put together some time in November, assuming the team can send me on some sample data by then.  This system will eventually power the ‘Burns letter writing trail’ interactive maps that I’ll create for the new site sometime next year.

My second Zoom call was for the Books and Borrowing project to discuss adding data from a new source to the database.  The call gave us an opportunity to discuss the issues with the data that I’d highlighted last week.  It was good to catch up with the team again and to discuss the issues with the researcher who had originally prepared the spreadsheet containing the data.  We managed to address all of the issues and the researcher is going to spend a bit of time adapting the spreadsheet before sending it to me to be batch uploaded into our system.

I spent some further time this week investigating the issue of some of the citation dates in the Anglo-Norman Dictionary being wrong, as discussed last week.  The issue affects some 4309 entries where at least one citation features the form only in a variant text.  This means that the citation date should not be the date of the manuscript in the citation, but the date of the variant text in which the form appears.  Unfortunately this situation was never flagged in the XML, and there was never any means of flagging it.  The variant date should only ever be used when the form of the word in the main manuscript is not directly related to the entry in question but the form in the variant text is.  The problem is that it cannot be automatically ascertained when the form in the main manuscript is the relevant one and when the form in the variant text is, as there is so much variation in forms.

For example, in the entry https://anglo-norman.net/entry/bochet_1 there is a form ‘buchez’ in a citation and then two variant texts for this where the form is ‘huchez’ and ‘buistez’.  None of these forms are listed in the entry’s XML as variants so it’s not possible for a script to automatically deduce which is the correct date to use (the closest is ‘buchet’).  In this case the main citation form and its corresponding date should be used.  Whereas in the entry https://anglo-norman.net/entry/babeder the main citation form is ‘gabez’ while the variant text has ‘babedez’, and so this is the form and corresponding date that needs to be used.  It would be difficult for a script to automatically deduce this.  In this case a Levenshtein test (which measures how many letters need to be changed to turn one string into another) could work, but the results would still need to be manually checked.

The editor wanted me to focus on those entries where the date issue affects the earliest date for an entry, as these are the most important: the issue results in an incorrect date being displayed for the entry in the header and the browse feature.  I wrote a script that finds all entries that feature ‘<varlist’ somewhere in the XML (the previously exported 4309 entries).  It then goes through all attestations (in all sense, subsense and locution sense and subsense sections) to pick out the one with the earliest date, exactly as the code for publishing an entry does.  It then checks the quotation XML for the attestation with the earliest date for the presence of ‘<varlist’, and if it finds this it outputs information for the entry, consisting of the slug, the earliest date as recorded in the database, the earliest date of the attestation as found by the script, the ID of the attestation and the XML of the quotation.  The script has identified 1549 entries that have a varlist in the earliest citation, all of which will need to be edited.
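
In outline the check is very simple; a minimal sketch, in which getEarliestAttestation() and the field names are hypothetical stand-ins for the real entry-processing code:

```php
<?php
// Minimal sketch of the earliest-date check described above. getEarliestAttestation()
// and the array keys are hypothetical stand-ins for the real entry-processing code.
$csv = fopen('earliest-date-varlists.csv', 'w');
foreach ($entriesWithVarlist as $entry) {
    $att = getEarliestAttestation($entry['xml']); // mirrors the publication code's date logic
    if (strpos($att['quotation_xml'], '<varlist') !== false) {
        fputcsv($csv, array($entry['slug'], $entry['earliest_date'],
                            $att['date'], $att['id'], $att['quotation_xml']));
    }
}
fclose($csv);
```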

However, every citation has a date associated with it and this is used in the advanced search where users have the option to limit their search to years based on the citation date.  Only updating citations that affect the entry’s earliest date won’t fix this, as there will still be many citations with varlists that haven’t been updated and will still therefore use the wrong date in the search.  Plus any future reordering of citations would require all citations with varlists to be updated to get entries in the correct order.  Fixing the earliest citations with varlists in entries based on the output of my script will fix the earliest date as used in the header of the entry and the ‘browse’ feature only, but I guess that’s a start.

Also this week I sorted out some access issues for the RNSN site, submitted the request for a new top-level ‘ac.uk’ domain for the STAR project and spent some time discussing the possibilities for managing access to videos of the conference sessions for the Iona place-names project.  I also updated the page about the Scots Dictionary for Schools app on the DSL website (https://dsl.ac.uk/our-publications/scots-dictionary-for-schools-app/) after it won the award for ‘Scots project of the year’.

I also spent a bit of time this week learning about the statistical package R (https://www.r-project.org/).  I downloaded and installed the package and the R Studio GUI and spent some time going through a number of tutorials and examples in the hope that this might help with the ‘Speak for Yersel’ project.

For a few years now I’ve been meaning to investigate using a spider / radar chart for the Historical Thesaurus, but I never found the time.  I unexpectedly found myself with some free time this week due to ‘Speak for Yersel’ not needing anything from me yet so I thought I’d do some investigation.  I found a nice looking d3.js template for spider / radar charts here: http://bl.ocks.org/nbremer/21746a9668ffdf6d8242  and set about reworking it with some HT data.

My idea was to use the chart to visualise the distribution of words in one or more HT categories across different parts of speech in order to quickly ascertain the relative distribution and frequency of words.  I wanted to get an overall picture of the makeup of the categories initially, but to then break this down into different time periods to understand how categories changed over time.

As an initial test I chose the categories 02.04.13 Love and 02.04.14 Hatred, and in this initial version I looked only at the specific contents of the categories – no subcategories and no child categories.  I manually extracted counts of the words across the various parts of speech and then manually split them up into words that were active in four broad time periods: OE (up to 1149), ME (1150-1449), EModE (1450-1799) and ModE (1800 onwards) and then plotted them on the spider / radar chart, as you can see in this screenshot:

You can quickly move through the different time periods plus the overall picture using the buttons above the visualisation, and I think the visualisation does a pretty good job of giving you a quick and easy to understand impression of how the two categories compare and evolve over time, allowing you to see, for example, how the number of nouns and adverbs for love and hate are pretty similar in OE:

but by ModE the number of nouns for Love has dropped dramatically, as has the number of adverbs for Hate:

We are of course dealing with small numbers of words here, but even so it’s much easier to use the visualisation to compare different categories and parts of speech than it is to use the HT’s browse interface.  Plus if such a visualisation was set up to incorporate all words in child categories and / or subcategories it could give a very useful overview of the makeup of different sections of the HT and how they develop over time.

There are some potential pitfalls to this visualisation approach, however.  The scale used currently changes based on the largest word count in the chosen period, meaning unless you’re paying attention you might get the wrong impression of the number of words.  I could change it so that the scale is always fixed at the largest value, but that would then make it harder to make out details in periods that have far fewer words.  Also, I suspect most categories are going to have many more nouns than other parts of speech, and a large spike of nouns can make it harder to see what’s going on with the other axes.  Another thing to note is that the order of the axes is fairly arbitrary but can have a major impact on how someone may interpret the visualisation.  If you look at the OE chart the ‘Hate’ area looks massive compared to the ‘Love’ area, but this is purely because there is only one ‘Love’ adjective compared to 5 for ‘Hate’.  If the adverb axis had come after the noun one instead, the shapes of ‘Love’ and ‘Hate’ would have been more similar.  You don’t necessarily appreciate at first glance that ‘Love’ and ‘Hate’ have very similar numbers of nouns in OE, which is concerning.  However, I think the visualisations have potential for the HT and I’ve emailed the other HT people to see what they think.

 

Week Beginning 20th September 2021

This was a four-day week for me as I’d taken Friday off.  I went into my office at the University on Tuesday to have my Performance and Development Review with my line-manager Marc Alexander.  It was the first time I’d been at the University since before the summer and it felt really different to the last time – much busier and more back to normal, with lots of people in the building and a real bustle to the West End.  My PDR session was very positive and it was great to actually meet a colleague in person again – the first time I’d done so since the first lockdown began.  I spent the rest of the day trying to get my office PC up to date after months of inaction.  One of the STELLA apps (the Grammar one) had stopped working on iOS devices, seemingly because it was still a 32-bit app, and I wanted to generate a new version of it.  This meant upgrading MacOS on my dual-boot PC, which I hadn’t used for years and was very out of date.  I’m still not actually sure whether the Mac I’ve got will support a version of MacOS that will allow me to engage in app development, as I need to incrementally upgrade the MacOS version, which takes quite some time, and by the end of the day there were still further updates required.  I’ll need to continue with this another time.

I spent quite a bit of the remainder of the week working on the new ‘Speak for Yersel’ project.  We had a team meeting on Monday and a follow-up meeting on Wednesday with one of the researchers involved in the Manchester Voices project (https://www.manchestervoices.org/) who very helpfully showed us some of the data collection apps they use and some of the maps that they generate.  It gave us a lot to think about, which was great.  I spent some further time looking through other online map examples, such as the New York Times dialect quiz (https://www.nytimes.com/interactive/2014/upshot/dialect-quiz-map.html) and researching how we might generate the maps we’d like to see.  It’s going to take quite a bit more research to figure out how all of this is going to work.

Also this week I spoke to the Iona place-names people about how their conference in December might be moved online and fixed a permissions issue with the Imprints of New Modernist Editing website and discussed the domain name for the STAR project with Eleanor Lawson.  I also had a chat with Luca Guariento about the restrictions we have on using technologies on the servers in the College of Arts and how this might be addressed.

I also received a spreadsheet of borrowing records covering five registers for the Books and Borrowing project and went through it to figure out how the data might be integrated with our system.  The biggest issue is figuring out which page each record is on.  In the B&B system each borrowing record must ‘belong’ to a page, which in turn ‘belongs’ to a register.  If a borrowing record has no page it can’t exist in the system.  In this new data only three registers have a ‘Page No.’ column and not every record in these registers has a value in this column.  We’ll need to figure out what can be done about this, because as I say, having a page is mandatory in the B&B system.  We could use the ‘photo’ column as this is present across all registers and every row.  However, I noticed that there are multiple photos per page, e.g. for SL137144 page 2 has 2 photos (4538 and 4539) so photo IDs don’t have a 1:1 relationship with pages.  If we can think of a way to address the page issue then I should be able to import the data.

Finally, I continued to work on the Anglo-Norman Dictionary project, fixing some issues relating to yoghs in the entries and researching a potentially large issue relating to the extraction of earliest citation dates.  Apparently there are a number of cases where the date that should be used for a citation is not the date as coded in the date section of the citation’s XML, but should instead be a date taken from a manuscript containing a variant form within the citation.  The problem is there is no flag to state when this situation occurs; instead it applies whenever the form of the word in the main citation is markedly different from the entry’s forms but the form in the variant text is similar.  It seems unlikely that an automated script would be able to ascertain when to use the variant date as there is just so much variation between the forms.  This will need some further investigation, which I hope to be able to do next week.

Week Beginning 13th September 2021

This week I attended a team meeting for the STAR project (via Teams) where we discussed how the project is progressing in these early weeks, including the various mockups I’d made for the main and academic sites.  We decided to mix and match various elements from the different mockups and I spent a bit of time making a further two possibilities based on our discussions.  I also had an email conversation with Jennifer Smith and E Jamieson about some tweaks to how data are accessed in the Scots Syntax Atlas, but during our discussions it turned out that what they wanted was already possible with the existing systems, so I didn’t need to do anything, which was great.  I also had a chat with Marc Alexander about my PDR, which I will be having next week – in person, which will be the first time I’ve seen anyone from work in the flesh since the first lockdown began.  Also this week I read through all of the documentation for the ‘Speak for Yersel’ project, which begins this month.  We have a project meeting via Zoom next Monday, and I’ll be developing the systems and website for the project later on this year.

I spent the rest of the week continuing to work on the Anglo-Norman Dictionary site.  There was an issue with the server during the week and it turned out that database requests from the site to the AND API were being blocked at server level.  I had to speak to Arts IT Support about this and thankfully all was reinstated.  It also transpired that the new server we’d ordered for the AND had been delivered in August and I had to do some investigation to figure out what had happened to it.  Hopefully Arts IT Support will be able to set it up in the next couple of weeks and we’ll be able to migrate the AND site and all of its systems over to the new hardware soon after.  This should hopefully help with both the stability of the site and its performance.

I also made a number of updates to both the systems and the content of the Textbase this week, based on feedback I received last week.  Geert wanted one of the texts to be split into two individual texts so that they aligned with the entries in the bibliography, and it took a bit of time to separate them out.  This required splitting the XML files and also updating all of the page records and search data relating to the pages.  Upon completion it all seems to be working fine, both for viewing and searching.  I also updated a number of the titles of the Textbase texts, removed the DEAF link sections from the various Textbase pages and ensured that the ‘siglum’ (the link through to the AND bibliography with an abbreviated form of text’s title) appeared in both the list of texts within the search form and in the text names that appear in the search results.  I also changed all occurrences of ‘AND source’ to ‘AND bibliography’ for consistency and removed one of the texts that had somehow been added to the Textbase when it shouldn’t have (probably because someone many moons ago had uploaded its XML file to the text directory temporarily and had then forgotten to remove it).

Week Beginning 6th September 2021

I spent more than a day this week preparing my performance and development review form.  It’s the first time there’s been a PDR since before covid and it took some time to prepare everything.  Thankfully this blog provides a good record of everything I’ve done so I could base my form almost entirely on the material found here, which helped considerably.

Also this week I investigated and fixed an issue with the SCOTS corpus for Wendy Anderson.  One of the transcriptions of two speakers had the speaker IDs the wrong way round compared to the IDs in the metadata.  This was slightly complicated to sort out as I wasn’t sure whether it was better to change the participant metadata to match the IDs used in the text or vice-versa.  It turned out to be very difficult to change the IDs in the metadata as they are used to link numerous tables in the database, so instead I updated the text that’s displayed.  Rather strangely, the ‘download plain text’ file contained different incorrect IDs.  I fixed this as well, but it does make me worry that the IDs might be off in other plain text transcriptions too.  However, I looked at a couple of others and they seem ok, so perhaps it’s an isolated case.

I was contacted this week by a lecturer in English Literature who is intending to put a proposal together for a project to transcribe an author’s correspondence, and I spent some time writing a lengthy email with some helpful advice.  I also spoke to Jennifer Smith about her ‘Speak for Yersel’ project that’s starting this month, and we arranged to have a meeting the week after next.  I also spent quite a bit of time continuing to work on mockups for the STAR project’s websites based on feedback I’d received on the mockups I completed last week.  I created another four mockups with different colours, fonts and layouts, which should give the team plenty of options to choose from.  I also received more than a thousand new page images of library registers for the Books and Borrowing project and processed these and uploaded them to the server.  I’ll need to generate page records for them next week.

Finally, I continued to make updates to the Textbase search facilities for the Anglo-Norman Dictionary.  I updated the genre headings to make them bigger and bolder, with more of a gap between each heading and the preceding items.  I also added a larger indent to the items within a genre and reordered the genres based on a new suggested order.  For each book I included the siglum as a link through to the book’s entry on the bibliography page, and in the search results, where a result’s page identifier has an underscore in it, the reference now displays volume and page number (e.g. 3_801 displays as ‘Volume 3, page 801’).  I updated the textbase text page so that page dividers in the continuous text also display volume and page in such cases.
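
The reference formatting is just a simple string split; a minimal sketch of the idea (not the production code):

```php
<?php
// Minimal sketch of turning a page identifier like '3_801' into a readable reference.
function formatReference(string $page): string {
    if (strpos($page, '_') !== false) {
        list($volume, $pageNo) = explode('_', $page, 2);
        return 'Volume ' . $volume . ', page ' . $pageNo;
    }
    return 'Page ' . $page;
}
echo formatReference('3_801'); // Volume 3, page 801
```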

Highlighted terms in the textbase text page no longer have padding around them (which was causing what looked like spaces when the term appears mid-word).  The text highlighting is unfortunately a bit of a blunt instrument, as one of the editors discovered by searching for the terms ‘le’ and ‘fable’: term 1 is located and highlighted first, then term 2 is.  In this example the first term is ‘le’ and the second term is ‘fable’, therefore the ‘le’ in ‘fable’ is highlighted during the first sweep, and then ‘fable’ itself isn’t highlighted as it has already been changed to have the markup for the ‘le’ highlighting added to it and no longer matches ‘fable’.  Also, ‘le’ is matching some HTML tags buried in the text (‘style’), which is then breaking the HTML, which is why some HTML is getting displayed.  I’m not sure much can be done about any of this without a massive reworking of things, but it’s only an issue when searching for things like ‘le’ rather than actual content words, so hopefully it’s not such a big deal.
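
A minimal illustration of that first problem, assuming the highlighter simply runs one sequential replacement per term (which is not necessarily exactly how the real code works):

```php
<?php
// Illustration of the 'le' / 'fable' highlighting problem with naive sequential replacement.
$text = 'la fable le dit';
$text = str_ireplace('le', '<span class="hl">le</span>', $text);
// The 'le' inside 'fable' is now wrapped in markup...
$text = str_ireplace('fable', '<span class="hl">fable</span>', $text);
// ...so this second replacement finds nothing: 'fable' never gets highlighted.
echo $text;
```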

The editor also wondered whether it would be possible to add in an option for searching and viewing multiple terms all together, but this would require me to rework the entire search and it’s not something I want to tackle if I can avoid it.  If a user wants to view the search results for different terms they can select two terms then open the full results in a new tab, repeating the process for each pair of terms they’re interested in, switching from tab to tab as required.  Next week I’ll need to rename some of the textbase texts and split one of the texts into two separate texts, which is going to require me to regenerate the entire dataset.

Week Beginning 30th August 2021

This week I completed work on the proximity search of the Anglo-Norman textbase.  Thankfully the performance issues I’d feared might crop up haven’t occurred at all.  The proximity search allows you to search for term 1 up to 10 words to the left or right of term 2 using ‘after’ or ‘before’.  If you select ‘after or before’ then (as you might expect) the search looks 10 words in each direction.  This ties in nicely with the KWIC display, which shows 10 words either side of your term.  As mentioned last week, unless you search for exact terms (surrounded by double quotes) you’ll reach an intermediary page that lists all possible matching forms for terms 1 and 2.  Select one of each and you can press the ‘Continue’ button to perform the actual search.  What this does is find all occurrences of term 2 (term 2 is the fixed anchor point; it’s term 1 that can be variable in position), then for each occurrence it checks the necessary words before or after (or before and after) the term for the presence of term 1.  When generating the search words I generated and stored the position each word appears at on the page, which made it relatively easy to pinpoint nearby words.  What is trickier is dealing with words near the beginning or the end of a page, as in such cases the next or previous page must also be looked at.  I hadn’t previously generated a total count of the number of words on a page, which was needed to ascertain whether a word was close to the end of the page, so I ran a script that generated and stored the word count for each page.  The search seems to be working as it should for words near the beginning and end of a page.
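
The core of the neighbour check is a query on the stored word positions; a minimal sketch, assuming an open PDO connection and hypothetical table / column names, and ignoring both the before/after option and the page-boundary case described above:

```php
<?php
// Minimal sketch of checking whether $term1 occurs within 10 words of one occurrence
// of term 2. $pdo is an open connection; table and column names are hypothetical.
// The real search also honours the before/after option and crosses page boundaries.
$stmt = $pdo->prepare(
    'SELECT word_stripped FROM textbase_words
     WHERE page_id = ? AND word_order BETWEEN ? AND ? AND word_order != ?'
);
$stmt->execute(array($pageId, $wordOrder - 10, $wordOrder + 10, $wordOrder));
$neighbours = $stmt->fetchAll(PDO::FETCH_COLUMN);
$isMatch = in_array($term1, $neighbours);
```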

The results page is displayed in the same way as the regular search, complete with KWIC and sorting options.  Both terms 1 and 2 are bold, and if you sort the results the relevant numbered word left or right of term 2 is highlighted, as with the regular search.  When you click through to the actual text all occurrences of both term 1 and term 2 are highlighted (not just those in close proximity), but the page centres on the part of the text that meets the criteria, so hopefully this isn’t a problem – it is quite useful to see other occurrences of the terms after all.  There are still some tweaks I need to make to the search based on feedback I received during the week, and I’ll look at these next week, but on the whole the search facility (and the textbase facility in general) is just about ready to launch, which is great as it’s the last big publicly facing feature of the AND that I needed to develop.

Also this week I spent some time working on the Books and Borrowing project.  I created a new user account for someone who will be working for the project and I also received the digitised images for another library register, this time from the NLS.  I downloaded these and then uploaded them to the server, associating the images with the page records that were already in the system.  The process was a little more complicated and time consuming than I’d anticipated as the register has several blank pages in it that are not in our records but have been digitised.  Therefore the number of page images didn’t match up with the number of pages, plus page images were getting associated with the wrong page.  I had to manually look through the page images and delete the blanks, but I was still off by one image.  I then had to manually check through the contents of the images to compare them with the transcribed text to see where the missing image should have gone.  Thankfully I managed to track it down and reinstate it (it had one very faint record on it, which I hadn’t noticed when viewing and deleting blank thumbnails).  With that in place all images and page records aligned and I could make the associations in the database.  I also sent Gerry McKeever the zipped up images (several gigabytes) for a couple of the St Andrews registers as he prefers to have the complete set when working on the transcriptions.

I had a meeting with Gerry Carruthers and Pauline McKay this week to discuss further developments of the ‘phase 2’ Burns website, which they are hoping to launch in the new year, and also to discuss the hosting of the Scottish theatre studies journal that Gerry is sorting out.

I spent the rest of the week working on mockups for the two websites for the STAR speech and language therapy project.  Firstly there’s the academic site, which is going to sit alongside Seeing Speech and Dynamic Dialects, and as such it should have the same interface as these sites.  Therefore I’ve made a site that is pretty much identical in terms of the overall theme.  I added in a new ‘site tab’ that sits at the top of the page and added in the temporary logo as a site logo and favicon (the latter may need a dark background to make it stand out).  I created menu items for all of the items in Eleanor Lawson’s original mockup image.  These all work – leading to empty pages for now – and I added the star logo to the ‘Star in-clinic’ menu item as in the mockup too.  In the footer I made a couple of tweaks to the layout – the logos are all centre-aligned and have a white border.  I added in the logo for Strathclyde and have only included the ESRC logo, but can add others in if required.  The actual content of the homepage is identical to Seeing Speech for now – I haven’t changed any images or text.

For the clinic website I’ve taken Eleanor’s mockup as a starting point again and have so far made two variations.  I will probably work on at least one more different version (with multiple variations) next week.  I haven’t added the ‘site tabs’ to either version as I didn’t want to clutter things up, and I’m imagining that there will be a link somewhere to the STAR academic site for those that want it, and from there people would be able to find Seeing Speech and Dynamic Dialects.  The first version of the mockup has a top-level menu bar (we will need such a menu listing the pages the site features, otherwise people may get confused) and the main body of the page is blue, as in the mockup.  I used the same logo, and the font for the header is this Google font: https://fonts.google.com/?query=rampart+one&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  Other headers on the page use this font: https://fonts.google.com/specimen/Annie+Use+Your+Telescope?query=annie&preview.text=STAR%20Speech%20and%20Language%20Therapy&preview.text_type=custom.  I added in a thick dashed border under the header.  The intro text is just some text I’ve taken from one of the Seeing Speech pages, and the images are still currently just the ones in the mockup.  Hovering over an image causes the same dashed border to appear.  The footer is a kind of pink colour, which is supposed to suggest those blue and pink rubbers you used to get in schools.

The second version uses the ‘Rampart One’ font just for ‘STAR’ in the header, with the other font used for the rest of the text.  The menu bar is moved to underneath the header and the dashed line is gone.  The main body of the page is white rather than continuing the blue of the header, and ‘Rampart One’ is used for the in-page headers.  The images now have rounded edges, as do the text blocks in the images.  Hovering over an image brings up a red border, the same shade as used in the active menu item.  The pink footer has been replaced with the blue from the navbar.  Both versions are ‘responsive’ and work on all screen sizes.

I’ll be continuing to work on the mockups next week.

Week Beginning 23rd August 2021

This week I completed work on a first version of the textbase search facilities for the Anglo-Norman Dictionary.  I’ve been working on this over the past three weeks and it’s now fully operational, quick to use and does everything that was required of it.  I completed work on the KWIC ordering facilities, adding in a drop-down list that enables the user to order the results either by the term or any word to the left or right of the term.  When results are ordered by a word to the left or right of the search term that word is given a yellow highlight so you can easily get your eye on the word that each result is being ordered by.  I ran into a few difficulties with the ordering, for example accented initial characters were being sorted after ‘z’, and upper case characters were all sorted before lower case characters, but I’ve fixed these issues.  I also updated the textbase page so that when you load a text from the results a link back to the search results appears at the top of the page.  You can of course just use the ‘back’ button to return to the search results. Also, all occurrences of the search term throughout the text are highlighted in yellow.  There are possibly some further enhancements that could be made here (e.g. we could have a box that hovers on the screen like the ‘Top’ button that contains a summary of your search and a link back to the results, or options to load the next or previous result) but I’ll leave things as they are for now as what’s there might be good enough.  I also fixed some bugs that were cropping up, such as an exact search term not appearing in the search box when you return to refine your results (caused by double quotes needing to be changed to the code ‘%22’).
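
For the record, one way to get that kind of accent- and case-aware ordering is the intl extension’s Collator class, along these lines – a sketch only, as the actual fix may be implemented differently, and the locale and array structure are assumptions:

```php
<?php
// Sketch of locale-aware ordering using the intl extension's Collator class.
// Assumes the intl extension is available; the 'fr' locale and array keys are assumptions.
$collator = new Collator('fr');
usort($results, function ($a, $b) use ($collator) {
    return $collator->compare($a['sort_word'], $b['sort_word']);
});
// Accented characters now sort next to their base letters rather than after 'z',
// and upper- and lower-case forms are no longer separated into two blocks.
```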

I then began thinking about the development of a proximity search for the textbase.  As with the old site, this will allow the user to enter two search terms and specify the maximum number of words before or after the first term within which the second one appears.  The results will then be displayed in a KWIC form with both terms highlighted.  It took quite some time to think through the various possibilities for this feature.  The simplest option from a technical point of view would be to process the first term as with the regular search, retrieve the KWIC for each result and then search this for the second term.  However, this wouldn’t allow the user to search for an exact match for the second term, or use wildcards, as the KWIC only contains the full text as written, complete with punctuation.  Instead I decided to make the proximity search as similar to and as consistent with the regular textbase search as possible.  This means the user will be able to enter the two terms with wildcards and two lists of possible exact matches will be displayed, from which the user can select term 1 and term 2.  At this point the exact matches for term 1 will be returned and in each case a search will be performed to see whether term 2 is found within the specified number of words before or after term 1.  This will rely on the ‘word order’ column that I already added to the database, but will involve some complications when term 1 is near the very start or end of a page (as the search will then need to look at the preceding or following page).  I ran a few tests of this process directly via the database and it seemed to work ok, but I’ll just need to see whether there are any speed issues when running such queries on potentially thousands of results.

With this possible method in place I began working on a new version of the textbase search page that will provide both the regular concordance search and the new proximity search.  As with the advanced search on the AND website, these will be presented on one page in separate tabs, and this required much reworking of the existing page and processing scripts.  I had to rework HTML elements that previously used IDs, as they would need to be replicated in each tab and would therefore no longer be unique.  This meant some major reworking of the genre and book selection options, both in the HTML and in the JavaScript that handles the selection and deselection.  I also had to ensure that the session variables relating to the search could handle multiple types of search and that links would return the user to the correct type of search.  By the end of the week I had got a search form for the proximity search in place, with facilities to limit the search to specific texts or genres and options to enter two terms, the maximum number of words between the terms and whether term 1 should appear before or after term 2 (or either).  Next week I’ll need to update the API to provide the endpoint to actually run such a search.

Also this week I had an email from Bryony Randall about her upcoming exhibition for her New Modernist Editing project.  The exhibition will feature a live website (https://www.blueandgreenproject.com/) running on a tablet in the venue and Bryony was worried that the wifi at the venue wouldn’t be up to scratch.  She asked whether I could create a version of the site that would run locally without an internet connection, and I spent some time working on this.

Looking at the source of the website it would appear to have been constructed using the online website creation platform https://www.wix.com/.  I’d never used this before, but it will have an admin interface where you can create and manage pages and such things.  The resulting website is integrated with the online Wix platform and (after a bit of Googling) it looked like there isn’t a straightforward way to export pages created using Wix for use elsewhere.  However, the site only consisted of 20 or so static pages (i.e. no interactive elements other than links to other pages) so I thought it would be possible to just save each page as HTML, go through each of the files and update the links, and then the resulting pages could potentially run directly in a browser.  However, after trying this out I realised that there were some issues.  Looking at the source there are numerous references to externally hosted scripts and files, such as JavaScript files, fonts and images, that were not downloaded when the webpage was saved, and these would all be inaccessible if the internet connection was lost, which would likely result in a broken website.  I also realised that the HTML generated by Wix is a pretty horrible tangled mess, and getting this to work nicely would take a lot of work.  I therefore decided to just create a replica of the site from scratch using Bootstrap.

However, it was only after this point that I was informed that the local site would need to run on a tablet rather than a full PC.  The tablet is an Android one, which seriously complicates matters as, unlike a proper computer, Android imposes restrictions on what you can and can’t do, and one of the things you can’t do is run locally hosted websites in the browser.  I tried several approaches to get my test site working on my Android phone, but with all of the straightforward methods I could get the HTML file to load into the browser but not any associated files – no images, stylesheets or JavaScript – which is obviously not acceptable.  I did manage to get it to work, but only by using an app that runs a server on the device and by using absolute file references to the IP address the server app uses in the files (relative file references just did not work).  The app I used was called Simple HTTP Server (https://play.google.com/store/apps/details?id=jp.ubi.common.http.server) and once configured it worked pretty well.

I continued to work on my replica of the site, getting all of the content transferred over.  This took longer than I anticipated, as some of the pages are quite complicated (artworks including poetry, images, text and audio) but I managed to get everything done before the end of the week.  In the end it turned out that the wifi at the venue was absolutely fine so my replica site wasn’t needed, but it was still a good opportunity to learn about hosting a site on an Android device and to hone my Bootstrap skills.

Also this week I helped Katie Halsey of the Books and Borrowing project with a query about access to images, had a look through the final version of Kirsteen McCue’s AHRC proposal and spoke to Eleanor Lawson about creating some mockups of the interface to the STAR project websites, which I will start on next week.

Week Beginning 16th August 2021

I continued to work on the new textbase search facilities for the Anglo-Norman Dictionary this week.  I completed work on the required endpoints for the API, creating the facilities that process a search term (with optional wildcards), limit the search to selected books and/or genres and return either full search results in the case of an exact search for a term or a list of possible matching terms and the number of occurrences of each term.  I then worked on the front-end to enable a query to be processed and submitted to the API based on the choices made by the user.

By default any text entered will match any term that contains the text – e.g. enter ‘jour’ (without quotes) and you’ll find all forms containing the characters ‘jour’ anywhere, e.g. ‘adjourner’, ‘journ’.  If you want to do an exact match you have to use double quotes – “jour”.  You can also use an asterisk at the beginning or end to match forms starting or ending with the term (‘jour*’ and ‘*jour’), while an asterisk at both ends (‘*jour*’) will only find forms that contain the term somewhere in the middle.  You can also use a question mark wildcard to denote any single character, e.g. ‘am?n*’ will find words beginning ‘aman’, ‘amen’ etc.
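
Behind the scenes this sort of syntax maps fairly directly onto SQL LIKE patterns; a minimal sketch of one way the translation could be done (not necessarily how the real endpoint implements it):

```php
<?php
// Minimal sketch of translating the wildcard syntax into a SQL LIKE pattern.
// Not necessarily how the real API endpoint implements it.
function wildcardToLike(string $term): string {
    if (preg_match('/^"(.+)"$/', $term, $m)) {
        return $m[1];                                      // "jour" – exact form
    }
    $pattern = str_replace(array('*', '?'), array('%', '_'), $term);
    if (strpos($term, '*') === false) {
        $pattern = '%' . $pattern . '%';                   // bare 'jour' – contains
    }
    return $pattern;
}
// 'jour*' => 'jour%', '*jour' => '%jour', 'am?n*' => 'am_n%'
// ...for use in a query like: WHERE word_stripped LIKE :pattern
```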

If the term you’ve entered matches multiple forms in your selected books / genres then an intermediary page is displayed, bringing up a list of matching forms and a count of the number of times each form appears.  This is the same as how the ‘translation’ advanced search works, for example, and I wanted to maintain a consistent way of doing things across the site.  Select a specific form and the actual occurrences of that form in the texts will appear.  Above this list is a ‘Select another form’ button that returns you to the intermediary page.  If your search only brings back one form the intermediary page is skipped, and as all selection options appear in the URL it’s possible to bookmark / cite the search results too.

Whilst working on this I realised that I’d need to regenerate the data, as it became clear that many words had been erroneously joined together due to there being no space between words when one tag is closed and a following one is opened.  When the tags are then stripped out the forms get squashed together, which has led to some crazy forms such as ‘amendeezreamendezremaundez’.  Previously I’d not added spaces between tags as I was thinking that a space would have to be added before every closing tag (e.g. ‘</’ becomes ‘ </’) and this would potentially mess up words that have tags in them, such as superscript tags in names like McDonald.  However, I realised I could instead do a find and replace to add spaces between a closing tag and an opening tag (‘><’ becomes ‘> <’), which would not mess up individual tags within words and wouldn’t have any further implications, as I strip out all additional spaces when processing the texts for search purposes anyway.
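
The fix itself is essentially a one-line find and replace before the tags are stripped; roughly (the sample string is purely illustrative):

```php
<?php
// Rough sketch of the clean-up: separate adjacent tags, strip the markup,
// then collapse any extra whitespace. The sample string is purely illustrative.
$xml  = '<item>amendeez</item><item>reamendez</item><item>remaundez</item>';
$xml  = str_replace('><', '> <', $xml);
$text = trim(preg_replace('/\s+/', ' ', strip_tags($xml)));
echo $text; // amendeez reamendez remaundez
```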

I also decided that I should generate the ‘key-word in context’ (KWIC) for each word and store this in the database.  I was going to generate this on the fly every time a search results page was displayed, but it seems more efficient to generate and store it once rather than do it every time.  I therefore updated my data processing script to generate the KWIC for each of the 3.5 million words as they were extracted from the texts.  This took some time to both implement and execute.  I decided to pull out the 10 words on either side of the term, using the ‘word order’ column that gets generated as each page is processed.  Some complications were introduced in cases where the term falls within the first ten words of the page or there are fewer than ten words after it on the page.  In such cases the script needed to look at the page before or after the current page in order to fill out the KWIC with the appropriate words from those adjacent pages.
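As a sketch of the page-boundary handling – again Python rather than the actual script, and assuming the words for each page are already available as ordered lists:

    def kwic(pages, page_idx, word_idx, span=10):
        # Pull out 'span' words either side of the hit, spilling over onto the
        # previous / next page when the hit is too close to a page boundary.
        before = pages[page_idx - 1] if page_idx > 0 else []
        after = pages[page_idx + 1] if page_idx + 1 < len(pages) else []
        words = before + pages[page_idx] + after
        pos = len(before) + word_idx
        left = words[max(0, pos - span):pos]
        right = words[pos + 1:pos + 1 + span]
        return ' '.join(left), words[pos], ' '.join(right)

    pages = [['si', 'vint', 'a'], ['la', 'curt', 'le', 'jour', 'de'], ['seint', 'martin']]
    print(kwic(pages, 1, 3, span=10))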

With the updates to data processing in place and a fair bit of testing of the KWIC facility carried out, I re-ran my scripts to regenerate the data and all looked good.  However, after inserting the KWIC data the querying of the tables slowed to a crawl.  On my local PC queries which were previously taking 0.5 seconds were taking more than 10 seconds, while on the server execution time was almost 30 seconds.  It was really baffling, as the only difference was that the search words table now had two additional fields (KWIC left and KWIC right), neither of which was being queried or returned by the query.  It seemed really strange that adding new columns could have such an effect when they were not even being used.  I had to spend quite a bit of time investigating this, including looking at MySQL settings such as key buffer size and trying again to change storage engines, switching from MyISAM to InnoDB and back again to see what was going on.  Eventually I looked again at the indexes I’d created for the table and decided to delete them and start over, in case this somehow jump-started the search speed.  I previously had the ‘word stripped’ column indexed in a multiple-column index together with page ID and word type (either main page or textual apparatus).  Instead I created an index on the ‘word stripped’ column on its own, and this immediately boosted performance.  Queries that were previously taking close to 30 seconds to execute on the server were now taking less than a second.  It was such a relief to have figured out what the issue was, as I had been considering whether my whole approach would need to be dropped and replaced by something completely different.
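The change itself boils down to a couple of statements along these lines (the index, table and column names here are just placeholders for the real ones):

    # Drop the old multiple-column index and replace it with a single-column one.
    statements = [
        "DROP INDEX idx_word_page_type ON words",
        "CREATE INDEX idx_word_stripped ON words (word_stripped)",
    ]
    for statement in statements:
        print(statement)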

As I now had a useable search facility I continued to develop the front-end that would use it.  Previously the exact match for a term was bringing up just the term in question and a link through to the page the term appeared on, but now I could begin to incorporate the KWIC text too.  My initial idea was to use a tabular layout, with each word of the KWIC in a different column and a clickable table heading that would allow the data to be ordered by any of the columns (e.g. order the data alphabetically by the first word to the left of the term).  However, after creating such a facility I realised it didn’t work very well: the text didn’t scan well, as each column had to be the width of whatever its longest word was, and the whole thing took up too much horizontal space.  Instead, I decided to revert to using an unordered list, with the KWIC left and KWIC right text in separate spans, the KWIC left span being right-aligned to push it up against the search term no matter what the length of its text.  I split the KWIC text up into individual words and stored these in an array to enable each search result to be ordered by any word in the KWIC, and began working on a facility to change the order using a select box above the search results.  This is as far as I got this week, but I’m pretty confident that I’ll get things finished next week.  Here’s a screenshot of how the KWIC looks so far:

Also this week I had an email conversation with the other College of Arts developers about professional web designers after Stevie Barrett enquired about them, arranged to meet with Gerry Carruthers to discuss the journal he would like us to host, gave some advice to Thomas Clancy about mailing lists and spoke to Joanna Kopaczyk about a website she would like to set up for a conference she’s organising next year.

Week Beginning 9th August 2021

I’d taken last week off as our final break of the summer, and we spent it on the Kintyre peninsula.  We had a great time and were exceptionally lucky with the weather.  The rains began as we headed home and I returned to a regular week of work.  My major task for the week was to begin work on the search facilities for the Anglo-Norman Dictionary’s textbase, a collection of almost 80 lengthy texts for which I had previously created browse and view facilities.  The editors wanted me to replicate the search options that were available through the old site, which enabled a user to select which texts to search (either individual texts or groups of texts arranged by genre), enter a single term to search (either a full match or a partial match at the beginning or end of a word), select a specific term from a list of possible matches and then view each hit via a keyword in context (KWIC) interface, showing a specific number of words before and after the hit, with a link through to the full text opened at that specific point.

This is a pretty major development and initially I identified two main tasks to tackle.  I’d have to categorise the texts by their genre and I’d have to research how best to handle full-text searching, including limiting to specific texts, generating and reordering the KWIC, and linking through to specific pages and highlighting the results.  I reckoned it was potentially going to be tricky as I don’t have much experience with this kind of searching.  My initial thought was to see whether Apache Solr might be able to offer the required functionality.  I used this for the DSL’s advanced search, which searches the full text of the entries and returns snippets featuring the search word highlighted, with the word then also highlighted throughout an entry when it is loaded from the results (e.g. https://dsl.ac.uk/results/dreich/fulltext/withquotes/both/).  This isn’t exactly what is required here, but I hoped that there might be further options I could explore.  Failing that I wondered whether I could repurpose the code for the Scottish Corpus of Texts and Speech.  I didn’t create this site, but I redeveloped it significantly a few years ago and may be able to borrow parts from the concordance search.  For example, go to https://scottishcorpus.ac.uk/advanced-search/ and select ‘general’ then ‘word search’ then ‘word / phrase (concordance)’, then search for ‘haggis’ and scroll down to the section under the map.  When opening a document you can then cycle through the matching terms, which are highlighted, e.g. https://scottishcorpus.ac.uk/document/?documentid=1572&highlight=haggis#match1.

After spending some further time with the old search facility and considering the issues I realised there are a lot of things to be considered regarding preparing the texts for search purposes.  I can’t just plug the entire texts in as only certain parts of them should be used for searching – no front or back matter, no notes, textual apparatus or references.  In addition, in order to properly ascertain which words follow on from each other all XML tags need to be removed too, and this introduces issues where no space has been entered between tags but a space needs to exist between the contents of the tags, e.g. ‘dEspayne</item><item>La charge’ would otherwise become ‘dEspayneLa charge’.

As I’d need to process the texts no matter which search facility I ended up using, I decided to focus on this first, and set up some processing scripts and a database on my local PC to work with the texts.  Initially I managed to extract the contents of each required page, remove notes and other excluded material, and strip the tags and line breaks so that each page’s content is one continuous block of text.
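In outline, the extraction step looks something like this Python sketch – the element names are made up and the real markup is rather more involved:

    import re
    import xml.etree.ElementTree as ET

    SKIP = {'note'}  # elements whose contents shouldn't be searchable

    def page_text(elem):
        # Gather the text of an element, skipping excluded elements but keeping
        # the text that follows them (their 'tail').
        parts = []
        if elem.tag not in SKIP:
            if elem.text:
                parts.append(elem.text)
            for child in elem:
                parts.append(page_text(child))
        if elem.tail:
            parts.append(elem.tail)
        return ' '.join(parts)

    page = ET.fromstring('<p>Un chevaler<note>editorial note</note> de Espayne</p>')
    print(re.sub(r'\s+', ' ', page_text(page)).strip())  # Un chevaler de Espayne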

I realised that the old search seems to be case sensitive, which doesn’t seem very helpful.  E.g. search for ‘Leycestre’ and you find nothing – you need to enter ‘leycestre’, even though all 264 occurrences actually have a capital ‘L’.  I decided to make the new search case insensitive – so searching for ‘Leycestre’, ‘leycestre’ or ‘LEYCESTRE’ will bring back the same results.  Also, the old search limits the keyword in context display to pages.  E.g. the first ‘Leycestre’ hit has no text after it as it’s the last word on the page.  I’m intending to take the same approach as I’m processing text on a page-by-page basis.  I may be able to fill out the KWIC with text from the preceding / subsequent page if the editors consider this to be important, but it would be something I’d have to add in after the main work is completed.  The old search also limits the KWIC to text that’s on the same line, e.g. in a search for ‘arcevesque’ the result ‘L’arcevesque puis metre en grant confundei’ has no text before it because that text is on a different line (it also chops off the end of ‘confundeisun’ for some reason).  The new KWIC will ignore breaks in the text (other than page breaks) when displaying the context.  I also needed to decide what to do about words that have apostrophes in them.  The old search splits words on the apostrophe, so for example you can search for arcevesque but not l’arcevesque, and it retains the parts before and after the apostrophe as separate search terms, so for example in “qu’il” you can search for “qu” and “il” (but not “qu’il”).  I’m intending to do the same.

After some discussions with the editor, I updated my system to include the textual apparatus, stored in a separate field from the main page text.  With all of the text extracted I decided that I’d just try to build my own system initially, to see whether it would be possible.  I therefore created a script that takes each word from the extracted page and textual apparatus fields and stores it in a separate table, ensuring that words with apostrophes in them are split into separate words and that, for search purposes, all non-alphanumeric characters are removed and the text is stored as lower-case.  I also needed to store the word as it actually appears in the text, the word order on the page and whether the word is a main page word or part of the textual apparatus.  This is because after finding a word I’ll need to extract those around it for the KWIC display.  After running my script I ended up with around 3.5 million rows in the ‘words’ table, and this is where I ran into some difficulties.
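To make the above a bit more concrete, each page gets tokenised into rows roughly along these lines – a Python sketch in which the row layout and names are illustrative rather than the actual table structure:

    import re

    def word_rows(page_id, text, word_type='page'):
        # One row per word: page, order on the page, main text or apparatus,
        # the word as it appears, and a lower-cased stripped form for searching.
        rows = []
        order = 0
        for raw in text.split():
            # Words with apostrophes are split, so "qu'il" yields "qu" and "il".
            for part in re.split(r"['\u2019]", raw):
                stripped = re.sub(r'[\W_]+', '', part.lower())
                if stripped:
                    order += 1
                    rows.append((page_id, order, word_type, part, stripped))
        return rows

    for row in word_rows(42, "Qu'il vint a Leycestre"):
        print(row)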

I ran some test queries on the local version of the database and all looked pretty promising, but after copying the data to the server and running the same queries it appeared that the server was unusably slow.  On my desktop a query to find all occurrences of ‘jour’, with the words table joined to the page table and then to the text table, completed in less than 0.5 seconds, but on the server the same query took more than 16 seconds, so about 32 times slower.  I tried the same query a couple of times and the results were roughly the same each time.  My desktop PC is a Core i5 with 32GB of RAM, and the database is running on an NVMe M.2 SSD, which no doubt makes things quicker, but I wouldn’t expect it to be 32 times quicker.
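For reference, the query in question was roughly of this shape (placeholder table and column names again) – the words table with its 3.5 million rows joined to the pages table and then to the small texts table:

    slow_query = """
    SELECT w.word, w.word_order, p.page_number, t.title
    FROM words w
    JOIN pages p ON w.page_id = p.id
    JOIN texts t ON p.text_id = t.id
    WHERE w.word_stripped = %s
    ORDER BY t.title, p.page_number, w.word_order
    """
    print(slow_query)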

I then did some further experiments with the server.  When I query the table containing the millions of rows on its own the query is fast (much less than a second).  I added a further index to the column that is used for the join to the page table (previously it was indexed, but only in combination with other columns) and then, when limiting the query to just these two tables, the query runs at a fairly decent speed (about 0.5 seconds).  However, the full query involving all three tables still takes far too long, and I’m not sure why.  It’s very odd, as there are indexes on the joining columns and the additional table is not big – it only has 77 rows.  I read somewhere that ordering the results by a column in the joined table can make things slower, as can using descending order on a column, so I tried updating the ordering, but this had no effect.  It’s really weird – I just can’t figure out why adding the table has such a negative effect on performance, and I may end up having to incorporate some of the columns from the text table into the page table, even though it will mean duplicating data.  I also still don’t know why the performance is so different on my local PC.

One final thing I tried was to change the database storage engine.  I noticed that the three tables were set to use MyISAM storage rather than InnoDB, which the rest of the database uses.  I migrated the tables to InnoDB in the hope that this might speed things up, but it actually slowed things down, both on my local PC and on the server.  The two-table query now takes several seconds, while the three-table query takes about the same, so it is quicker than before but still too slow.  On my desktop PC the execution time has doubled to about 1 second.  I therefore reverted back to using MyISAM.
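For what it’s worth, the storage engine switch itself amounts to statements of this form in MySQL (the table name here is a placeholder):

    # Switching a table's storage engine and back again.
    for engine in ('InnoDB', 'MyISAM'):
        print('ALTER TABLE words ENGINE = {};'.format(engine))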

I decided to leave the issue of database speed at that point and to focus on other things instead.  I added a new ‘genre’ column to the texts and added in the required categorisation.  I then updated the API to add in this new column and updated the ‘browse’ and ‘view’ front-ends so that genre now gets displayed.  I then began work on the front-end for the search, focussing on the options for listing texts by genre and adding in the options to select / deselect specific texts or entire genres of text.  This required quite a bit of HTML, JavaScript and CSS work and made a nice change from all of the data processing.  By the end of the week I’d completed work on the text selection facility, and next week I’ll tackle the actual processing of the search, at which point I’ll know whether my database way of handling things will be sufficiently speedy.

Also this week I had a chat with Eleanor Lawson about the STAR project that has recently begun.  There was a project meeting last week that unfortunately I wasn’t able to attend due to my holiday, so we had an email conversation about some of the technical issues that were raised at the meeting, including how it might be possible to view videos side by side and how a user may choose to select multiple videos to be played automatically one after the other.

I also fixed a couple of minor formatting issues for the DSL people and spoke to Katie Halsey, PI of the Books and Borrowing project, about the development of the API for the project and the data export facilities.  I also received further feedback from Kirsteen McCue regarding the Data Management Plan for her AHRC proposal and went through this, responding to the comments and generating a slightly tweaked version of the plan.