Week Beginning 20th July 2020

Week 19 of Lockdown, and it was a short week for me as the Monday was the Glasgow Fair holiday.  I spent a couple of days this week continuing to add features to the content management system for the Books and Borrowing project.  I have now implemented the ‘normalised occupations’ part of the CMS.  Originally occupations were just going to be a set of keywords, allowing one or more keywords to be associated with a borrower.  However, we have been liaising with another project that has already produced a list of occupations and we have agreed to share their list.  This is slightly different as it is hierarchical, with a top-level ‘parent’ containing multiple main occupations. E.g. ‘Religion and Clergy’ features ‘Bishop’.  However, for our project we needed a third hierarchical level to differentiate types of minister/priest, so I’ve had to add this in too.  I’ve achieved this by means of a parent occupation ID in the database, which is ‘null’ for top-level occupations and contains the ID of the parent category for all other occupations.

I completed work on the page to browse occupations, arranging the hierarchical occupations in a nested structure that features a count of the number of borrowers associated with the occupation to the right of the occupation name.  These are all currently zero, but once some associations are made the numbers will go up and you’ll be able to click on the count to bring up a list of all associated borrowers, with links through to each borrower.  If an occupation has any child occupations a ‘+’ icon appears beside it.  Press on this to view the child occupations, which also have counts.  The counts for ‘parent’ occupations tally up all of the totals for the child occupations, and clicking on one of these counts will display all borrowers assigned to all child occupations.  If an occupation is empty there is a ‘delete’ button beside it.  As the list of occupations is going to be fairly fixed I didn’t add in an ‘edit’ facility – if an occupation needs editing I can do it directly through the database, or it can be deleted and a new version created.  Here’s a screenshot showing some of the occupations in the ‘browse’ page:
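In outline, the nesting and the rolled-up counts work like this (a minimal Python sketch with illustrative field names, not the actual CMS code, which builds the same structure from the parent occupation ID):

```python
# Sketch of the hierarchical occupation counts: each occupation row is
# (id, name, parent_id), with parent_id None for top-level occupations.
# Per-occupation borrower counts are tallied up into the parents.

def build_tree(occupations, counts):
    """Nest occupations by parent_id and roll counts up the hierarchy."""
    nodes = {o_id: {"name": name, "count": counts.get(o_id, 0), "children": []}
             for o_id, name, parent in occupations}
    roots = []
    for o_id, name, parent in occupations:
        if parent is None:
            roots.append(nodes[o_id])
        else:
            nodes[parent]["children"].append(nodes[o_id])

    def tally(node):
        node["count"] += sum(tally(child) for child in node["children"])
        return node["count"]

    for root in roots:
        tally(root)
    return roots

occupations = [
    (1, "Religion and Clergy", None),
    (2, "Bishop", 1),
    (3, "Minister/Priest", 1),
    (4, "Minister of Religion", 3),   # the added third level
]
tree = build_tree(occupations, {2: 5, 4: 3})
print(tree[0]["count"])  # 8: the parent tallies all its descendants
```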

I also created facilities to add new occupations.  You can enter an occupation name and optionally specify a parent occupation from a drop-down list.  Doing so will add the new occupation as a child of the selected category, either at the second level if a top level parent is selected (e.g. ‘Agriculture’) or at the third level if a second level parent is selected (e.g. ‘Farmer’).  If you don’t include a parent the occupation will become a new top-level grouping.  I used this feature to upload all of the occupations, and it worked very well.

I then updated the ‘Borrowers’ tab in the ‘Browse Libraries’ page to add ‘Normalised Occupation’ to the list of columns in the table.  The ‘Add’ and ‘Edit’ borrower facilities also now feature ‘Normalised Occupation’, which replicates the nested structure from the ‘browse occupations’ page, but features checkboxes beside each main occupation.  You can select any number of occupations for a borrower and when you press the ‘Upload’ or ‘Edit’ button your choice will be saved.  Deselecting all ticked checkboxes will clear all occupations for the borrower.  If you edit a borrower who has one or more occupations selected, in addition to the relevant checkboxes being ticked, the occupations with their full hierarchies also appear above the list of occupations, so you can easily see what is already selected. I also updated the ‘Add’ and ‘Edit’ borrowing record pages so that whenever a borrower appears in the forms the normalised occupations feature also appears.

I also added in the option to view page images.  Currently the only ledgers that have page images are the three Glasgow ones, but more will be added in due course.  When viewing a page in a ledger that includes a page image you will see the ‘Page Image’ button above the table of records.  Press on this and a new browser tab will open.  It includes a link through to the full-size image of the page if you want to open this in your browser or download it to open in a graphics package.  It also features the ‘zoom and pan’ interface that allows you to look at the image in the same manner as you’d look at a Google Map.  You can also view this full screen by pressing on the button in the top right of the image.

Also this week I made further tweaks to the script I’d written to update lexeme start and end dates in the Historical Thesaurus based on citation dates in the OED.  I’d sent a sample output of 10,000 rows to Fraser last week and he got back to me with some suggestions and observations.  I’m going to have to rerun the script I wrote to extract the more than 3 million citation dates from the OED as some of the data needs to be processed differently, but as this script will take several days to run and I’m on holiday next week this isn’t something I can do right now.  However, I managed to change the way the date matching script runs to fix some bugs and make the various processes easier to track.  I also generated a list of all of the distinct labels in the OED data, with counts of the number of times these appear.  Labels are associated with specific citation dates, thankfully.  Only a handful are actually used lots of times, and many of the others appear to be used as a ‘notes’ field rather than as a more general label.

In addition to the above I also had a further conversation with Heather Pagan about the data management plan for the AND’s new proposal, responded to a query from Kathryn Cooper about the website I set up for her at the end of last year, responded to a couple of separate requests from post-grad students in Scottish Literature, spoke to Thomas Clancy about the start date for his Place-Names of Iona project, which got funded recently, helped with some issues with Matthew Creasy’s Scottish Cosmopolitanism website and spoke to Carole Hough about making a few tweaks to the Berwickshire Place-names website for REF.

I’m going to be on holiday for the next two weeks, so there will be no further updates from me for a while.

Week Beginning 29th June 2020

This was week 15 of Lockdown, which I guess is sort of coming to an end now, although I will still be working from home for the foreseeable future and having to juggle work and childcare every day.  I continued to work on the Books and Borrowing project for much of this week, this time focussing on importing some of the existing datasets from previous transcription projects.  I had previously written scripts to import data from Glasgow University library and Innerpeffray library, which gave us 14,738 borrowing records.  This week I began by focussing on the data from St Andrews University library.

The St Andrews data is pretty messy, reflecting the layout and language of the original documents, so I haven’t been able to fully extract everything and it will require a lot of manual correcting.  However, I did manage to migrate all of the data to a test version of the database running on my local PC and then updated the online database to incorporate this data.

The data I’ve got are CSV and HTML representations of transcribed pages that come from an existing website with pages that look like this: https://arts.st-andrews.ac.uk/transcribe/index.php?title=Page:UYLY205_2_Receipt_Book_1748-1753.djvu/100.  The links in the pages (e.g. Locks Works) lead through to further pages with information about books or borrowers.  Unfortunately the CSV version of the data doesn’t include the links or the linked to data, and as I wanted to try and pull in the data found on the linked pages I therefore needed to process the HTML instead.

I wrote a script that pulled in all of the files in the ‘HTML’ directory and processed each in turn.  From the filenames my script could ascertain the ledger volume, its dates and the page number.  For example ‘Page_UYLY205_2_Receipt_Book_1748-1753.djvu_10.html’ is ledger 2 (1748-1753) page 10.  The script creates ledgers and pages, and adds in the ‘next’ and ‘previous’ page links to join all the pages in a ledger together.
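The filename parsing can be illustrated like this (a Python sketch; the regular expression is my assumption based on the one example filename given above):

```python
import re

# Parse a St Andrews HTML filename such as
# 'Page_UYLY205_2_Receipt_Book_1748-1753.djvu_10.html'
# into ledger 2, dates 1748-1753, page 10.
FILENAME_RE = re.compile(
    r"Page_UYLY205_(\d+)_Receipt_Book_(\d{4})-(\d{4})\.djvu_(\d+)\.html"
)

def parse_filename(filename):
    """Return the ledger, date range and page number, or None if no match."""
    m = FILENAME_RE.match(filename)
    if m is None:
        return None
    ledger, start, end, page = m.groups()
    return {"ledger": int(ledger), "start": int(start),
            "end": int(end), "page": int(page)}

print(parse_filename("Page_UYLY205_2_Receipt_Book_1748-1753.djvu_10.html"))
# {'ledger': 2, 'start': 1748, 'end': 1753, 'page': 10}
```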

The actual data in the file posed further problems.  As you can see from the linked page above, dates are just too messy to automatically extract into our strongly structured borrowed and returned date system.  Often a record is split over multiple rows as well (e.g. the borrowing record for ‘Rollins belles Lettres’ is actually split over 3 rows).  I could have just grabbed each row and inserted it as a separate borrowing record, which would then need to be manually merged, but I figured out a way to do this automatically.  The first row of a record always appears to have a code (the shelf number) in the second column (e.g. J.5.2 for ‘Rollins’) whereas subsequent rows that appear to belong to the same record don’t (e.g. ‘on profr Shaws order by’ and ‘James Key’).  I therefore set up my script to insert new borrowing records for rows that have codes, and to append any subsequent rows that don’t have codes to this record until a row with a code is reached again.
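The merging logic can be sketched as follows (a minimal Python illustration; the tuple layout and field names are assumptions based on the example rows above):

```python
# Rows with a shelf code in the second column start a new borrowing
# record; codeless rows are appended to the current record until a row
# with a code is reached again.

def group_rows(rows):
    """rows: list of (borrowed, code, text) tuples; returns merged records."""
    records = []
    for borrowed, code, text in rows:
        if code:                      # a code starts a new record
            records.append({"code": code, "transcription": text})
        elif records:                 # a codeless row continues the last one
            records[-1]["transcription"] += " " + text
    return records

rows = [
    ("Decr 16", "J.5.2", "Rollins belles Lettres Vol 2d"),
    ("", "", "on profr Shaws order by"),
    ("", "", "James Key"),
    ("Decr 17", "K.1.4", "Another title"),
]
merged = group_rows(rows)
print(len(merged))  # 2
print(merged[0]["transcription"])
# Rollins belles Lettres Vol 2d on profr Shaws order by James Key
```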

I also used this approach to set up books and borrowers too.  If you look at the page linked to above again you’ll see that the links through to things are not categorised – some are links to books and others to borrowers, with no obvious way to know which is which.  However, it’s pretty much always the case that it’s a book that appears in the row with the code and it’s people that are linked to in the other rows.  I could therefore create or link to existing book holding records for links in the row with a code and create or link to existing borrower records for links in rows without a code.  There are bound to be situations where this system doesn’t quite work correctly, but I think the majority of rows do fit this pattern.

The next thing I needed to do was to figure out which data from the St Andrews files should be stored as what in our system.  I created four new ‘Additional Fields’ for St Andrews as follows:

  • Original Borrowed date: This contains the full text of the first column (e.g. Decr 16)
  • Code: This contains the full text of the second column (e.g. J.5.2)
  • Original Returned date: This contains the full text of the fourth column (e.g. Jan. 5)
  • Original borrowed text: This contains the full text of the fifth column (e.g. ‘Rollins belles Lettres V. 2d’)

In the borrowing table the ‘transcription’ field is set to contain the full text of the ‘borrowed’ column, but without links.  Where subsequent rows contain data in this column but no code, this data is then appended to the transcription.  E.g. the complete transcription for the third item on the page linked to above is ‘Rollins belles Lettres Vol 2<sup>d</sup> on profr Shaws order by James Key’.

The contents of all pages linked to in the transcriptions are added to the ‘editors notes’ field for future use if required.  Both the page URL and the page content are included, separated by a bar (|) and if there are multiple links these are separated by five dashes.  E.g. for the above the notes field contains:

Rollins_belles_Lettres| <p>Possibly: De la maniere d’enseigner et d’etuder les belles-lettres, Par raport à l’esprit &amp; au coeur, by Charles Rollin. (A Amsterdam : Chez Pierre Mortier, M. DCC. XLV. [1745]) <a href="http://library.st-andrews.ac.uk/record=b2447402~S1">http://library.st-andrews.ac.uk/record=b2447402~S1</a></p>

----- profr_Shaws| <p><a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1409683484">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1409683484</a></p>

----- James_Key| <p>Possibly James Kay: <a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1389455860">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1389455860</a></p>
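Assembling that field is straightforward (a Python sketch; the function name is illustrative, and the page content strings here are abbreviated stand-ins):

```python
# Build the 'editors notes' value: each linked page contributes
# 'URL | content', and multiple links are separated by five dashes.

def build_notes(links):
    """links: list of (page_url, page_content) pairs."""
    return " ----- ".join(f"{url}| {content}" for url, content in links)

notes = build_notes([
    ("Rollins_belles_Lettres", "<p>Possibly: De la maniere ...</p>"),
    ("profr_Shaws", "<p><a>...</a></p>"),
    ("James_Key", "<p>Possibly James Kay ...</p>"),
])
print(notes.count(" ----- "))  # 2
```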


As mentioned earlier, the script also generates book and borrower records based on the linked pages too.  I’ve chosen to set up book holding rather than book edition records as the details are all very vague and specific to St Andrews.  In the holdings table I’ve set the ‘standardised title’ to be the page link with underscores replaced with spaces (e.g. ‘Rollins belles Lettres’) and the page content is stored in the ‘editors notes’ field.  One book item is created for each holding to be used to link to the corresponding borrowing records.

For borrowers a similar process is followed, with the link added to the surname column (e.g. Thos Duncan) and the page content added to the ‘editors notes’ field (e.g. ‘<p>Possibly Thomas Duncan: <a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1377913372">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1377913372</a></p>’).  All borrowers are linked to records as ‘Main’ borrowers.

During the processing I noticed that the fourth ledger had a slightly different structure to the others, with entire pages devoted to a particular borrower, whose name then appeared in a heading row in the table.  I therefore updated my script to check for the existence of this heading row; if it exists my script grabs the borrower name, creates the borrower record if it doesn’t already exist and then links this borrower to every borrowing item found on the page.  After my script had finished running we had 11,147 borrowing records, 996 borrowers and 6,395 book holding records for St Andrews in the system.

I then moved onto looking at the data for Selkirk library.  This data was more nicely structured than the St Andrews data, with separate spreadsheets for borrowings, borrowers and books, and with borrowers and books connected to borrowings via unique identifiers.  Unfortunately the dates were still transcribed as they were written rather than being normalised in any way, which meant it was not possible to straightforwardly generate structured dates for the records; these will need to be generated manually.  The script I wrote to import the data took about a day to write, and after running it we had a further 11,431 borrowing records across two registers and 415 pages entered into our database.

As with St Andrews, I created book records as Holding records only (i.e. associated specifically with the library rather than being project-wide ‘Edition’ records).  There are 612 Holding records for Selkirk.  I also processed the borrower records, resulting in 86 borrower records being added.  I added the dates as originally transcribed to an additional field named ‘Original Borrowed Date’, and the only other additional field is ‘Subject’ in the Holding records, which will eventually be merged with our ‘Genre’ categorisation when this feature becomes available.

Also this week I advised Katie on a file naming convention for the digitised images of pages that will be created for the project.  I recommended that the filenames shouldn’t contain spaces, as these can be troublesome on some operating systems, and that we’d want a delimiter character between the parts of the filename that doesn’t appear anywhere else in it, so that the filename is easy to split up.  I suggested that the page number should be included in the filename and that it should reflect the page number as it will be written into the database – e.g. if we’re going to use ‘r’ and ‘v’ these would be included.  Each page in the database will be automatically assigned an auto-incrementing ID, and the only means of linking a specific page record in the database with a specific image will be via the page number entered when the page is created, so if this is something like ‘23r’ then ideally this should be represented in the image filename.

Katie had wondered about using characters to denote ledgers and pages in the filename (e.g. ‘L’ and ‘P’), but if we’re using a specific delimiting character to separate the parts of the filename then these extra characters wouldn’t be necessary.  I also suggested it would be better not to use ‘L’ in any case, as a lower case ‘l’ is very easy to confuse with a ‘1’ or a capital ‘I’, which might confuse future human users.

I suggested using a ‘-’ in place of spaces and a ‘_’ as the delimiter, and pointed out that we should ensure that no other non-alphanumeric characters are ever used in the filename – no apostrophes, commas, colons, semi-colons, ampersands etc. – and that we should make sure the ‘-’ is really a minus sign and not one of the fancy dashes (–) that get created by MS Office.  This shouldn’t be an issue when entering a filename, but might be if a list of filenames is created in Word and then pasted into the ‘save as’ box, for example.

Finally, I suggested that it might be best to make the filenames entirely lower case, as some operating systems are case sensitive and if we don’t specify all lower case then there may be variation in the use of case.  Following these guidelines the filenames would look something like this:

  • jpg
  • dumfries-presbytery_2_3v.jpg
  • standrews-ul_9_300r.jpg
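The convention could be checked mechanically along these lines (a Python sketch; the exact pattern – lower case, ‘-’ within parts, ‘_’ between parts, page numbers optionally ending in ‘r’ or ‘v’ – is my reading of the guidelines above):

```python
import re

# Validate filenames of the form 'library-name_ledger_page.jpg',
# e.g. 'dumfries-presbytery_2_3v.jpg'. All lower case, hyphens only
# within the name part, underscores as delimiters, no other punctuation.
FILENAME_RE = re.compile(r"^[a-z0-9-]+_[0-9]+_[0-9]+[rv]?\.jpg$")

def is_valid_filename(name):
    return FILENAME_RE.match(name) is not None

print(is_valid_filename("dumfries-presbytery_2_3v.jpg"))  # True
print(is_valid_filename("Dumfries Presbytery_2_3v.jpg"))  # False: space, capitals
```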

In addition to the Books and Borrowing project I worked on a number of other projects this week.  I gave Matthew Creasy some further advice on using forums in his new project website, and the ‘Scottish Cosmopolitanism at the Fin de Siècle’ website is now available here: https://scoco.glasgow.ac.uk/.

I also worked a bit more on using dates from the OED data in the Historical Thesaurus.  Fraser had sent me a ZIP file containing the entire OED dataset as 240 XML files and I began analysing these to figure out how we’d extract these dates so that we could use them to update the dates associated with the lexemes in the HT.  I needed to extract the quotation dates as these have ‘ante’ and ‘circa’ notes, plus labels.  I noted that in addition to ‘a’ and ‘c’ a question mark is also used, sometimes with an ‘a’ or ‘c’ and sometimes without.  I decided to process things as follows:

  • ?a will just be ‘a’
  • ?c will just be ‘c’
  • ? without an ‘a’ or ‘c’ will be ‘c’.

I also noticed that a date may sometimes be a range (e.g. 1795-8) so I needed to include a second date column in my data structure to accommodate this.  I also noted that there are sometimes multiple Old English dates, and the contents of the ‘date’ tag vary depending on the date – sometimes the content is ‘OE’ and other times ‘lOE’ or ‘eOE’.  I decided to process any OE dates for a lexeme as being 650 and to have only one OE date stored, so as to align with how OE dates are stored in the HT database (we don’t differentiate between dates for OE words).
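Pulling those rules together, the normalisation might look like this (a Python sketch; the function and field names are illustrative, and the range handling is my interpretation of forms like ‘1795-8’):

```python
# Normalise an OED citation date string: '?a' -> 'a', '?c' -> 'c',
# bare '?' -> 'c'; OE variants ('OE', 'lOE', 'eOE') all become 650;
# a range such as '1795-8' yields a second year (1798).

def parse_citation_date(raw):
    if raw in ("OE", "lOE", "eOE"):
        return {"prefix": "", "year": 650, "year2": None}
    prefix = ""
    if raw.startswith("?a"):
        prefix, raw = "a", raw[2:]
    elif raw.startswith("?c"):
        prefix, raw = "c", raw[2:]
    elif raw.startswith("?"):
        prefix, raw = "c", raw[1:]
    elif raw[0] in "ac":
        prefix, raw = raw[0], raw[1:]
    year, year2 = raw, None
    if "-" in raw:
        year, tail = raw.split("-", 1)
        # '1795-8' means 1795-1798: pad the tail with the leading digits
        year2 = int(year[: 4 - len(tail)] + tail)
    return {"prefix": prefix, "year": int(year), "year2": year2}

print(parse_citation_date("?a1400"))  # {'prefix': 'a', 'year': 1400, 'year2': None}
print(parse_citation_date("1795-8"))  # {'prefix': '', 'year': 1795, 'year2': 1798}
```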

While running my date extraction script over one of the XML files I also noticed that there were lexemes in the OED data that were not present in the OED data we had previously extracted.  This presumably means the dataset Fraser sent me is more up to date than the dataset I used to populate our online OED data table, and we’ll no doubt need to update our online OED table.  As we link to the HT lexeme table using the OED catid, refentry, refid and lemmaid fields, the connections from OED to HT lexemes would (hopefully) be retained without issue if we were to replace the online OED lexeme table with the data in these XML files, but any matching processes we performed would need to be done again for the new lexemes.

I set my extraction script running on the OED XML files on Wednesday and processing took a long time.  The script didn’t complete until sometime during Friday night, but by the time it had finished it had processed 238,699 categories and 754,285 lexemes, generating 3,893,341 date rows.  It also found 4,062 new words in the OED data that it couldn’t process because they don’t exist in our OED lexeme database.

I also spent a bit more time working on some scripts for Fraser’s Scots Thesaurus project.  The scripts now ignore ‘additional’ entries and only include ‘n.’ entries that match an HT ‘n’ category.  Variant spellings, which were all tagged with <form>, are also removed.  I also created a new field that stores only the ‘NN_’ tagged words, with all others removed.

The scripts generated three datasets, which I saved as spreadsheets for Fraser.  The first (postagged-monosemous-dost-no-adds-n-only) contains all of the content that matches the above criteria. The second (postagged-monosemous-dost-no-adds-n-only-catheading-match) lists those lexemes where a postagged word fully matches the HT category heading.  The final (postagged-monosemous-dost-no-adds-n-only-catcontents-match) lists those lexemes where a postagged word fully matches a lexeme in the HT category.  For this table I’ve also added in the full list of lexemes for each HT category too.

I also spent a bit of time working on the Data Management Plan for the new project for Jane Stuart-Smith and Eleanor Lawson at QMU and arranged for a PhD student to get access to the TextGrid files that were generated for the audio records for the SCOTS Corpus project.

Finally, I investigated the issue the DSL people are having with duplicate child entries appearing in their data.  This was due to something not working quite right in a script Thomas Widmann had written to extract the data from the DSL’s editing system before he left last year, and Ann had sent me some examples of where the issue was cropping up.

I have the data that was extracted from Thomas’s script last July as two XML files (dost.xml and snd.xml) and I looked through these for the examples Ann had sent.  The entry for snd13897 contains the following URLs:

<url>snd13897</url>
<url>snds3788</url>
<url>sndns2217</url>
The first is the ID for the main entry and the other two are child entries.  If I search for the second one (snds3788) this is the only occurrence of the ID in the file, as the child entry has been successfully merged.  But if I search for the third one (sndns2217) I find a separate entry with this ID (with more limited content).  The V3 site pulls data into a webpage using URLs stored in a table linked to entry IDs, and these were generated from the URLs in the entries in the XML file (see the <url> tags above).  For the URL ‘sndns2217’ the query finds multiple IDs, one for the entry snd13897 and another for the entry sndns2217.  But it finds snd13897 first, so it’s the content of this entry that is pulled into the page.

The entry for dost16606 contains the following URLs:

<url>dost16606</url>
<url>dost50272</url>
(in addition to headword URLs).  Searching for the second one discovers a separate entry with the ID dost50272 (with more limited content).  As with SND, searching the URL table for this URL finds two IDs, and as dost16606 appears first this is the entry that gets displayed.

What we need to do is remove the child entries that still exist as separate entries in the data.  To do this I could write a script that goes through each entry in the dost.xml and snd.xml files.  It would pick out every <url> that is not the same as the entry ID and search the file to see if any entry exists with this ID.  If one does then presumably it is a duplicate that should be deleted.  I’m waiting to hear back from the DSL people to see how we should proceed with this.
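Such a script might work along these lines (a Python sketch; the XML structure, with entry elements carrying ‘id’ attributes and <url> children, is an assumption about the export format rather than the actual DSL schema):

```python
import xml.etree.ElementTree as ET

# For each entry, collect every <url> value that differs from the
# entry's own ID; if a separate entry exists with that ID, flag it
# as an unmerged child that should probably be deleted.

def find_unmerged_children(xml_text):
    root = ET.fromstring(xml_text)
    entry_ids = {e.get("id") for e in root.iter("entry")}
    duplicates = []
    for entry in root.iter("entry"):
        for url in entry.iter("url"):
            ref = url.text.strip()
            if ref != entry.get("id") and ref in entry_ids:
                duplicates.append((entry.get("id"), ref))
    return duplicates

sample = """<dsl>
  <entry id="snd13897"><url>snd13897</url><url>snds3788</url><url>sndns2217</url></entry>
  <entry id="sndns2217"><url>sndns2217</url></entry>
</dsl>"""
print(find_unmerged_children(sample))  # [('snd13897', 'sndns2217')]
```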

As you can no doubt gather from the above, this was a very busy week but I do at least feel that I’m getting on top of things again.

Week Beginning 3rd February 2020

I worked on several different projects this week.  One of the major tasks I tackled was to continue with the implementation of a new way of recording dates for the Historical Thesaurus.  Last week I created a script that generated dates in the new format for a specified (or random) category, including handling labels.  This week I figured that we would also need a method to update the fulldate field (i.e. the full date as a text string, complete with labels etc., that is displayed on the website beside the word) based on any changes that are subsequently made to dates using the new system, so I updated the script to generate a new fulldate field using the values that have been created during the processing of the dates.  I realised that if this newly generated fulldate field is not exactly the same as the original fulldate field then something has clearly gone wrong somewhere, either with my script or with the date information stored in the database.  Where this happens I added the text ‘full date mismatch’ with a red background at the end of the date’s section in my script.

Following on from this I created a script that goes through every lexeme in the database, temporarily generates the new date information and from this generates a new fulldate field.  Where this new fulldate field is not an exact match for the original fulldate field the lexeme is added to a table, which I then saved as a spreadsheet.

The spreadsheet contains 1,116 rows for lexemes that have problems with their dates, which out of 793,733 lexemes is pretty good going, I’d say.  Each row includes a link to the category on the website and the category name, together with the HTID, word, original fulldate, generated fulldate and all original date fields for the lexeme in question.  I spent several hours going through previous, larger outputs and fixing my script to deal with a variety of edge cases that were not originally taken into consideration (e.g. purely OE dates with labels were not getting processed, and some ‘a’ and ‘c’ dates were confusing the algorithm that generated labels).  The remaining rows can mostly be split into the following groups:

  1. Original and generated fulldate appear to be identical but there must be some odd invisible character encoding issue that is preventing them being evaluated as identical. E.g. ‘1513(2) Scots’ and ‘1513(2) Scots’.
  2. Errors in the original fulldate. E.g. ‘OE–1614+ 1810 poet.’ doesn’t have a gap between the plus and the preceding number, another lexeme has ‘1340c’ instead of ‘c1340’
  3. Corrections made to the original fulldate that were not replicated in the actual date columns. E.g. ‘1577/87–c1630’ has a ‘c’ in the fulldate but this doesn’t appear in any of the ‘dac’ fields, and a lexeme has the date ‘c1480 + 1485 + 1843’ but the first ‘+’ is actually stored as a ‘-‘ in the ‘con’ column.
  4. Inconsistent recording of the ‘b’ dates where a ‘b’ date in the same decade does not appear as a single digit but as two digits. There are lots of these, e.g. ‘1430/31–1630’ should really be ‘1430/1–1630’ following the convention used elsewhere.
  5. Occasions where two identical dates appear with a label after the second date, resulting in the label not being found, as the algorithm finds the first instance of the date with no label after it. E.g. a lexeme with the fulldate ‘1865 + 1865 rare’.
  6. Any dates that have a slash connector and a label associated with the date after the slash end up with the label associated with the date before the slash too. E.g. ‘1731/1800– chiefly Dict.’.  This is because the script can’t differentiate between a slash used to split a ‘b’ date (in which case a following label ‘belongs’ to the date before the slash) and a slash used to connect a completely different date (in which case the label ‘belongs’ to the other date).  I tried fixing this but ended up breaking other things so this is something that will need manual intervention.  I don’t think it occurs very often, though.  It’s a shame the same symbol was used to mean two different things.

It’s now down to some manual fixing of these rows, probably using the spreadsheet to make any required changes.  Another column could be added to note where no changes to the original data are required and then for the remainder make any changes that are necessary (e.g. fixing the original first date, or any of the other date fields).  Once that’s done I will be able to write a script that will take any rows that need updating and perform the necessary updates.  After that we’ll be ready to generate the new date fields for real.
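For the rows where the two fulldate strings look identical but don’t evaluate as equal, dumping the code points quickly exposes the invisible difference, such as a non-breaking space standing in for an ordinary one. A small diagnostic sketch (the function name is illustrative):

```python
import unicodedata

# Report the first position where two seemingly identical strings
# differ, naming the offending characters by Unicode code point.

def diff_codepoints(a, b):
    for i, (ca, cb) in enumerate(zip(a, b)):
        if ca != cb:
            return (i, f"U+{ord(ca):04X} {unicodedata.name(ca, '?')}",
                       f"U+{ord(cb):04X} {unicodedata.name(cb, '?')}")
    if len(a) != len(b):
        return (min(len(a), len(b)), "length mismatch", "length mismatch")
    return None

print(diff_codepoints("1513(2) Scots", "1513(2)\u00a0Scots"))
# (7, 'U+0020 SPACE', 'U+00A0 NO-BREAK SPACE')
```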

I also spent some time this week going through the sample data that Katie Halsey had sent me from a variety of locations for the Books and Borrowing project.  I went through all of the sample data and compiled a list of all of the fields found in each.  This is a first step towards identifying a core set of fields and of mapping the analogous fields across different datasets.  I also included the GU students and professors from Matthew’s pilot project but I have not included anything from the images from Inverness as deciphering the handwriting in the images is not something I can spend time doing.  With this mapping document in place I can now think about how best to store the different data recorded at the various locations in a way that will allow certain fields to be cross-searched.

I also continued to work on the Place-Names of Mull and Ulva project.  I copied all of the place-names taken from the GB1900 data to the Gaelic place-name field, added in some former parishes and updated the Gaelic classification codes and text.  I also began to work on the project’s API and front end.  By the end of the week I managed to get an ‘in development’ version of the quick search working.  Markers appear with labels and popups and you can change base map or marker type.  Currently only ‘altitude’ categorisation gives markers that are differentiated from each other, as there is no other data yet (e.g. classification, dates).  The links through to the ‘full record’ also don’t currently work, but it is handy to have the maps to be able to visualise the data.

A couple of weeks ago E Jamieson, the RA on the SCOSYA project, noticed that the statistics pane in the Linguists’ Atlas didn’t scroll, meaning that if there was a lot of data in the pane it was getting cut off at the bottom.  I found a bit of time to investigate this and managed to update the JavaScript to ensure the height of the pane is checked and updated when necessary whenever the pane is opened.

Also this week I had a further email conversation with Heather Pagan about the Anglo-Norman Dictionary, spoke to Rhona Alcorn about a new version of the Scots School Dictionary app, met with Matthew Creasy to discuss the future of his Decadence and Translation Network resource and a new project of his that is starting up soon, responded to a PhD student who had asked me for some advice about online mapping technologies, arranged a coffee meeting for the College of Arts developers and updated the layout of the video page of SCOSYA.

Week Beginning 22nd July 2019

I was on holiday last week and had quite a stack of things to do when I got back in on Monday.  This included setting up a new project website for a student in Scottish Literature who had received some Carnegie funding for a project and preparing for an interview panel that I was on, with the interview taking place on Friday.  I also responded to Alison Wiggins about the content management system I’d created for her Mary Queen of Scots letters project and had a discussion with someone in Central IT Services about the App and Play Store accounts that I’ve been managing for several years now.  It’s looking like responsibility for these might be moving to IT Services, which I think makes a lot of sense.  I also gave some advice to a PhD student about archiving and preserving her website and engaged in a long email discussion with Heather Pagan of the Anglo-Norman Dictionary about sorting out their data and possibly migrating it to a new system.  On Wednesday I had a meeting with the SCOSYA team about further developments of the public atlas.  We decided on another few requirements and discussed timescales for the completion of the work.  They’re hoping to be able to engage in some user testing in the middle of September, so I need to try and get everything completed before then.  I had hoped to start on some of this on Thursday, but I was struck down by a really nasty cold that I still haven’t shaken off, which made it rather difficult to focus on such tricky tasks as getting questionnaire areas to highlight when clicked on.

I spent most of the rest of the week working for DSL in various capacities.  I’d put in a request to get Apache Solr installed on a new server, so we could use this for free-text searching and thankfully Arts IT Support agreed to do this.  A lot of my week was spent preparing the data from both the ‘v2’ version of the DSL (the data outputted from the original API, but with full quotes and everything pre-generated rather than being created on the fly every time an entry is requested) and the ‘v3’ API (data taken from the editing server and outputted by a script written by Thomas Widmann) so that it could be indexed by Solr.  Raymond from Arts IT Support set up an instance of Solr on a new server and I created scripts that went through all 90,000 DSL entries in both versions and generated full-text versions of the entries that stripped out the XML tags.  For each set I created three versions – one that was the full text, one that was the full text without the quotations, and one that was just the quotations.  The script outputted this data in a format that Solr could work with and I sent this on to Raymond for indexing.  The first test version I sent Raymond was just the full text, and Solr managed to index this without incident.  However, the other views of the text required working with the XML a bit, and this appears to have brought in some issues with special characters that Solr is not liking.  I’m still in the middle of sorting this out and will continue to look into it next week, but progress with the free-text searching is definitely being made and it looks like the new API will be able to offer the same level of functionality as the existing API.  I also ensured I documented the process of generating all of the data from the XML files outputted by the editing system through to preparing the full-text for indexing by Solr, so next time we come to update the data we will know exactly what to do.
This is much better than how things previously stood, as the original API is entirely a ‘black box’ with no documentation whatsoever as to how to update the data contained therein.
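The tag-stripping step can be sketched roughly as follows. This is a simplified illustration rather than the actual scripts (whose language and tag names I haven’t described above); the <quote> tag name in particular is an assumption standing in for whatever the DSL schema uses for quotations.

```javascript
// Produce the three plain-text views of an entry for Solr indexing:
// full text, full text minus quotations, and quotations only.
// The <quote> tag name is a placeholder for the real schema's tag.

function stripTags(xml) {
  // Replace every tag with a space, then collapse whitespace
  return xml.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim();
}

function solrFields(entryXml) {
  const quoteRe = /<quote>[\s\S]*?<\/quote>/g;
  const quotes = (entryXml.match(quoteRe) || []).join(' ');
  const withoutQuotes = entryXml.replace(quoteRe, ' ');
  return {
    fulltext: stripTags(entryXml),
    fulltext_no_quotes: stripTags(withoutQuotes),
    quotes_only: stripTags(quotes)
  };
}

const entry = '<entry><def>A dwelling</def><quote>his <hi>hous</hi> and land</quote></entry>';
const fields = solrFields(entry);
console.log(fields.fulltext);           // "A dwelling his hous and land"
console.log(fields.fulltext_no_quotes); // "A dwelling"
console.log(fields.quotes_only);        // "his hous and land"
```

Each of the three strings then just needs wrapping in whatever document format the Solr instance expects.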

Also during this time I engaged in an email conversation about managing the dictionary entries and things like cross references with Ann Ferguson and the people who will be handling the new editor software for the dictionary, and helped to migrate control for the email part of the DSL domain to the control of the DSL’s IT people.  We’re definitely making progress with sorting out the DSL’s systems, which is really great.

I’m going to be working for just three days over the next two weeks, and all of these days will be out of the office, so I’ll just need to see how much time I have to continue with the DSL tasks, especially as work for the SCOSYA project is getting rather urgent.

Week Beginning 27th May 2019

Monday was a holiday this week, so Tuesday was the start of my working week.  I spent about half the day completing work on the Data Management Plan that I had been asked to write by the College of Arts research people, and the remainder of the day continuing to write scripts to help in the linkup of HT and OED lexeme data.  The latest script gets all unmatched HT words that are monosemous within part of speech in the unmatched dataset.  For each of these the script then retrieves all OED words where the stripped form matches, as does POS, but the words are already matched to a different HT lexeme.  If there is more than one OED lexeme matched to the HT lexeme I’ve added the information on subsequent rows in the table, so that full OED category information can more easily be read.  I’m not entirely sure what this script will be used for, but Fraser seems to think it will be useful in automatically pinpointing certain words that the OED are currently trying to manually track down.

During the week I also made some further updates to a couple of song stories for the RNSN project and had a meeting over the phone with Kirsteen McCue about a new project she’s in the planning stages for at the moment, and which I will be helping with the technical aspects for.  I also had a meeting with PhD student Ewa Wanat about a website she’s putting together and gave her some advice about things.

The rest of my week was split between DSL and SCOSYA.  For DSL I spent time answering a number of emails.  I then went through the SND data that had been outputted by Thomas Widmann’s scripts on the DSL server.  I had tried running this data through the script I’d written to take the outputted XML data and insert it into our online MySQL database, but my script was giving errors, stating that the input file wasn’t valid XML.  I loaded the file (an almost 90Mb text file) into Oxygen and asked it to validate the XML.  It took a while, but managed to find one easily identifiable error, and one error that was trickier to track down.

In the entry for snd22907 there was a closing </sense> tag in the ‘History’ that had no corresponding opening tag.  This was easy to track down and manually fix.  Entry snd12737 had two opening tags (<entry id="snd12737">), one directly below the <meta> tag.  This was trickier to find, as I needed to track it down manually by chopping the file in half, checking which half the error was in, chopping this bit in half and so on until I ended up with a very small file in which it was easy to locate the problem.
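Oxygen did the real validation here, but the ‘chop it in half’ hunt can be mechanised with even a crude well-formedness check. The sketch below just balances opening and closing tags, which is far weaker than a real XML parser, but it is enough to flag both kinds of error I hit (a stray close tag and a duplicated open tag):

```javascript
// Crude well-formedness check: walk the tags and keep a stack of open
// element names. A stray close, a mismatched close, or an unclosed open
// all make the entry "unbalanced". Attributes containing '>' would fool
// this - it's a rough filter, not a validator.

function isBalanced(xml) {
  const stack = [];
  const tagRe = /<(\/?)([a-zA-Z][\w-]*)[^>]*?(\/?)>/g;
  let m;
  while ((m = tagRe.exec(xml)) !== null) {
    const [, closing, name, selfClosing] = m;
    if (selfClosing) continue;                    // <tag/> needs no close
    if (closing) {
      if (stack.pop() !== name) return false;     // stray or mismatched close
    } else {
      stack.push(name);
    }
  }
  return stack.length === 0;
}

// With the big file split into individual entries, the bad ones fall out directly
function findBadEntries(entries) {
  return entries.filter(e => !isBalanced(e));
}

const sample = [
  '<entry id="snd1"><sense>fine</sense></entry>',
  '<entry id="snd22907"><sense>history</sense></sense></entry>' // stray </sense>
];
console.log(findBadEntries(sample).length); // 1
```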

With the SND data fixed I could then run it through my script.  However, I wanted to change the way the script worked based on feedback from Ann last week.  Previously I had added new fields to a test version of the main database, and the script found the matching row and inserted new data.  I decided instead to create an entirely new table for the new data, to keep things more cleanly divided, and to handle the possibility of there being new entries in the data that were not present in the existing database.  I also needed to update the way in which the URL tag was handled, as Ann had explained that there could be any number of URL tags, with them referencing other entries that had been merged with the current one.  After updating my test version of the database to make new tables and fields, and updating my script to take these changes into consideration, I ran both the DOST and the SND data through the script, resulting in 50,373 entries for DOST and 34,184 entries for SND.  This is actually fewer entries than in the old database.  There are 3023 missing SND entries and 1994 missing DOST entries.  They are all supplemental entries (with IDs starting ‘snds’ in SND and ‘adds’ in DOST).  This leaves just 24 DOST ‘adds’ entries in the Sienna data and 2730 SND ‘snds’ entries.  I’m not sure what’s going on with the output – whether the omission of these entries is intentional (because the entries have been merged with regular entries) or whether this is an error, but I have exported information about the missing rows and have sent these on to Ann for further investigation.

For SCOSYA I focussed on adding in the sample sound clips and groupings for location markers.  I also engaged with some preparations for the project’s ‘data hack’ that will be taking place in mid June.  Adding in sound clips took quite a bit of time, as I needed to update both the Content Management System to allow sound clips to be uploaded and managed, and the API to incorporate links to the uploaded sound clips.  This is in addition to incorporating the feature into the front-end.

Now if a member of staff logs into the CMS and goes to the ‘Browse codes’ page they will see a new column that lists the number of sound clips associated with a code.  I’ve currently uploaded the four for E1 ‘This needs washed’ for test purposes.  From the table, clicking on a code loads its page, which now includes a new section for sound clips.  Any previously uploaded ones are listed and can be played or deleted.  New clips in MP3 format can also be uploaded here, with files being renamed upon upload, based on the code and the next free auto-incrementing number in the database.

In the API all soundfiles are included in the ‘attributes’ endpoint, which is used by the drop-down list in the atlas.  The public atlas has also been updated to include buttons to play any sound clips that are available, as the screenshot towards the end of this post demonstrates.

There is now a new section labelled ‘Listen’ with four ‘Play’ icons.  Pressing on one of these plays a different sound clip.  Getting these icons to work has been trickier than you might expect.  HTML5 has a tag called <audio> that browsers can interpret in order to create their own audio player which is then embedded in the page.  This is what happens in the CMS.  Unfortunately an interface designer has no control over the display of the player – it’s different in every browser and generally takes up a lot of room, which we don’t really have.  I initially just used the HTML5 audio player but each sound clip then had to appear on a new row and the player in Chrome was too wide for the side panel.

Instead I’ve had to create my own audio player in JavaScript.  It still uses HTML5 audio so it works in all browsers, but it allows me to have complete control over the styling, which in this case means just a single button with a ‘Play’ icon on it that changes to a ‘Pause’ icon when the clip is playing.  But it also meant that functionality you might take for granted – such as ensuring an already playing clip is stopped when a different ‘play’ button is pressed, or resetting a ‘pause’ button back to a ‘play’ button when a clip ends – needed to be implemented by me.  I think it’s all working now, but there may still be some bugs.
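The state handling at the heart of this can be sketched as below. This isn’t the actual atlas code – the real version drives HTML5 <audio> elements and swaps Font Awesome-style icons in the DOM – but the ‘one clip at a time’ logic is the same, with the audio element stubbed out so it can be shown on its own:

```javascript
// One button per clip; 'current' tracks which button's clip is playing.
// Toggling the playing button pauses it; toggling a different button
// stops and resets whatever was playing first.

function createPlayerGroup() {
  let current = null;
  return {
    toggle(button) {
      if (current === button) {        // pressed the playing clip: pause it
        button.audio.pause();
        button.icon = 'play';
        current = null;
        return;
      }
      if (current) {                   // stop the clip that was already playing
        current.audio.pause();
        current.audio.currentTime = 0;
        current.icon = 'play';
      }
      button.audio.play();
      button.icon = 'pause';
      current = button;
    },
    // wired to the audio element's 'ended' event in the real player
    ended(button) {
      button.icon = 'play';
      if (current === button) current = null;
    }
  };
}

// stub standing in for an HTML5 Audio element
const stub = () => ({ currentTime: 0, play() {}, pause() {} });
const group = createPlayerGroup();
const a = { icon: 'play', audio: stub() };
const b = { icon: 'play', audio: stub() };

group.toggle(a);             // a starts playing
group.toggle(b);             // b starts, a is stopped and reset
console.log(a.icon, b.icon); // play pause
```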

I then moved on to looking at the groups for locations.  This will be a fixed list of groups, taken from ones that the team has already created.  I copied these groups to a new location in the database and updated the API to create new endpoints for listing groups and bringing back the IDs of all locations that are contained within a specified group.  I haven’t managed to get the ‘groups’ feature fully working yet, but the selection options are now in place.  There’s a ‘groups’ button in the ‘Examples’ section, and when you click on this a section appears listing the groups.  Each group appears as a button.  When you click on one a ‘tick’ is added to the button and currently the background turns a highlighted green colour.  I’m going to include several different highlighted colours so the buttons light up differently.  Although it doesn’t work yet, these colours will then be applied to the appropriate group of points / areas on the map.  You can see an example of the buttons below:

The only slight reservation I have is that this option does make the public atlas more complicated to use and more cluttered.  I guess it’s ok as the groups are hidden by default, though.  There also may be an issue with the sidebar getting too long for narrow screens that I’ll need to investigate.

Week Beginning 19th November 2018

This week I mainly worked on three projects:  The Historical Thesaurus, the Bilingual Thesaurus and the Romantic National Song Network.  For the HT I continued with the ongoing and seemingly never-ending task of joining up the HT and OED datasets.  Marc, Fraser and I had a meeting last Friday and I began to work through the action points from this meeting on Monday.  By Wednesday I had ticked off most of the items, which I’ll summarise here.

Whilst developing the Bilingual Thesaurus I’d noticed that search term highlighting on the HT site wasn’t working for quick searches, only advanced searches for words, so I investigated and fixed this.  I then updated the lexeme pattern matching / date matching script to incorporate the stoplist we’d created during last week’s meeting (words or characters that should be removed when comparing lexemes, such as ‘to’ and ‘the’).  This worked well and has bumped matches up to better colour levels, but has resulted in some words getting matched multiple times, as removing ‘to’, ‘of’ etc. can produce a form that then appears multiple times.  For example, in one category the OED has ‘bless’ twice (presumably an error?) and HT has ‘bless’ and ‘bless to’.  With ‘to’ removed there then appear to be more matches than there should be.  However, this is not an issue when dates are also taken into consideration.  I also updated the script so that categories where there are 3 matches and at least 66% of words match have been promoted from orange to yellow.
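The normalisation step can be sketched like this. The ‘to’ and ‘the’ entries come from the example above; the rest of the stoplist contents, and the exact matching rules of the real script, are assumptions for illustration:

```javascript
// Strip stoplisted words before comparing lexemes. After this, 'bless'
// and 'bless to' collapse to the same form, which is why a form can now
// match more than once when dates are ignored.

const stoplist = ['to', 'the', 'of']; // illustrative; the real list was agreed at the meeting

function normalise(lexeme) {
  return lexeme
    .toLowerCase()
    .split(/\s+/)
    .filter(w => !stoplist.includes(w))
    .join(' ');
}

console.log(normalise('bless to') === normalise('bless')); // true
console.log(normalise('the blessing'));                    // "blessing"
```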

When looking at the outputs at the meeting Marc wondered why certain matches (e.g. 120202 ‘relating to doctrine or study’ / ‘pertaining to doctrine/study’ and 88114 ‘other spec.’ / ‘other specific’) hadn’t been ticked off and wondered whether category heading pattern matching had worked properly.  After some investigation I’d say it has worked properly – the reason these haven’t been ticked off is they contain too few words to have reached the criteria for ticking off.

Another script we looked at during our meeting was the sibling matching script, which looks for matches at the same hierarchical level and part of speech, but different numbers.  I completely overhauled the script to bring it into line with the other scripts (including recent updates such as the stoplist for lexeme matching and the new yellow criteria).  There are currently 19, 17 and 25 green, lime green and yellow matches that could be ticked off.  I also ticked off the empty category matches listed on the ‘thing heard’ script (so long as they have a match) and for the ‘Noun Matching’ I ticked off the few matches that there were.  Most were empty categories and there were fewer than 15 in total.

Another script I worked on was the ‘monosemous’ script, which looks for monosemous forms in unmatched categories and tries to identify HT categories that also contain these forms.  We weren’t sure at the meeting whether this script identified words that were fully monosemous in the entire dataset, or those that were monosemous in the unmatched categories.  It turned out it was the former, so I updated the script to only look through the unchecked data, which has identified further monosemous forms.  This has helped to more accurately identify matched categories.  I also created a QA script that checks the full categories that have potentially been matched by the monosemous script.

I also worked on the date fingerprinting script.  This gets all of the start dates associated with lexemes in a category, plus a count of the number of times each date appears, and uses these to try and find matches in the HT data.  I updated this script to incorporate the stoplist and the ‘3 matches and 66% match’ yellow rule, and ticked off lots of matches that this script identified.  I ticked off all green (1556), lime green (22) and yellow (123) matches.
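A date fingerprint of this sort can be sketched as follows: collect the start dates, count how often each appears, and serialise the result so two categories can be compared with a simple string equality. The serialisation format here is my own for illustration, not necessarily what the real script produces:

```javascript
// Build a comparable 'fingerprint' from a category's lexeme start dates:
// each distinct date with its count, sorted, joined into one string.

function fingerprint(startDates) {
  const counts = {};
  for (const d of startDates) counts[d] = (counts[d] || 0) + 1;
  return Object.keys(counts)
    .sort((a, b) => a - b)             // numeric sort on the date keys
    .map(d => `${d}x${counts[d]}`)
    .join('|');
}

const oedCat = [1398, 1398, 1600];     // two words first recorded 1398, one 1600
const htCat  = [1600, 1398, 1398];     // same dates in a different order

console.log(fingerprint(oedCat));                        // "1398x2|1600x1"
console.log(fingerprint(oedCat) === fingerprint(htCat)); // true
```

Because order doesn’t matter, two categories with the same words entering the language at the same times match even when their headings differ.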

Out of curiosity, I wrote a script that looked at our previous attempt at matching the categories, which Fraser and I worked on last year and earlier this year.  The script looks at categories that were matched during this ‘v1’ process that had yet to be matched during our current ‘v2’ process.  For each of these the script performs the usual checks based on content: comparing words and first dates and colour coding based on number of matches (this includes the stoplist and new yellow criteria mentioned earlier).  There are 7148 OED categories that are currently unmatched but were matched in V1.  Almost 4000 of these are empty categories.  There are 1283 ‘purple’ matches, which means (generally) something is wrong with the match.  But there are 421 in the green, lime green and yellow sections, which is about 12% of the remaining unmatched OED categories that have words.  It might also be possible to spot some patterns to explain why they were matched during v1 but have yet to be matched in v2.  For example, 2711 ‘moving water’ has and its HT counterpart has  There are possibly patterns in the 1504 orange matches that could be exploited too.

Finally, I updated the stats page to include information about main and subcats.  Here are the current unmatched figures:

Unmatched (with POS): 8629

Unmatched (with POS and not empty): 3414

Unmatched Main Categories (with POS): 5036

Unmatched Main Categories (with POS and not empty): 1661

Unmatched Subcategories (with POS): 3573

Unmatched Subcategories (with POS and not empty): 1753

So we are getting there!

For the Bilingual Thesaurus I completed an initial version of the website this week.  I have replaced the original colour scheme with a ‘red, white and blue’ colour scheme as suggested by Louise.  This might be changed again, but for now here is an example of how the resource looks:

The ‘quick’ and ‘advanced’ searches are also now complete, using the ‘search words’ mentioned in a previous post, and ignoring accents on characters.  As with the HT, by default the quick search matches category headings and headwords exactly, so ‘ale’ will return results as there is a category ‘ale’ and also a word ‘ale’ but ‘bread’ won’t match anything because there are no words or categories with this exact text.  You need to use an asterisk wildcard to find text within word or category text:  ‘bread*’ would find all items starting with ‘bread’, ‘*bread’ would find all items ending in ‘bread’ and ‘*bread*’ would find all items with ‘bread’ occurring anywhere.
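Translating that convention into a database query is a small mapping – a term with no asterisk stays an exact match, and each asterisk becomes the SQL ‘%’ wildcard. A minimal sketch (assuming a SQL LIKE-based backend, which the post doesn’t actually specify):

```javascript
// Convert the user-facing asterisk convention into a SQL LIKE pattern.
// No asterisk: exact match. Otherwise escape LIKE's own wildcards in the
// term, then map each '*' to '%'.

function toLikePattern(term) {
  if (!term.includes('*')) return term;        // exact match, no LIKE needed
  return term.replace(/[%_]/g, '\\$&')         // escape literal % and _
             .replace(/\*/g, '%');
}

console.log(toLikePattern('ale'));     // "ale"
console.log(toLikePattern('bread*'));  // "bread%"
console.log(toLikePattern('*bread*')); // "%bread%"
```

So ‘ale’ only hits the exact headword or category, while ‘*bread*’ becomes `LIKE '%bread%'` and matches ‘gingerbread’, ‘breadcrumb’ and so on.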

The ‘advanced search’ lets you search for any combination of headword, category, part of speech, section, dates and languages of origin and citation.  Note that if you specify a range of years in the date search it brings back any word that was ‘active’ in your chosen period.  E.g. a search for ‘1330-1360’ will bring back ‘Edifier’ with a date of 1100-1350 because it was still in use in this period.
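This ‘active in the period’ rule is just the standard interval-overlap test: a word matches if its date range and the searched range overlap at all, which takes one comparison in each direction:

```javascript
// A word is 'active' in the searched period if the two ranges overlap:
// it started before (or as) the period ends, and ended after (or as)
// the period starts.

function activeInPeriod(wordStart, wordEnd, searchStart, searchEnd) {
  return wordStart <= searchEnd && wordEnd >= searchStart;
}

// 'Edifier' (1100-1350) was still in use during 1330-1360
console.log(activeInPeriod(1100, 1350, 1330, 1360)); // true
console.log(activeInPeriod(1100, 1320, 1330, 1360)); // false
```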

As with the HT, different search boxes are joined with ‘AND’ – e.g. if you tick ‘verb’ and select ‘Anglo Norman’ as the section then only words that are verbs AND Anglo Norman will be returned.  Where search types allow multiple options to be selected (i.e. part of speech and languages of origin and citation), if multiple options in each list are selected these are joined by ‘OR’.  E.g. if you select ‘noun’ and ‘verb’ and select ‘Dutch’, ‘Flemish’ and ‘Italian’ as languages of origin this will find all words that are either nouns OR verbs AND have a language of origin of Dutch OR Flemish OR Italian.

Search results display the full hierarchy leading to the category, the category name, plus the headword, section, POS and dates (if the result is a ‘word’ result rather than a ‘category’ result).  Clicking through to the category highlights the word. I also added in a ‘cite’ option to the category page and updated the ‘About’ page to add a sentence about the current website.  The footer still needs some work (e.g. maybe including logos for the University of Westminster and Leverhulme) and there’s a ‘terms of use’ page linked to from the homepage that currently doesn’t have any content, but other than that I think most of my work here is done.

For the Romantic National Song Network I continued to create timelines and ‘storymaps’ based on powerpoint presentations that had been sent to me.  This is proving to be a very time-intensive process, as it involves extracting images, audio files and text from the presentations, formatting the text as HTML, reworking the images (resizing, sometimes joining multiple images together to form one image, changing colour levels, saving the images, uploading them to the WordPress site), uploading the audio files, adding in the HTML5 audio tags to get the audio files to play, creating the individual pages for each timeline entry / storymap entry.  It took the best part of an afternoon to create one timeline for the project, which involved over 30 images, about 10 audio files and more than 20 Powerpoint slides.  Still, the end result works really well, so I think it’s worth putting the effort in.

In addition to these projects I met with a PhD student, Ewa Wanat, who wanted help in creating an app.  I spent about a day attempting to make a proof of concept for the app, but unfortunately the tools I work with are just not very well suited to the app she wants to create.  The app would be interactive and highly dependent on logging user interactions as accurately as possible.  I looked into using the d3.js library to create the sort of interface she wanted (a circle that rotates with smaller circles attached to it, that the user should tap on when a certain point in the rotation is reached), but although this worked, the ‘tap’ detection was not accurate enough.  In fact on touchscreens more often than not a ‘tap’ wasn’t even being registered.  D3.js just isn’t made to deal with time-sensitive user interaction on animated elements and I have no experience with any libraries that are made in this way, so unfortunately it looks like I won’t be able to help out with this project.  Also, Ewa wanted the app to be launched in January and I’m just far too busy with other projects to be able to do the required work in this sort of timescale.

Also this week I helped extract some data about the Seeing Speech and Dynamic Dialects videos for Eleanor Lawson, I responded to queries from Meg MacDonald and Jennifer Nimmo about technical work on proposals they are involved with, I responded to a request for advice from David Wilson about online surveys, and another request from Rachel Macdonald about the use of Docker on the SPADE server.  I think that’s just about everything to report.

Week Beginning 25th January 2016

I spent a fair amount of time this week working on the REELS project, which began last week. I set up a basic WordPress powered project website and got some network drive space set up and then on Wednesday we had a long meeting where we went over some of the technical aspects of the project. We discussed the structure of the project website and also the structure of the database that the project will require in order to record the required place-name data. I spent the best part of Thursday writing a specification document for the database and content management system which I sent to the rest of the project team for comment on Thursday evening. Next week I will update this document based on the team’s comments and will hopefully find the time to start working on the database itself.

I met with a PhD student this week to discuss online survey tools that might be suitable for the research that she was hoping to gather. I heard this week from Bryony Randall in English Literature that an AHRC proposal that I’d given her some technical advice on had been granted funding, which is great news. I had a brief meeting with the SCOSYA team this week too, mainly to discuss development of the project website. We’re still waiting on the domain being activated, but we’re also waiting for a designer to finish work on a logo for the project so we can’t do much about the interface for the project website until we get this anyway.

I also attended the ‘showcase’ session for the Digging into Data conference that was taking place at Glasgow this week. The showcase was an evening session where projects had stalls and could speak to attendees about their work. I was there with the Mapping Metaphor project, along with Wendy, Ellen and Rachael. We had some interesting and at times pretty in-depth discussions with some of the attendees and it was a good opportunity to see the sorts of outputs other projects have created with their data.

Before the event I went through the website to remind myself of how it all worked and managed to uncover a bug in the top-level visualisation: When you click on a category yellow circles appear at the categories that the one you’ve clicked on has a connection to. These circles represent the number of metaphorical connections between the two categories. What I noticed was that the size of the circles was not taking into consideration the metaphor strength that had been selected, which was giving confusing results. E.g. if there are 14 connections but only one of these is ‘strong’ and you’ve selected to view only strong metaphors the circle size was still being based on 14 connections rather than one. Thankfully I managed to track down the cause of the error and I fixed it before the event.
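The shape of the fix can be sketched like this. The connection objects and ‘strength’ values here are assumptions rather than the actual Mapping Metaphor data structures, but the principle is the same: filter by the selected strength before counting, and only then size the circle:

```javascript
// Count the connections that should feed the circle size. Before the
// fix the raw connection count was used regardless of the strength
// filter; the count must come from the filtered set.

function circleCount(connections, selectedStrength) {
  if (selectedStrength === 'all') return connections.length;
  return connections.filter(c => c.strength === selectedStrength).length;
}

// 14 connections, only one of them 'strong'
const conns = Array.from({ length: 14 }, (_, i) =>
  ({ strength: i === 0 ? 'strong' : 'weak' }));

console.log(circleCount(conns, 'all'));    // 14
console.log(circleCount(conns, 'strong')); // 1  (was 14 before the fix)
```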

I also spent a little bit of time further investigating the problems with the Curious Travellers server, which for some reason is blocking external network connections. I was hoping to install a ‘captcha’ on the contact form to cut down on the amount of spam that was being submitted and the Contact Form 7 plugin has a facility to integrate Google’s ‘reCaptcha’ service. This looked like it was working very well, but for some reason when ‘reCaptcha’ was added to forms these forms failed to submit, instead giving error messages in a yellow box. The Contact Form 7 documentation suggests that a yellow box means the content has been marked as spam and therefore won’t send, but my message wasn’t spam. Removing ‘reCaptcha’ from the form allowed it to submit without any issue. I tried to find out what was causing this but have been unable to find an answer. I can only assume it is something to do with the server blocking external connections and somehow failing to receive a ‘message is not spam’ notification from the service. I think we’re going to have to look at moving the site to a different server unless Chris can figure out what’s different about the settings on the current one.

My final project this week was SAMUELS, for which I am continuing to work on the extraction of the Hansard data. Last week I figured out how to run a test job on the Grid and I split the gigantic Hansard text file into 5000 line chunks for processing. This week I started writing a shell script that will be able to process these chunks. The script needs to do the same tasks as my initial PHP script, but because of the setup of the Grid I need to write a script that will run directly in the Bash shell. I’ve never done much with shell scripting so it’s taken me some time to figure out how to write such a script. So far I have managed to write a script that takes a file as an input, goes through each line at a time, splits the line up into two sections based on the tab character, base64 decodes each section and then extracts the parts of the first section into variables. The second section is proving to be a little trickier as the decoded content includes line breaks which seem to be ignored. Once I’ve figured out how to work with the line breaks I should then be able to isolate each tag / frequency pair, write the necessary SQL insert statement and then write this to an output file. Hopefully I’ll get this sorted next week.
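The real processing has to be a Bash shell script to run on the Grid, but the per-line logic can be sketched in JavaScript for clarity. The field layout inside each decoded section (and the tag/frequency format) is an assumption here, standing in for whatever the Hansard dump actually contains:

```javascript
// Each input line is two base64 blocks separated by a tab: metadata in
// the first, tag/frequency pairs in the second. Decode both; the second
// block's embedded line breaks separate the individual pairs.

function processLine(line) {
  const [meta, payload] = line.split('\t');
  const decode = s => Buffer.from(s, 'base64').toString('utf8');
  return { meta: decode(meta), payload: decode(payload) };
}

// Build an illustrative input line and round-trip it
const enc = s => Buffer.from(s, 'utf8').toString('base64');
const line = enc('1803-05-23|commons') + '\t' + enc('A01x3\nB02x1');
const out = processLine(line);

console.log(out.meta);                // "1803-05-23|commons"
console.log(out.payload.split('\n')); // [ 'A01x3', 'B02x1' ]
```

In the Bash version the tricky part, as noted above, is that the decoded line breaks in the second section are easily swallowed by the shell’s word splitting; quoting the decoded variable preserves them.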


Week Beginning May 5th 2014

This week is a three day week for me as Monday was May Day and I’m off on holiday on Friday (I’ll be away all next week).  When I got into work on Tuesday the first thing I set about tackling was an issue Vivien had raised with the Burns Choral website.  A colleague had tried to access the site using IE and was presented with nothing but a blank white screen.  I’d tested the site out previously with the current version of IE (version 11) and with it set to ‘compatibility mode’ and in the former case the site worked perfectly and in the latter case there were a couple of quirks with the menu but it was perfectly possible to access the content.  I made use of a free 30 minute trial of http://www.browserstack.com/, which provides access to older versions of IE via your browser and discovered that the blank screen issue was limited to IE versions 7 and 8, both of which were released years ago, are no longer supported by Microsoft and have a combined market share of less than 3%.  However, I still wanted to get to the bottom of the problem as I didn’t like the idea of a website displaying a blank screen.  I eventually managed to track down the cause, which is a bug in IE 7 and 8: when using a CSS file with an @font-face declaration the entire webpage disappears.  A couple of tweaks later and IE7 and 8 users could view the site again.

Also on Tuesday I attended a project meeting for Mapping Metaphor, the first meeting since the colloquium.  Things seem to be progressing quite well, and we arranged some further meetings to discuss the feedback from the test version of the visualisations and the poster for DH2014.  I’m probably not going to do much more development for the project until September due to other work commitments, but will focus back in on the project from September onwards.  I also spent a little time this week doing some further investigations into the new SLD project that Jean would like me to tackle.  I can’t really say much about it at this stage but I spent about half a day working with the data and figuring out what might be possible to do in the available time.

On Wednesday I finally managed to get to grips with phonegap and how to use it to wrap HTML5 / CSS3 / JavaScript websites as platform specific apps.  I put my new MacBook to good use, setting everything up on it afresh after my bungled attempt to get the software to work on my desktop machine months ago.  I approached the task of installing phonegap with some trepidation because of my previous failure, and the process of getting the software installed and working was still far from a straightforward process, but it was a success this time, thank goodness.  Although I installed Phonegap (http://phonegap.com/) I would recommend using Apache Cordova instead (http://cordova.apache.org/).  They are pretty much forks of the same code, but Phonegap is a bit of a mess, with documentation referring to cordova commands that just won’t run (because the command should be phonegap instead!) and certain commands giving error messages.  I eventually installed cordova and used this instead and it just worked a lot better.  So I finally managed to get both an Android and an iOS emulator running on my MacBook and created Android and iOS apps for ARIES based on the HTML5 / Javascript / CSS that I developed last year.  I’ve tested this out using the emulators and both versions appear to work perfectly, which is something of a relief.  The next step will be to actually test the apps on real hardware, but in order to do this for iOS I will have to sign up as a developer and pay the $99 annual subscription fee.  It’s something I’m going to do, but not until I get back from my holidays.

On Thursday I answered a query from a PhD student about online questionnaires and gave some advice on the options that are available.  Although I’ve previously used and recommended SurveyMonkey I would now recommend using Google Forms instead due to it being free and the way it integrates with Google Docs to give a nice spreadsheet of results.  The rest of Thursday I devoted to further DSL development, which mostly boiled down to getting text highlighting working in the entry page.  Basically if the user searches for a word or phrase then this string should be highlighted anywhere it appears within the entry.  As mentioned last week, I had hoped to be able to use XSLT for this task, but what on the face of it looked very straightforward proved to be rather tricky to implement due to the limitations of XSLT version 1, which is the only version supported by PHP.  Instead I decided to process highlighting using jQuery, and I managed to find a very neat little plugin that would highlight text in any element you specify (the plugin can be found here).
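The core of what such a plugin does can be sketched in a few lines. This is a simplified string-based version, not the plugin itself (which operates on the DOM and so avoids matching text inside tags – a weakness of the naive approach below):

```javascript
// Wrap every occurrence of the search term in a highlight span.
// Naive: works on the HTML as a string, so a term that happened to
// appear inside a tag or attribute would also be wrapped.

function highlight(html, term) {
  // escape regex metacharacters in the user's search term
  const safe = term.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return html.replace(new RegExp(safe, 'gi'),
                      m => `<span class="highlight">${m}</span>`);
}

console.log(highlight('A hous is a house.', 'hous'));
// 'A <span class="highlight">hous</span> is a <span class="highlight">hous</span>e.'
```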

That’s all for this week, which proved to be pretty busy despite being only three days long.  I will return to work the week beginning the 19th.

Week Beginning April 14th 2014

I only worked Monday and Tuesday this week as I took some additional time off over the Easter weekend, so there’s not a massive amount to report.  Other than providing some technical help to a postgrad student in English Language I spent all of my available time working on DSL related matters.  The biggest thing I achieved was to get a first version of the advanced search working.  This doesn’t include all options, only those that are currently available through the API, so for example users can’t select a part of speech to refine their search with.  But you can search for a word / phrase with wildcards, specify the search type (headwords, quotations or full text), specify the match type if it’s a headword search (whole / word / partial) and select a source (snd / dost / both).  There are pop-ups for help text for each of these options, currently containing placeholder text.

A quotation or full text search displays results using the ‘highlights’ field from the API, displaying every snippet for each headword (there can be multiple snippets) with the search word highlighted in yellow. If a specific source is selected then the results take up the full width of the page rather than half the width. Advanced search options are ‘remembered’, allowing the user to return to the search form to refine their search.  There is also an option to start a new advanced search.

I made a number of other smaller tweaks too, e.g. removing the ‘search results’ box from an entry when it is the only search result, and adding in pop-ups for DOST and SND in the results page (click on the links in the headings above each set of results to see these).  I’ve also added in placeholder pages for the bibliography pages linked to from entries, with facilities to return to the entry from the bibliography page.

Also this week I received the shiny new iPad that I had ordered and I spent a little time getting to grips with how it works.  I think it’s going to be very useful for testing websites!

Week beginning 11th November 2013

This was an important week, as the new version of the Historical Thesaurus went live!  I spent most of Monday moving the new site across to its proper URL, testing things, updating links, validating the HTML, putting in redirects from the old site and ensuring everything was working smoothly, and the final result was unveiled at the Samuels lecture on Tuesday evening.  You can find the new version here:
It was also the week that the new versions of the SCOTS corpus and CMSW sites went live, and I spent a lot of time on Tuesday and Wednesday setting these up, undertaking the same tasks as I did for the HT.  The new versions of SCOTS and CMSW can be accessed here:
I had to spend some extra time following the relaunch of SCOTS updating the ‘Thomas Crawford’s diary’ microsite.  This is a WordPress-powered site, and I updated the header and footer template files to give each page of the microsite the same header and footer as the rest of the CMSW site.  This looks a lot better than the old version, which didn’t have any link back to the main CMSW site.
I had a couple of meetings this week, the first with a PhD student who wants to OCR some 19th century Scottish court records.  I received some very helpful pointers on this from Jenny Bann, who was previously involved with the massive amount of OCR work that went on in the CMSW project, and she was able to give some good advice on how to improve the success rate of OCR on historical documents.
My second meeting was with a member of staff in English Literature who is putting together an AHRC bid.  Her project will involve the use of Google Books and the Ngram interface, plus developing some visualisations of links between themes in novels.  We had a good meeting and should hopefully proceed further with the writing of the bid in future weeks.  Having not used Ngrams much, I spent a bit of time researching it and playing around with it.  It’s a pretty amazing system and has lots of potential for research due to the size of the corpus and also the sophisticated query tools that are on offer.