This week began with Easter Monday, which was a holiday. I’d also taken Tuesday and Thursday off to cover some of the Easter school holidays so it was a two-day working week for me. I spent some of this time continuing to download and process images of library register books for the Books and Borrowing project, including 14 from St Andrews and several further books from Edinburgh. I was also in communication with one of the people responsible for the Dictionary of the Scots Language’s new editor interface regarding the export of new data from this interface and importing it into the DSL’s website. I was sent a ZIP file containing a sample of the data for SND and DOST, plus a sample of the bibliographical data, with some information on the structure of the files and some points for discussion.
I looked through all of the files and considered how I might be able to incorporate the data into the systems that I created for the DSL’s website. I should be able to run the new dictionary XML files through my upload script with only a few minor modifications required. It’s also really great that the bibliographies and cross references are getting sorted via the new editor interface. One point of discussion is that the new editor interface has generated new IDs for the entries, and the old IDs are not included. I reckoned that it would be good if the old IDs were included in the XML as well, just in case we ever need to match up the current data with older datasets. I did notice that the old IDs already appeared to be included in the <url> fields, but after discussion we decided that it would be safer to include them as an attribute of the <entry> tag, e.g. <entry oldid="snd848"> or something like that, which is what will happen when I receive the full dataset.
There are also new labels for entries, stating when and how the entry was prepared. The actual labels are stored in a spreadsheet and a numerical ID appears in the XML to reference a row in the spreadsheet. This method of dealing with labels seems fine to me – I can update my system to use the labels from the spreadsheet and display the relevant labels depending on the numerical codes in the entry XML. I reckon it’s probably better to not store the actual labels in the XML as this saves space and makes it easier to change the label text if required, as the text is then stored in only a single place.
The bibliographies are looking good in the sample data, but I pointed out that it might be handy to have a reference to the old bibliographical IDs in the XML, if that’s possible. There were also spurious xmlns="" attributes in the new XML, but these shouldn’t pose any problems and I said that it’s ok to leave them in. Once I receive the full dataset with some tweaks (e.g. the inclusion of old IDs) then I will do some further work on this.
I spent most of the rest of my available time working on the new Comparative Kingship place-names systems. I completed work on the Scotland CMS, including adding in the required parishes and former parishes. This means my place-name system has now been fully modernised and uses the Bootstrap framework throughout, which looks a lot better and works more effectively on all screen dimensions.
I also imported the data from GB1900 for the relevant parishes. There are more than 10,000 names, although a lot of these could be trimmed out – lots of ‘F.P.’ for footpath etc. It’s likely that the parishes listed are rather broader than the study area will be. All the names in and around St Andrews are in there, for example. In order to generate altitude data for each of the names imported from GB1900 I had to run a script I’d written that passes the latitude and longitude for each name in turn to Google Maps, which then returns elevation data. I had to limit the frequency of submissions to one every few seconds otherwise Google blocks access, so it took rather a long time to gather the altitudes of more than 10,000 names, but the process completed successfully.
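The throttling logic is simple but worth getting right, so here is a minimal sketch of the idea in Python (the actual script is PHP, and the fetch function here is a stand-in for the real Google elevation request):

```python
import time

def gather_elevations(names, fetch_elevation, delay_seconds=2.0, sleep=time.sleep):
    """For each (name_id, lat, lng) tuple, look up elevation via the supplied
    fetch_elevation(lat, lng) callable, pausing between requests so the
    remote service doesn't block us for submitting too frequently."""
    results = {}
    for i, (name_id, lat, lng) in enumerate(names):
        if i > 0:
            sleep(delay_seconds)  # throttle: one request every few seconds
        results[name_id] = fetch_elevation(lat, lng)
    return results
```

Passing the sleep function in makes the delay easy to disable when testing the script against a handful of names.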
Also this week I dealt with an issue with the SCOTS corpus, which had broken (the database had gone offline) and helped Raymond at Arts IT Support to investigate why the Anglo-Norman Dictionary server had been blocking uploads to the dictionary management system when thousands of files were added to the upload form. It turns out that while the Glasgow IP address range was added into the whitelist the VPN’s IP address range wasn’t, which is why uploads were being blocked.
Next week I’m also taking a couple of days off to cover the Easter School holidays, and will no doubt continue with the DSL and Comparative Kingship projects then.
This was week 15 of Lockdown, which I guess is sort of coming to an end now, although I will still be working from home for the foreseeable future and having to juggle work and childcare every day. I continued to work on the Books and Borrowing project for much of this week, this time focussing on importing some of the existing datasets from previous transcription projects. I had previously written scripts to import data from Glasgow University library and Innerpeffray library, which gave us 14,738 borrowing records. This week I began by focussing on the data from St Andrews University library.
The St Andrews data is pretty messy, reflecting the layout and language of the original documents, so I haven’t been able to fully extract everything and it will require a lot of manual correcting. However, I did manage to migrate all of the data to a test version of the database running on my local PC and then updated the online database to incorporate this data.
The data I’ve got are CSV and HTML representations of transcribed pages that come from an existing website with pages that look like this: https://arts.st-andrews.ac.uk/transcribe/index.php?title=Page:UYLY205_2_Receipt_Book_1748-1753.djvu/100. The links in the pages (e.g. Locks Works) lead through to further pages with information about books or borrowers. Unfortunately the CSV version of the data doesn’t include the links or the linked-to data, and as I wanted to try and pull in the data found on the linked pages I needed to process the HTML instead.
I wrote a script that pulled in all of the files in the ‘HTML’ directory and processed each in turn. From the filenames my script could ascertain the ledger volume, its dates and the page number. For example ‘Page_UYLY205_2_Receipt_Book_1748-1753.djvu_10.html’ is ledger 2 (1748-1753) page 10. The script creates ledgers and pages, and adds in the ‘next’ and ‘previous’ page links to join all the pages in a ledger together.
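The filename parsing can be sketched like this (the pattern is based only on the one sample filename above, so the real filenames may need a looser expression):

```python
import re

# Hypothetical pattern derived from the single sample filename quoted above;
# treat this as an illustration rather than the production expression.
FILENAME_PATTERN = re.compile(
    r'^Page_(?P<shelfmark>[A-Z0-9]+)_(?P<ledger>\d+)_Receipt_Book_'
    r'(?P<start>\d{4})-(?P<end>\d{4})\.djvu_(?P<page>\d+)\.html$'
)

def parse_page_filename(filename):
    """Extract ledger number, date range and page number from an HTML filename."""
    m = FILENAME_PATTERN.match(filename)
    if m is None:
        return None
    return {
        'ledger': int(m.group('ledger')),
        'start_year': int(m.group('start')),
        'end_year': int(m.group('end')),
        'page': int(m.group('page')),
    }
```

Returning None for anything that doesn't match means stray files in the directory can be logged and skipped rather than crashing the import.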
The actual data in the file posed further problems. As you can see from the linked page above, dates are just too messy to automatically extract into our strongly structured borrowed and returned date system. Often a record is split over multiple rows as well (e.g. the borrowing record for ‘Rollins belles Lettres’ is actually split over 3 rows). I could have just grabbed each row and inserted it as a separate borrowing record, which would then need to be manually merged, but I figured out a way to do this automatically. The first row of a record always appears to have a code (the shelf number) in the second column (e.g. J.5.2 for ‘Rollins’) whereas subsequent rows that appear to belong to the same record don’t (e.g. ‘on profr Shaws order by’ and ‘James Key’). I therefore set up my script to insert new borrowing records for rows that have codes, and to append any subsequent rows that don’t have codes to this record until a row with a code is reached again.
I also used this approach to set up books and borrowers too. If you look at the page linked to above again you’ll see that the links through to things are not categorised – some are links to books and others to borrowers, with no obvious way to know which is which. However, it’s pretty much always the case that it’s a book that appears in the row with the code and it’s people that are linked to in the other rows. I could therefore create or link to existing book holding records for links in the row with a code and create or link to existing borrower records for links in rows without a code. There are bound to be situations where this system doesn’t quite work correctly, but I think the majority of rows do fit this pattern.
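The first-row / continuation-row logic could be sketched as follows (the row structure here is a simplified, hypothetical version of what my script actually extracts from the HTML):

```python
def merge_rows(rows):
    """Group transcription rows into borrowing records.  A row whose 'code'
    column is non-empty starts a new record (its links are treated as books);
    subsequent code-less rows are appended to it (their links are treated as
    borrowers).  Row shape is a simplification: {'code', 'text', 'links'}."""
    records = []
    current = None
    for row in rows:
        if row['code'].strip():
            current = {'code': row['code'], 'transcription': row['text'],
                       'books': list(row['links']), 'borrowers': []}
            records.append(current)
        elif current is not None:
            current['transcription'] += ' ' + row['text']
            current['borrowers'].extend(row['links'])
        # rows appearing before the first coded row are skipped here;
        # in practice these would be flagged for manual checking
    return records
```

As noted above there will be pages where this heuristic gets it wrong, so the output still needs manual review, but it saves merging the majority of records by hand.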
The next thing I needed to do was to figure out which data from the St Andrews files should be stored as what in our system. I created four new ‘Additional Fields’ for St Andrews as follows:
- Original Borrowed date: This contains the full text of the first column (e.g. Decr 16)
- Code: This contains the full text of the second column (e.g. J.5.2)
- Original Returned date: This contains the full text of the fourth column (e.g. Jan. 5)
- Original returned text: This contains the full text of the fifth column (e.g. ‘Rollins belles Lettres V. 2d’)
In the borrowing table the ‘transcription’ field is set to contain the full text of the ‘borrowed’ column, but without links. Where subsequent rows contain data in this column but no code, this data is then appended to the transcription. E.g. the complete transcription for the third item on the page linked to above is ‘Rollins belles Lettres Vol 2<sup>d</sup> on profr Shaws order by James Key’.
The contents of all pages linked to in the transcriptions are added to the ‘editors notes’ field for future use if required. Both the page URL and the page content are included, separated by a bar (|) and if there are multiple links these are separated by five dashes. E.g. for the above the notes field contains:
‘Rollins_belles_Lettres| <p>Possibly: De la maniere d’enseigner et d’etuder les belles-lettres, Par raport à l’esprit & au coeur, by Charles Rollin. (A Amsterdam : Chez Pierre Mortier, M. DCC. XLV. ) <a href="http://library.st-andrews.ac.uk/record=b2447402~S1">http://library.st-andrews.ac.uk/record=b2447402~S1</a></p>
----- profr_Shaws| <p><a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1409683484">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1409683484</a></p>
----- James_Key| <p>Possibly James Kay: <a href="https://arts.st-andrews.ac.uk/biographical-register/data/documents/1389455860">https://arts.st-andrews.ac.uk/biographical-register/data/documents/1389455860</a></p>
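Assuming the exact spacing around the bar and the five-dash separator (which is my guess at the format rather than a documented spec), the notes field can be built and split apart again like this:

```python
def build_editors_notes(linked_pages):
    """Serialise (page_url, page_content) pairs into the notes format used
    above: URL and content separated by a bar, entries by five dashes."""
    return ' ----- '.join(f'{url}| {content}' for url, content in linked_pages)

def parse_editors_notes(notes):
    """Split a notes field back into (page_url, page_content) pairs."""
    pairs = []
    for chunk in notes.split(' ----- '):
        url, _, content = chunk.partition('| ')
        pairs.append((url, content))
    return pairs
```

Keeping the format round-trippable means the linked-page data can later be extracted programmatically if we decide to do something more structured with it.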
As mentioned earlier, the script also generates book and borrower records based on the linked pages too. I’ve chosen to set up book holding rather than book edition records as the details are all very vague and specific to St Andrews. In the holdings table I’ve set the ‘standardised title’ to be the page link with underscores replaced with dashes (e.g. ‘Rollins belles Lettres’) and the page content is stored in the ‘editors notes’ field. One book item is created for each holding to be used to link to the corresponding borrowing records.
For borrowers a similar process is followed, with the link added to the surname column (e.g. Thos Duncan) and the page content added to the ‘editors notes’ field (e.g. <p>Possibly Thomas Duncan: <a href=”https://arts.st-andrews.ac.uk/biographical-register/data/documents/1377913372″>https://arts.st-andrews.ac.uk/biographical-register/data/documents/1377913372</a></p>’). All borrowers are linked to records as ‘Main’ borrowers.
During the processing I noticed that the fourth ledger had a slightly different structure to the others, with entire pages devoted to a particular borrower, whose name then appeared in a heading row in the table. I therefore updated my script to check for the existence of this heading row; if it exists my script grabs the borrower name, creates the borrower record if it doesn’t already exist and then links this borrower to every borrowing item found on the page. After my script had finished running we had 11,147 borrowing records, 996 borrowers and 6,395 book holding records for St Andrews in the system.
I then moved onto looking at the data for Selkirk library. This data was more nicely structured than the St Andrews data, with separate spreadsheets for borrowings, borrowers and books, and with borrowers and books connected to borrowings via unique identifiers. Unfortunately the dates were still transcribed as they were written rather than being normalised in any way, which meant it was not possible to straightforwardly generate structured dates for the records; these will need to be manually generated. The script I wrote to import the data took about a day to write, and after running it we had a further 11,431 borrowing records across two registers and 415 pages entered into our database.
As with St Andrews, I created book records as Holding records only (i.e. associated specifically with the library rather than being project-wide ‘Edition’ records). There are 612 Holding records for Selkirk. I also processed the borrower records, resulting in 86 borrower records being added. I added the dates as originally transcribed to an additional field named ‘Original Borrowed Date’; the only other additional field is ‘Subject’ in the Holding records, which will eventually be merged with our ‘Genre’ feature when this becomes available.
Also this week I advised Katie on a file naming convention for the digitised images of pages that will be created for the project. I recommended that the filenames shouldn’t have spaces in them as these can be troublesome on some operating systems and that we’d want a character to use as a delimiter between the parts of the filename that wouldn’t appear elsewhere in the filename so it’s easy to split up the filename. I suggested that the page number should be included in the filename and that it should reflect the page number as it will be written into the database – e.g. if we’re going to use ‘r’ and ‘v’ these would be included. Each page in the database will be automatically assigned an auto-incrementing ID, and the only means of linking a specific page record in the database with a specific image will be via the page number entered when the page is created, so if this is something like ‘23r’ then ideally this should be represented in the image filename.
Katie had wondered about using characters to denote ledgers and pages in the filename (e.g. ‘L’ and ‘P’) but if we’re using a specific delimiting character to separate parts of the filename then using these characters wouldn’t be necessary and I suggested it would be better to not use ‘L’ as a lower case ‘l’ is very easy to confuse with a ‘1’ or a capital ‘I’ which might confuse future human users.
I suggested using a ‘-’ in place of spaces and a ‘_’ as the delimiter, and pointed out that we should ensure that no other non-alphanumeric characters are ever used in the filename – no apostrophes, commas, colons, semi-colons, ampersands etc. – and that we should make sure the ‘-’ is really a plain hyphen and not one of the fancy dashes (–) that get created by MS Office. This shouldn’t be an issue when entering a filename, but might be if a list of filenames is created in Word and then pasted into the ‘save as’ box, for example.
Finally, I suggested that it might be best to make the filenames entirely lower case, as some operating systems are case sensitive and if we don’t specify all lower case then there may be variation in the use of case. Following these guidelines the filenames would look something like this:
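A small script could validate filenames against these guidelines; the expected structure below (‘-’ within parts, ‘_’ between parts, a page part that may end in ‘r’ or ‘v’, a .jpg extension) is my illustration of the convention rather than the project’s final one, and the filenames in it are invented:

```python
import re

# Parts may only contain lower-case letters, digits and hyphens;
# the final part must be a page number, optionally with 'r' or 'v'.
VALID_PART = re.compile(r'^[a-z0-9-]+$')
VALID_PAGE = re.compile(r'^[0-9]+[rv]?$')

def is_valid_filename(filename, extension='.jpg'):
    """Check a digitised-page filename against the convention sketched above."""
    if filename != filename.lower() or not filename.endswith(extension):
        return False
    stem = filename[:-len(extension)]
    parts = stem.split('_')
    if len(parts) < 2:
        return False
    if not all(VALID_PART.match(p) for p in parts[:-1]):
        return False
    return VALID_PAGE.match(parts[-1]) is not None
```

Because the character class only admits a plain hyphen, the fancy dashes that MS Office likes to substitute are rejected automatically.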
In addition to the Books and Borrowing project I worked on a number of other projects this week. I gave Matthew Creasy some further advice on using forums in his new project website, and the ‘Scottish Cosmopolitanism at the Fin de Siècle’ website is now available here: https://scoco.glasgow.ac.uk/.
I also worked a bit more on using dates from the OED data in the Historical Thesaurus. Fraser had sent me a ZIP file containing the entire OED dataset as 240 XML files and I began analysing these to figure out how we’d extract the dates so that we could use them to update the dates associated with the lexemes in the HT. I needed to extract the quotation dates as these have ‘ante’ and ‘circa’ notes, plus labels. I noted that in addition to ‘a’ and ‘c’ a question mark is also used, sometimes with an ‘a’ or ‘c’ and sometimes without. I decided to process things as follows:
- ?a will just be ‘a’
- ?c will just be ‘c’
- ? without an ‘a’ or ‘c’ will be ‘c’.
I also noticed that a date may sometimes be a range (e.g. 1795-8) so I needed to include a second date column in my data structure to accommodate this. I also noted that there are sometimes multiple Old English dates, and the contents of the ‘date’ tag vary depending on the date – sometimes the content is ‘OE’ and other times ‘lOE’ or ‘eOE’. I decided to process any OE dates for a lexeme as being 650 and to have only one OE date stored, so as to align with how OE dates are stored in the HT database (we don’t differentiate between dates for OE words).
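Putting these rules together, the date normalisation could look something like this (a sketch; the real extraction script also has to deal with labels and the surrounding XML):

```python
import re

def normalise_oed_date(raw):
    """Convert a raw OED quotation date into (prefix, year, end_year) using
    the rules above: '?a' becomes 'a', '?c' becomes 'c', a bare '?' becomes
    'c'; any Old English marker ('OE', 'lOE', 'eOE') becomes the year 650;
    a range such as '1795-8' has its abbreviated second year expanded."""
    raw = raw.strip()
    if raw in ('OE', 'lOE', 'eOE'):
        return ('', 650, None)
    prefix = ''
    if raw.startswith('?a'):
        prefix, raw = 'a', raw[2:]
    elif raw.startswith('?c'):
        prefix, raw = 'c', raw[2:]
    elif raw.startswith('?'):
        prefix, raw = 'c', raw[1:]
    elif raw.startswith(('a', 'c')):
        prefix, raw = raw[0], raw[1:]
    m = re.match(r'^(\d{3,4})(?:-(\d{1,4}))?$', raw)
    if m is None:
        return None  # unparseable date, flag for manual inspection
    year = int(m.group(1))
    end = None
    if m.group(2):
        tail = m.group(2)
        # '1795-8' -> 1798: the tail replaces the final digits of the first year
        end = int(str(year)[:-len(tail)] + tail) if len(tail) < len(str(year)) else int(tail)
    return (prefix, year, end)
```

Returning None for anything unexpected means oddly formatted quotation dates surface in a report rather than silently producing wrong years.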
While running my date extraction script over one of the XML files I also noticed that there were lexemes in the OED data that were not present in the OED data we had previously extracted. This presumably means the dataset Fraser sent me is more up to date than the dataset I used to populate our online OED data table, so we’ll no doubt need to update that table. As we link to the HT lexeme table using the OED catid, refentry, refid and lemmaid fields, replacing the online OED lexeme table with the data in these XML files should (hopefully) retain the connections from OED to HT lexemes without issue, but any matching processes we performed would need to be run again for the new lexemes.
I set my extraction script running on the OED XML files on Wednesday and processing took a long time. The script didn’t complete until sometime during Friday night, but by the time it had finished it had processed 238,699 categories and 754,285 lexemes, generating 3,893,341 date rows. It also found 4,062 new words in the OED data that it couldn’t process because they don’t exist in our OED lexeme database.
I also spent a bit more time working on some scripts for Fraser’s Scots Thesaurus project. The scripts now ignore ‘additional’ entries and only include ‘n.’ entries that match an HT ‘n’ category. Variant spellings (all tagged with <form>) are also removed. I also created a new field that stores only the ‘NN_’ tagged words, with all others removed.
The scripts generated three datasets, which I saved as spreadsheets for Fraser. The first (postagged-monosemous-dost-no-adds-n-only) contains all of the content that matches the above criteria. The second (postagged-monosemous-dost-no-adds-n-only-catheading-match) lists those lexemes where a postagged word fully matches the HT category heading. The final (postagged-monosemous-dost-no-adds-n-only-catcontents-match) lists those lexemes where a postagged word fully matches a lexeme in the HT category. For this table I’ve also added in the full list of lexemes for each HT category too.
I also spent a bit of time working on the Data Management Plan for the new project for Jane Stuart-Smith and Eleanor Lawson at QMU and arranged for a PhD student to get access to the TextGrid files that were generated for the audio records for the SCOTS Corpus project.
Finally, I investigated the issue the DSL people are having with duplicate child entries appearing in their data. This was due to something not working quite right in a script Thomas Widmann had written to extract the data from the DSL’s editing system before he left last year, and Ann had sent me some examples of where the issue was cropping up.
I have the data that was extracted from Thomas’s script last July as two XML files (dost.xml and snd.xml) and I looked through these for the examples Ann had sent. The entry for snd13897 contains the following URLs:
The first is the ID for the main entry and the other two are child entries. If I search for the second one (snds3788) this is the only occurrence of the ID in the file, as the child entry has been successfully merged. But if I search for the third one (sndns2217) I find a separate entry with this ID (with more limited content). The pulling in of data into a webpage in the V3 site uses URLs stored in a table linked to entry IDs. These were generated from the URLs in the entries in the XML file (see the <url> tags above). For the URL ‘sndns2217’ the query finds multiple IDs, one for the entry snd13897 and another for the entry sndns2217. But it finds snd13897 first, so it’s the content of this entry that is pulled into the page.
The entry for dost16606 contains the following URLs:
(in addition to headword URLs). Searching for the second one discovers a separate entry with the ID dost50272 (with more limited content). As with SND, searching the URL table for this URL finds two IDs, and as dost16606 appears first this is the entry that gets displayed.
What we need to do is remove the child entries that still exist as separate entries in the data. To do this I could write a script that would go through each entry in the dost.xml and snd.xml files. It would pick out every <url> that is not the same as the entry ID and search the file to see if any entry exists with this ID. If one does then presumably this is a duplicate that should be deleted. I’m waiting to hear back from the DSL people to see how we should proceed with this.
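The duplicate-finding script could work along these lines (the XML structure shown is a simplified stand-in for the real DSL files, which are much richer):

```python
import xml.etree.ElementTree as ET

def find_duplicate_children(xml_string):
    """Return IDs of child entries that were supposed to be merged into a
    parent but still exist as separate <entry> elements.  Assumes a
    simplified structure of <entries><entry id="..."><url>...</url></entry>."""
    root = ET.fromstring(xml_string)
    entry_ids = {e.get('id') for e in root.iter('entry')}
    duplicates = set()
    for entry in root.iter('entry'):
        for url in entry.findall('url'):
            child_id = (url.text or '').strip()
            # a <url> pointing at a different entry that still has its own
            # <entry> element is an unmerged duplicate
            if child_id and child_id != entry.get('id') and child_id in entry_ids:
                duplicates.add(child_id)
    return duplicates
```

Running something like this over dost.xml and snd.xml would give the DSL team a list to review before anything is actually deleted.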
As you can no doubt gather from the above, this was a very busy week but I do at least feel that I’m getting on top of things again.
During week 11 of Lockdown I continued to work on the Books and Borrowing project, but also spent a fair amount of time catching up with other projects that I’d had to put to one side due to the development of the Books and Borrowing content management system. This included reading through the proposal documentation for Jennifer Smith’s follow-on funding application for SCOSYA, writing a new version of the Data Management Plan based on this updated documentation and making some changes to the ‘export data for print publication’ facility for Carole Hough’s REELS project. I also spent some time creating a new export facility to format the place-name elements and any associated place-names for print publication too.
During this week a number of SSL certificates expired for a bunch of websites, which meant browsers were displaying scary warning messages when people visited the sites. I had to spend a bit of time tracking these down and passing the details over to Arts IT Support for them to fix as it is not something I have access rights to do myself. I also liaised with Mike Black to migrate some websites over from the server that houses many project websites to a new server. This is because the old server is running out of space and is getting rather temperamental and freeing up some space should address the issue.
I also made some further tweaks to Paul Malgrati’s interactive map of Burns’ Suppers and created a new WordPress-powered project website for Matthew Creasy’s new ‘Scottish Cosmopolitanism at the Fin de Siècle’ project. This included the usual choosing a theme, colour schemes and fonts, adding in header images and footer logos and creating initial versions of the main pages of the site. I’d also received a query from Jane Stuart-Smith about the audio recordings in the SCOTS Corpus so I did a bit of investigation about that.
Fraser Dallachy had got back to me with some further tasks for me to carry out on the processing of dates for the Historical Thesaurus, and I had intended to spend some time on this towards the end of the week, but when I began to look into this I realised that the scripts I’d written to process the old HT dates (comprising 23 different fields) and to generate the new, streamlined date system that uses a related table with just 6 fields were sitting on my PC in my office at work. Usually all the scripts I work on are located on a server, meaning I can easily access them from anywhere by connecting to the server and downloading them. However, sometimes I can’t run the scripts on the server as they may need to be left running for hours (or sometimes days) if they’re processing large amounts of data or performing intensive tasks on the data. In these cases the scripts run directly on my office PC, and this was the situation with the dates script. I realised I would need to get into my office at work to retrieve the scripts, so I put in a request to be allowed into work. Staff are not currently allowed to just go into work – instead you need to get approval from your Head of School and then arrange a time that suits security. Thankfully it looks like I’ll be able to go in early next week.
Other than these issues, I spent my time continuing to work for the Books and Borrowing project. On Tuesday we had a Zoom call with all six members of the core project team, during which I demonstrated the CMS as it currently stands. This gave me an opportunity to demonstrate the new Author association facilities I had created last week. The demonstration all went very smoothly and I think the team are happy with how the system works, although no doubt once they actually begin to use it there will be bugs to fix and workflows to tweak. I also spent some time before the meeting testing the system again, and fixing some issues that were not quite right with the author system.
I spent the remainder of my time on the project completing work on the facility to add, edit and view book holding records directly via the library page, as opposed to doing so whilst adding / editing a borrowing record. I also implemented a similar facility for borrowers as well. Next week I will begin to import some of the sample data from various libraries into the system and will allow the team to access the system to test it out.
I spent some time this week investigating the final part of the SCOSYA online resource that I needed to implement: A system whereby researchers could request access to the full audio dataset and a member of the team could approve the request and grant the person access to a facility where the required data could be downloaded. Downloads would be a series of large ZIP files containing WAV files and accompanying textual data. As we wanted to restrict access to legitimate users only I needed to ensure that the ZIP files were not directly web accessible, but were passed through to a web accessible location on request by a PHP script.
I created a test version using a 7.5Gb ZIP file that had been created a couple of months ago for the project’s ‘data hack’ event. This version can be set up to store the ZIP files in a non-web accessible directory and then grab a file and pass it through to the browser on request. It will be possible to add user authentication to the script to ensure that it can only be executed by a registered user. The actual location of the ZIP files is never divulged so neither registered nor unregistered users will ever be able to directly link to or download the files (other than via the authenticated script).
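The pass-through idea, sketched here in Python rather than the PHP the project actually uses, looks something like this:

```python
import os

def stream_protected_file(storage_dir, filename, user_is_authorised, chunk_size=8192):
    """Yield chunks of a ZIP held outside the web root, but only for an
    authorised user and only for plain filenames (no path components that
    could escape the storage directory).  An illustrative sketch of the
    approach, not the project's actual script."""
    if not user_is_authorised:
        raise PermissionError('authentication required')
    if os.path.basename(filename) != filename:
        raise ValueError('invalid filename')
    path = os.path.join(storage_dir, filename)
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

Streaming in chunks rather than loading the whole archive into memory matters here, since the files involved run to several gigabytes.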
This all sounds promising but I realised that there are some serious issues with this approach. HTTP is not really intended for transferring huge files, and using this web-based method to download massive ZIP files is just not going to work very well. The test ZIP file I used was about 7.5Gb in size (roughly the size of a DVD), but the actual ZIP files are likely to be much larger than this – with the full dataset taking up about 180Gb. Even using my desktop PC on the University network it took roughly 30 minutes to download the 7.5Gb file. Using an external network would likely take a lot longer, and bigger files are likely to be pretty unmanageable for people to download.
It’s also likely that a pretty small number of researchers will be requesting the data, and if this is the case then perhaps it’s not such a good idea to take up 180Gb of web server space (plus the overheads of backups) to store data that is seldomly going to be accessed, especially if this is simply replicating data that is already taking up a considerable amount of space on the shared network drive. 180Gb is probably more web space than is used by most other Critical Studies websites combined. After discussing this issue with the team, we decided that we would not set up such a web-based resource to access the data, but would instead send ZIP files on request to researchers using the University’s transfer service, which allows files of up to 20Gb to be sent to both internal and external email addresses. We’ll need to see how this approach works out, but I think it’s a better starting point than setting up our own online system.
I also spent some further time on the SCOSYA project this week implementing some changes to both the experts and the public atlases based on feedback from the team. This included changing the default map position and zoom level, replacing some of the colours used for map markers and menu items, tweaking the layout of the transcriptions, ensuring certain words in story titles can appear in bold (as opposed to the whole title being bold as was previously the case) and removing descriptions from the list of features found in the ‘Explore’ menu in the public atlas. I also added a bit of code to ensure that internal links from story pages to other parts of the public atlas would work (previously they weren’t doing anything because only the part after the hash was changing). I also ensured that the experts atlas side panel resizes to fit the content whenever an additional attribute is added or removed.
Also this week I finally found a bit of time to fix the map on the advanced search page of the SCOTS Corpus website. This map was previously powered by Google Maps, but they have now removed free access to the Google Maps service (you now need to provide a credit card and get billed if your usage goes over a certain number of free hits a month). As we hadn’t updated the map or provided such details Google broke the map, covering it with warning messages and removing our custom map styles. I have now replaced the Google Maps version with a map created using the free-to-use Leaflet.js mapping library (which I’m also using for SCOSYA) and a free map tileset from OpenStreetMap. Other than that it works in exactly the same way as the old Google Map. The new version is now live here: https://www.scottishcorpus.ac.uk/advanced-search/.
Also this week I upgraded all of the WordPress sites I manage, engaged in some App Store duties and had a further email conversation with Marc Alexander about how dates may be handled in the Historical Thesaurus. I also engaged in a long email conversation with Heather Pagan of the Anglo-Norman Dictionary about accessing the dictionary data. Heather has now managed to access one of the servers that the dictionary website runs on and we’re now trying to figure out exactly where the ‘live’ data is located so that I can work with it. I also fixed a couple of issues with the widgets I’d created last week for the GlasgowMedHums project (some test data was getting pulled into them) and tweaked a couple of pages. The project website is launching tomorrow so if anyone wants to access it they can do so here: https://glasgowmedhums.ac.uk/
Finally, I continued to work on the new API for the Dictionary of the Scots Language, implementing the bibliography search for the ‘v2’ API. This version of the API uses data extracted from the original API, and the test website I’ve set up to connect to it should be identical to the live site, but connects to the ‘v2’ API to get all of its data and in no way connects to the old, undocumented API. API calls to search the bibliographies (both a predictive search used for displaying the auto-complete results and to populate a full search results page), and to display an individual bibliography are now available and I’ve connected the test site to these API calls, so staff can search for bibliographies here.
Whilst investigating how to replicate the original API I realised that the bibliography search on the live site is actually a bit broken. The ‘Full Text’ search simply doesn’t work, but instead just does the same as a search for authors and titles (in fact the original API doesn’t even include a ‘full text’ option). Also, results only display authors, so for records with no author you get some pretty unhelpful results. I did consider adding in a full-text search, but as bibliographies contain little other than authors and titles there didn’t seem much point, so instead I’ve removed the option. The search is primarily set up as an auto-complete, which matches words in authors or titles that begin with the characters being typed (i.e. a wildcard search such as ‘wild*’), and the full search results page only gets displayed if someone ignores the auto-complete list of results and manually presses the ‘Search’ button, so I’ve made the full search results page always work as a ‘wild*’ search too. Typing ‘aber’ into the search box and pressing ‘Search’ will therefore bring up a list of all bibliographies with titles / authors featuring a word beginning with these characters. With the previous version this wasn’t the case – you had to add a ‘*’ after ‘aber’ otherwise the full search results page would match ‘aber’ exactly and find nothing. I’ve updated the help text on the bibliography search page to explain this a bit.
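In SQL terms the ‘wild*’ behaviour is essentially a LIKE query with a trailing wildcard, matching either the start of the field or the start of any word within it. A simplified sketch, with a hypothetical table layout rather than the DSL’s actual schema:

```python
import sqlite3

def prefix_search(conn, term):
    """Match bibliographies whose author or title contains a word beginning
    with the typed characters, i.e. a 'wild*'-style search."""
    # strip any wildcard characters the user typed, then add our own
    like = term.replace('%', '').replace('_', '') + '%'
    return conn.execute(
        "SELECT id, author, title FROM bibliographies "
        "WHERE author LIKE ? OR author LIKE ? OR title LIKE ? OR title LIKE ?",
        (like, '% ' + like, like, '% ' + like)).fetchall()
```

Matching on both the field start and ‘% ’ + prefix is a cheap way to approximate word-initial matching without a full-text index, which seems proportionate for data this small.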
The full search results (and the results side panel) in the new version now include titles as well as authors, which makes things clearer, and I’ve also made the search results numbering appear at the top of the corresponding result text rather than on the last line. This is the case for entry searches too. Once the test site has been fully tested and approved we should be able to replace the live site with the new site (ensuring all WordPress content from the live site is carried over, of course). Doing so will mean the old server containing the original API can (once we’re confident all is well) be switched off. There is still the matter of implementing the bibliography search for the V3 data, but as mentioned previously this will probably be best tackled once we have sorted out the issues with the data and are getting ready to launch the new version.
I focussed on the SCOSYA project for the first few days of this week. I need to get everything ready to launch by the end of September and there is an awful lot still left to do, so this is really my priority at the moment. I’d noticed over the weekend that the story pane wasn’t scrolling properly on my iPad when the length of the slide was longer than the height of the atlas. In such cases the content was just getting cut off and you couldn’t scroll down to view the rest or press the navigation buttons. This was weird as I thought I’d fixed this issue before. I spent quite a bit of time on Monday investigating the issue, which has resulted in me having to rewrite a lot of the slide code. After much investigation I reckoned that this was an intermittent fault caused by the code returning a negative value for the height of the story pane instead of its real height. When the user presses the button to load a new slide the code pulls the HTML content of the slide in and immediately displays it. After that another part of the code then checks the height of the slide to see if the new contents make the area taller than the atlas, and if so the story area is then resized. The loading of the HTML using jQuery’s html() function should be ‘synchronous’ – i.e. the following parts of code should not execute before the loading of the HTML is completed. But sometimes this wasn’t the case – the new slide contents weren’t being displayed before the check for the new slide height was being run, meaning the slide height check was giving a negative value (no contents minus the padding round the slide). The slide contents then displayed but as the code thought the slide height was less than the atlas it was not resizing the slide, even when it needed to. It is a bit of a weird situation as according to the documentation it shouldn’t ever happen. 
I’ve had to put a short ‘timeout’ into the script as a work-around – after the slide loads the code waits for half a second before checking for the slide height and resizing, if necessary. This seems to be working but it’s still annoying to have to do this. I tested this out on my Android phone and on my desktop Windows PC with the browser set to a narrow height and all seemed to be working. However, when I got home I tested the updated site out on my iPad and it still wasn’t working, which was infuriating as it was working perfectly on other touchscreens.
In order to fix the issue I needed to entirely change how the story pane works. Previously the story pane was just an HTML area that I’d added to the page and then styled to position within the map, but there were clearly some conflicts with the mapping library Leaflet when using this approach. The story pane was positioned within the map area and mouse actions that Leaflet picks up (scrolling and clicking for zoom and pan) were interfering with regular mouse actions in the HTML story area (clicking on links, scrolling HTML areas). I realised that scrolling within the menu on the left of the map was working fine on the iPad so I investigated how this differed from the story pane on the right. It turned out that the menu wasn’t just a plain HTML area but was instead created by a plugin for Leaflet that extends Leaflet’s ‘Control’ options (used for buttons like ‘+/-‘ and the legend). Leaflet automatically prevents the map’s mouse actions from working within its control areas, which is why scrolling in the left-hand menu worked. I therefore created my own Leaflet plugin for the story pane, based on the menu plugin. Using this method to create the story area thankfully worked on my iPad, but it did unfortunately take several hours to get things working, which was time I should ideally have been spending on the Experts interface. It needed to be done, though, as we could hardly launch an interface that didn’t work on iPads.
I also had to spend some further time this week making some more tweaks to the story interface that the team had suggested, such as changing the marker colour for the ‘Home’ maps, updating some of the explanatory text and changing the pop-up text on the ‘Home’ map to add in buttons linking through to the stories. The team also wanted to be able to have blank maps in the stories, to make users focus on the text in the story pane rather than getting confused by all of the markers. Having blank maps for a story slide wasn’t something the script was set up to expect, and although it was sort of working, if you navigated from a map with markers to a blank map and then back again the script would break, so I spent some time fixing this. I also managed to find a bit of time starting on the experts interface, although less time than I had hoped. For this I’ve needed to take elements from the atlas I’d created for staff use, but adapt it to incorporate changes that I’d introduced for the public atlas. This has basically meant starting from scratch and introducing new features one by one. So far I have the basic ‘Home’ map showing locations and the menu working. There is still a lot left to do.
I spent the best part of two days this week working on the front-end for the 18th Century Borrowing pilot project for Matthew Sangster. I wrote a little document that detailed all of the features I was intending to develop and sent this to Matt so he could check whether what I was doing met his expectations. I spent the rest of the time working on the interface, and made some pretty good progress. So far I’ve made an initial interface for the website (which is just temporary and any aspect of which can be changed as required), I’ve written scripts to generate the student forename / surname and professor title / surname columns to enable searching by surname, and I’ve created thumbnails of the images. The latter was a bit of a nightmare as previously I’d batch rotated the images 90 degrees clockwise as the manuscripts (as far as I could tell) were written in landscape format but the digitised images were portrait, meaning everything was on its side.
However, I did this using the Windows image viewer, which gives the option of applying the rotation to all images in a folder. What I didn’t realise is that the image viewer doesn’t update the metadata embedded in the images, and this information is used by browsers to decide which way round to display the images. I ended up in a rather strange situation where the images looked perfect on my Windows PC, and also when opened directly within the browser, but when embedded in an HTML page they appeared on their side. It took a while to figure out why this was happening, but once I did I regenerated the thumbnails using the command-line ImageMagick tool instead, which I set to wipe the image metadata as well as rotate the images, which seemed to work. That is until I realised that Manuscript 6 was written in portrait not landscape, so I had to repeat the process again, this time missing out Manuscript 6. I have since realised that all the batch processing of images I did to generate tiles for the zooming and panning interface is also now going to be wrong for all landscape images and I’m going to have to redo all of this too.
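Scripted with ImageMagick’s command-line tools, the fix amounts to one mogrify call per manuscript folder, rotating and stripping metadata in a single pass. A dry-run sketch that just builds the commands; folder names and the portrait list are invented for illustration:

```python
from pathlib import Path

def build_commands(manuscript_dirs, portrait=("MS6",)):
    """Build the mogrify invocations without running them: rotate each
    landscape manuscript's scans 90 degrees clockwise and strip the
    embedded metadata (-strip) that was confusing browsers. Portrait
    manuscripts (here assumed to be named 'MS6') are left alone."""
    commands = []
    for d in manuscript_dirs:
        if Path(d).name in portrait:
            continue  # already the right way up
        commands.append(["mogrify", "-rotate", "90", "-strip",
                         str(Path(d) / "*.jpg")])
    return commands
```

In practice each command list would be handed to subprocess.run, or typed straight into a shell.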
Anyway, I also built the facility where a user can browse the pages of the manuscripts, enabling them to select a register, view the thumbnails of each page contained therein and then click through to view all of the records on the page. This ‘view records’ page has both a text and an image view. The former displays all of the information about each record on the page in a tabular manner, including links through to the GUL catalogue and the ESTC. The latter presents the image in a zoomable / pannable manner, but as mentioned earlier, the bloody image is on its side for any manuscript written in a landscape way and I still need to fix this, as the following screenshot demonstrates:
Also this week I spent a further bit of time preparing for my PDR session that I will be having next week, spoke to Wendy Anderson about updates to the SCOTS Corpus advanced search map that I need to fix, fixed an issue with the Medical Humanities Network website, made some further tweaks to the RNSN song stories and spoke to Ann Ferguson at the DSL about the bibliographical data that needs to be incorporated into the new APIs. Another pretty busy week, all things considered.
I continued with the group statistics feature for the SCOSYA project this week. Last week Gary had let me know that he was experiencing issues when using the feature with a large group he had created, so I did some checking of functionality. I created a group with 140 locations in it and tried out the feature with a variety of searches on a variety of devices, operating systems and browsers but didn’t encounter any issues. Thankfully it turned out that Gary needed to clear his browser’s cache, and with that done the feature worked perfectly for him. Gary had also reported an issue with the data export facility I created a while back for the project team to use. It was working fine if limits on the returned data were included, but gave nothing but a blank page when all the data was requested. After a bit of investigation I reached the conclusion that it must be some kind of limit imposed on the server, and a quick check with Chris revealed that when the script returned all of the data it was exceeding a memory limit. When Chris increased the limit the script began to work perfectly again.
In addition to these investigations I added a couple of new pieces of functionality to the group statistics feature. I added in the option to show or hide locations that are not part of your selected group, allowing the user to cut down on the clutter and focus on the locations that they are particularly interested in. I also added in an option to download the data relating specifically to the user’s selected locations, rather than for all locations. This meant updating the project’s API to allow any number of locations to be included in the GET request sent to the server. Unfortunately this uncovered another server setting that was preventing certain requests working. With many locations selected the URL sent to the API is very long, and in such cases the request was not fully getting through to my API scripts but was instead getting blocked by the server. Rather than processing the request, the server was displaying the API’s default index page, without the CSS file properly loading. With shorter URLs the request got through fine. I checked with Chris and a setting on the server was limiting URL parameters to 512 characters in length. Chris increased this and the request got through and returned the required data. With this issue out of the way the ‘download group data’ feature worked properly. I had been making these changes on a temporary version of the atlas in the CMS, but with everything in place I moved my temporary version over to the main atlas, and all seems to be working well.
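For what it’s worth, 512 characters is the default per-value GET cap in the Suhosin PHP hardening extension, so that may well have been the setting involved; I don’t know exactly which setting Chris changed, but raising that kind of limit looks something like this in php.ini (the new value is an arbitrary example):

```ini
; Assumption: Suhosin's default cap of 512 characters per GET value was
; truncating the long list of locations sent to the API.
suhosin.get.max_value_length = 4096
```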
I had a few meetings this week. The first was with someone from a start-up company who are wanting to develop some kind of transcription service. We talked about the SCOTS corpus and its time-aligned transcriptions of audio files. I’m not sure how much help I really was, however, as what they really need is a tool to create such transcriptions rather than publish them, and the SCOTS project used a different tool to do this called PRAAT. The guy is going to meet with Jane Stuart-Smith who should be able to give more information on this side of things, and also with Wendy Anderson who knows a bit more about the history of the SCOTS project than I do, so maybe these subsequent meetings will be more useful. I also met with Ewa Wanat, a PhD student in English Language, who is wanting to put together an app about rhythm and metre in English. I gave her some advice about the sorts of tools she could use to develop the app and showed her the ‘English Metre’ app I created last year. She already has a technical partner in mind for her project so probably won’t need me to do the actual work, but I think I was able to give her some useful advice. I also met with Scott Spurlock from Theology, for whom I will be creating a crowdsourcing tool that will be used to transcribe some records of the Church of Scotland. There has been a bit of a delay in getting the images for the project, and Scott hasn’t decided what URL he would like for the project, but once these things are sorted I’ll be able to start work on developing the tool, hopefully using some existing technologies.
Before I went away on holiday the SLD people were in touch to say that the Android version of the Scots Dictionary for Schools app had been taken down, and the person with the account details had retired without passing the account details on. We tried various approaches to get access to the account but in the end it looked like the only thing to do would be to create a new account and republish the app. Thomas Widmann set up the account just before I went away and I said I’d sort out the technical side of things when I got back to the office. On Friday this week I tackled this task. As I suspected, it took rather a long time to get all of the technologies up to date again. I don’t develop apps all that often and it seems that every time I come to develop a new one (or create a new version of an old one) the software and methodologies needed to publish an app have all changed. It took most of the morning to install the necessary software updates, and a fair bit of the afternoon to figure out how the new workflow for publishing an app would work. However, I got there in the end and by the end of the day the new version was available for download (for free) via the Google Play store. You can access the dictionary app here: https://play.google.com/store/apps/details?id=com.sld.ssd2
I’m on holiday on Monday to Wednesday next week, so next week’s report should be rather shorter.
I spent most of this week working on the new timeline features for the Historical Thesaurus. Marc, Fraser and I had a useful meeting on Wednesday where we discussed some final tweaks to the mini-timelines and the category page in general, and also discussed some future updates to the sparklines.
I made the mini-timelines slightly smaller than they were previously, and Marc changed the colours used for them. I also updated the script that generates the category page content via an AJAX call so that an additional ‘sort by’ option could be passed to it. I then implemented sorting options that matched up with those available through the full Timeline feature, namely sorting by first attested date, alphabetically, and length of use. I also updated this script to allow users to control whether the mini-timelines appear on the page or not. With these options available via the back-end script I then set up the choices to be stored as a session variable, meaning the user’s choices are ‘remembered’ as they navigate throughout the site and can be applied automatically to the data.
While working on the sorting options I noticed that the alphabetical ordering of the main timeline didn’t properly order ashes and thorns – words beginning with these characters were appearing at the end of the list when ordered alphabetically. I fixed this so that for ordering purposes an ash is considered ‘ae’ and a thorn ‘th’. This doesn’t affect how words are displayed, just how they are ordered.
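The fix is just a separate sort key: the displayed word is untouched, and only the key used for ordering has the substitutions applied. A sketch of the logic in Python (the site itself is PHP, so this is an illustration only):

```python
def sort_key(word):
    """Ordering key only: ash (æ) sorts as 'ae' and thorn (þ) as 'th',
    so neither ends up stranded after 'z'. Display text is unchanged."""
    return (word.lower()
                .replace("\u00e6", "ae")   # æ, ash
                .replace("\u00fe", "th"))  # þ, thorn

words = ["þing", "apple", "æfter", "tree"]
ordered = sorted(words, key=sort_key)
```

Here ‘æfter’ sorts as ‘aefter’ and ‘þing’ as ‘thing’, instead of both falling after every plain-ASCII word.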
We also decided at the meeting that we would move the thesaurus sites that were on a dedicated (but old) server (namely HT, Mapping Metaphor, Thesaurus of Old English and a few others) to a more centrally hosted server that is more up to date. This switch would allow these sites to be made available via HTTPS as opposed to HTTP and will free up the old server for us to use for other things, such as some potential corpus-based resources. Chris migrated the content over and after we’d sorted a couple of initial issues with the databases all of the sites appear to be working well. It is also a really good thing to have the sites available via HTTPS. We are also now considering setting up a top-level ‘.ac.uk’ address for the HT and spent some time making a case for this.
A fairly major feature I added to the HT this week was a ‘menu’ section for main categories, which contains some additional options, such as the options to change the sorting of the category pages and turn the mini-timelines on and off. For the button to open the section I decided to use the ‘hamburger’ icon, which Marc favoured, rather than a cog, which I was initially thinking of using, because a cog suggests managing options whereas this section contains both options and additional features. I initially tried adding the drop-down section as near to the icon as possible, but I didn’t like the way it split up the category information, so instead I set it to appear beneath the part of speech selection. I think this will be ok as it’s not hugely far away from the icon. I did wonder whether instead I should have a section that ‘slides up’ above the category heading, but decided this was a bad idea as if the user has the heading at the very top of the screen it might not be obvious that anything has happened.
The new section contains buttons to open the ‘timeline’ and ‘cite’ options. I’ve expanded the text to read ‘Timeline visualization’ and ‘Cite this category’ respectively. Below these buttons there are the options to sort the words. Selecting a sort option reloads the content of the category pane (maincat and subcats), while keeping the drop-down area open. Your choice is ‘remembered’ for the duration of your session, so you don’t have to keep changing the ordering as you navigate about. Changing to another part of speech or to a different category closes the drop-down section. I also updated the ‘There are xx words’ text to make it clearer how the words are ordered if the drop-down section is not open.
Below the sorting option is a further option that allows you to turn on or off the mini-timelines. As with the sorting option, your choice is ‘remembered’. I also added some tooltip text to the ‘hamburger’ icon, as I thought it was useful to have some detail about what the button does.
I then updated the main timeline so that the default sorting option aligns itself with the choice you made on the category page, e.g. if you’ve ordered the category by ‘length of use’ then the main timeline will be ordered this way too when you open it. I also set things up so that if you change the ordering via the main timeline pop-up then the ordering of the category will be updated to reflect your choice when you close the pop-up, although Fraser didn’t like this so I’ll probably remove this feature next week. Here’s how the new category page looks with the options menu opened:
I spent some more time on the REELS project this week, as Eila had got back to me with some feedback about the front-end. This included changing the ‘Other’ icon, which Eila didn’t like. I wasn’t too keen on it either, so I was happy to change it. I now use a sort of archway instead of the tall, thin monument, which I think works better. I also removed non-Berwickshire parishes from the Advanced Search page, tweaked some of the site text and also fixed the search for element language, which I had inadvertently broken when changing the way date searches worked last week.
Also this week I fixed an issue with the SCOTS corpus, which was giving 403 errors instead of playing the audio and video files, and was giving no results on the Advanced Search page. It turned out that this was being caused by a security patch that had been installed on the server recently, which was blocking legitimate requests for data. I was also in touch with Scott Spurlock about his crowdsourcing project, which looks to be going ahead in some capacity, although not with the funding that was initially hoped for.
Finally, I had received some feedback from Faye Hammill and her project partners about the data management plan I’d written for her project. I responded to some queries and finalised some other parts of the plan, sending off a rather extensive list of comments to her on Friday.
With the strike action over (for now, at least) I returned to a full week of work, and managed to tackle a few items that had been pending for a while. I’d been asked to write a Technical Plan for an AHRC application for Faye Hammill in English Literature, but since then the changeover from four-page, highly structured Technical Plans to two-page, more free-flowing Data Management Plans has taken place. This was a good opportunity to write an AHRC Data Management Plan, and after following the advice on the AHRC website (http://www.ahrc.ac.uk/documents/guides/data-management-plan/) and consulting the additional documentation on the DCC’s DMPonline tool (https://dmponline.dcc.ac.uk/) I managed to write a plan that covered all of the points. There are still some areas where I need further input from Faye, but we do at least have a first draft now.
I also created a project website for Anna McFarlane’s British Academy funded project. The website isn’t live yet, so I can’t include the URL here, but Anna is happy with how it looks, which is good. After sorting that out I then returned to the REELS project. I created the endpoints in the API that would allow the various browse facilities we had agreed upon to function, and then built these features in the front-end. It’s now possible to (for example) list all sources and see which has the most place-names associated with it, or bring up a list of all of the years in which historical forms were first attested.
I spent quite a bit of time this week working on the extraction of words and their thematic headings from EEBO for the Linguistic DNA project. Before the strike I’d managed to write a script that went through a single file and counted up all of the occurrences of words, parts of speech and associated thematic headings, but I was a little confused that there appeared to be thematic heading data in column 6 and also column 10 of the data files. Fraser looked into this and figured out that the most likely thematic heading appeared in column 10, while other possible ones appeared in column 6. This was a rather curious way to structure the data, but once I knew about it I could set my script to focus on column 10, as we’re only interested in the most likely thematic heading.
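The tallying step is essentially building one big counter keyed on word / part of speech / heading. A sketch of the approach: the files are tab-separated and the most likely heading sits in column 10 as described, but which columns hold the lemma and part of speech is an assumption here.

```python
from collections import Counter

def tally(lines):
    """Count occurrences of each (lemma, pos, heading) triple, taking
    the most likely thematic heading from column 10 (index 9).
    The lemma and pos column positions are assumed for illustration."""
    counts = Counter()
    for line in lines:
        cols = line.rstrip("\n").split("\t")
        if len(cols) < 10:
            continue  # skip malformed lines
        counts[(cols[0], cols[1], cols[9])] += 1
    return counts
```

Run over one file's lines (or all files in a directory), this gives the per-text frequency data the later steps build on.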
I updated my script to insert data into a database rather than just hold things temporarily in an array, and I also wrapped the script in another function that then applied the processing to every file in a directory rather than just a single file. With this in place I set the script running on the entire EEBO directory. I was unsure whether running this on my desktop PC would be fast enough, but thankfully the entire dataset was processed in just a few hours.
My script finished processing all 14,590 files that I had copied from the J drive to my local PC, resulting in a whopping 70,882,064 rows entered into my database. Everything seemed to be going very well, but Fraser wasn’t sure I had all of the files, and he was correct. Having checked the J drive, there were 25,368 items, so when I had copied the files across the process must have silently failed at some point. And even more annoyingly it didn’t fail in an orderly manner: e.g. the earliest file I have on my PC is A00018 while there are several earlier ones on the J drive.
I copied all of the files over again and decided that rather than dropping the database and starting from scratch I’d update my script to check whether a file had already been processed, meaning that only the missing 10,000 or so would be dealt with. However, in order to do this the script would need to query a 70 million row database for the ‘filename’ column, which didn’t have an index. I began the process of creating an index, but indexing 70 million rows took a long time – several hours, in fact. I almost gave up and inserted all the data again from scratch, but the thing is I knew I would need this index in order to query the data anyway, so I decided to persevere. Thankfully the index finally finished building and I could then run my script to insert the missing 10,000 files, a process that took a bit longer as the script now had to query the database and also update the index as well as insert the data. But finally all 25,368 files were processed, resulting in 103,926,008 rows in my database.
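The resume logic itself is simple once the index exists. A sketch with SQLite (table and column names are invented; the real database was MySQL, where the same pattern applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE frequency (filename TEXT, lemma TEXT, n INTEGER)")
# Without an index on 'filename' the 'already done?' check below would
# scan every row; with ~70 million rows that's what made building the
# index worthwhile despite the hours it took.
conn.execute("CREATE INDEX idx_filename ON frequency (filename)")

def process_file(filename, rows):
    """Insert a file's word counts only if no rows for it exist yet,
    so a re-run only deals with the files that were missed."""
    if conn.execute("SELECT 1 FROM frequency WHERE filename = ? LIMIT 1",
                    (filename,)).fetchone():
        return False  # already processed, skip
    conn.executemany("INSERT INTO frequency VALUES (?, ?, ?)",
                     [(filename, lemma, n) for lemma, n in rows])
    return True
```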
The script and the data are currently located on my desktop PC, but if Fraser and Marc want to query it I’ll need to get this migrated to a web server of some sort, so I contacted Chris about this. Chris said he’d sort a temporary solution out for me, which is great. I then set to work writing another script that would extract summary information for the thematic headings and insert this into another table. After running the script this table now contains a total count of each word / part of speech / thematic heading across the entire EEBO collection. Where a lemma appears with multiple parts of speech these are treated as separate entities and are not added together. For example, ‘AA Creation NN1’ has a total count of 4609 while ‘AA Creation NN2’ has a total count of 19, and these are separate rows in the table.
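The summary table is essentially one GROUP BY over the full frequency data. A sketch with SQLite and invented column names, showing how the NN1 and NN2 forms stay as separate rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE frequency "
             "(filename TEXT, heading TEXT, lemma TEXT, pos TEXT, n INTEGER)")
conn.executemany("INSERT INTO frequency VALUES (?, ?, ?, ?, ?)", [
    ("A00018", "AA", "creation", "NN1", 3),
    ("A00021", "AA", "creation", "NN1", 2),
    ("A00021", "AA", "creation", "NN2", 1),
])
# One summary row per heading / lemma / pos: the same lemma with
# different parts of speech is *not* added together.
conn.execute("""
    CREATE TABLE summary AS
    SELECT heading, lemma, pos, SUM(n) AS total
    FROM frequency
    GROUP BY heading, lemma, pos
""")
```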
Whilst working with the data I noticed that a significant amount of it is unusable. Of the almost 104 million rows of data, over 20 million have been given the heading ’04:10’ and a lot of these are words that probably could have been cleaned up before the data was fed into the tagger. A lot of these are mis-classified words that have an asterisk or a dash at the start. If the asterisk / dash had been removed then the word could have been successfully tagged. E.g. there are 88 occurrences of ‘*and’ that have been given the heading ’04:10’ and part of speech ‘FO’. Basically about a fifth of the dataset has an unusable thematic heading, and much of this could have been useful if the data had been pre-processed a little more thoroughly.
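The clean-up that would have rescued much of this data is trivial; something along these lines, run over the tokens before tagging:

```python
def clean_token(token):
    """Strip the leading asterisks / dashes that stop the tagger from
    recognising an otherwise ordinary word, e.g. '*and' -> 'and'."""
    return token.lstrip("*-")
```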
Anyway, after tallying up the frequencies across all texts I then wrote a script to query this table and extract a ‘top 10’ list of lemma / pos combinations for each of the 3,972 headings that are used. The output has one row per heading and a column for each of the top 10 (or fewer if there are fewer than 10). This currently has the lemma, then the pos in brackets and the total frequency across all 25,000 texts after a bar, as follows: christ (NP1) | 1117625. I’ve sent this to Fraser and once he gets back to me I’ll proceed further.
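The extraction itself is a sort-and-slice per heading. A sketch reproducing the ‘lemma (POS) | total’ cell format, with the input shape assumed to be (heading, lemma, pos, total) tuples from the summary table:

```python
from collections import defaultdict

def top_ten(summary_rows):
    """summary_rows: (heading, lemma, pos, total) tuples. Returns one
    entry per heading holding up to ten 'lemma (POS) | total' cells,
    highest total first, e.g. 'christ (NP1) | 1117625'."""
    by_heading = defaultdict(list)
    for heading, lemma, pos, total in summary_rows:
        by_heading[heading].append((total, lemma, pos))
    return {heading: ["%s (%s) | %d" % (lemma, pos, total)
                      for total, lemma, pos in sorted(items, reverse=True)[:10]]
            for heading, items in by_heading.items()}
```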
In addition to the above big tasks, I also dealt with a number of smaller issues. Thomas Widmann of SLD had asked me to get some DSL data from the API for him, so I sent that on to him. I updated the ‘favicon’ for the SPADE website, I fixed a couple of issues for the Medical Humanities Network website, and I dealt with a couple of issues with legacy websites: For SWAP I deleted the input forms as these were sending spam to Carole. I also fixed an encoding issue with the Emblems websites that had crept in when the sites had been moved to a new server.
I also heard this week that IT Services are going to move all project websites to HTTPS from HTTP. This is really good news as Google has started to rank plain HTTP sites lower than HTTPS sites, plus Firefox and Chrome give users warnings about HTTP websites. Chris wanted to try migrating one of my sites to HTTPS and we did this for the Scots Corpus. There were some initial problems with the certificate not working for the ‘www’ subdomain but Chris quickly fixed this and everything appeared to be working fine. Unfortunately, although everything was fine within the University network, the University’s firewall was blocking HTTPS requests from external users, meaning no-one outside of the University network could access the site. Thankfully someone contacted Wendy about this and Chris managed to get the firewall updated.
I also did a couple of tasks for the SCOSYA project, and spoke to Gary about the development of the front-end, which I think is going to need to start soon. Gary is going to try and set up a meeting with Jennifer about this next week. On Friday afternoon I attended a workshop about digital editions that Sheila Dickson in German had organised. There were talks about the Cullen project, the Curious Travellers project, and Sheila’s Magazin zur Erfahrungsseelenkunde project. It was really interesting to hear about these projects and their approaches to managing transcriptions.
I decided this week to devote some time to redevelop the Thesaurus of Old English, to bring it into line with the work I’ve been doing to redevelop the main Historical Thesaurus website. I had thought I wouldn’t have time to do this before next week’s ‘Kay Day’ event but I decided that it would be better to tackle the redevelopment whilst the changes I’d made for the main site were still fresh in my mind, rather than coming back to it in possibly a few months’ time, having forgotten how I implemented the tree browse and things like that. It actually took me less time than I had anticipated to get the new version up and running, and by the end of Tuesday I had a new version in place that was structurally similar to the new HT site. We will hopefully be able to launch this alongside the new HT site towards the end of next week.
I sent the new URL to Carole Hough for feedback as I was aware that she had some issues with the existing TOE website. Carole sent me some useful feedback, which led to me making some additional changes to the site – mainly to the tree browse structure. The biggest issue is that the hierarchical structure of TOE doesn’t quite make sense. There are 18 top-level categories, but for some reason that I am not at all clear about, each top-level category isn’t a ‘parent’ category but is in fact a sibling category to the ones that are one level down. E.g. logically ’04 Consumption of food/drink’ would be the parent category of ’04.01’, ’04.02’ etc. but in the TOE this isn’t the case; rather, ’04.01’, ’04.02’ should sit alongside ‘04’. This really confuses both me and my tree browse code, which expects categories ‘xx.yy’ to be child categories of ‘xx’. This led to the tree browse putting categories where logically they belong, but where within the confines of the TOE they make no sense – e.g. we ended up with ’04.04 Weaving’ within ’04 Consumption of food/drink’!
To confuse matters further, there are some additional ‘super categories’ that I didn’t have in my TOE database but apparently should be used as the real 18 top-level categories. Rather confusingly these have the same numbers as the other top-level categories. So we now have ’04 Material Needs’ that has a child category ’04 Consumption of food/drink’ that then has ’04.04 Weaving’ as a sibling and not as a child as the number would suggest. This situation is a horrible mess that makes little sense to a user, but is even harder for a computer program to make sense of. Ideally we should renumber the categories in a more logical manner, but apparently this isn’t an option. Therefore I had to hack about with my code to try and allow it to cope with these weird anomalies. I just about managed to get it all working by the end of the week but there are a few issues that I still need to clear up next week. The biggest one is that all of the ‘xx.yy’ categories and their child categories are currently appearing in two places – within ‘xx’ where they logically belong and beside ‘xx’ where this crazy structure says they should be placed.
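The special-casing I ended up with can be boiled down to a parent-lookup rule: a category’s parent is derived from its number as usual, except at the top, where ‘xx’ and ‘xx.yy’ are both siblings under the ‘xx’ super-category. A Python sketch of that logic (the real tree browse is part of the website code, and the function shape here is invented):

```python
def toe_parent(number, is_super=False):
    """Return (parent number, parent is a super-category), or None for
    the 18 real top-level 'super' categories. '04.04' is a sibling of
    '04 Consumption of food/drink', both under '04 Material Needs';
    deeper levels ('04.04.01' under '04.04') behave normally."""
    if is_super:
        return None
    parts = number.split(".")
    if len(parts) <= 2:
        return (parts[0], True)   # hangs off the same-numbered super-cat
    return (".".join(parts[:-1]), False)
```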
In addition to all this TOE madness I also spent some further time tweaking the new HT website, including updating the quick search box so the display doesn’t mess up on narrow screens, making some further tweaks to the photo gallery and making alterations to the interface. I also responded to a request from Fraser to update one of the scripts I’d written for the HT OED data migration that we’re still in the process of working through.
In terms of non-thesaurus related tasks this week, I was involved in a few other projects. I had to spend some time on some AHRC review duties. I also fixed an issue that had crept into the SCOTS and CMSW Corpus websites since their migration: the ‘download corpus as a zip’ facility was no longer working because the PHP code used an old class to create the zip that was not compatible with the new server. I spent some time investigating this and finding a new way of creating zip files in PHP. I also locked down the SPADE website admin interface to the IP address ranges of our partner institutions and fixed an issue with the SCOSYA questionnaire upload facility. I also responded to a request for information about TEI XML training from a PhD student and made a tweak to a page of the DSL website.
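The post doesn’t name the replacement, but PHP’s built-in ZipArchive class is the usual modern option for creating zips server-side. The underlying logic – write each corpus file into an archive and send it back as a download – can be sketched in Python (purely as an illustration; the file names here are invented):

```python
import io
import zipfile


def build_corpus_zip(files):
    """Write each (archive_name, text) pair into an in-memory zip,
    ready to be streamed back to the browser as a download."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, text in files:
            zf.writestr(name, text)
    return buf.getvalue()


data = build_corpus_zip([("doc001.txt", "First document"),
                         ("doc002.txt", "Second document")])
```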
I spent the remainder of my week looking at some app issues. We are hopefully going to be releasing a new and completely overhauled version of the ARIES app by the end of the summer, and I had been sent a document detailing the overall structure of the new site. I spent a bit of time creating a new version of the web-based ARIES app that reflected this structure, in preparation for receiving content. I also returned to the Metre app, which I’ve not worked on since last year. I added in some explanatory text and will hopefully be able to start wrapping this app up and deploying it to the App and Play stores soon, though possibly not until after my summer holiday, which starts the week after next.
I spent quite a bit of time this week on the Historical Thesaurus. A few tweaks ahead of Kay Day have now turned into a complete website redevelopment, so things are likely to get a little hectic over the next couple of weeks. Last week I implemented an initial version of a new HT tree-based browse mechanism, but at the start of this week I still wasn’t sure how best to handle different parts of speech and subcategories. Originally I had thought we’d have a separate tree for each part of speech, but I came to realise that this was not going to work, as the non-noun hierarchy has more gaps than actual content. There are also issues with subcategories, as ones with the same number but different parts of speech have no direct connection. Main categories with the same number but different parts of speech always refer to the same thing – e.g. 01.02.aj is the adjective version of 01.02.n. But subcategory numbers just fill out the sequence, meaning 01.02|01.aj can be something entirely different to 01.02|01.n. This means providing an option to jump from a subcategory in one part of speech to the same subcategory in another wouldn’t make sense.
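The rule just described – a part-of-speech switcher only makes sense for main categories, because subcategory numbers (the part after the ‘|’) are assigned independently per part of speech – could be sketched like this (a hypothetical helper, not the site’s actual code):

```python
def can_offer_pos_switch(catnum):
    """Main categories (no '|' subcat part) refer to the same concept
    across parts of speech, so a PoS switcher is meaningful; subcat
    numbers just fill out the sequence per PoS, so it isn't."""
    return "|" not in catnum


can_offer_pos_switch("01.02")       # maincat: same concept across PoS
can_offer_pos_switch("01.02|01.aj") # subcat: numbers are unrelated
```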
Initially I went with the idea of having noun subcategories represented in the tree, with the option to switch part of speech in the right-hand pane after the user selected a main category in the tree. When a non-noun main category was selected, the subcategories for that part of speech would then be displayed under the main category words. This approach worked, but I felt it was too inconsistent: I didn’t like that subcategories were handled differently depending on their part of speech. I therefore created two further versions of the tree browser, in addition to the one I created last week.
The second version has [+] and [-] instead of chevrons and shows the catnum in grey before the heading. The tree structure is the same as the first version (i.e. it includes all noun categories and noun subcats). When you open a category, the different parts of speech now appear as tabs, with ‘noun’ open by default; hover over a tab to see the full part of speech and the heading for that part of speech. The good thing about the tabs is that the currently active PoS doesn’t disappear from the list, as happens with the other view. When viewing a PoS that isn’t ‘noun’ and there are subcategories, the full contents of these subcategories are visible underneath the maincat words. Subcats are indented and coloured to reflect their level, as with the ‘live’ site’s subcats, but here all lexemes are also displayed. As ‘noun’ subcats are handled differently, which could be confusing, a line of text explains how to access these when viewing a non-noun category.
For the third version I removed all subcats from the tree, so it only features noun maincats. It is therefore considerably less extensive, and no doubt less intimidating. In the category pane, the PoS selector is the same as the first version, and the full subcat contents, as in v2, are displayed for every PoS including nouns. This does make for some very long pages, but does at least mean all parts of speech are handled in the same way.
Marc, Fraser and I met to discuss the HT on Wednesday. It was a very productive meeting and we formed a plan for how to proceed with the revamp of the site. Marc also showed us some new versions of the interface he has been working on, featuring a new colour scheme and new fonts. Following on from the meeting I updated the navigation structure of the HT site, replaced all icons used in the site with Font Awesome icons, added the facility to reload the ‘random category’ that gets displayed on the homepage, moved the ‘quick search’ to the navigation bar of every page and made some other tweaks to the interface.
I spent more time towards the end of the week on the tree browser. I’ve updated the ‘parts of speech’ section so that the current PoS is also included, updated the ordering to reflect the order in the printed HT and updated the abbreviations to match it too. Tooltips now give the text as found in the HT PDF, and the PoS beside the cat number is also now a tooltip. I’ve updated the ‘random category’ to display the correct PoS abbreviation too, and added some default text that appears on the ‘browse’ page before you select a category. There are two outstanding issues to address when a category ID is passed to the browse page:
1. If it’s a subcat we don’t want to display just this; we need to grab its maincat and all of the maincat’s subcats, then ensure the passed subcat is displayed on screen.
2. We need to build up the tree hierarchy, which is based on nouns, so if the passed catid is not a noun category we also need to find the appropriate noun category.
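The two steps above can be sketched as follows – a hypothetical Python outline, assuming each category record has ‘catnum’ (with ‘|’ marking a subcat) and ‘pos’ fields, and that a lookup helper like find_category() exists in the real system:

```python
# Invented in-memory lookup, standing in for the real database query.
cats = {
    ("01.02", "n"):  {"catnum": "01.02", "pos": "n"},
    ("01.02", "aj"): {"catnum": "01.02", "pos": "aj"},
}


def find_category(catnum, pos):
    return cats[(catnum, pos)]


def category_for_tree(cat, find_category):
    # Step 1: subcats aren't nodes in the tree, so climb to the maincat;
    # the maincat number is everything before the '|'.
    if "|" in cat["catnum"]:
        maincat_num = cat["catnum"].split("|")[0]
        cat = find_category(maincat_num, cat["pos"])
    # Step 2: the tree only contains noun categories, so for any other
    # PoS load the noun category with the same number.
    if cat["pos"] != "n":
        cat = find_category(cat["catnum"], "n")
    return cat


# Passing an adjective subcat ID lands on the noun maincat record.
result = category_for_tree({"catnum": "01.02|01", "pos": "aj"}, find_category)
```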
I have sorted out point 1 now: if you pass a subcat ID to the page, the maincat record is loaded and the page scrolls until the subcat is in view. I will also highlight the subcat, but haven’t done this yet. I’m still in the middle of addressing the second point. I know where and how to add in the grabbing of the noun category; I just haven’t had the time to do it yet. I also need to properly build up the tree structure and have the relevant parts open. This is still to do, as currently only the tree from the maincat downwards is loaded in. It’s potentially going to be rather tricky to get the full tree represented and opened properly, so I’ll be focussing on this next week. Also, T7 categories are currently giving an error in the tree: they all appear to have children, and when you click on the [+] an error occurs. I’ll get this fixed next week too. After that I’ll focus on integrating the search facilities with the tree view. Here’s a screenshot of how the tree currently looks:
I was pretty busy with other projects this week as well. I met with Thomas Clancy and Simon Taylor on Tuesday to discuss a new place-names project they are putting together. I will hopefully be able to be involved in this in some capacity, despite it not being based in the School of Critical Studies. I also helped Chris to migrate the SCOTS Corpus websites to a new server. This caused some issues with the PostgreSQL database that took us several hours to get to the bottom of. These issues left the search facilities completely broken, but thankfully I figured out what was causing them, and by the end of the week the site was sitting on the new server. I also had an AHRC review to undertake this week.
On Friday I met with Marc and the group of people who are working on a new version of the ARIES app. I will be implementing their changes so it was good to speak to them and learn what they intend to do. The timing of this is going to be pretty tight as they want to release a new version by the end of August, so we’ll just need to see how this goes. I also made some updates to the ‘Burns and the Fiddle’ section of the Burns website. It’s looking like this new section will now launch in July.
Finally, I spent several hours on The People’s Voice project, implementing the ‘browse’ functionality for the database of poems. This includes a series of tabs for different ways of browsing the data. E.g. you can browse the titles of poems by initial letter, you can browse a list of authors, years of publication etc. Each list includes the items plus the number of poems that are associated with the item – so for example in the list of archives and libraries you can see that Aberdeen Central Library has 70 associated poems. You can then click on an item and view a list of all of the matching poems. I still need to create the page for actually viewing the poem record. This is pretty much the last thing I need to implement for the public database and all being well I’ll get this finished next Friday.
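The browse lists described above are essentially grouped counts over the poems database. A sketch of the kind of query behind each tab, using SQLite with invented table and column names (the real figures, such as the 70 poems for Aberdeen Central Library, come from the project’s own data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE poems (id INTEGER PRIMARY KEY, title TEXT, archive TEXT);
    INSERT INTO poems (title, archive) VALUES
        ('A Song',       'Aberdeen Central Library'),
        ('Another Song', 'Aberdeen Central Library'),
        ('A Ballad',     'Mitchell Library');
""")

# One row per archive with its poem count, as shown in the browse list;
# clicking an item would then run the same query filtered to that archive.
rows = conn.execute("""
    SELECT archive, COUNT(*) AS poem_count
    FROM poems
    GROUP BY archive
    ORDER BY archive
""").fetchall()
# rows -> [('Aberdeen Central Library', 2), ('Mitchell Library', 1)]
```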