Week Beginning 23rd October 2023

After a delightful holiday last week I was back at work again this week.  This involved spending quite a bit of time catching up with emails and dealing with the ongoing issue of migrating sites from old servers to either our new external supplier or a newer server hosted internally.  I was involved with the migration of the SCOTS Corpus to a new server, my work including fixing a few PHP errors that were cropping up on the more up-to-date server.  There were also some issues relating to database connections, as the original code (which I didn't write) uses rather a lot of connections – more than the new server was set to allow.  We had thought we'd fixed the issue but it looks like further investigation will be required.

We also migrated the thesaurus.ac.uk site and the Bilingual Thesaurus of Everyday Life in Medieval England (https://thesaurus.ac.uk/bth/) to a new server, which also required tweaking some of the code.  The new server was caching the output of scripts that should generate different content each time they run (e.g. the script that picks the random category on the homepage), meaning the category wasn't random at all but was permanently stuck on 'Lard a roast', which wasn't very helpful.  Thankfully we managed to unstick the cache.

Also this week I investigated an issue with the advanced search of the Dictionaries of the Scots Language, as the full-text search had stopped working.  It turned out that the Solr index that powers this search had entirely disappeared from the server, which is more than a little concerning.  It wasn't a huge issue to rectify as I had the configuration scripts and the data on my PC, but we're in the dark as to how the index could have been removed.  It had also been brought to my attention that some of the video files I'd uploaded for the Speech Star project before I went on holiday had disappeared as well, and I've reuploaded them.  Our IT people are investigating what might have caused these issues and whether they are linked, but it is worrying.

I also spent a bit of time looking through the old arts.gla.ac.uk server to try and figure out what needed to be retained from it.  It's mostly old subject area sites that were long ago superseded by T4, plus old conference sites that are no longer needed.  A few of the other sites I'd already moved to T4 myself (e.g. https://www.gla.ac.uk/schools/critical/aboutus/resources/stella/projects/starn/ and https://www.gla.ac.uk/schools/critical/aboutus/resources/stella/projects/bibliography-of-scottish-literature/).  The only sites that I think need to be retained are the STELLA apps that I developed from old teaching resources in around 2015.  I therefore requested a new subdomain be set up to host them and migrated them over.  I've also requested we set up external hosting for arts.gla.ac.uk, purely to host redirects from old URLs so we don't end up with broken links.  The new sites are now available (see https://stella.glasgow.ac.uk/aries/, https://stella.glasgow.ac.uk/grammar/, https://stella.glasgow.ac.uk/eoe/, https://stella.glasgow.ac.uk/metre/ and https://stella.glasgow.ac.uk/readings/) but the redirects from the old URLs are not yet in place.  I'd really like to spend some time redeveloping all of these old apps (apart from ARIES, which has already been redeveloped).  Maybe next year I'll find some time.

I also set up a new project website for Rhona Brown in Scottish Literature.  I’ve created a bare-bones website at the moment and I’m awaiting further instruction from her on things like themes, colour schemes, site structure and logos.  I also tweaked the project website I’d set up a couple of weeks ago for Petra Poncarova in Scottish Literature to improve the URLs for the Gaelic version of the homepage and helped a project team member get access to a site I’d set up for Matthew Creasy in English Literature.

On Wednesday morning this week I participated in a networking event for the new Research Professional Staff Network.  The event went well and it was very interesting to find out more about other people involved in research support across the University.

For the remainder of the week I began work on the development of the new ‘map first’ interface for the place-names projects, which I’m developing initially for the Iona project.  Below is a screenshot of how things look so far:

At the moment the interface consists of a narrow bar at the top of the browser window with the site’s icon, title and subtitle using the blue colour of the site’s banner as a background.  You can press on the logo or site title to navigate to the main site.  The rest of the browser is taken up with the map.  On the left is the side menu.  As discussed in the requirements document I previously wrote, it consists of four collapsible sections, with ‘Home’ open by default.  I haven’t had the time to implement the search and browse options yet, but the ‘Display options’ section is operational, as you can see above.  Pressing on the section’s title will open the section and you can access the various options.  You can show or hide the side menu by pressing on the button above it.

For the moment the map displays all data that has been marked as 'on web' in the CMS (362 records, I think).  By default these are colour-coded by classification code.  The legend is displayed in the top right, allowing you to turn specific features on or off.  You can also show or hide the legend to free up space.  In the bottom right are zoom options plus a 'full screen' button that does what you'd expect.  You can press on a map marker to open up the pop-up.  As yet there is no link through to the full record, and some Gaelic fields may be visible in the pop-up; these will be removed at some point.

Using the 'Display options' in the side menu you can change how the map markers are classified.  We may need to be a little more fine-grained with start date and especially altitude.  Also, the colours for classification codes are currently arbitrarily assigned, but we might want to change this – having blue for 'field' seems a bit daft, for example.  You can also change the base map; these options are currently the same as for the other place-name sites.  We still need to figure out if / how we can integrate another map of Iona that we discussed at a meeting before I went on holiday.  There is also an option to turn labels on or off.
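For anyone curious about the mechanics, here's a minimal sketch of the colour-coding approach in Leaflet; the field names, colours and the 'map' variable are all assumptions, as the real CMS fields aren't shown here:

```javascript
// Minimal sketch: colour-code markers by classification and group them into
// layers so a Leaflet layers control can act as the toggleable legend.
// 'map' is assumed to be an existing L.map and 'placeNames' an array of
// records marked 'on web'; all field names are invented for illustration.
const colours = { field: '#1f77b4', settlement: '#d62728' };  // assumed codes
const layers = {};

placeNames.forEach(function (p) {
  const marker = L.circleMarker([p.lat, p.lng], {
    radius: 7,
    fillColor: colours[p.classification] || '#999999',
    fillOpacity: 0.9,
    color: '#ffffff',
    weight: 1
  }).bindPopup('<strong>' + p.name + '</strong>');
  // group markers by classification so each can be switched on or off
  if (!layers[p.classification]) {
    layers[p.classification] = L.layerGroup().addTo(map);
  }
  layers[p.classification].addLayer(marker);
});

L.control.layers(null, layers, { collapsed: false }).addTo(map);  // the legend
```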

That's as far as I've got this week.  There's still a lot to do but I've made pretty good progress.  I'll hopefully find some time to continue with this next week.  I also discovered that the Leaflet mapping library has a method to set the map view so as to show all markers at the closest zoom possible, so I'll ensure I use this when I develop the search and the browse.  I'm already using it when the map is first opened to ensure that all of Iona, Soa in the south-west and Eilean Annraidh in the north-east are always visible, no matter what dimensions your screen / browser window are.
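For reference, a minimal sketch of this using Leaflet's fitBounds, assuming 'markers' is the array of markers created from the data:

```javascript
// Minimal sketch: fit the map view to show every marker at the closest zoom
// possible, so Soa and Eilean Annraidh are never cut off, whatever the
// window dimensions.  'map' and 'markers' are assumed to exist already.
const group = L.featureGroup(markers).addTo(map);
map.fitBounds(group.getBounds(), { padding: [20, 20] });  // a little margin
```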

Week Beginning 29th November 2021

I participated in the UCU strike action from Wednesday to Friday this week, so it was a two-day working week for me.  During this time I gave some help to the students who are migrating the International Journal of Scottish Theatre and Screen and talked to Gerry Carruthers about another project he's hoping to put together.  I also passed on information about the DNS update to the DSL's IT people, added a link to the DSL's new YouTube site to the footer of the DSL site and dealt with a query regarding accessing the DSL's Google Analytics data.  I also spoke with Luca about arranging a meeting with him and his line manager to discuss digital humanities across the college, and updated the listings for several Android apps I created a few years ago that had been taken down due to their information being out of date.  As central IT services now manages the University Android account I hadn't received notifications that this was going to take place.  Hopefully the updates have done the trick now.

Other than this I made some further updates to the Anglo-Norman Dictionary's locution search that I created last week.  This included changing the ordering to list results by the word that was searched for rather than by headword, changing the way the search works so that a wildcard search such as 'te*' now matches the start of any word in the locution phrase rather than just the first word, and fixing a number of bugs that had been spotted.

I spent the rest of my available time starting to work on an interactive version of the radar diagram for the Historical Thesaurus.  I'd made a static version of this a couple of months ago which looks at the words in an HT category by part of speech and visualises how the numbers of words in each POS change over time.  What I needed to do was find a way to allow users to select their own categories to visualise.  We had decided to use the broader Thematic Categories for the feature rather than regular HT categories, so my first task was to create a Thematic Category browser from 'AA The World' to 'BK Leisure'.  It took a bit of time to rework the existing HT category browser to work with thematic categories, and also to then enable the selection of multiple categories by pressing on the category name.  Selected categories appear to the right of the browser, and I added in an option to remove a selected category if required.  With this in place I began work on the code to actually grab and process the data for the selected categories.  This finds every lexeme and its associated dates in each HT category in each of the selected thematic categories.  For now the data is just returned and I'm still in the middle of processing the dates to work out which period each word needs to appear in.  I'll hopefully find some time to continue with this next week.  Here's a screenshot of the browser:

Week Beginning 11th November 2019

It was another mostly SCOSYA week this week, ahead of the launch of the project that was planned for the end of next week.  However, on Friday this week I bumped into Jennifer, who said that the launch will now be pushed back into December.  This is because our intended launch date was the last working day before the UCU strike action begins, which makes it a bad time to launch the project: bad for publicity and for engaging with other scholars, and risky in that any technical issues that cropped up might not be sorted until after the strike.  As there's a general election soon after the strike is due to end, it looks like the launch is going to be pushed back until closer to Christmas.  But as none of this transpired until Friday, I still spent most of the week until then making what I thought were last-minute tweaks to the website and fixing bugs that had cropped up during user testing.

This included going through all of the points raised by Gary following the testing session he had arranged with his students in New York the week before, and meeting with Jennifer, E and Frankie to discuss how we intended to act on the feedback, which was all very positive but did raise a few issues relating to the user interface, the data and the explanatory text.

After the meeting I made such tweaks as removing most of the intro text from above the public atlas, but adding in a sentence about how to make the atlas full screen, as this is a feature that I think most users overlook.  I also overhauled the footer, adding in the new AHRC logo and logos for Edinburgh and QMUL, and rearranging everything.  I updated the privacy policy based on feedback from the University's data protection people, and I updated the styling of the atlas's menu headers to make them bolder on Macs, added in links to the project's API from the Linguists' atlas and extended the height of the example selection area there too.  I also slightly tweaked the menu header text (e.g. 'Search Examples' is now 'Search the examples' to make it clearer that the tab isn't just a few example searches) and updated the rating selection option to make unselected ratings appear in a very faded grey colour, to hopefully make it more obvious what is selected and what isn't.  Finally, I updated the legend so that the square grey boxes that previously said 'no data' now say 'Example not tested' instead, and updated the pop-ups accordingly.

I also sent the URL for the public and linguists’ atlases to the other developers in the College of Arts for feedback.  Luca Guariento found a way to break the map, which was good as after some investigation I figured out what was causing the issue and fixed it.  Basically, if you press the ‘top’ button in the footer it jumps to the div with ID ‘masthead’ using HTML’s plain ‘if hash passed show this on screen’ option.  But then if you do a full reload of the page the JavaScript grabs ‘masthead’ from the URL and tries to convert it to a float to pass it to Leaflet and things break.  By ensuring that the ‘jump to masthead’ link is handled in JavaScript rather than HTML I stopped this situation arising.  Stevie Barrett also noted that in Internet Explorer the HTML5 audio player in the pop-ups was too large for the pop-up area, and thankfully by adding in a bit of CSS to set the width of the audio player this issue was resolved.
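For the record, here's a minimal sketch of the fix; the class name is an invention, as the real markup will differ:

```javascript
// Minimal sketch: handle the 'top' link in JavaScript so '#masthead' never
// ends up in the URL, meaning a full page reload can't hand a non-numeric
// hash to the code that restores the Leaflet map state.
// '.top-link' is an assumed selector for the footer's 'top' button.
document.querySelector('.top-link').addEventListener('click', function (e) {
  e.preventDefault();  // stop the browser setting location.hash
  document.getElementById('masthead').scrollIntoView({ behavior: 'smooth' });
});
```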

Also this week I had a further chat with Luca about the API he's building, and a DMP request that came his way, and arranged for the App and Play store account administration to be moved over to Central IT Services.  I also helped Jane Roberts with an issue with the Thesaurus of Old English and had a chat with Thomas Clancy and Gilbert Markus about the Place-names of Kirkcudbrightshire project, for which I set up the systems last year; it is now nearing completion and requires some further work to develop the front-end.

I also completed an initial version of a WordPress site for Corey Gibson’s bibliography project and spoke to Eleanor Capaldi about how to get some images for her website that I recently set up.  I also spent a bit of time upgrading all of the WordPress sites I manage to the latest version.  Also this week I had a chat with Heather Pagan about the Anglo-Norman Dictionary data.  She now has access to the data that powers the current website and gave me access to this.  It’s great to finally know that the data has been retrieved and to get a copy of it to work with.  I spent a bit of time looking through the XML files, but we need to get some sort of agreement about how Glasgow will be involved in the project before I do much more with it.

I had a bit of an email chat with the DSL people about adding a new ‘history’ field to their entries, something that will happen through the new editing interface that has been set up for them by another company, but will have implications for the website once we reach the point of adding the newly edited data from their new system to the online dictionary.  I also arranged for the web space for Rachel Smith and Ewa Wanat’s project to be set up and spent a bit of time playing around with a new interface and design for the Digital Humanities Network website (https://digital-humanities.glasgow.ac.uk/) which is in desperate need of a makeover.

Week Beginning 28th October 2019

I split most of my time this week between the SCOSYA project and the Historical Thesaurus.  The launch of the SCOSYA atlases is scheduled to take place in November and I had suggested to Jennifer that it might be good to provide access to the project's data via tables rather than just through the atlas interfaces.  Although the atlases look great and are a nice interactive way of accessing and visualising the data, some people simply prefer looking at tables of data, and others may struggle to use the interactive atlases due to accessibility issues but still want to view the project's data.  We will of course provide free access to the project's API, through which all of the data can be accessed as CSV or JSON files, or can even be incorporated into a completely new interface, but I thought it might be useful if we provided text-based access to the data through the project's front-end as well.  Jennifer agreed that this would be useful, so I spent some time writing a specification document for the new features, sending it to the team for feedback and developing the new features.

I created four new features.  First was a table of dialect samples, which lists all of the locations that have dialect sample recordings and provides access to these recordings and the text that accompanies them, replicating the data as found on the ‘home’ map of the public atlas.  The second feature provides a tabular list of all of the locations that have community voice recordings.  Clicking on a location then displays the recordings and the transcriptions of each, as the following screenshot shows:

The third new feature lists all of the examples that can be browsed through the public atlas.  You can then click on one of these examples to listen to its sound clips and to view a table of results for all of the questionnaire locations.  Users can also click through to view the example on the atlas itself, as I figured that some people might want to view the results as a table and then see how they look on the atlas.  The following screenshot shows the 'explore' feature for a particular example:

The fourth new feature replicates the full list of examples as found in the linguists' atlas.  There are many examples nested within parent and sub-parent categories and it can be a bit difficult to get a clear picture of what is available through the nested menus in the atlas, so this new feature provides access to a complete list of the examples that is fully expanded and easier to view, as the following screenshot demonstrates:

It’s then possible to click on an example to view the results of this example for every location in a table, again with a link through to the result on the atlas, which then enables the user to customise the display of results further, for example focussing on older or younger speakers or limiting the display to particular rating levels.

I reckon these new features are going to complement the atlases very well and will hopefully prove very useful to researchers.  Also this week I received some feedback on the atlases from the project team and I spent some time going through this, adding in some features that had been requested (e.g. adding in buttons to scroll the user’s page down so that the full atlas is on screen) and investigating some bugs and other issues that had been reported, including some issues with the full-screen view of the atlas when using Safari in MacOS that Gary reported that I have so far been unable to replicate.  I also implemented a new way of handling links to other parts of the atlas from the stories, as new project RA Frankie had alerted me to an issue with the old way.  Handling internal links is rather tricky as we’re not really loading a new page, it’s just the JavaScript in the user’s browser processing and displaying some different data.  As a new page is never requested pressing the ‘back’ button doesn’t load what you might expect to be the previous page, but instead displays the last full web page that was loaded.  The pages also don’t open properly in a new tab because the reference in the link is not to an actual page, but is instead intended to be picked up by JavaScript in the page when the link is clicked on and then processed to change the map contents.  When you open the link in a new tab the JavaScript doesn’t get to run and the browser tries to load the reference in the link, which ends up as a broken link.

It's not a great situation and the alternative should work a bit better.  Rather than handling the link in JavaScript, the links are now full page requests that get sent to the server.  A script on the server then picks up the link and processes a full page reload of the relevant section of the atlas.  Unfortunately, if the user is on the last slide of a story, clicks a link and then presses the back button, they'll end up back at the first slide of the story rather than the last, as there isn't a way to reference a specific slide in a story, but setting the links to open in a new tab by default gets round this problem.
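To illustrate the idea (the real handling happens in a server-side script, and the parameter and function names here are invented):

```javascript
// Minimal sketch: story links are now real URLs rather than hash fragments,
// e.g. <a href="atlas/?example=123" target="_blank">View on the atlas</a>,
// so 'open in new tab' and the back button behave as users expect.  On a
// full page load the atlas reads the parameter and displays the right data.
window.addEventListener('DOMContentLoaded', function () {
  const example = new URLSearchParams(window.location.search).get('example');
  if (example !== null) {
    loadExample(example);  // hypothetical: fetches the data and updates the map
  }
});
```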

Finally for the project this week I met with Jennifer and E to discuss the ancillary pages and text that need to be put in place before the launch, and we discussed the launch itself and what this would involve.

For the HT I generated some datasets that an external researcher had requested from the Thesaurus of Old English data, and I generated some further datasets from the main HT database for another request.  I also started to implement a system to generate the new dates table.  I created the necessary table and wrote a function that takes a lexeme and goes through all of the 19 date fields to generate the rows that would need to be created for the lexeme.  As of yet I haven’t set this running on the whole dataset, but instead I’ve created a test script that allows you to pass a catid and view all of the date rows that would be created for each lexeme in the category so I (and Marc and Fraser) can test things out.  I’ve tested it out with categories that have some complicated date structures and so far I’ve not encountered any unexpected behaviour, apart from one thing:  Some lexemes have a full date such as ‘1623 Dict. + 1642–1711’.  The script doesn’t analyse the ‘fulldate’ field but instead looks at each of the actual date fields.  There is only one ‘label’ field so it’s not possible to ascertain that in this case the label is associated with the first date.  Instead the script always associates the label with the last date that a lexeme has.  I’m not sure how common it is for a label to appear in the middle of a full date, but it definitely crops up fairly regularly when I load a random category on the HT homepage, always as ‘Dict.’ so far.  We’ll need to see what we can do about this, if it turns out to be important, which I guess it probably will.
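To give a flavour of the logic (the real script is server-side, and the field names here are invented), a sketch of the row generation, including the limitation around labels:

```javascript
// Minimal sketch: expand a lexeme's date fields into individual date rows.
// The real table has 19 date fields; 'date1'..'date19', 'id' and 'label'
// are assumed column names.  Note the limitation described above: there is
// only one label field, so the label always attaches to the last date.
function generateDateRows(lexeme) {
  const rows = [];
  for (let i = 1; i <= 19; i++) {
    const date = lexeme['date' + i];
    if (date === null || date === undefined) continue;  // skip empty fields
    rows.push({ lexemeId: lexeme.id, slot: i, date: date, label: null });
  }
  if (rows.length > 0 && lexeme.label) {
    rows[rows.length - 1].label = lexeme.label;  // label goes on the last date
  }
  return rows;
}
```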

Also this week I performed some App Manager duties, had a conversation with Dauvit Broun, Nigel Leask and the RDM team about the ArtsLab session on data management plans next week, and spoke to Ann Ferguson of the DSL about how changes to the XML structure of entries will be reflected in the front-end of the DSL.


Week Beginning 12th August 2019

I’d taken Tuesday off this week to cover the last day of the school holidays so it was a four-day week for me.  It was a pretty busy four days, though, involving many projects.  I had some app related duties to attend to, including setting up a Google Play developer account for people in Sports and Recreation and meeting with Adam Majumdar from Research and Innovation about plans for commercialising apps in future.  I also did some further investigation into locating the Anglo-Norman Dictionary data, created a new song story for RNSN and read over Thomas Clancy’s Iona proposal materials one last time before the documents are submitted.  I also met with Fraser Dallachy to discuss his Scots Thesaurus plans and will spend a bit of time next week preparing some data for him.

Other than these tasks I split my remaining time between SCOSYA and DSL.  For SCOSYA we had a team meeting on Wednesday to discuss the public atlas.  There is only about a month left to complete all development work on the project and I was hoping that the public atlas that I’d been working on recently was more or less complete, which would then enable me to move on to the other tasks that still need to be completed, such as the experts interface and the facilities to manage access to the full dataset.  However, the team have once again changed their minds about how they want the public atlas to function and I’m therefore going to have to devote more time to this task than I had anticipated, which is rather frustrating at this late stage.  I made a start on some of the updates towards the end of the week, but there is still a lot to be done.

For DSL we finally managed to sort out the @dsl.ac.uk email addresses, meaning the DSL people can now use their email accounts again.  I also investigated and fixed an issue with the 'v3' version of the API which Ann Ferguson had spotted.  This version was not working with exact searches, which use speech marks.  After some investigation I discovered that the problem was being caused by the 'v3' API code missing a line that was present in the 'v2' API code.  The server automatically escapes quotes in URLs by adding a preceding backslash (\).  The 'v2' code was stripping this backslash before processing the query, meaning it correctly identified exact searches.  As the 'v3' code didn't get rid of the backslashes it wasn't finding the quotation marks and was not treating the query as an exact search.

I also investigated why some DSL entries were missing from the output of my script that prepared data for Solr.  I’d previously run the script on my laptop, but running it on my desktop instead seemed to output the full dataset including the rows I’d identified as being missing from the previous execution of the script.  Once I’d outputted the new dataset I sent it on to Raymond for import into Solr and then I set about integrating full-text searching into both ‘v2’ and ‘v3’ versions of the API.  This involved learning how Solr uses wildcard characters and Boolean searches, running some sample queries via the Solr interface and then updating my API scripts to connect to the Solr interface, format queries in a way that Solr could work with, submit the query and then deal with the results that Solr outputs, integrating these with fields taken from the database as required.
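By way of illustration, here's a minimal sketch of the kind of query involved, shown with JavaScript's fetch for brevity; the real API scripts are PHP, and the core and field names are assumptions:

```javascript
// Minimal sketch: an exact-phrase full-text query against Solr's select
// handler.  Quoted phrases give exact matches, '*' is the wildcard and
// AND/OR/NOT act as Boolean operators.  'dsl' and 'fulltext' are assumed
// core and field names.
const query = 'fulltext:"sair heid"';  // an exact-phrase search
fetch('/solr/dsl/select?q=' + encodeURIComponent(query) + '&wt=json')
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // Solr returns the matching documents and a count; the API then merges
    // these with fields taken from the database as required.
    console.log(data.response.numFound + ' matches');
  });
```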

Other than the bibliography side of things I think that's the work on the API more or less complete now (I still need to reorder the 'browse' output).  What I haven't done yet is update the advanced search pages of the 'new' and 'sienna' versions of the website to actually work with the new APIs, so as of yet you can't perform any free-text searches through those interfaces, only directly through the APIs.  Connecting the front-ends fully to the APIs is my next task, which I will try to start on next week.

Week Beginning 11th September 2017

I spent more than half of this week continuing to work on the new ARIES app.  Last week I finished work on an initial, plain HTML and JavaScript version of the app, and I received another couple of bits of feedback this week that I implemented.  The bulk of my time, however, was spent using Apache Cordova to 'wrap' the HTML and JavaScript version, converting it into actual iOS and Android apps, testing these apps on my iOS and Android devices, and then making all of the media files that an app needs, such as icon files, screenshots, app loading screens, app store graphics and things like that.  This process always takes longer than I think it should.  For example, I have to make more than 20 different icon files at varying resolutions, and I need to grab multiple screenshots from at least four different devices.  This latter process is made trickier because my Android Nexus 7 tablet no longer connects properly to my PC – the 'photos' folder appears blank when I connect for photo transfer and doesn't contain the actual updated contents when I connect for file transfer, so I have to use a third-party file explorer app to move the screenshots to a different folder on the device that somehow does get updated when viewed on my PC.  Regarding the icons, I came up with a few alternatives based on the header image for the app, and we finally agreed on a sort of 'marble' effect circle on a white background.  I think it looks pretty good, and is certainly better than the old ARIES logo.

The app publication process was also complicated by two new issues that have emerged since I last made an app.  Firstly, Apple have updated the build process to disallow any extended image metadata, I guess as a security precaution.  I created my app icon PNG files in Photoshop, which added in such metadata, and when I then built my iOS app in Xcode I received some rather unhelpful errors.  Thankfully StackOverflow had the answer (see https://stackoverflow.com/questions/39652867/code-sign-error-in-macos-sierra-xcode-8-3-3-resource-fork-finder-information) and after running a couple of command-line scripts the metadata was stripped out and the build succeeded.  My second issue related to the app name on the App Store.  Apple has decided to limit app names to 30 characters, meaning we could no longer call our app 'ARIES: Assisted Revision in English Style', and as there is already an app named 'ARIES' we couldn't call it that either.  This is a real pain, and seems like a completely unnecessary restriction to me.  In the end we called the app 'ARIES: English Academic Style'.  I managed to submit the app to Apple and Google on Wednesday, and thankfully by the end of the week the new version was available on both the App and Play Stores.  I also made the 'web' version available, replacing the old ARIES site.  You can access it, and link through to the app versions, here: http://www.arts.gla.ac.uk/stella/apps/web/aries/

Other than ARIES work, I made some further changes to the Edinburgh Gazetteer keywords, replacing the old list of keywords with a much trimmed down list that Rhona supplied.  I think this works much better than the previous list, and things are looking good.  I also helped Alison Wiggins with some information she wanted to add to the Digital Humanities website, and I spent about half a day working with the Mapping Metaphor data, generating new versions of all of the JSON files that are required for the ‘Metaphoric’ app and testing the web version of this out.  It looks like everything is working fine with the full dataset, so next week I’ll hopefully publish a new version of the app that contains this data.  I also started working on the database of Burns’ paper for Ronnie Young, firstly converting his Access database into an online MySQL version and then creating a simple browse interface for it.  There’s still lots more to be done for this but I need to meet with Ronnie before I can take this further.

The rest of my week was taken up with meetings.  On Wednesday morning I was on an interview panel for a developer post in another part of the college.  I also met with Gerry McKeever in the afternoon to discuss his new British Academy funded 'Regional Romanticism' project.  I'll be working with him to set up a website for this, with some sort of interactive map being added sometime down the road.  I spent Friday morning attending a network meeting for Kirsteen McCue's Romantic National Song Network.  It was interesting to hear more about the project and to participate in the discussions about how the web resource for this project will work.  There were several ideas for where the focus of the online aspect of the project should lie, and thankfully by lunchtime we'd reached a consensus.  I can't say much more about it now, but it's going to use some software I've not used before and am keen to try out, which is great.

Week Beginning 4th September 2017

I spent a lot of this week continuing with the redevelopment of the ARIES app and thankfully after laying the groundwork last week (e.g. working out the styles and the structure, implementing a couple of exercise types) my rate of progress this week was considerably improved.  In fact, by the end of the week I had added in all of the content and had completed an initial version of the web version of the app.  This included adding in some new quiz types, such as one that allows the user to reorder the sentences in a paragraph by dragging and dropping them, and also a simple multiple choice style quiz.  I also received some very useful feedback from members of the project team and made a number of refinements to the content based on this.

This included updating the punctuation quiz so that if you get three incorrect answers in a quiz a 'show answer' button is displayed.  Clicking on this puts in all of the answers and shows the 'well done' box.  This was rather tricky to implement as the script needed to reset the question, removing all previous answers and ticks and reverting the initial letter case, since selecting a full stop automatically capitalises the following letter.  I also implemented a workaround for answers where a space is acceptable.  These no longer count towards the final tally of correct answers, so leaving a space rather than selecting a comma can now result in the 'well done' message being displayed.  Again, this was rather tricky to implement, and it would be good for the team to test this quiz thoroughly to make sure there aren't any occasions where it breaks.
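A minimal sketch of what the reset involves, with invented state names (the real quiz code is structured differently):

```javascript
// Minimal sketch: resetting a punctuation question means clearing previous
// answers and ticks and undoing the automatic capitalisation applied when a
// full stop was placed.  'slots' and 'words' are assumed data structures.
function resetQuestion(question) {
  question.slots.forEach(function (slot) {
    slot.answer = null;   // remove any punctuation mark the user placed
    slot.correct = null;  // clear the tick / cross indicator
  });
  question.words.forEach(function (word) {
    word.display = word.original;  // revert to the original letter case
  });
}
```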

I also improved navigation throughout the app.  I added ‘next’ buttons to all of the quizzes, which either take you to the next section, or to the next part of the quiz, as applicable.  I think this works much better than just having the option to return to the page the quiz was linked from.  I also added in a ‘hamburger’ button to the footer of every page within a section.  Pressing on this takes you to the section’s contents page, and I added ‘next’ and ‘previous’ buttons to the contents pages too, so you can navigate between sections without having to go back to the homepage.

I spent a bit of time fixing the drag / drop quizzes so that the draggable boxes were constrained to each exercise's boundaries.  This seemed to work great until I got to the references quiz, which has quite long sections of draggable text.  With the constraint in place it became impossible for the part of the draggable button that triggers the drop to reach the answer boxes nearest the edges of the question, as no part of the button could cross the boundary.  So, rather annoyingly, I had to remove this feature and just allow people to drag the buttons all over the page.  But dropping a button from one question into another will always give an incorrect answer now, so it's not too big a problem.
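For context, here's a minimal sketch of the containment set-up, assuming jQuery UI's draggable / droppable was in use (the library isn't named above, so this is illustrative):

```javascript
// Minimal sketch, assuming jQuery UI.  The 'containment' option keeps a
// draggable inside its exercise, but the drop only registers once enough of
// the draggable overlaps the target, which it can't do for targets flush
// against the exercise boundary – hence the feature had to go.
$('.answer-button').draggable({
  containment: '.exercise',  // constrain dragging to the exercise's bounds
  revert: 'invalid'          // snap back if not dropped on a valid target
});
$('.answer-slot').droppable({
  tolerance: 'intersect',    // how much overlap counts as a drop
  drop: function (event, ui) {
    $(this).text(ui.draggable.text());  // record the dropped answer
  }
});
```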

With all of this in place I’ll start working on the app version of the resource next week and will hopefully be able to submit it to the app stores by the end of the week, all being well.

In addition to my work on ARIES, I completed some other tasks for a number of other projects.  For Mapping Metaphor I created a couple of scripts for Wendy that output some statistics about the metaphorical connections in the data.  For the Thesaurus of Old English I created a facility to enable staff to create new categories and subcategories (previously it was only possible to edit existing categories or add / edit / remove words from existing categories).  I met with Nigel Leask and some of the Curious Travellers team on Friday to discuss some details for a new post associated with this project.  I had an email discussion with Ronnie Young about the Burns database he wants me to make an online version of.  I also met with Jane Stuart-Smith and Rachel MacDonald, who is the new project RA for the SPADE project, and set up a user account for Rachel to manage the project website.  I had a chat with Graeme Cannon about a potential project he's helping put together that may need some further technical input, and I updated the DSL website and responded to a query from Ann Ferguson regarding a new section of the site.

I also spent most of a day working on the Edinburgh Gazetteer project, during which I completed work on the new 'keywords' feature.  It was great to be able to do this as I had been intending to work on it last week but just didn't have the time.  I took Rhona's keywords spreadsheet, which had the page ID in one column and keywords separated by semi-colons in another, and created two database tables to hold the information (one for information about the keywords and a joining table to link keywords to individual pages).  I then wrote a little script that went through the spreadsheet, extracted the information and added it to my database.  I then set to work on adding the actual feature to the website.
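The import logic is simple enough that a short sketch covers it; the names here are invented, and the real script writes the results to the database rather than in-memory arrays:

```javascript
// Minimal sketch: split each row's semi-colon separated keywords, assign
// each distinct keyword an ID for the keywords table, and build rows for
// the joining table that links keywords to pages.  'rows' is assumed to be
// the parsed spreadsheet, e.g. [{ pageId: 1, keywords: 'riots;petitions' }].
const keywordIds = new Map();  // keyword text -> keyword ID
const pageKeywords = [];       // rows destined for the joining table

rows.forEach(function (row) {
  row.keywords.split(';')
    .map(function (k) { return k.trim(); })
    .filter(function (k) { return k.length > 0; })
    .forEach(function (keyword) {
      if (!keywordIds.has(keyword)) {
        keywordIds.set(keyword, keywordIds.size + 1);  // next available ID
      }
      pageKeywords.push({ pageId: row.pageId, keywordId: keywordIds.get(keyword) });
    });
});
```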

The index page of the Gazetteer now has a section where all of the keywords are listed.  There are more than 200 keywords so it's a lot of information.  Currently the keywords appear like 'bricks' in a scrollable section, but this might need to be updated as it's maybe a bit too much information.  If you click on a keyword a page loads that lists all of the pages that the keyword is associated with.  When you load a specific page, either from the keyword page or from the regular browse option, there's now a section above the page image that lists the associated keywords.  Clicking on one of these loads the keyword's page, allowing you to access any other pages that are associated with it.  It's a pretty simple system but it works well enough.  The actual keywords need a bit of work, though, as some are too specific and there are some near duplications due to typos and things like that.  Rhona is going to send me an updated spreadsheet and I will hopefully upload this next week.

Oh yes, it was five years ago this week that I started in this post.  How time flies.

Week Beginning 28th August 2017

This week was rather a hectic one as I was contacted by many people who wanted my help and advice with things.  I think it's the time of year – the lecturers are returning from their holidays but the students aren't back yet, so they start getting on with other things, meaning busy times for me.  I had my PDR session on Monday morning, so I spent a fair amount of time at this and then writing things up afterwards.  All went fine, and it's good to know that the work I do is appreciated.  After that I had to do a few things for Wendy for Mapping Metaphor.  I'd forgotten to run my 'remove duplicates' script after I'd made the final update to the MM data, which meant that many of the sample lexemes were appearing twice.  Thankfully Wendy spotted this and a quick execution of my script removed 14,286 duplicates in a flash.  I also had to update some of the text on the site and change the way search terms are highlighted in the HT, to avoid links through from MM highlighting multiple terms.  I also wrote a little script that Wendy wanted, displaying the number of strong and weak metaphorical connections for each of the categories.

My big task for the week was to start on the redevelopment of the ARIES app.  I had been expecting to receive the materials for this several weeks earlier as Marc wanted the new app to be ready to launch at the beginning of term.  As I’d heard nothing I assumed that this was no longer going to happen, but on Monday Marc gave me access to the files and said the launch must still go ahead at the start of term.  There is rather a lot to do and very little time to do it in, especially as preparing stuff for the App Store takes so much time once the app is actually developed.  Also, Marc is still revising the materials so even though I’m now creating the new version I’m still going to have to go back and make further updates later on.  It’s not exactly an ideal situation.  However, I did manage to get started on the redevelopment on Tuesday, and spent pretty much all of my time on Tuesday, Wednesday and Thursday on this task.  This involved designing a new interface based on the colours found in the logo file, creating the structure of the app, and migrating the static materials that the team had created in HTML to the JSON file I’m creating for the app contents.  This included creating new styles for the new content where required and testing things out on various devices to make sure everything works ok.  I also implemented two of the new quizzes, which also took quite a bit of time, firstly because I needed to manually migrate the quiz contents to a format that my scripts could work with and secondly because although the quizzes were similar to ones I’ve written before they were not identical in structure, so needed some reworking in order to meet the requirements.  I’m pretty happy with how things are developing, but progress is slow.  I’ve only completed the content for three subsections of the app, and there are a further nine sections remaining.  Hopefully the pace will quicken as I proceed, but I’m worried that the app is not going to be ready for the start of term, especially as the quizzes should really be tested out by the team and possibly tweaked before launch.

I spent most of Friday this week writing the Technical Plan for Thomas Clancy’s new place-name project.  Last week I’d sent off a long list of questions about the project and Thomas got back to me with some very helpful answers this week, which really helped in writing the plan.  It’s still only a first version and will need further work, but I think the bulk of the technical issues have been addressed now.

Other than these tasks, I responded to a query from Moira Rankin from the Archives about an old project I was involved with, I helped Michael Shaw deal with some more data for The People’s Voice project, I had a chat to Catriona MacDonald about backing up The People’s Voice database, I looked through a database that Ronnie Young had sent me, which I will be turning into an online resource sometime soon (hopefully), I replied to Gerry McKeever about a project he’s running that’s just starting up which I will be involved with, and I replied to John Davies in History about a website query he had sent me.  Unfortunately I didn’t get a chance to continue with the Edinburgh Gazetteer work I’d started last week, but I’ll hopefully get a chance to do some further work on this next week.

Week Beginning 31st July 2017

This week was another four-day week for me as I was on holiday on Friday.  I will also be on holiday all next week.  As I am currently between pressing deadlines and I didn't want to start anything major before my holiday, I decided to return to the migration of one of the old STELLA resources to the University's T4 website this week.  The resource in question is STARN (the Scots Teaching and Resource Network).  It's a collection of Scottish literary and non-literary materials that was mainly compiled in the 90s.  Although the old site mostly still worked it looked very old fashioned and contained many broken links.  I had started to migrate the site across to T4 before Christmas last year during a bit of slack time I had, but as things got busier I had to leave the migration half done and focus on current research projects instead.  When I returned to it this week I discovered I was right in the middle of migrating Sir Walter Scott's Waverley novels, which was something of a mammoth task.  There were countless chapters that each needed their own pages, then I needed to add 'next' and 'previous' links to all of these after I'd created the pages, then I needed to create contents pages and a variety of ancillary pages.  It was a tedious, time-consuming and pretty brainless task, but there is a certain amount of satisfaction to be gained from getting it all done.  You can now access the STARN resource here: http://www.gla.ac.uk/schools/critical/aboutus/resources/stella/projects/starn/

I also spent a bit of time this week speaking to Alison Wiggins about her upcoming AHRC project that starts in September and I will be involved with for a small amount of my time.  I also set up a subdomain for Stuart Gillespie’s project.  I’m going to be helping out on an interview panel for a post in another School within the College in September and I spent a bit of time going through the applications for this too.  There’s not really much else to say about the work I did this week.  Once I’m back after my holiday I’ll need to focus on the new version of the ARIES app that is due to launch in September (all being well) and I need to get back into developing the atlas for the SCOSYA project.

Week Beginning 24th July 2017

I spent Monday this week creating the Android version of the ‘Basics of English Metre’ app, which took a little bit of time as the workflow for creating and signing apps for publication had completely changed since the last time I created an app.  The process now uses Android Studio, and once I figured out how it all worked it was actually a lot easier than the old way that used to involve several command-line tools such as zipalign.  By the end of the day I had submitted the app and on Tuesday both the iOS and the Android version had been approved and had been added to the respective stores.  You can see the iOS version here: https://itunes.apple.com/us/app/the-basics-of-english-metre/id1262414928?mt=8 and the Android version here: https://play.google.com/store/apps/details?id=com.gla.stella.metre&hl=en_GB&pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1.  The Web version is also available here: http://www.arts.gla.ac.uk/stella/apps/web/metre/.

On Friday I met with Stuart Gillespie in English Literature to discuss a new website he requires.  He has a forthcoming monograph that will be published by OUP and he needs a website to host an 'Annexe' for this publication.  Initially it will just be a PDF but it might develop into something bigger with online searches later on.  Also this week I had a further email conversation with Thomas Clancy about some potential place-name projects that might use the same system as the REELS project, and I had a chat with someone from the KEEP archive in Suffolk about hosting the Woolf short story digital edition I've created.

I spent the rest of the week getting back into the visualisations I've been making using data from the Historical Thesaurus for the Linguistic DNA project.  It took me some time to read through all of the documentation again, look through previously sent emails and my existing code, and figure out where I'd left things off several weeks ago.  I created a little 'to do' list of things I need to do with the visualisations.  My 'in progress' versions of the sparklines went live when the new version of the Historical Thesaurus was launched at the end of June (see http://historicalthesaurus.arts.gla.ac.uk/sparklines/) but these still need quite a bit of work, firstly to speed up their generation and secondly to make sure the data actually makes sense when a period other than the full duration is specified.  The pop-ups that appear on the visualisations also need to be reworked when looking at shorter periods, as the statistics they contain currently refer to the full duration.

I didn't actually tackle any of the above this week, as I decided to look into creating a new set of visualisations for 'Deviation in the LDNA period' instead.  Marc had created a sort of heatmap for this data in Excel and what I needed to do was create a dynamic, web-based version of it.  I decided to use the always useful D3.js library for these and rather handily I found an example heatmap that I could use as a basis for further work:  http://bl.ocks.org/tjdecke/5558084.  Also via this I found some very useful colour scales that I could use for the heatmap and will no doubt use for future visualisations: https://bl.ocks.org/mbostock/5577023

The visualisation I created is pretty much the same as the spreadsheet – increasingly darker shades of green representing positive numbers and increasingly darker shades of blue representing negative numbers.  There are columns for each decade in the LDNA period and rows for each Thematic Heading.

I've split the visualisation up based on the 'S1' code.  It defaults to just showing the 'AA' headings, but using the drop-down list you can select another heading, e.g. 'AB', and the visualisation updates, replacing the data.  This calls a PHP script that generates new data from the database and formats it as a CSV file.  We could easily offer up the CSV files to people too if they want to reuse the data.
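For the curious, here's a minimal sketch of the rendering side, assuming a CSV with 'heading', 'decade' and 'value' columns (the real column names and endpoint may differ) and a reasonably recent version of D3:

```javascript
// Minimal sketch: draw the heatmap from the CSV the PHP script produces,
// with positive values in greens and negative values in blues, as in the
// spreadsheet.  Column names and the endpoint are assumptions.
const cellW = 40, cellH = 14;  // rectangles, not squares: wider than tall
const green = d3.scaleSequential(d3.interpolateGreens);
const blue = d3.scaleSequential(d3.interpolateBlues);

d3.csv('heatmap-data.php?s1=AE').then(function (rows) {
  const headings = Array.from(new Set(rows.map(function (r) { return r.heading; })));
  const decades = Array.from(new Set(rows.map(function (r) { return r.decade; })));
  const max = d3.max(rows, function (r) { return Math.abs(+r.value); });
  green.domain([0, max]);
  blue.domain([0, max]);

  d3.select('#heatmap').append('svg')
      .attr('width', decades.length * cellW)
      .attr('height', headings.length * cellH)
    .selectAll('rect').data(rows).enter().append('rect')
      .attr('x', function (d) { return decades.indexOf(d.decade) * cellW; })
      .attr('y', function (d) { return headings.indexOf(d.heading) * cellH; })
      .attr('width', cellW - 8)   // horizontal gap between cells
      .attr('height', cellH - 4)  // vertical gap between rows
      .attr('fill', function (d) {
        return +d.value >= 0 ? green(+d.value) : blue(-d.value);
      })
    .append('title')  // a simple hover pop-up showing decade and value
      .text(function (d) { return d.decade + ': ' + d.value; });
});
```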

Note that not all of the ‘S1’ Thematic Headings appear in the spreadsheet or have ‘glosses’.  E.g. ‘AJ Matter’ is not in the spreadsheet and has no ‘gloss’ so I’ve had to use ‘AJ01 Alchemy’ as the ‘group’ in the drop-down list, which is probably not right.  Where there is no ‘S1’ heading (or no ‘S1’ heading that has a ‘gloss’) the ‘S2’ heading appears instead.

Here’s a screenshot of the visualisation, showing Thematic Heading group ‘AE Animals’:

In the ‘live’ visualisation (which I can’t share the URL of yet) if you hover over a thematic heading code down the left-hand edge of the visualisation a pop-up appears containing the full ‘gloss’ so you can tell what it is you’re looking at.  Similarly, if you hover over one of the cells a pop-up appears, this time containing the decade (helpful if you’ve scrolled down and the column headings are not visible) and the actual value contained within the cell.

Rather than make the cells square boxes as in the example I started with, I’ve made the boxes rectangles, with the intention of giving more space between rows and hopefully making it clearer that the data should primarily be read across the way.  I have to say I rather like the look of the visualisation as it brings to mind DNA sequences, which is rather appropriate for the project.

I experimented with a version of the page that had a white background and another that had a black background.  I think the white background actually makes it easier to read the data, but the black background looks more striking and ‘DNA sequency’ so I’ve added in an option to switch from a ‘light’ theme to a ‘dark’ theme, with a nice little transition between the two.  Here’s the ‘dark’ theme selected for ‘AB Life’:

There's still probably some further work to be done on this, e.g. allowing users to alter the cell values in some way based on applied limits, or allowing users to click through from a cell pop-up to some actual words in the HT or something.  I could also add in a legend that shows what values the different shades represent.  I wasn't sure whether this was really needed as you can tell the values by hovering over the boxes anyway.  I'll see what Marc and Fraser suggest when they have a chance to use the visualisations.