Week Beginning 11th November 2019

It was another mostly SCOSYA week this week, ahead of the launch of the project that was planned for the end of next week.  However, on Friday I bumped into Jennifer, who said that the launch will now be pushed back into December.  This is because our intended launch date was the last working day before the UCU strike action begins, which would be a bad time to launch the project: bad for publicity, bad for engaging with other scholars, and risky in that any technical issues that cropped up might not be sorted out until after the strike.  As there’s a general election soon after the strike is due to end, it looks like the launch is going to be pushed back until closer to Christmas.  But as none of this transpired until Friday, I still spent most of the week until then making what I thought were last-minute tweaks to the website and fixing bugs that had cropped up during user testing.

This included going through all of the points raised by Gary following the testing session he had arranged with his students in New York the week before, and meeting with Jennifer, E and Frankie to discuss how we intended to act on the feedback, which was all very positive but did raise a few issues relating to the user interface, the data and the explanatory text.

After the meeting I made a number of tweaks, such as removing most of the intro text from above the public atlas but adding a sentence about how to make the atlas full screen, as this is a feature that I think most users overlook.  I also overhauled the footer, adding in the new AHRC logo and logos for Edinburgh and QMUL and rearranging everything, and I updated the privacy policy based on feedback from the University’s data protection people.  I updated the styling of the atlas’s menu headers to make them bolder on Macs, added links to the project’s API from the Linguists’ atlas and extended the height of the example selection area in the Linguists’ atlas too.  I also slightly tweaked the menu header text (e.g. ‘Search Examples’ is now ‘Search the examples’, to make it clearer that the tab isn’t just a few example searches) and updated the rating selection option so that unselected ratings appear in a very faded grey, to hopefully make it more obvious what is selected and what isn’t.  Finally, I updated the legend so that the square grey boxes that previously said ‘no data’ now say ‘Example not tested’ instead, and updated the pop-ups accordingly.

I also sent the URLs for the public and linguists’ atlases to the other developers in the College of Arts for feedback.  Luca Guariento found a way to break the map, which was good, as after some investigation I figured out what was causing the issue and fixed it.  Basically, if you press the ‘top’ button in the footer it jumps to the div with ID ‘masthead’ using a plain HTML fragment link.  But if you then do a full reload of the page, the JavaScript grabs ‘masthead’ from the URL hash and tries to convert it to a float to pass to Leaflet, and things break.  By handling the ‘jump to masthead’ link in JavaScript rather than HTML I stopped this situation arising.  Stevie Barrett also noted that in Internet Explorer the HTML5 audio player in the pop-ups was too large for the pop-up area; thankfully, adding a bit of CSS to set the width of the audio player resolved this issue.
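
Something like the following sketch captures the fix (the element IDs and the ‘map’ variable are assumptions; the real SCOSYA code differs in its details):

```javascript
// A sketch only – 'top-link', 'masthead' and 'map' are assumed names.
document.getElementById('top-link').addEventListener('click', function (e) {
  e.preventDefault(); // stop the browser writing '#masthead' into the URL
  document.getElementById('masthead').scrollIntoView();
});

// On a full page load, only treat the hash as map coordinates if it really
// parses as numbers – '#masthead' no longer gets written, but be safe anyway.
var parts = window.location.hash.replace('#', '').split('/');
var lat = parseFloat(parts[0]), lng = parseFloat(parts[1]);
if (!isNaN(lat) && !isNaN(lng)) {
  map.setView([lat, lng], map.getZoom()); // 'map' is the Leaflet map object
}
```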

Also this week I had a further chat with Luca about the API he’s building and a DMP request that came his way, and arranged for the App and Play store account administration to be moved over to Central IT Services.  I also helped Jane Roberts with an issue with the Thesaurus of Old English and had a chat with Thomas Clancy and Gilbert Markus about the Place-names of Kirkcudbrightshire project, for which I set up the systems last year and which is now nearing completion and requiring some further work to develop the front-end.

I also completed an initial version of a WordPress site for Corey Gibson’s bibliography project and spoke to Eleanor Capaldi about how to get some images for her website that I recently set up.  I also spent a bit of time upgrading all of the WordPress sites I manage to the latest version.  Also this week I had a chat with Heather Pagan about the Anglo-Norman Dictionary data.  She now has access to the data that powers the current website and gave me access to this.  It’s great to finally know that the data has been retrieved and to get a copy of it to work with.  I spent a bit of time looking through the XML files, but we need to get some sort of agreement about how Glasgow will be involved in the project before I do much more with it.

I had a bit of an email chat with the DSL people about adding a new ‘history’ field to their entries, something that will happen through the new editing interface that has been set up for them by another company, but will have implications for the website once we reach the point of adding the newly edited data from their new system to the online dictionary.  I also arranged for the web space for Rachel Smith and Ewa Wanat’s project to be set up and spent a bit of time playing around with a new interface and design for the Digital Humanities Network website (https://digital-humanities.glasgow.ac.uk/) which is in desperate need of a makeover.

Week Beginning 28th October 2019

I split most of my time this week between the SCOSYA project and the Historical Thesaurus.  The launch of the SCOSYA atlases is scheduled to take place in November, and I had suggested to Jennifer that it might be good to provide access to the project’s data via tables as well as through the atlas interfaces.  This is because although the atlases look great and are a nice, interactive way of accessing and visualising the data, some people simply prefer looking at tables of data, and others may struggle to use the interactive atlases due to accessibility issues but may still want to be able to view the project’s data.  We will of course provide free access to the project’s API, through which all of the data can be accessed as CSV or JSON files, or can even be incorporated into a completely new interface, but I thought it might be useful if we provided text-based access to the data through the project’s front-end as well.  Jennifer agreed that this would be useful, so I spent some time writing a specification document for the new features, sending it to the team for feedback and developing the new features.

I created four new features.  The first was a table of dialect samples, which lists all of the locations that have dialect sample recordings and provides access to these recordings and the text that accompanies them, replicating the data found on the ‘home’ map of the public atlas.  The second feature provides a tabular list of all of the locations that have community voice recordings.  Clicking on a location then displays the recordings and the transcriptions of each, as the following screenshot shows:

The third new feature lists all of the examples that can be browsed through the public atlas.  You can then click on one of these examples to listen to its sound clips and to view a table of results for all of the questionnaire locations.  Users can also click through to view the example on the atlas itself, as I figured that some people might want to view the results as a table and then see how they look on the atlas too.  The following screenshot shows the ‘explore’ feature for a particular example:

The fourth new feature replicates the full list of examples found in the linguists’ atlas.  There are many examples nested within parent and sub-parent categories, and it can be a bit difficult to get a clear picture of what is available through the nested menus in the atlas, so this new feature provides a complete list of the examples that is fully expanded and easier to view, as the following screenshot demonstrates:

It’s then possible to click on an example to view the results of this example for every location in a table, again with a link through to the result on the atlas, which then enables the user to customise the display of results further, for example focussing on older or younger speakers or limiting the display to particular rating levels.

I reckon these new features are going to complement the atlases very well and will hopefully prove very useful to researchers.  Also this week I received some feedback on the atlases from the project team and spent some time going through it, adding in some features that had been requested (e.g. buttons to scroll the user’s page down so that the full atlas is on screen) and investigating some bugs and other issues that had been reported, including some issues with the full-screen view of the atlas when using Safari in macOS, which Gary reported but I have so far been unable to replicate.  I also implemented a new way of handling links to other parts of the atlas from the stories, as new project RA Frankie had alerted me to an issue with the old way.  Handling internal links is rather tricky, as we’re not really loading a new page; it’s just the JavaScript in the user’s browser processing and displaying some different data.  As a new page is never requested, pressing the ‘back’ button doesn’t load what you might expect to be the previous page, but instead displays the last full web page that was loaded.  The links also don’t open properly in a new tab, because the reference in the link is not to an actual page but is instead intended to be picked up by JavaScript when the link is clicked on and then processed to change the map contents.  When you open the link in a new tab the JavaScript doesn’t get to run, and the browser tries to load the reference in the link, which ends up as a broken link.

It’s not a great situation, and the alternative should work a bit better.  Rather than being handled in JavaScript, the links are now full page requests that get sent to the server; a script on the server then picks up the link and processes a full page load of the relevant section of the atlas.  Unfortunately, if the user is on the last slide of a story, clicks a link and then presses the back button, they’ll end up back at the first slide of the story rather than the last, as there isn’t a way to reference a specific slide in a story; but setting the links to open in a new tab by default gets round this problem.
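
In outline, the new links look something like this (the URL structure and parameter names are invented for illustration):

```javascript
// Illustrative only – the real URL structure and parameter names will differ.
var featureId = 'D30'; // hypothetical ID of the atlas feature being linked to
var link = document.createElement('a');
link.href = '/atlas/?feature=' + encodeURIComponent(featureId); // a real page request the server can process
link.target = '_blank'; // open in a new tab by default, sidestepping the back-button problem
link.textContent = 'View this feature on the atlas';
document.body.appendChild(link); // in reality the link sits inside the story slide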

Finally for the project this week I met with Jennifer and E to discuss the ancillary pages and text that need to be put in place before the launch, and we discussed the launch itself and what this would involve.

For the HT I generated some datasets that an external researcher had requested from the Thesaurus of Old English data, and I generated some further datasets from the main HT database for another request.  I also started to implement a system to generate the new dates table.  I created the necessary table and wrote a function that takes a lexeme and goes through all of its 19 date fields to generate the rows that would need to be created for it.  I haven’t yet set this running on the whole dataset; instead I’ve created a test script that allows you to pass a catid and view all of the date rows that would be created for each lexeme in the category, so I (and Marc and Fraser) can test things out.  I’ve tested it with categories that have some complicated date structures and so far I’ve not encountered any unexpected behaviour, apart from one thing: some lexemes have a full date such as ‘1623 Dict. + 1642–1711’.  The script doesn’t analyse the ‘fulldate’ field but instead looks at each of the actual date fields.  There is only one ‘label’ field, so it’s not possible to ascertain that in this case the label is associated with the first date; instead the script always associates the label with the last date that a lexeme has.  I’m not sure how common it is for a label to appear in the middle of a full date, but it definitely crops up fairly regularly when I load a random category on the HT homepage, always as ‘Dict.’ so far.  We’ll need to see what we can do about this, if it turns out to be important, which I guess it probably will.
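
The logic of the row-generation function is roughly as follows (a JavaScript sketch of what is really a PHP script, with invented field names):

```javascript
// A sketch only – 'date1' etc. are stand-ins for the HT's 19 actual date columns.
function generateDateRows(lexeme) {
  var rows = [];
  ['date1', 'date2', 'date3'].forEach(function (field) { // ...all 19 fields in reality
    if (lexeme[field]) {
      rows.push({ lexemeId: lexeme.id, date: lexeme[field], label: null });
    }
  });
  // There is only one 'label' column, so it is attached to the last date row,
  // even when (as in '1623 Dict. + 1642–1711') it belongs to an earlier date.
  if (rows.length > 0 && lexeme.label) {
    rows[rows.length - 1].label = lexeme.label;
  }
  return rows;
}
```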

Also this week I performed some App Manager duties, had a conversation with Dauvit Broun, Nigel Leask and the RDM team about the ArtsLab session on data management plans next week, and spoke to Ann Ferguson of the DSL about how changes to the XML structure of entries will be reflected in the front-end of the DSL.


Week Beginning 12th August 2019

I’d taken Tuesday off this week to cover the last day of the school holidays so it was a four-day week for me.  It was a pretty busy four days, though, involving many projects.  I had some app related duties to attend to, including setting up a Google Play developer account for people in Sports and Recreation and meeting with Adam Majumdar from Research and Innovation about plans for commercialising apps in future.  I also did some further investigation into locating the Anglo-Norman Dictionary data, created a new song story for RNSN and read over Thomas Clancy’s Iona proposal materials one last time before the documents are submitted.  I also met with Fraser Dallachy to discuss his Scots Thesaurus plans and will spend a bit of time next week preparing some data for him.

Other than these tasks I split my remaining time between SCOSYA and DSL.  For SCOSYA we had a team meeting on Wednesday to discuss the public atlas.  There is only about a month left to complete all development work on the project and I was hoping that the public atlas that I’d been working on recently was more or less complete, which would then enable me to move on to the other tasks that still need to be completed, such as the experts interface and the facilities to manage access to the full dataset.  However, the team have once again changed their minds about how they want the public atlas to function and I’m therefore going to have to devote more time to this task than I had anticipated, which is rather frustrating at this late stage.  I made a start on some of the updates towards the end of the week, but there is still a lot to be done.

For DSL we finally managed to sort out the @dsl.ac.uk email addresses, meaning the DSL people can now use their email accounts again.  I also investigated and fixed an issue with the ‘v3’ version of the API which Ann Ferguson had spotted.  This version was not working with exact searches, which use speech marks.  After some investigation I discovered that the problem was being caused by the ‘v3’ API code missing a line that was present in the ‘v2’ code.  The server automatically escapes quotes in URLs by adding a preceding slash (\).  The ‘v2’ code was stripping this slash before processing the query, meaning it correctly identified exact searches.  As the ‘v3’ code didn’t get rid of the slashes, it wasn’t finding the quotation marks and so wasn’t treating the query as an exact search.
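
Mirrored in JavaScript purely for illustration (the real API code is PHP, and the missing line was presumably a stripslashes-style call), the logic is:

```javascript
// Illustrative mirror of the PHP logic, not the actual API code.
function isExactSearch(rawQuery) {
  var query = rawQuery.replace(/\\(["'])/g, '$1'); // strip the escaping slashes – the step 'v3' was missing
  return query.charAt(0) === '"' && query.charAt(query.length - 1) === '"';
}

isExactSearch('\\"sair\\"'); // true once the slashes are stripped
```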

I also investigated why some DSL entries were missing from the output of my script that prepared data for Solr.  I’d previously run the script on my laptop; running it on my desktop instead output the full dataset, including the rows I’d identified as missing from the previous execution.  Once I’d output the new dataset I sent it on to Raymond for import into Solr, and then set about integrating full-text searching into both the ‘v2’ and ‘v3’ versions of the API.  This involved learning how Solr handles wildcard characters and Boolean searches, running some sample queries via the Solr interface, and then updating my API scripts to connect to Solr, format queries in a way it can work with, submit the query and deal with the results it outputs, integrating these with fields taken from the database as required.
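
In outline, the pattern looks something like this (a sketch only – the Solr core name, field name and host are assumptions, and the real API scripts are PHP):

```javascript
// A rough sketch of the pattern rather than the DSL's actual code.
var term = 'dyke*'; // a trailing * is a Solr wildcard matching any ending
var url = 'http://localhost:8983/solr/dsl/select?wt=json&q=fulltext:'
        + encodeURIComponent(term);

fetch(url)
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // Solr returns matching documents in response.docs; each entry ID would
    // then be used to pull the remaining fields from the database.
    data.response.docs.forEach(function (doc) {
      console.log(doc.id);
    });
  });
```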

Other than the bibliography side of things, I think that’s the work on the API more or less complete now (I still need to reorder the ‘browse’ output).  What I haven’t done yet is update the advanced search pages of the ‘new’ and ‘sienna’ versions of the website to actually use the new APIs, so as yet you can’t perform any free-text searches through these interfaces, only directly through the APIs.  Connecting the front-ends fully to the APIs is my next task, which I will try to start on next week.

Week Beginning 11th September 2017

I spent more than half of this week continuing to work on the new ARIES app.  Last week I finished work on an initial, plain HTML and JavaScript version of the app, and I received another couple of bits of feedback this week that I implemented.  The bulk of my time, however, was spent using Apache Cordova to ‘wrap’ the HTML and JavaScript version, converting it into actual iOS and Android apps, testing these apps on my iOS and Android devices, and then making all of the media files that an app needs, such as icon files, screenshots, app loading screens and app store graphics.  This process always takes longer than I think it should.  For example, I have to make more than 20 different icon files at varying resolutions, and I need to grab multiple screenshots from at least four different devices.  This latter process is made trickier because my Android Nexus 7 tablet no longer connects properly to my PC – the ‘photos’ folder appears blank when I connect for photo transfer and doesn’t contain the actual updated contents when I connect for file transfer, so I have to use a third-party file explorer app to move the screenshots to a different folder on the device that somehow does get updated when viewed from my PC.

Regarding the icons, I came up with a few alternatives based on the header image for the app, and we finally agreed on a sort of ‘marble’ effect circle on a white background.  I think it looks pretty good, and is certainly better than the old ARIES logo.

The app publication process was also complicated by two new issues that have emerged since I last made an app.  Firstly, Apple have updated the build process to disallow any extended image metadata, I guess as a security precaution.  I created my app icon PNG files in Photoshop, which added in such metadata, and when I then built my iOS app in Xcode I received some rather unhelpful errors.  Thankfully StackOverflow had the answer (see https://stackoverflow.com/questions/39652867/code-sign-error-in-macos-sierra-xcode-8-3-3-resource-fork-finder-information) and after running a couple of command-line scripts this metadata was stripped out and the build succeeded.  My second issue related to the app name on the App Store.  Apple has decided to limit app names to 30 characters, meaning we could no longer call our app ‘ARIES: Assisted Revision in English Style’, and as there is already an app named ‘ARIES’ we couldn’t call it that either.  This is a real pain, and seems like a completely unnecessary restriction to me.  In the end we called the app “ARIES: English Academic Style”.  I managed to submit the app to Apple and Google on Wednesday, and thankfully by the end of the week the new version was available on both the App and Play Stores.  I also made the ‘web’ version available, replacing the old ARIES site.  You can access this, and link through to the app versions, from here: http://www.arts.gla.ac.uk/stella/apps/web/aries/

Other than ARIES work, I made some further changes to the Edinburgh Gazetteer keywords, replacing the old list of keywords with a much trimmed down list that Rhona supplied.  I think this works much better than the previous list, and things are looking good.  I also helped Alison Wiggins with some information she wanted to add to the Digital Humanities website, and I spent about half a day working with the Mapping Metaphor data, generating new versions of all of the JSON files that are required for the ‘Metaphoric’ app and testing the web version of this out.  It looks like everything is working fine with the full dataset, so next week I’ll hopefully publish a new version of the app that contains this data.  I also started working on the database of Burns’ paper for Ronnie Young, firstly converting his Access database into an online MySQL version and then creating a simple browse interface for it.  There’s still lots more to be done for this but I need to meet with Ronnie before I can take this further.

The rest of my week was taken up with meetings.  On Wednesday morning I was on an interview panel for a developer post in another part of the college.  I also met with Gerry McKeever in the afternoon to discuss his new British Academy funded ‘Regional Romanticism’ project.  I’ll be working with him to set up a website for this, with some sort of interactive map being added in sometime down the road.  I spent Friday morning attending a network meeting for Kirsteen McCue’s Romantic National Song Network.  It was interesting to hear more about the project and to participate in the discussions about how the web resource for this project will work.  There were several ideas for where the focus for the online aspect of the project should lie, and thankfully by lunchtime we’d reached a consensus about this.  I can’t say much more about it now, but it’s going to be using some software I’ve not used before but am keen to try out, which is great.

Week Beginning 4th September 2017

I spent a lot of this week continuing with the redevelopment of the ARIES app and thankfully after laying the groundwork last week (e.g. working out the styles and the structure, implementing a couple of exercise types) my rate of progress this week was considerably improved.  In fact, by the end of the week I had added in all of the content and had completed an initial version of the web version of the app.  This included adding in some new quiz types, such as one that allows the user to reorder the sentences in a paragraph by dragging and dropping them, and also a simple multiple choice style quiz.  I also received some very useful feedback from members of the project team and made a number of refinements to the content based on this.

This included updating the punctuation quiz so that if you give three incorrect answers a ‘show answer’ button is displayed.  Clicking on this fills in all of the answers and shows the ‘well done’ box.  This was rather tricky to implement, as the script needed to reset the question, including removing all previous answers and ticks and resetting the initial letter case (if you select a full stop, the following letter is automatically capitalised).  I also implemented a workaround for answers where a space is acceptable.  These no longer count towards the final tally of correct answers, so leaving a space rather than selecting a comma can now result in the ‘well done’ message being displayed.  Again, this was rather tricky to implement, and it would be good for the team to test this quiz thoroughly to make sure there aren’t any occasions where it breaks.
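
The reset logic boils down to something like this (a much simplified sketch with invented class names – the real quiz code is more involved):

```javascript
// A sketch only – '.gap' and '.auto-capitalised' are assumed class names.
function resetQuestion(question) {
  question.querySelectorAll('.gap').forEach(function (gap) {
    gap.textContent = '';                         // remove the chosen punctuation
    gap.classList.remove('correct', 'incorrect'); // clear the ticks and crosses
  });
  // Undo the automatic capitalisation that selecting a full stop triggers.
  question.querySelectorAll('.auto-capitalised').forEach(function (word) {
    word.textContent = word.textContent.charAt(0).toLowerCase()
                     + word.textContent.slice(1);
    word.classList.remove('auto-capitalised');
  });
}
```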

I also improved navigation throughout the app.  I added ‘next’ buttons to all of the quizzes, which either take you to the next section, or to the next part of the quiz, as applicable.  I think this works much better than just having the option to return to the page the quiz was linked from.  I also added in a ‘hamburger’ button to the footer of every page within a section.  Pressing on this takes you to the section’s contents page, and I added ‘next’ and ‘previous’ buttons to the contents pages too, so you can navigate between sections without having to go back to the homepage.

I spent a bit of time fixing the drag / drop quizzes so that the draggable boxes were constrained to each exercise’s boundaries.  This seemed to work great until I got to the references quiz, which has quite long sections of draggable text.  With the constraint in place, the part of the draggable button that triggers the drop could never reach the drop boxes nearest the edges of the question, as no part of the button could pass the borders.  So, rather annoyingly, I had to remove this feature and just allow people to drag the buttons all over the page.  But dropping a button from one question into another will always give an incorrect answer now, so it’s not too big a problem.
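
The containment experiment amounted to something like the following (jQuery UI, with assumed class names):

```javascript
// A sketch of the abandoned option – '.draggable-answer' and '.exercise'
// are assumed class names, not the app's actual markup.
$('.draggable-answer').draggable({
  containment: '.exercise', // keeps the button inside its own question – later removed
  revert: 'invalid'         // snap back if the button isn't dropped on a target
});
```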

With all of this in place I’ll start working on the app version of the resource next week and will hopefully be able to submit it to the app stores by the end of the week, all being well.

In addition to my work on ARIES, I completed some other tasks for a number of other projects.  For Mapping Metaphor I created a couple of scripts for Wendy that output some statistics about the metaphorical connections in the data.  For the Thesaurus of Old English I created a facility to enable staff to create new categories and subcategories (previously it was only possible to edit existing categories or add / edit / remove words from existing categories).  I met with Nigel Leask and some of the Curious Travellers team on Friday to discuss some details of a new post associated with this project.  I had an email discussion with Ronnie Young about the Burns database he wants me to make an online version of.  I also met with Jane Stuart-Smith and Rachel MacDonald, who is the new project RA for the SPADE project, and set up a user account for Rachel to manage the project website.  I had a chat with Graeme Cannon about a potential project he’s helping to put together that may need some further technical input, and I updated the DSL website and responded to a query from Ann Ferguson regarding a new section of the site.

I also spent most of a day working on the Edinburgh Gazetteer project, during which I completed work on the new ‘keywords’ feature.  It was great to be able to do this, as I had been intending to work on it last week but just didn’t have the time.  I took Rhona’s keywords spreadsheet, which had the page ID in one column and keywords separated by semi-colons in another, and created two database tables to hold the information (one for information about the keywords and a joining table to link keywords to individual pages).  I then wrote a little script that went through the spreadsheet, extracted the information and added it to my database, before setting to work on adding the actual feature to the website.
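
The import logic was essentially as follows (a Node-flavoured JavaScript sketch – the real script ran server-side, and the file name and column layout are assumptions):

```javascript
// A sketch only – assumes a CSV with page ID in column 1 and 'kw1;kw2;kw3' in column 2.
var fs = require('fs');

var keywords = new Map(); // keyword text -> keyword ID
var links = [];           // rows destined for the joining table

fs.readFileSync('keywords.csv', 'utf8').trim().split('\n').forEach(function (row) {
  var cols = row.split(',');
  cols[1].split(';').forEach(function (kw) {
    kw = kw.trim();
    if (kw === '') return;
    if (!keywords.has(kw)) keywords.set(kw, keywords.size + 1);
    links.push({ pageId: Number(cols[0]), keywordId: keywords.get(kw) });
  });
});
// 'keywords' and 'links' would then be INSERTed into the two database tables.
```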

The index page of the Gazetteer now has a section where all of the keywords are listed.  There are more than 200 keywords so it’s a lot of information.  Currently the keywords appear like ‘bricks’ in a scrollable section, but this might need to be updated as it’s maybe a bit much information.  If you click on a keyword a page loads that lists all of the pages that the keyword is associated with.  When you load a specific page, either from the keyword page or from the regular browse option, there’s now a section above the page image that lists the associated keywords.  Clicking on one of these loads the keyword’s page, allowing you to access any other pages that are associated with it.  It’s a pretty simple system but it works well enough.  The actual keywords need a bit of work, though, as some are too specific and there are some near duplications due to typos and things like that.  Rhona is going to send me an updated spreadsheet and I will hopefully upload this next week.

Oh yes, it was five years ago this week that I started in this post.  How time flies.

Week Beginning 28th August 2017

This week was rather a hectic one, as I was contacted by many people who wanted my help and advice with things.  I think it’s the time of year – the lecturers are returning from their holidays but the students aren’t back yet, so they start getting on with other things, meaning busy times for me.  I had my PDR session on Monday morning, so I spent a fair amount of time at this and then writing things up afterwards.  All went fine, and it’s good to know that the work I do is appreciated.  After that I had to do a few things for Wendy for Mapping Metaphor.  I’d forgotten to run my ‘remove duplicates’ script after I’d made the final update to the MM data, which meant that many of the sample lexemes were appearing twice.  Thankfully Wendy spotted this and a quick execution of my script removed 14,286 duplicates in a flash.  I also had to update some of the text on the site and update the way search terms are highlighted in the HT, to avoid links through from MM highlighting multiple terms.  I also wrote a little script that displays the number of strong and weak metaphorical connections there are for each of the categories, which Wendy wanted.

My big task for the week was to start on the redevelopment of the ARIES app.  I had been expecting to receive the materials for this several weeks earlier as Marc wanted the new app to be ready to launch at the beginning of term.  As I’d heard nothing I assumed that this was no longer going to happen, but on Monday Marc gave me access to the files and said the launch must still go ahead at the start of term.  There is rather a lot to do and very little time to do it in, especially as preparing stuff for the App Store takes so much time once the app is actually developed.  Also, Marc is still revising the materials so even though I’m now creating the new version I’m still going to have to go back and make further updates later on.  It’s not exactly an ideal situation.  However, I did manage to get started on the redevelopment on Tuesday, and spent pretty much all of my time on Tuesday, Wednesday and Thursday on this task.  This involved designing a new interface based on the colours found in the logo file, creating the structure of the app, and migrating the static materials that the team had created in HTML to the JSON file I’m creating for the app contents.  This included creating new styles for the new content where required and testing things out on various devices to make sure everything works ok.  I also implemented two of the new quizzes, which also took quite a bit of time, firstly because I needed to manually migrate the quiz contents to a format that my scripts could work with and secondly because although the quizzes were similar to ones I’ve written before they were not identical in structure, so needed some reworking in order to meet the requirements.  I’m pretty happy with how things are developing, but progress is slow.  I’ve only completed the content for three subsections of the app, and there are a further nine sections remaining.  Hopefully the pace will quicken as I proceed, but I’m worried that the app is not going to be ready for the start of term, especially as the quizzes should really be tested out by the team and possibly tweaked before launch.

I spent most of Friday this week writing the Technical Plan for Thomas Clancy’s new place-name project.  Last week I’d sent off a long list of questions about the project and Thomas got back to me with some very helpful answers this week, which really helped in writing the plan.  It’s still only a first version and will need further work, but I think the bulk of the technical issues have been addressed now.

Other than these tasks, I responded to a query from Moira Rankin in the Archives about an old project I was involved with; helped Michael Shaw deal with some more data for The People’s Voice project; had a chat with Catriona MacDonald about backing up The People’s Voice database; looked through a database that Ronnie Young had sent me, which I will (hopefully) be turning into an online resource sometime soon; replied to Gerry McKeever about a just-starting project of his that I will be involved with; and replied to John Davies in History about a website query he had sent me.  Unfortunately I didn’t get a chance to continue with the Edinburgh Gazetteer work I’d started last week, but I’ll hopefully get to do some further work on this next week.

Week Beginning 31st July 2017

This week was another four-day week for me as I was on holiday on Friday.  I will also be on holiday all next week.  As I am currently between any pressing deadlines and I didn’t want to start anything major before my holiday, I decided to return to the migration of one of the old STELLA resources to the University’s T4 website this week.  The resource in question is STARN (the Scots Teaching and Resource Network), a collection of Scottish literary and non-literary materials that was mainly compiled in the 90s.  Although the old site mostly still worked, it looked very old-fashioned and contained many broken links.  I had started to migrate the site across to T4 before Christmas last year during a bit of slack time, but as things got busier I had to leave the migration half done and focus on current research projects instead.  When I returned to it this week I discovered I was right in the middle of migrating Sir Walter Scott’s Waverley novels, which was something of a mammoth task.  There were countless chapters that each needed their own pages, then I needed to add ‘next’ and ‘previous’ links to all of these after I’d created the pages, and then I needed to create contents pages and a variety of ancillary pages.  It was a tedious, time-consuming and pretty brainless task, but there is a certain amount of satisfaction to be gained from getting it all done.  You can now access the STARN resource here: http://www.gla.ac.uk/schools/critical/aboutus/resources/stella/projects/starn/

I also spent a bit of time this week speaking to Alison Wiggins about her upcoming AHRC project that starts in September and I will be involved with for a small amount of my time.  I also set up a subdomain for Stuart Gillespie’s project.  I’m going to be helping out on an interview panel for a post in another School within the College in September and I spent a bit of time going through the applications for this too.  There’s not really much else to say about the work I did this week.  Once I’m back after my holiday I’ll need to focus on the new version of the ARIES app that is due to launch in September (all being well) and I need to get back into developing the atlas for the SCOSYA project.

Week Beginning 24th July 2017

I spent Monday this week creating the Android version of the ‘Basics of English Metre’ app, which took a little bit of time as the workflow for creating and signing apps for publication had completely changed since the last time I created an app.  The process now uses Android Studio, and once I figured out how it all worked it was actually a lot easier than the old way that used to involve several command-line tools such as zipalign.  By the end of the day I had submitted the app and on Tuesday both the iOS and the Android version had been approved and had been added to the respective stores.  You can see the iOS version here: https://itunes.apple.com/us/app/the-basics-of-english-metre/id1262414928?mt=8 and the Android version here: https://play.google.com/store/apps/details?id=com.gla.stella.metre&hl=en_GB&pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1.  The Web version is also available here: http://www.arts.gla.ac.uk/stella/apps/web/metre/.

On Friday I met with Stuart Gillespie in English Literature to discuss a new website he requires.  He has a forthcoming monograph that will be published by OUP and he needs a website to host an ‘Annexe’ for this publication.  Initially it will just be a PDF, but it might develop into something bigger with online searches later on.  Also this week I had a further email conversation with Thomas Clancy about some potential place-name projects that might use the same system as the REELS project, and I had a chat with someone from the KEEP archive in Suffolk about hosting the Woolf short story digital edition I’ve created.

I spent the rest of the week getting back into the visualisations I’ve been making using data from the Historical Thesaurus for the Linguistic DNA project.  It took me some time to read through all of the documentation again, look through previously sent emails and my existing code, and figure out where I’d left things off several weeks ago.  I created a little ‘to do’ list of things I need to do with the visualisations.  My ‘in progress’ versions of the sparklines went live when the new version of the Historical Thesaurus was launched at the end of June (see http://historicalthesaurus.arts.gla.ac.uk/sparklines/) but these still need quite a bit of work, firstly to speed up their generation and secondly to make sure the data actually makes sense when a period other than the full duration is specified.  The pop-ups that appear on the visualisations also need to be reworked when looking at shorter periods, as the statistics they contain currently refer to the full duration.

I didn’t actually tackle any of the above this week; instead I decided to look into creating a new set of visualisations for ‘Deviation in the LDNA period’.  Marc had created a sort of heatmap for this data in Excel, and what I needed to do was create a dynamic, web-based version of it.  I decided to use the always useful D3.js library for these, and rather handily I found an example heatmap that I could use as a basis for further work: http://bl.ocks.org/tjdecke/5558084.  Via this I also found some very handy colour scales that I could use for the heatmap and will no doubt use for future visualisations: https://bl.ocks.org/mbostock/5577023

The visualisation I created is pretty much the same as the spreadsheet – increasingly darker shades of green representing positive numbers and increasingly darker shades of blue representing negative numbers.  There are columns for each decade in the LDNA period and rows for each Thematic Heading.

I’ve split the visualisation up based on the ‘S1’ code.  It defaults to just showing the ‘AA’ headings, but using the drop-down list you can select another heading, e.g. ‘AB’, and the visualisation updates, replacing the data.  This calls a PHP script that generates new data from the database and formats it as a CSV file.  We could easily offer up the CSV files to people too if they want to reuse the data.

Note that not all of the ‘S1’ Thematic Headings appear in the spreadsheet or have ‘glosses’.  E.g. ‘AJ Matter’ is not in the spreadsheet and has no ‘gloss’ so I’ve had to use ‘AJ01 Alchemy’ as the ‘group’ in the drop-down list, which is probably not right.  Where there is no ‘S1’ heading (or no ‘S1’ heading that has a ‘gloss’) the ‘S2’ heading appears instead.
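
Below is a pared-down sketch of how such a heatmap can be drawn with D3 (the file, field and element names are assumptions, and the cell values are taken to be normalised to between -1 and 1; the real code builds on the example linked above):

```javascript
// A sketch only – 'heatmap-aa.csv', the 'heading' column and '#heatmap' are assumed.
d3.csv('heatmap-aa.csv').then(function (rows) {
  var decades = ['1660', '1670', '1680']; // ...all decades in the LDNA period
  var cellW = 40, cellH = 18; // rectangles rather than squares, so rows read across
  // darker greens for positive values, darker blues for negative ones
  function colour(v) {
    return v >= 0 ? d3.interpolateGreens(v) : d3.interpolateBlues(-v);
  }
  var svg = d3.select('#heatmap').append('svg')
    .attr('width', decades.length * cellW + 100) // room for row labels
    .attr('height', rows.length * cellH);
  rows.forEach(function (row, r) {
    svg.append('text') // Thematic Heading code down the left-hand edge
      .attr('x', 0).attr('y', r * cellH + 12).text(row.heading);
    decades.forEach(function (decade, c) {
      svg.append('rect')
        .attr('x', 100 + c * cellW).attr('y', r * cellH + 2)
        .attr('width', cellW - 2).attr('height', cellH - 6) // gap between rows
        .attr('fill', colour(+row[decade]));
    });
  });
});
```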

Here’s a screenshot of the visualisation, showing Thematic Heading group ‘AE Animals’:

In the ‘live’ visualisation (which I can’t share the URL of yet) if you hover over a thematic heading code down the left-hand edge of the visualisation a pop-up appears containing the full ‘gloss’ so you can tell what it is you’re looking at.  Similarly, if you hover over one of the cells a pop-up appears, this time containing the decade (helpful if you’ve scrolled down and the column headings are not visible) and the actual value contained within the cell.

Rather than make the cells square boxes as in the example I started with, I’ve made the boxes rectangles, with the intention of giving more space between rows and hopefully making it clearer that the data should primarily be read across the way.  I have to say I rather like the look of the visualisation as it brings to mind DNA sequences, which is rather appropriate for the project.

I experimented with a version of the page that had a white background and another that had a black background.  I think the white background actually makes it easier to read the data, but the black background looks more striking and ‘DNA sequency’ so I’ve added in an option to switch from a ‘light’ theme to a ‘dark’ theme, with a nice little transition between the two.  Here’s the ‘dark’ theme selected for ‘AB Life’:
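
The switch itself is trivial – something like this sketch (the class and ID names are assumed):

```javascript
// A minimal version of the theme switch: toggling a class on the body,
// with the colour change animated in CSS, e.g.
//   body { transition: background-color 0.5s; }
// and the colours overridden under body.dark-theme.
document.getElementById('theme-toggle').addEventListener('click', function () {
  document.body.classList.toggle('dark-theme');
});
```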

There’s still probably some further work to be done on this, e.g. allowing users to in some way alter the cell values based on limits applied, or allowing users to click through from a cell pop-up to some actual words in the HT or something.  I could also add in a legend that shows what values the different shades represent.  I wasn’t sure whether this was really needed as you can tell by hovering over the boxes anyway.  I’ll see what Marc and Fraser suggest when they have a chance to use the visualisations.

Week Beginning 17th July 2017

This was my first week back after a very relaxing two weeks of holiday, and in fact Monday this week was a holiday too so it was a bit of a short week.  I spent some time doing the usual catching up with emails and issues that had accumulated in my absence, including updating access rights for the SciFiMedHums site, investigating an issue with some markers not appearing on the atlas for SCOSYA, looking into an issue that Luca had emailed me about, fixing some typos on the Woolf short story site and speaking to the people at the KEEP archive about hosting the site, and giving some feedback on the new ARIES publicity materials.  I also spent the best part of a day on AHRC review duties.

On Tuesday I met with Kirsteen McCue and her RA Brianna Robertson about a new project that is starting up about romantic national song.  The project will need a website and some sort of interactive map so we met to discuss this.  Kirsteen was hoping I’d be able to set up a site similar to the Burns sites I’ve done, but as I’m no longer allowed to use WordPress this is going to be a little difficult.  We’re going to try and set something up within the University’s T4 structure, but it might not be possible to get something working the way Kirsteen was hoping for.  I sent a long email to the University’s Web Team asking for advice and hopefully they’ll get back to me soon.

I spent the rest of the week returning to app development.  I’m going to be working on a new version of the ARIES app soon, so I thought it would be good to get everything up to date before this all starts up.  As I expected, since I last did any app development all the technical stuff has changed – a new version of Apache Cordova, new dependencies, new software to install, updates to Xcode, a new requirement to install Android Studio etc etc.  Getting all of this infrastructure set up took quite a bit of time, especially the installation of a new piece of required software called ‘CocoaPods’ that took an eternity to set up.

With all this in place I then focussed on creating the app version of ‘The Basics of English Metre’, a task that has been sitting in my ‘to do’ list for many months now.  I managed to create the required iOS and Android versions and installed them on my devices for testing.  All appeared to be working fine, so I then set to work creating all of the files that are necessary to actually publish the app online.  I started with the iOS version.  This required the creation of 14 icon files and 10 launch screen images, which was a horribly tedious task.  I then needed to create several screenshots for the App Store listing, which required getting screenshots from an iPad Pro (which I don’t have).  Thankfully Xcode has an iOS simulator, which you can use to boot up your app and get screenshots.  However, although the simulator was working for the app earlier in the week, when I came to take the screenshots the app build just kept on failing when deploying to the simulator.  Rather strangely, the app would build just fine when deploying to my actual iPad, and also when building to an archive file for submission to the store.  I spent ages trying to figure out what the problem was, but just couldn’t get to the bottom of it.  In the end I had to create a new version of the app, and this thankfully worked, so I guess there was some sort of conflict or corruption in the code of the first version.  With this out of the way I was able to take the screenshots, complete the App Store listing, upload my app file and submit the app for inclusion.  I managed to get this done on Friday afternoon, so hopefully sometime next week the app will be available for download.  I didn’t have time to complete and submit the Android version of the app, so this is what I’ll focus on at the start of next week.

Week Beginning 19th June 2017

I decided this week to devote some time to redeveloping the Thesaurus of Old English, to bring it into line with the work I’ve been doing to redevelop the main Historical Thesaurus website.  I had thought I wouldn’t have time to do this before next week’s ‘Kay Day’ event, but I decided it would be better to tackle the redevelopment whilst the changes I’d made for the main site were still fresh in my mind, rather than coming back to it in possibly a few months’ time, having forgotten how I implemented the tree browse and things like that.  It actually took me less time than I had anticipated to get the new version up and running, and by the end of Tuesday I had a new version in place that was structurally similar to the new HT site.  We will hopefully be able to launch this alongside the new HT site towards the end of next week.

I sent the new URL to Carole Hough for feedback, as I was aware that she had some issues with the existing TOE website.  Carole sent me some useful feedback, which led to me making some additional changes to the site – mainly to the tree browse structure.  The biggest issue is that the hierarchical structure of TOE doesn’t quite make sense.  There are 18 top-level categories, but, for reasons I am not at all clear about, each top-level category isn’t a ‘parent’ category but is in fact a sibling of the categories one level down.  E.g., logically ‘04 Consumption of food/drink’ would be the parent category of ‘04.01’, ‘04.02’ etc., but in the TOE this isn’t the case; rather, ‘04.01’, ‘04.02’ etc. sit alongside ‘04’.  This really confuses both me and my tree browse code, which expects categories ‘xx.yy’ to be child categories of ‘xx’.  This led to the tree browse putting categories where logically they belong but where, within the confines of the TOE, they make no sense – e.g. we ended up with ‘04.04 Weaving’ within ‘04 Consumption of food/drink’!

To confuse matters further, there are some additional ‘super categories’ that I didn’t have in my TOE database but apparently should be used as the real 18 top-level categories.  Rather confusingly these have the same numbers as the other top-level categories.  So we now have ’04 Material Needs’ that has a child category ’04 Consumption of food/drink’ that then has ’04.04 Weaving’ as a sibling and not as a child as the number would suggest.  This situation is a horrible mess that makes little sense to a user, but is even harder for a computer program to make sense of.  Ideally we should renumber the categories in a more logical manner, but apparently this isn’t an option.  Therefore I had to hack about with my code to try and allow it to cope with these weird anomalies.  I just about managed to get it all working by the end of the week but there are a few issues that I still need to clear up next week.  The biggest one is that all of the ‘xx.yy’ categories and their child categories are currently appearing in two places – within ‘xx’ where they logically belong and beside ‘xx’ where this crazy structure says they should be placed.
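
To make the structural problem concrete, here’s a toy version of the assumption the tree browse code makes (an invented illustration, not the site’s actual code):

```javascript
// Derive a category's parent from its number – the assumption TOE breaks.
function parentNumber(catNumber) {
  var parts = catNumber.split('.');
  parts.pop();
  return parts.join('.'); // '04.04' -> '04'
}

// In a logical hierarchy this is fine, but in TOE '04.04 Weaving' is a
// sibling of '04 Consumption of food/drink', not its child, so the code
// needs special cases for these top-level anomalies.
parentNumber('04.04'); // returns '04'
```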

In addition to all this TOE madness I also spent some further time tweaking the new HT website, including updating the quick search box so the display doesn’t mess up on narrow screens, making some further tweaks to the photo gallery and making alterations to the interface.  I also responded to a request from Fraser to update one of the scripts I’d written for the HT OED data migration that we’re still in the process of working through.

In terms of non-thesaurus related tasks this week, I was involved in a few other projects.  I had to spend some time on some AHRC review duties.  I also fixed an issue that had crept into the SCOTS and CMSW Corpus websites since their migration: the ‘download corpus as a zip’ feature was no longer working, due to the PHP code using an old class to create the zip that was not compatible with the new server.  I spent some time investigating this and finding a new way of creating zip files in PHP.  I also locked down the SPADE website admin interface to the IP address ranges of our partner institutions and fixed an issue with the SCOSYA questionnaire upload facility.  I also responded to a request for information about TEI XML training from a PhD student and made a tweak to a page of the DSL website.

I spent the remainder of my week looking at some app issues.  We are hopefully going to be releasing a new and completely overhauled version of the ARIES app by the end of the summer, and I had been sent a document detailing the overall structure of the new site.  I spent a bit of time creating a new version of the web-based ARIES app that reflects this structure, in preparation for receiving content.  I also returned to the Metre app, which I’ve not done anything with since last year.  I added in some explanatory text and will hopefully be able to start wrapping this app up and deploying it to the App and Play stores soon – but possibly not until after my summer holiday, which starts the week after next.