I participated in the UCU strike action from Wednesday to Friday this week, so it was a two-day working week for me. During this time I gave some help to the students who are migrating the International Journal of Scottish Theatre and Screen and talked to Gerry Carruthers about another project he’s hoping to put together. I also passed on information about the DNS update to the DSL’s IT people, added a link to the DSL’s new YouTube site to the footer of the DSL site and dealt with a query regarding accessing the DSL’s Google Analytics data. I also spoke with Luca about arranging a meeting with him and his line manager to discuss digital humanities across the college and updated the listings for several Android apps that I created a few years ago that had been taken down due to their information being out of date. As central IT services now manages the University Android account I hadn’t received notifications that this was going to take place. Hopefully the updates have done the trick now.
Other than this I made some further updates to the Anglo-Norman Dictionary’s locution search that I created last week. This included changing the ordering to list results by the word that was searched for rather than by headword, changing the way the search works so that a wildcard search such as ‘te*’ now matches the start of any word in the locution phrase rather than just the first word, and fixing a number of bugs that had been spotted.
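The revised wildcard behaviour is easy enough to illustrate. Here’s a rough sketch in Python (the site itself isn’t written in Python, and the function name is just for illustration) of matching a term like ‘te*’ against any word in a locution phrase rather than only the first:

```python
import re

def locution_matches(term: str, phrase: str) -> bool:
    """Illustrative sketch: return True if the search term (with an
    optional trailing '*') matches the start of ANY word in the
    locution phrase, not just the first word."""
    pattern = re.escape(term.rstrip('*'))
    if term.endswith('*'):
        regex = re.compile(r'^' + pattern)          # prefix match on a word
    else:
        regex = re.compile(r'^' + pattern + r'$')   # whole-word match
    return any(regex.search(word) for word in phrase.split())
```

Under the old behaviour only the first word would have been tested, so a phrase whose second word began with ‘te’ would have been missed.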
I spent the rest of my available time starting to work on an interactive version of the radar diagram for the Historical Thesaurus. I’d made a static version of this a couple of months ago which looks at the words in an HT category by part of speech and visualises how the numbers of words in each POS change over time. What I needed to do was find a way to allow users to select their own categories to visualise. We had decided to use the broader Thematic Categories for the feature rather than regular HT categories, so my first task was to create a Thematic Category browser from ‘AA The World’ to ‘BK Leisure’. It took a bit of time to rework the existing HT category browser to work with thematic categories, and also to then enable the selection of multiple categories by pressing on the category name. Selected categories appear to the right of the browser, and I added in an option to remove a selected category if required. With this in place I began work on the code to actually grab and process the data for the selected categories. This finds every lexeme and its associated dates in each HT category in each of the selected thematic categories. For now the data is just returned and I’m still in the middle of processing the dates to work out which period each word needs to appear in. I’ll hopefully find some time to continue with this next week. Here’s a screenshot of the browser:
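The period-processing step I’m in the middle of essentially boils down to checking which periods a word’s date range overlaps. A minimal sketch, assuming illustrative period boundaries rather than the HT’s real ones:

```python
def periods_for_word(start, end, boundaries=(1150, 1500, 1700, 1900, 2000)):
    """Illustrative sketch: return the indices of the periods that a
    word's attested date range overlaps, so its part of speech can be
    counted in each relevant period. The boundary years here are made
    up for the example, not the real HT period divisions."""
    periods = []
    lower = 0
    for i, upper in enumerate(boundaries):
        # A word belongs to a period if its range overlaps that period.
        if start < upper and end >= lower:
            periods.append(i)
        lower = upper
    return periods
```

A word attested 1600–1800 would then be counted in both the 1500–1700 and 1700–1900 buckets, which is the sort of behaviour the radar diagram needs.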
It was another mostly SCOSYA week this week, ahead of the launch of the project that was planned for the end of next week. However, on Friday this week I bumped into Jennifer who said that the launch will now be pushed back into December. This is because our intended launch date was the last working day before the UCU strike action begins, which is a bad time to launch the project for reasons of publicity and engaging with other scholars, and because any technical issues that cropped up might not be able to be sorted until after the strike. As there’s a general election soon after the strike is due to end, it looks like the launch is going to be pushed back until closer to Christmas. But as none of this transpired until Friday I still spent most of the week until then making what I thought were last-minute tweaks to the website and fixing bugs that had cropped up during user testing.
This included going through all of the points raised by Gary following the testing session he had arranged with his students in New York the week before, and meeting with Jennifer, E and Frankie to discuss how we intended to act on the feedback, which was all very positive but did raise a few issues relating to the user interface, the data and the explanatory text.
Also this week I had a further chat with Luca about the API he’s building, and a DMP request that came his way, and arranged for the App and Play store account administration to be moved over to Central IT Services. I also helped Jane Roberts with an issue with the Thesaurus of Old English and had a chat with Thomas Clancy and Gilbert Markus about the Place-names of Kirkcudbrightshire project, which I set the systems up for last year and which is now nearing completion and requires some further work to develop the front-end.
I also completed an initial version of a WordPress site for Corey Gibson’s bibliography project and spoke to Eleanor Capaldi about how to get some images for her website that I recently set up. I also spent a bit of time upgrading all of the WordPress sites I manage to the latest version. Also this week I had a chat with Heather Pagan about the Anglo-Norman Dictionary data. She now has access to the data that powers the current website and gave me access to this. It’s great to finally know that the data has been retrieved and to get a copy of it to work with. I spent a bit of time looking through the XML files, but we need to get some sort of agreement about how Glasgow will be involved in the project before I do much more with it.
I had a bit of an email chat with the DSL people about adding a new ‘history’ field to their entries, something that will happen through the new editing interface that has been set up for them by another company, but will have implications for the website once we reach the point of adding the newly edited data from their new system to the online dictionary. I also arranged for the web space for Rachel Smith and Ewa Wanat’s project to be set up and spent a bit of time playing around with a new interface and design for the Digital Humanities Network website (https://digital-humanities.glasgow.ac.uk/) which is in desperate need of a makeover.
I split most of my time this week between the SCOSYA project and the Historical Thesaurus. The launch of the SCOSYA atlases is scheduled to take place in November and I had suggested to Jennifer that it might be good to provide access to the project’s data via tables rather than through the atlas interfaces. This is because although the atlases look great and are a nice interactive way of accessing and visualising the data, some people prefer looking at tables of data instead, and other people may struggle to use the interactive atlases due to accessibility issues, but may still want to be able to view the project’s data. We will of course provide free access to the project’s API, through which all of the data can be accessed as CSV or JSON files, or can even be incorporated into a completely new interface, but I thought it might be useful if we provided text-based access to the data through the project’s front-end as well. Jennifer agreed that this would be useful, so I spent some time writing a specification document for the new features, sending it to the team for feedback and developing the new features.
I created four new features. First was a table of dialect samples, which lists all of the locations that have dialect sample recordings and provides access to these recordings and the text that accompanies them, replicating the data as found on the ‘home’ map of the public atlas. The second feature provides a tabular list of all of the locations that have community voice recordings. Clicking on a location then displays the recordings and the transcriptions of each, as the following screenshot shows:
The third new feature lists all of the examples that can be browsed for through the public atlas. You can then click on one of these examples to listen to its sound clips and to view a table of results for all of the questionnaire locations. Users can also click through to view the example on the atlas itself, as I figured that some people might want to view the results as a table and then see how they look on the atlas too. The following screenshot shows the ‘explore’ feature for a particular example:
The fourth new feature replicates the full list of examples as found in the linguists’ atlas. There are many examples nested within parent and sub-parent categories and it can be a bit difficult to get a clear picture of what is available through the nested menus in the atlas, so this new feature provides access to a complete list of the examples that is fully expanded and easier to view, as the following screenshot demonstrates:
It’s then possible to click on an example to view the results of this example for every location in a table, again with a link through to the result on the atlas, which then enables the user to customise the display of results further, for example focussing on older or younger speakers or limiting the display to particular rating levels.
Finally for the project this week I met with Jennifer and E to discuss the ancillary pages and text that need to be put in place before the launch, and we discussed the launch itself and what this would involve.
For the HT I generated some datasets that an external researcher had requested from the Thesaurus of Old English data, and I generated some further datasets from the main HT database for another request. I also started to implement a system to generate the new dates table. I created the necessary table and wrote a function that takes a lexeme and goes through all 19 date fields to generate the rows that would need to be created for the lexeme. As yet I haven’t set this running on the whole dataset; instead I’ve created a test script that allows you to pass a catid and view all of the date rows that would be created for each lexeme in the category so I (and Marc and Fraser) can test things out. I’ve tested it out with categories that have some complicated date structures and so far I’ve not encountered any unexpected behaviour, apart from one thing: some lexemes have a full date such as ‘1623 Dict. + 1642–1711’. The script doesn’t analyse the ‘fulldate’ field but instead looks at each of the actual date fields. There is only one ‘label’ field so it’s not possible to ascertain that in this case the label is associated with the first date. Instead the script always associates the label with the last date that a lexeme has. I’m not sure how common it is for a label to appear in the middle of a full date, but it definitely crops up fairly regularly when I load a random category on the HT homepage, always as ‘Dict.’ so far. We’ll need to see what we can do about this, if it turns out to be important, which I guess it probably will.
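To show the label problem concretely, here’s a hypothetical sketch of the row-generation logic (the field names are made up, and only four stand-ins are used for the 19 real date fields). Because there is only one label field, the label always ends up attached to the last date:

```python
def build_date_rows(lexeme):
    """Illustrative sketch of the date-row generation: each non-empty
    date field on the lexeme becomes a row in the new dates table.
    Since the source data holds only ONE label field, the label can
    only ever be attached to the last date, even when it really
    belongs to an earlier date in the full date string."""
    rows = []
    for field in ('firstd', 'midd1', 'midd2', 'lastd'):  # stand-ins for the 19 real fields
        value = lexeme.get(field)
        if value:
            rows.append({'date': value, 'label': None})
    if rows and lexeme.get('label'):
        rows[-1]['label'] = lexeme['label']  # always the last date
    return rows
```

So for a full date like ‘1623 Dict. + 1642–1711’ the ‘Dict.’ label would end up on the 1711 row rather than the 1623 one, which is exactly the issue described above.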
Also this week I performed some App Manager duties, had a conversation with Dauvit Broun, Nigel Leask and the RDM team about the ArtsLab session on data management plans next week, and spoke to Ann Ferguson of the DSL about how changes to the XML structure of entries will be reflected in the front-end of the DSL.
I’d taken Tuesday off this week to cover the last day of the school holidays so it was a four-day week for me. It was a pretty busy four days, though, involving many projects. I had some app related duties to attend to, including setting up a Google Play developer account for people in Sports and Recreation and meeting with Adam Majumdar from Research and Innovation about plans for commercialising apps in future. I also did some further investigation into locating the Anglo-Norman Dictionary data, created a new song story for RNSN and read over Thomas Clancy’s Iona proposal materials one last time before the documents are submitted. I also met with Fraser Dallachy to discuss his Scots Thesaurus plans and will spend a bit of time next week preparing some data for him.
Other than these tasks I split my remaining time between SCOSYA and DSL. For SCOSYA we had a team meeting on Wednesday to discuss the public atlas. There is only about a month left to complete all development work on the project and I was hoping that the public atlas that I’d been working on recently was more or less complete, which would then enable me to move on to the other tasks that still need to be completed, such as the experts interface and the facilities to manage access to the full dataset. However, the team have once again changed their minds about how they want the public atlas to function and I’m therefore going to have to devote more time to this task than I had anticipated, which is rather frustrating at this late stage. I made a start on some of the updates towards the end of the week, but there is still a lot to be done.
For DSL we finally managed to sort out the @dsl.ac.uk email addresses, meaning the DSL people can now use their email accounts again. I also investigated and fixed an issue with the ‘v3’ version of the API which Ann Ferguson had spotted. This version was not working with exact searches, which use speech marks. After some investigation I discovered that the problem was being caused by the ‘v3’ API code missing a line that was present in the ‘v2’ API code. The server automatically escapes quotes in URLs by adding a preceding backslash (\). The ‘v2’ code was stripping this backslash before processing the query, meaning it correctly identified exact searches. As the ‘v3’ code didn’t get rid of the backslashes it wasn’t finding the quotation marks and was not treating the query as an exact search.
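The fix amounts to a single line. A sketch of the logic in Python (the API itself isn’t written in Python; the function name is just for illustration):

```python
def is_exact_search(raw_query: str) -> bool:
    """Illustrative sketch of the fix: the server escapes quotes in
    the URL with a preceding backslash, so the backslashes must be
    stripped BEFORE testing for surrounding speech marks."""
    query = raw_query.replace('\\"', '"')  # the line the 'v3' code was missing
    return query.startswith('"') and query.endswith('"')
```

Without that stripping step, the leading character is a backslash rather than a quote, so the exact-search branch is never triggered.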
I also investigated why some DSL entries were missing from the output of my script that prepared data for Solr. I’d previously run the script on my laptop, but running it on my desktop instead seemed to output the full dataset including the rows I’d identified as being missing from the previous execution of the script. Once I’d outputted the new dataset I sent it on to Raymond for import into Solr and then I set about integrating full-text searching into both ‘v2’ and ‘v3’ versions of the API. This involved learning how Solr uses wildcard characters and Boolean searches, running some sample queries via the Solr interface and then updating my API scripts to connect to the Solr interface, format queries in a way that Solr could work with, submit the query and then deal with the results that Solr outputs, integrating these with fields taken from the database as required.
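As a rough illustration of the query-formatting step, here’s a sketch of escaping Solr’s special characters while letting ‘*’ and ‘?’ through as user-facing wildcards. The ‘fulltext’ field name and the parameter set are hypothetical, not the API’s real ones:

```python
def escape_solr(term: str) -> str:
    """Escape Solr's special query characters in a user-supplied term,
    leaving '*' and '?' intact so they still act as wildcards. A
    sketch only; the real API's query handling is more involved."""
    specials = set('+-&|!(){}[]^"~:\\/')
    return ''.join('\\' + c if c in specials else c for c in term)

def solr_query(term: str) -> dict:
    # Hypothetical parameter set for a Solr select request against a
    # made-up 'fulltext' field.
    return {'q': 'fulltext:(%s)' % escape_solr(term), 'wt': 'json', 'rows': 20}
```

The results Solr returns would then be merged with fields from the database, as described above.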
Other than the bibliography side of things I think that’s the work on the API more or less complete now (I still need to reorder the ‘browse’ output). What I haven’t done yet is update the advanced search pages of the ‘new’ and ‘sienna’ versions of the website to actually work with the new APIs, so as of yet you can’t perform any free-text searches through these interfaces but only directly through the APIs. Connecting the front-ends fully to the APIs is my next task, which I will try to start on next week.
Other than ARIES work, I made some further changes to the Edinburgh Gazetteer keywords, replacing the old list of keywords with a much trimmed down list that Rhona supplied. I think this works much better than the previous list, and things are looking good. I also helped Alison Wiggins with some information she wanted to add to the Digital Humanities website, and I spent about half a day working with the Mapping Metaphor data, generating new versions of all of the JSON files that are required for the ‘Metaphoric’ app and testing the web version of this out. It looks like everything is working fine with the full dataset, so next week I’ll hopefully publish a new version of the app that contains this data. I also started working on the database of Burns’ paper for Ronnie Young, firstly converting his Access database into an online MySQL version and then creating a simple browse interface for it. There’s still lots more to be done for this but I need to meet with Ronnie before I can take this further.
The rest of my week was taken up with meetings. On Wednesday morning I was on an interview panel for a developer post in another part of the college. I also met with Gerry McKeever in the afternoon to discuss his new British Academy funded ‘Regional Romanticism’ project. I’ll be working with him to set up a website for this, with some sort of interactive map being added in sometime down the road. I spent Friday morning attending a network meeting for Kirsteen McCue’s Romantic National Song Network. It was interesting to hear more about the project and to participate in the discussions about how the web resource for this project will work. There were several ideas for where the focus for the online aspect of the project should lie, and thankfully by lunchtime we’d reached a consensus about this. I can’t say much more about it now, but it’s going to be using some software I’ve not used before but am keen to try out, which is great.
I spent a lot of this week continuing with the redevelopment of the ARIES app and thankfully after laying the groundwork last week (e.g. working out the styles and the structure, implementing a couple of exercise types) my rate of progress this week was considerably improved. In fact, by the end of the week I had added in all of the content and had completed an initial version of the web version of the app. This included adding in some new quiz types, such as one that allows the user to reorder the sentences in a paragraph by dragging and dropping them, and also a simple multiple choice style quiz. I also received some very useful feedback from members of the project team and made a number of refinements to the content based on this.
This included updating the punctuation quiz so that if you get three incorrect answers in a quiz a ‘show answer’ button is displayed. Clicking on this puts in all of the answers and shows the ‘well done’ box. This was rather tricky to implement, as the script needed to reset the question, removing all previous answers and ticks and resetting the initial letter case, since selecting a full stop automatically capitalises the following letter. I also implemented a workaround for answers where a space is acceptable. These no longer count towards the final tally of correct answers, so leaving a space rather than selecting a comma can now result in the ‘well done’ message being displayed. Again, this was rather tricky to implement and it would be good if you could test out this quiz thoroughly to make sure there aren’t any occasions where the quiz breaks.
I also improved navigation throughout the app. I added ‘next’ buttons to all of the quizzes, which either take you to the next section, or to the next part of the quiz, as applicable. I think this works much better than just having the option to return to the page the quiz was linked from. I also added in a ‘hamburger’ button to the footer of every page within a section. Pressing on this takes you to the section’s contents page, and I added ‘next’ and ‘previous’ buttons to the contents pages too, so you can navigate between sections without having to go back to the homepage.
I spent a bit of time fixing the drag / drop quizzes so that the draggable boxes were constrained to each exercise’s boundaries. This seemed to work great until I got to the references quiz, which has quite long sections of draggable text. With the constraint in place it became impossible for the part of the draggable button that triggers the drop to reach the boxes nearest the boundaries of the question as none of the button could pass the borders. So rather annoyingly I had to remove this feature and just allow people to drag the buttons all over the page. But dropping a button from one question into another will always give you an incorrect answer now, so it’s not too big a problem.
With all of this in place I’ll start working on the app version of the resource next week and will hopefully be able to submit it to the app stores by the end of the week, all being well.
In addition to my work on ARIES, I completed some other tasks for a number of other projects. For Mapping Metaphor I created a couple of scripts for Wendy that output some statistics about the metaphorical connections in the data. For the Thesaurus of Old English I created a facility to enable staff to create new categories and subcategories (previously it was only possible to edit existing categories or add / edit / remove words from existing categories). I met with Nigel Leask and some of the Curious Travellers team on Friday to discuss some details for a new post associated with this project. I had an email discussion with Ronnie Young about the Burns database he wants me to make an online version of. I also met with Jane Stuart-Smith and Rachel MacDonald, who is the new project RA for the SPADE project, and set up a user account for Rachel to manage the project website. I had a chat with Graeme Cannon about a potential project he’s helping put together that may need some further technical input and I updated the DSL website and responded to a query from Ann Ferguson regarding a new section of the site.
I also spent most of a day working on the Edinburgh Gazetteer project, during which I completed work on the new ‘keywords’ feature. It was great to be able to do this as I had been intending to work on this last week but just didn’t have the time. I took Rhona’s keywords spreadsheet, which had page ID in one column and keywords separated by a semi-colon in another and created two database tables to hold the information (one for information about keywords and a joining table to link keywords to individual pages). I then wrote a little script that went through the spreadsheet, extracted the information and added it to my database. I then set to work on adding the actual feature to the website.
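The two-table structure is a standard many-to-many setup. Here’s a self-contained sketch using SQLite (the real site uses MySQL, and the table and column names here are invented), with the spreadsheet stood in for by a list of (page ID, semi-colon-separated keywords) pairs:

```python
import sqlite3

def load_keywords(rows, db=':memory:'):
    """Illustrative sketch of the keyword import: one table for the
    keywords themselves and a joining table linking keywords to
    individual pages, so a keyword stored once can point at many
    pages and a page can carry many keywords."""
    conn = sqlite3.connect(db)
    cur = conn.cursor()
    cur.execute('CREATE TABLE keyword (id INTEGER PRIMARY KEY, name TEXT UNIQUE)')
    cur.execute('CREATE TABLE page_keyword (page_id INTEGER, keyword_id INTEGER)')
    for page_id, cell in rows:
        for name in (k.strip() for k in cell.split(';') if k.strip()):
            # Create the keyword if it doesn't exist yet, then link it.
            cur.execute('INSERT OR IGNORE INTO keyword (name) VALUES (?)', (name,))
            cur.execute('SELECT id FROM keyword WHERE name = ?', (name,))
            kid = cur.fetchone()[0]
            cur.execute('INSERT INTO page_keyword VALUES (?, ?)', (page_id, kid))
    conn.commit()
    return conn
```

The joining table then makes both directions of the front-end feature cheap: all pages for a keyword, and all keywords for a page.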
The index page of the Gazetteer now has a section where all of the keywords are listed. There are more than 200 keywords so it’s a lot of information. Currently the keywords appear like ‘bricks’ in a scrollable section, but this might need to be updated as it’s maybe a bit much information. If you click on a keyword a page loads that lists all of the pages that the keyword is associated with. When you load a specific page, either from the keyword page or from the regular browse option, there’s now a section above the page image that lists the associated keywords. Clicking on one of these loads the keyword’s page, allowing you to access any other pages that are associated with it. It’s a pretty simple system but it works well enough. The actual keywords need a bit of work, though, as some are too specific and there are some near duplications due to typos and things like that. Rhona is going to send me an updated spreadsheet and I will hopefully upload this next week.
Oh yes, it was five years ago this week that I started in this post. How time flies.
This week was rather a hectic one as I was contacted by many people who wanted my help and advice with things. I think it’s the time of year – the lecturers are returning from their holidays but the students aren’t back yet so they start getting on with other things, meaning busy times for me. I had my PDR session on Monday morning, so I spent a fair amount of time at this and then writing things up afterwards. All went fine, and it’s good to know that the work I do is appreciated. After that I had to do a few things for Wendy for Mapping Metaphor. I’d forgotten to run my ‘remove duplicates’ script after I’d made the final update to the MM data, which meant that many of the sample lexemes were appearing twice. Thankfully Wendy spotted this and a quick execution of my script removed 14,286 duplicates in a flash. I also had to update some of the text on the site and update the way search terms are highlighted in the HT, to avoid links through from MM highlighting multiple terms. I also wrote a little script that displays the number of strong and weak metaphorical connections there are for each of the categories, which Wendy wanted.
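For what it’s worth, the deduplication logic is conceptually simple; a hypothetical sketch (the real script works directly on the database, and the field names here are made up):

```python
def remove_duplicates(lexemes):
    """Illustrative sketch of a 'remove duplicates' pass: keep the
    first occurrence of each (category, word) pair and report how
    many duplicate rows were dropped."""
    seen = set()
    kept = []
    for row in lexemes:
        key = (row['catid'], row['word'])
        if key not in seen:
            seen.add(key)
            kept.append(row)
    return kept, len(lexemes) - len(kept)
```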
My big task for the week was to start on the redevelopment of the ARIES app. I had been expecting to receive the materials for this several weeks earlier as Marc wanted the new app to be ready to launch at the beginning of term. As I’d heard nothing I assumed that this was no longer going to happen, but on Monday Marc gave me access to the files and said the launch must still go ahead at the start of term. There is rather a lot to do and very little time to do it in, especially as preparing stuff for the App Store takes so much time once the app is actually developed. Also, Marc is still revising the materials so even though I’m now creating the new version I’m still going to have to go back and make further updates later on. It’s not exactly an ideal situation. However, I did manage to get started on the redevelopment on Tuesday, and spent pretty much all of my time on Tuesday, Wednesday and Thursday on this task. This involved designing a new interface based on the colours found in the logo file, creating the structure of the app, and migrating the static materials that the team had created in HTML to the JSON file I’m creating for the app contents. This included creating new styles for the new content where required and testing things out on various devices to make sure everything works ok. I also implemented two of the new quizzes, which also took quite a bit of time, firstly because I needed to manually migrate the quiz contents to a format that my scripts could work with and secondly because although the quizzes were similar to ones I’ve written before they were not identical in structure, so needed some reworking in order to meet the requirements. I’m pretty happy with how things are developing, but progress is slow. I’ve only completed the content for three subsections of the app, and there are a further nine sections remaining. 
Hopefully the pace will quicken as I proceed, but I’m worried that the app is not going to be ready for the start of term, especially as the quizzes should really be tested out by the team and possibly tweaked before launch.
I spent most of Friday this week writing the Technical Plan for Thomas Clancy’s new place-name project. Last week I’d sent off a long list of questions about the project and Thomas got back to me with some very helpful answers this week, which really helped in writing the plan. It’s still only a first version and will need further work, but I think the bulk of the technical issues have been addressed now.
Other than these tasks, I responded to a query from Moira Rankin from the Archives about an old project I was involved with, I helped Michael Shaw deal with some more data for The People’s Voice project, I had a chat to Catriona MacDonald about backing up The People’s Voice database, I looked through a database that Ronnie Young had sent me, which I will be turning into an online resource sometime soon (hopefully), I replied to Gerry McKeever about a project he’s running that’s just starting up which I will be involved with, and I replied to John Davies in History about a website query he had sent me. Unfortunately I didn’t get a chance to continue with the Edinburgh Gazetteer work I’d started last week, but I’ll hopefully get a chance to do some further work on this next week.
This week was another four-day week for me as I was on holiday on Friday. I will also be on holiday all next week. As I am currently between any pressing deadlines and I didn’t want to start anything major before my holiday, I decided to return to the migration of one of the old STELLA resources to the University’s T4 website this week. The resource in question is STARN (the Scots Teaching and Resource Network). It’s a collection of Scottish literary and non-literary materials that was mainly compiled in the 90s. Although the old site mostly still worked it looked very old fashioned and contained many broken links. I had started to migrate the site across to T4 before Christmas last year during a bit of slack time I had, but as things got busier I had to leave the migration half done and focus on current research projects instead. When I returned to it this week I discovered I was right in the middle of migrating Sir Walter Scott’s Waverley novels, which was something of a mammoth task. There were countless chapters that each needed their own pages, then I needed to add ‘next’ and ‘previous’ links to all of these after I’d created the pages, then I needed to create contents pages and a variety of ancillary pages. It was a tedious, time-consuming and pretty brainless task, but there is a certain amount of satisfaction to be gained from getting it all done. You can now access the STARN resource here: http://www.gla.ac.uk/schools/critical/aboutus/resources/stella/projects/starn/
I also spent a bit of time this week speaking to Alison Wiggins about her upcoming AHRC project that starts in September and I will be involved with for a small amount of my time. I also set up a subdomain for Stuart Gillespie’s project. I’m going to be helping out on an interview panel for a post in another School within the College in September and I spent a bit of time going through the applications for this too. There’s not really much else to say about the work I did this week. Once I’m back after my holiday I’ll need to focus on the new version of the ARIES app that is due to launch in September (all being well) and I need to get back into developing the atlas for the SCOSYA project.
I spent Monday this week creating the Android version of the ‘Basics of English Metre’ app, which took a little bit of time as the workflow for creating and signing apps for publication had completely changed since the last time I created an app. The process now uses Android Studio, and once I figured out how it all worked it was actually a lot easier than the old way that used to involve several command-line tools such as zipalign. By the end of the day I had submitted the app and on Tuesday both the iOS and the Android version had been approved and had been added to the respective stores. You can see the iOS version here: https://itunes.apple.com/us/app/the-basics-of-english-metre/id1262414928?mt=8 and the Android version here: https://play.google.com/store/apps/details?id=com.gla.stella.metre&hl=en_GB&pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1. The Web version is also available here: http://www.arts.gla.ac.uk/stella/apps/web/metre/.
On Friday I met with Stuart Gillespie in English Literature to discuss a new website he requires. He has a forthcoming monograph that will be published by OUP and he needs a website to host an ‘Annexe’ for this publication. Initially it will just be a PDF but it might develop into something bigger with online searches later on. Also this week I had a further email conversation with Thomas Clancy about some potential place-name projects that might use the same system as the REELS project and I had a chat with someone from the KEEP archive of Suffolk about hosting the Woolf short story digital edition I’ve created.
I spent the rest of the week getting back into the visualisations I’ve been making using data from the Historical Thesaurus for the Linguistic DNA project. It took me some time to read through all of the documentation again, look through previously sent emails and my existing code and figure out where I’d left things off several weeks ago. I created a little ‘to do’ list of things I need to do with the visualisations. My ‘in progress’ versions of the sparklines went live when the new version of the Historical Thesaurus was launched at the end of June (see http://historicalthesaurus.arts.gla.ac.uk/sparklines/) but these still need quite a bit of work, firstly to speed up their generation and secondly to make sure the data actually makes sense when a period other than the full duration is specified. The pop-ups that appear on the visualisations also need to be reworked when looking at shorter periods too as the statistics contained currently refer to the full duration.
I didn’t actually tackle any of the above this week; instead I decided to look into creating a new set of visualisations for ‘Deviation in the LDNA period’. Marc had created a sort of heatmap for this data in Excel, and what I needed to do was create a dynamic, web-based version of it. I decided to use the always useful D3.js library and rather handily I found an example heatmap that I could use as a basis for further work: http://bl.ocks.org/tjdecke/5558084. Via this example I also found some very handy colour scales that I could use for the heatmap and will no doubt use for future visualisations: https://bl.ocks.org/mbostock/5577023
The visualisation I created is pretty much the same as the spreadsheet – progressively darker shades of green representing positive numbers and progressively darker shades of blue representing negative numbers. There are columns for each decade in the LDNA period and rows for each Thematic Heading.
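Roughly speaking, the colouring works something like the sketch below. The nine-step shades are the standard ColorBrewer Greens and Blues scales linked above, but the bucketing of values into shades here is a simplified illustration rather than the visualisation’s exact code:

```javascript
// Map a cell value to one of nine progressively darker shades:
// greens for positive values, blues for negative, white for zero.
// These are the 9-class ColorBrewer Greens and Blues scales.
const GREENS = ["#f7fcf5", "#e5f5e0", "#c7e9c0", "#a1d99b", "#74c476",
                "#41ab5d", "#238b45", "#006d2c", "#00441b"];
const BLUES  = ["#f7fbff", "#deebf7", "#c6dbef", "#9ecae1", "#6baed6",
                "#4292c6", "#2171b5", "#08519c", "#08306b"];

// 'maxAbs' is the largest absolute value in the dataset; the bucket
// boundaries are an illustrative placeholder, not the real cut-offs.
function cellColour(value, maxAbs) {
  if (value === 0) return "#ffffff";
  const scale = value > 0 ? GREENS : BLUES;
  const idx = Math.min(
    scale.length - 1,
    Math.floor((Math.abs(value) / maxAbs) * scale.length)
  );
  return scale[idx];
}
```

So, for example, `cellColour(-10, 10)` returns the darkest blue, and small positive values land in the palest greens.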
I’ve split the visualisation up based on the ‘S1’ code. It defaults to just showing the ‘AA’ headings, but using the drop-down list you can select another heading, e.g. ‘AB’, and the visualisation updates, replacing the data. This calls a PHP script that generates new data from the database and formats it as a CSV file. We could easily offer the CSV files to people too if they want to reuse the data.
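As a simplified illustration of the data flow, here is roughly how the returned CSV gets turned into row objects ready for binding to the heatmap cells. The column names (`heading`, `decade`, `value`) are placeholders rather than the actual schema:

```javascript
// Parse the CSV text returned by the server-side script into an
// array of row objects. In the real page this would be wired up to
// the drop-down's change event and re-bound with D3's update pattern.
function parseHeatmapCsv(csvText) {
  const [headerLine, ...lines] = csvText.trim().split("\n");
  const cols = headerLine.split(",");
  return lines.map(line => {
    const fields = line.split(",");
    const row = {};
    cols.forEach((col, i) => { row[col] = fields[i]; });
    row.value = Number(row.value); // cell values are numeric
    return row;
  });
}
```

(D3 provides `d3.csv` for exactly this kind of loading and parsing; the hand-rolled version above just makes the reshaping explicit.)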
Note that not all of the ‘S1’ Thematic Headings appear in the spreadsheet or have ‘glosses’. E.g. ‘AJ Matter’ is not in the spreadsheet and has no ‘gloss’ so I’ve had to use ‘AJ01 Alchemy’ as the ‘group’ in the drop-down list, which is probably not right. Where there is no ‘S1’ heading (or no ‘S1’ heading that has a ‘gloss’) the ‘S2’ heading appears instead.
Here’s a screenshot of the visualisation, showing Thematic Heading group ‘AE Animals’:
In the ‘live’ visualisation (which I can’t share the URL of yet) if you hover over a thematic heading code down the left-hand edge of the visualisation a pop-up appears containing the full ‘gloss’ so you can tell what it is you’re looking at. Similarly, if you hover over one of the cells a pop-up appears, this time containing the decade (helpful if you’ve scrolled down and the column headings are not visible) and the actual value contained within the cell.
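The pop-up text itself is simple to assemble – something along these lines, with the field names being illustrative rather than the real ones:

```javascript
// Pop-up for a hovered cell: the decade (useful once the column
// headings have scrolled out of view) plus the cell's value.
function cellTooltip(cell) {
  return `${cell.decade}s: ${cell.value}`;
}

// Pop-up for a row label: the heading code plus its full 'gloss'.
function headingTooltip(code, gloss) {
  return `${code} ${gloss}`;
}
```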
Rather than make the cells square boxes as in the example I started with, I’ve made the boxes rectangles, with the intention of giving more space between rows and hopefully making it clearer that the data should primarily be read across the way. I have to say I rather like the look of the visualisation as it brings to mind DNA sequences, which is rather appropriate for the project.
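The rectangular layout is essentially just this calculation, with a gap added to the vertical spacing so the rows separate out. The dimensions here are placeholders, not the values used in the actual visualisation:

```javascript
// Compute the SVG rect attributes for a heatmap cell. Columns sit
// flush against each other; rows get an extra 'rowGap' of padding
// so the data reads horizontally, like a sequence.
function cellRect(colIndex, rowIndex, opts) {
  const { cellWidth, cellHeight, rowGap } = opts;
  return {
    x: colIndex * cellWidth,
    y: rowIndex * (cellHeight + rowGap),
    width: cellWidth,
    height: cellHeight
  };
}
```

In D3 this would feed the `x`, `y`, `width` and `height` attributes of each cell’s `rect` element.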
I experimented with a version of the page that had a white background and another that had a black background. I think the white background actually makes it easier to read the data, but the black background looks more striking and ‘DNA sequency’ so I’ve added in an option to switch from a ‘light’ theme to a ‘dark’ theme, with a nice little transition between the two. Here’s the ‘dark’ theme selected for ‘AB Life’:
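Conceptually the theme switch is just a pair of colour settings and a toggle, something like the sketch below (the colours are illustrative, and in the real page the change between the two sets is animated with a D3/CSS transition rather than applied instantly):

```javascript
// Two themes differing only in background and label colours.
const THEMES = {
  light: { background: "#ffffff", label: "#000000" },
  dark:  { background: "#000000", label: "#ffffff" }
};

// Flip between the two theme names.
function toggleTheme(current) {
  return current === "light" ? "dark" : "light";
}
```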
There’s still probably some further work to be done on this, e.g. allowing users to filter or adjust the cell values based on applied limits, or allowing users to click through from a cell pop-up to the actual words in the HT. I could also add in a legend showing what values the different shades represent, though I wasn’t sure whether this was really needed as you can tell the values by hovering over the boxes anyway. I’ll see what Marc and Fraser suggest when they have a chance to use the visualisations.
This was my first week back after a very relaxing two weeks of holiday, and in fact Monday this week was a holiday too so it was a bit of a short week. I spent some time doing the usual catching up with emails and issues that had accumulated in my absence, including updating access rights for the SciFiMedHums site, investigating an issue with some markers not appearing on the atlas for SCOSYA, looking into an issue that Luca had emailed me about, fixing some typos on the Woolf short story site and speaking to the people at the KEEP archive about hosting the site, and giving some feedback on the new ARIES publicity materials. I also spent the best part of a day on AHRC review duties.
On Tuesday I met with Kirsteen McCue and her RA Brianna Robertson about a new project that is starting up about romantic national song. The project will need a website and some sort of interactive map so we met to discuss this. Kirsteen was hoping I’d be able to set up a site similar to the Burns sites I’ve done, but as I’m no longer allowed to use WordPress this is going to be a little difficult. We’re going to try and set something up within the University’s T4 structure, but it might not be possible to get something working the way Kirsteen was hoping for. I sent a long email to the University’s Web Team asking for advice and hopefully they’ll get back to me soon.
I spent the rest of the week returning to app development. I’m going to be working on a new version of the ARIES app soon, so I thought it would be good to get everything up to date before this all starts. As I expected, since I last did any app development all the technical stuff has changed – a new version of Apache Cordova, new dependencies, new software to install, updates to Xcode, a new requirement to install Android Studio, and so on. Getting all of this infrastructure set up took quite a bit of time, especially the installation of a newly required piece of software called ‘CocoaPods’ that took an eternity to set up.
With all this in place I then focussed on creating the app version of ‘The Basics of English Metre’, a task that has been sitting on my ‘to do’ list for many months now. I managed to create the required iOS and Android versions and installed them on my devices for testing. All appeared to be working fine, so I then set to work creating all of the files that are necessary to actually publish the app online. I started with the iOS version. This required the creation of 14 icon files and 10 launch screen images, which was a horribly tedious task. I then needed to create several screenshots for the App Store listing, which required getting screenshots from an iPad Pro (which I don’t have). Thankfully Xcode has an iOS simulator, which you can use to boot up your app and get screenshots. However, although the simulator was working for the app earlier in the week, when I came to take the screenshots the app build just kept on failing when deploying to the simulator. Rather strangely, the app would build just fine when deploying to my actual iPad, and also when building to an Archive file for submission to the store. I spent ages trying to figure out what the problem was, but just couldn’t get to the bottom of it. In the end I had to create a new version of the app, and this thankfully worked, so I guess there was some sort of conflict or corruption in the code for the first version. With this out of the way I was able to take the screenshots, complete the App Store listings, upload my app file and submit the app for inclusion. I managed to get this done on Friday afternoon, so hopefully sometime next week the app will be available for download. I didn’t have time to complete and submit the Android version of the app, so this is what I’ll focus on at the start of next week.