Week Beginning 27th March 2017

I spent about a day this week continuing to tweak the digital edition system I’m creating for the ‘New Modernist Editing’ project.  My first task was to try to get my system working in Internet Explorer, as my current way of doing things produced nothing more than a blank section of the page in this browser.  Even though IE is now outdated it’s still used by a lot of people and I wanted to get to the bottom of the issue.  The problem was that jQuery’s find() function won’t traverse an XMLDocument object when executed in IE.  I was loading in my XML file using jQuery’s ‘get’ method, e.g.:

$.get("xml/ode.xml", function( xmlFile ) {
    //do stuff with xml file here
});

After doing some reading about XML files in jQuery it looked like you had to run a file through parseXML() in order to work with it (see http://api.jquery.com/jQuery.parseXML/), but when I did this after the ‘get’ I just got errors.  It turns out that the ‘get’ method automatically checks the file it’s getting and, if it’s an XML file, it automatically runs it through parseXML() behind the scenes, so the text file is already an XMLDocument object by the time you get to play with it.
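
For reference, the intended use of parseXML() (per the jQuery documentation linked above) is on a raw XML string, which explains the errors: feeding it the already-parsed XMLDocument that ‘get’ hands you is effectively double-parsing.  A minimal illustration, with made-up XML:

var xmlDoc = $.parseXML('<page><line>some text</line></page>');
// xmlDoc is now an XMLDocument; passing it through parseXML() again just errors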

Information on this page (http://stackoverflow.com/questions/4998324/jquery-find-and-xml-does-not-work-in-ie) suggested an alternative way to load the XML file so that it could be read in IE, but I realised that in order to get this to work I’d need the plain text file rather than the XMLDocument object that jQuery had created.  I therefore used the ‘ajax’ method rather than the shorthand ‘get’ method, which allowed me to specify that the returned data was to be treated as plain text and not XML:

$.ajax({
    url: "xml/ode.xml",
    dataType: "text"
}).done(function(xmlFile){
    //do stuff with xml file here
});

This meant that jQuery didn’t automatically convert the text into an XMLDocument object, and I was intending to manually call the parseXML() method for non-IE browsers and do separate things just for IE.  But rather unexpectedly, jQuery’s find() function and all the other DOM traversal methods just worked with the plain text, in all browsers including IE!  I’m not really sure why this is, or why jQuery even needs to bother converting XML into an XMLDocument object if it can just work with it as plain text.  But as it appears to just work I’m not complaining.

To sum up: to use jQuery’s find() method on an XML file in IE (well, in all browsers) ensure you pass it plain text rather than an XMLDocument object.
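
Here’s a minimal sketch of the working pattern, assuming some hypothetical <line> elements (the element names in the real file will of course differ):

$.ajax({
    url: "xml/ode.xml",
    dataType: "text"
}).done(function(xmlFile){
    // wrapping the plain text in $() lets find() and the other
    // traversal methods work everywhere, including IE
    $(xmlFile).find("line").each(function(){
        console.log($(this).text());
    });
});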

With this issue out of the way I set to work on adding some further features to the system.  I’ve integrated editorial notes with the transcription view, using the very handy jQuery plugin Tooltipster (http://iamceege.github.io/tooltipster/).  Words or phrases that have associated notes appear with a dashed line under them and you can click on the word to view the note, and click anywhere else to hide it again.  I decided to have notes appearing on click rather than on hover because I find hovering notes a bit annoying, and clicking (or tapping) works better on touchscreens too.
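
For anyone interested in the mechanics, the initialisation is pleasingly brief.  This is a simplified sketch assuming a hypothetical ‘noted’ class on the annotated spans, with the note text held in each span’s title attribute (Tooltipster’s default content source):

// show notes on click/tap rather than on hover
$('.noted').tooltipster({
    trigger: 'click'
});
// the dashed underline itself is just CSS, e.g.
// .noted { border-bottom: 1px dashed #666; cursor: pointer; }

The following screenshot shows how the notes work: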

I’ve also added in an initial version of the ‘Edition Settings’ feature.  This allows the user to decide how they would like the transcription to be laid out.  If you press on the ‘Edition Settings’ button this opens a popup (well, a jQuery UI modal dialog box, to be precise) through which you can select or deselect a number of options, such as visible line breaks, whether notes are present or not etc.  Once you press the ‘save’ button your settings are remembered as you browse between pages (but reset if you close your browser or navigate somewhere else).  We’ll eventually use this feature to add in alternatively edited views of the text as well – e.g. one that corrects all of the typos.
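
As a rough sketch of how this sort of persistence can work (the option names and element IDs here are hypothetical, and I’m assuming something like sessionStorage, which survives page-to-page browsing but is cleared when the browser closes):

$('#edition-settings').dialog({ modal: true, autoOpen: false });
$('#settings-button').on('click', function(){
    $('#edition-settings').dialog('open');
});
$('#save-settings').on('click', function(){
    var settings = {
        lineBreaks: $('#opt-linebreaks').is(':checked'),
        showNotes: $('#opt-notes').is(':checked')
    };
    sessionStorage.setItem('editionSettings', JSON.stringify(settings));
    $('#edition-settings').dialog('close');
});
// on each page load, re-apply any saved settings
var saved = JSON.parse(sessionStorage.getItem('editionSettings') || '{}');

The screenshot below shows the ‘popup’ in action: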

I spent about a day on AHRC duties this week and did a few other miscellaneous tasks, such as making the penultimate Burns ‘new song of the week’ live (see http://burnsc21.glasgow.ac.uk/when-oer-the-hill-the-eastern-star/) and giving some advice to Wendy Anderson about OCR software for one of her post-grad students.  I had a chat with Kirsteen McCue about a new project she is leading that’s starting up over the summer, which I’ll need to give some input into.  I also made a couple of tweaks to the content management system for ‘The People’s Voice’ project following on from our meeting last week.  Firstly, I added a new field called ‘sound file’ to the poem table.  This can be used to add in the URL of a sound file for the poem.  I updated the ‘browse poems’ table to include a Y/N field for whether a sound file is present, so that the project team can order the table by this column and easily find all of the poems that have sound files.  The second update I made was to the ‘edit’ pages for a person, publication or library.  These now list the poems that the selected item is associated with.  For people there are two lists, one for people associated as authors and another for people who feature in the poems.  For libraries there are two lists, one for associated poems and another for associated publications.  Items in the lists are links that take you to the ‘edit’ page for the listed poem / publication.  Hopefully this will make it easier for the team to keep track of which items are associated with which poems.

I also met with Gary this week to discuss the new ‘My Map Data’ feature I implemented last week for the SCOSYA project.  It turns out that the display of uploaded user data isn’t working in the Safari browser that Gary tends to use, so he had been unable to see how the feature works.  I’m going to have to investigate this issue but haven’t done so yet.  It’s a bit of a strange one as the data all uploads fine – it’s there in the database and is spat out in a suitable manner by the API – but for some reason Safari just won’t stick the data on the map.  Hopefully it will be a simple bug to fix.  Gary was able to use the feature by switching to Chrome and is now trying it out, and will let me know of any issues he encounters.  He did encounter one issue already: the atlas display depends on the order of the locations when grouping ratings into averages.  The file he uploaded had rows for the same location spread across the file, and this meant there were several spots for certain locations, each with a different average rating colour.  A simple reordering of his spreadsheet fixed this, but it may be something I need to ensure gets sorted programmatically in future.
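
The programmatic fix would presumably be order-independent grouping: accumulate every rating under its location key first, then average, so rows for the same location can appear anywhere in the uploaded file.  A quick sketch with hypothetical field names:

function averageByLocation(rows){
    var groups = {};
    rows.forEach(function(row){
        if(!groups[row.location]){
            groups[row.location] = [];
        }
        groups[row.location].push(row.rating);
    });
    // only average once every row has been assigned to its location
    var averages = {};
    for(var loc in groups){
        var sum = groups[loc].reduce(function(a, b){ return a + b; }, 0);
        averages[loc] = sum / groups[loc].length;
    }
    return averages;
}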

I also spent a bit of time this week trying to write down a description of how the advanced attribute search will work.  I emailed this document to Gary and he is going to speak to Jennifer about it.  Gary also mentioned a new search that will be required – a search by participant rather than by location.  E.g. show me the locations where ‘participant a’ has a score of 5 for both ‘attribute x’ and ‘attribute y’.  Currently the search is just location based rather than checking that individual participants exhibit multiple features.
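
To make the requirement concrete, here’s a rough sketch of such a check, with entirely hypothetical field names – keep a location if at least one participant there scored 5 for both attributes:

function locationsByParticipant(ratings, attrX, attrY){
    // gather each participant's scores, keyed by attribute
    var byParticipant = {};
    ratings.forEach(function(r){
        byParticipant[r.participant] = byParticipant[r.participant] ||
            { location: r.location, scores: {} };
        byParticipant[r.participant].scores[r.attribute] = r.score;
    });
    // keep locations where some participant has 5 for both attributes
    var locations = {};
    for(var p in byParticipant){
        var s = byParticipant[p].scores;
        if(s[attrX] === 5 && s[attrY] === 5){
            locations[byParticipant[p].location] = true;
        }
    }
    return Object.keys(locations);
}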

There was also an issue with the questionnaire upload facility this week.  For some reason the questionnaire upload was failing, even though there were no errors in the files.  After a bit of investigation it turned out that the third-party API I’m using to grab the latitude and longitude was down, and without this data the upload script gave an error.  The API is back up again now, but at the time I decided to add in a fallback: if the first API is down, my script now attempts to connect to a second API to get the data.
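
The pattern is simple enough.  The real upload script runs server-side, but as a sketch of the idea in JavaScript, with entirely hypothetical endpoints:

// try the primary geocoding API; only fall back to the second
// service if the first request fails
function getLatLng(placeName){
    return $.getJSON('https://primary-geocoder.example/lookup', { q: placeName })
        .then(null, function(){
            return $.getJSON('https://fallback-geocoder.example/lookup', { q: placeName });
        });
}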

I spent the rest of the week continuing to work on the new visualisations of the Historical Thesaurus data for the Linguistic DNA project.  Last week I managed to create ‘sparklines’ for the 4000 thematic headings.  This week I added red dots to the sparklines to mark where the peak values are.  I’ve also split the ‘experiments’ page into different pages, as I’m going to be trying several different approaches.  I created an initial filter for the sparklines (as displaying all 4000 on one page is probably not very helpful).  This filter allows users to apply any combination of the following (there’s a rough sketch of the logic after the list):

Select an average category size range (between average size ‘x’ and average size ‘y’)

Select a period in which the peak decade is reached (between decade ‘x’ and decade ‘y’)

Select a minimum percentage rise of average

Select a minimum percentage fall of average (note that as these are negative values the search will bring back everything with a value less than or equal to the value you enter).
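
In code, the filtering amounts to something like the following (the field and option names are hypothetical, and every criterion is optional so any combination can be applied):

function filterHeadings(headings, opts){
    return headings.filter(function(h){
        if(opts.minAvgSize !== undefined && h.avgSize < opts.minAvgSize) return false;
        if(opts.maxAvgSize !== undefined && h.avgSize > opts.maxAvgSize) return false;
        if(opts.peakFrom !== undefined && h.peakDecade < opts.peakFrom) return false;
        if(opts.peakTo !== undefined && h.peakDecade > opts.peakTo) return false;
        if(opts.minRise !== undefined && h.pctRise < opts.minRise) return false;
        // falls are negative, so the test is "less than or equal to the entered value"
        if(opts.maxFall !== undefined && h.pctFall > opts.maxFall) return false;
        return true;
    });
}

// e.g. headings with an average size of 50 or more peaking between 1700 and 1799:
// filterHeadings(headings, { minAvgSize: 50, peakFrom: 1700, peakTo: 1799 });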

This works pretty nicely; for example, the following screenshot shows all headings that have an average size of 50 or more and a peak between 1700 and 1799:

With this initial filter option in place I started work on more detailed options that can identify peaks, plateaus and things like that.  The user first selects a period in which they’re interested (which can be the full date range) and this then updates, by means of an AJAX call, the values that can be entered in a variety of other fields.  This new feature isn’t operational yet and I will continue to work on it next week, so I’ll have more to say about it in the next report.
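
Since it isn’t finished I won’t say too much, but the general shape is something like this (the endpoint and field names are entirely hypothetical):

$('#period-from, #period-to').on('change', function(){
    $.getJSON('api/heading-ranges', {
        from: $('#period-from').val(),
        to: $('#period-to').val()
    }).done(function(ranges){
        // constrain the other filter fields to values that are
        // actually possible within the chosen period
        $('#min-size, #max-size').attr({ min: ranges.minSize, max: ranges.maxSize });
    });
});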