A short report this week due to time constraints. I continued development of the Grammar app, although I have become rather bogged down in some of the more complex exercises. There is a lot to sort out and shape into something that is understandable and easy to use. I’m getting there, but it is taking some time.
On Monday I attended the SCS research web pages follow-up meeting, and the pages have definitely improved a lot since the first meeting. Plans for further tweaks and improvements were discussed, and I have a couple of tasks to do relating to Digital Humanities. On Tuesday I had a meeting with Flora and Arts Support about the Historical Thesaurus servers and the migration of websites from the old server to the new. Chris is on the case and hopefully we’ll be able to dump the old server soon.
Also on Tuesday I met with the Burns people about the timeline mock-ups I had put together. Of the two mock-ups people generally preferred the first one, which has events grouped into multiple ‘streams’, so this is the one that I will be developing further. I created an Excel spreadsheet template that the content creators will use, and once we have a body of timeline entries I will create a little PHP script that will convert this into the JSON structure required by the timeline software.
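The conversion itself should be straightforward – roughly something like the following, sketched here in JavaScript for illustration (the real script will be PHP). The column names and the TimeGlider event fields are assumptions until the spreadsheet template and the exact format are pinned down:

```javascript
// Sketch of the spreadsheet-to-timeline conversion. Column names ("stream",
// "title", "startdate", "description") are placeholders for whatever the
// Excel template ends up using, and the event fields are my reading of the
// TimeGlider JSON format – both may change.
function rowsToTimelineEvents(rows) {
  return rows.map(function (row, i) {
    return {
      id: "ev" + (i + 1),
      title: row.title,
      startdate: row.startdate,     // e.g. "1786-07-31"
      description: row.description,
      // assumption: events are assigned to streams via an id on each event
      timeline_id: row.stream
    };
  });
}

var events = rowsToTimelineEvents([
  { stream: "life", title: "Burns born in Alloway",
    startdate: "1759-01-25", description: "" },
  { stream: "prose", title: "Kilmarnock edition published",
    startdate: "1786-07-31", description: "" }
]);
```

The nice thing about keeping the mapping this simple is that the content creators never have to see any JSON – they just fill in spreadsheet rows.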
On Wednesday I attended the School meeting, and I spoke briefly to the other attendees about my role and the projects I’m involved with. Not much else of note to report this week.
I spent Monday and Tuesday this week focussing on the Timeline for the Burns project, which I’ve built using TimeGlider (http://timeglider.com/). It’s a really nice interface and it’s possible to make some very visually appealing timelines using data stored in the JSON format. I’ve created two test versions of the Timeline so far. The first one has two different ‘streams’ – one for life events and one for prose events. Any number of streams can be supported, with the possibility of turning streams on or off. TimeGlider also allows you to include a ‘legend’, which lets you filter events based on their icon. Unfortunately I haven’t been able to get this working when the timeline has multiple streams. I’ve created a second version of the timeline that doesn’t use multiple streams, in order to test out the legend. The data in this version is the same, except prose events are not kept separate. Instead, I’ve created a number of categories for the entries (e.g. life, locations, publications, tours). Entries in the timeline have the appropriate icon next to them and you can use the legend to filter the timeline to include only those categories you are interested in. It’s a shame the legend doesn’t seem to work with multiple streams. It is also possible to apply ‘tags’ to events and limit a search based on a tag, although I can’t get this working in any of the TimeGlider examples and I haven’t tried it in our timeline yet. I can’t post the URLs to the test versions here yet as I ‘borrowed’ some sample images for test purposes that could be under copyright control and I don’t want to get into any bother. I’m meeting with the Burns people to discuss the Timeline options next week though, and hopefully we will have a good ‘work in progress’ timeline to show soon.
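For reference, the two-stream data source looks roughly like this. The field names are my reading of the TimeGlider examples and may not match the final format exactly – treat it as a sketch of the shape, not the definitive schema:

```javascript
// Rough shape of the TimeGlider JSON data source for the two-stream version:
// one object per stream, each with its own list of events. Field names are
// assumptions based on the TimeGlider examples.
var timelineData = [
  {
    id: "life",
    title: "Life events",
    initial_zoom: "40",
    events: [
      { id: "l1", title: "Burns born in Alloway", startdate: "1759-01-25" }
    ]
  },
  {
    id: "prose",
    title: "Prose events",
    initial_zoom: "40",
    events: [
      { id: "p1", title: "First Commonplace Book begun",
        startdate: "1783-04-01",
        icon: "book.png" }   // icons are what the legend filters on
    ]
  }
];
```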
Also this week I discussed upcoming projects with two people – Rhona Brown in Scottish Literature and Justin Livingston in English Literature. I gave each of them what I hope was some good advice and hopefully their bids will be successful.
I spent the bulk of Wednesday and Thursday working on the Grammar app – finally beginning the exercises. It took a while to get properly started as they required some serious thought, but I’m quite happy with how they are progressing now. I have completed the exercises for section two, which asks users to supply part of speech labels, and section three, where users have to supply phrase labels. The CSS elements I created for displaying all the labels in the Grammar book have turned out to be equally suitable for the exercises. Each word is contained in a floating <div> that is between one and three lines tall (depending on the content). Modifier and headword labels can go on the top row, words and brackets in the middle, and phrase / part of speech labels on the bottom row. The floating nature of the divs ensures the words wrap nicely at all browser widths and that the labels are always properly positioned in relation to the words.
For the exercises all I had to do was place some sort of user input option in place of the label. I started off just using a standard HTML select box, but this looked pretty ugly. After a few other experiments I decided on a jQuery Mobile popup containing a row of buttons in a control group, one button for each part of speech or label. Pressing on an empty, dotted space in the page opens the popup; pressing a button closes the popup and enters the chosen label into the dotted space. It’s remarkably simple and works really well.
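Stripped of the popup and DOM wiring, the underlying exercise logic is just a mapping from gaps to picked labels. A minimal sketch (the function and label names here are illustrative, not the actual code):

```javascript
// Each dotted gap stores whichever label the user picks from the popup;
// checking the exercise just compares the picks against an answer key.
// The jQuery Mobile popup / DOM handling is omitted here.
function makeExercise(answerKey) {
  var picks = answerKey.map(function () { return null; });
  return {
    pick: function (slot, label) { picks[slot] = label; },
    score: function () {
      return picks.filter(function (p, i) { return p === answerKey[i]; }).length;
    }
  };
}

// e.g. part of speech labels for "The cat sat"
var ex = makeExercise(["determiner", "noun", "verb"]);
ex.pick(0, "determiner");
ex.pick(1, "noun");
ex.pick(2, "adjective");   // wrong on purpose
```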
You can see the exercises here: http://www.arts.gla.ac.uk/STELLA/briantest/grammar/exercise-2-1-parts-of-speech.html
Friday morning was mostly taken up with meetings. We had a useful meeting for the Mapping Metaphor project where we discussed visualisation options. The main outcome of the meeting was that we really need to come up with a requirements document before we can tell which possible software solutions might be suitable. This document will be created in the next couple of weeks. After this meeting we had a further meeting to discuss the revamp of the Historical Thesaurus website. I’m going to be developing a new front end for this, with Flora working on the back end. We’ll be aiming to get this all done by the end of the summer.
In the afternoon I did some further investigation into the Quicktime issue on the SCOTS site. I’ve been looking some more at the Quicktime files themselves and I’ve figured out why they won’t play while the test .mov file I downloaded from another source plays fine. It’s because the SCOTS .mov files are just containers holding pointers to a media streaming server. For example, opening up 1448.mov in a text editor shows several links to rtsp://streaming.scottishcorpus.ac.uk/v2/hq/1448.mp4.
So when the SCOTS .mov file is opened, what Quicktime is actually doing is opening a network connection using the rtsp:// protocol and downloading the content incrementally. This protocol uses a different port from standard web connections (HTTP uses port 80, RTSP uses 554).
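In practice I just opened the file in a text editor, but the same check can be automated by scanning the raw file contents for rtsp:// URLs – a quick sketch:

```javascript
// Scan the raw text of a .mov "reference movie" for rtsp:// links, which
// is what gives away that the file is just a pointer to a streaming server
// rather than the media itself.
function findRtspLinks(movText) {
  var matches = movText.match(/rtsp:\/\/[^\s"']+/g);
  return matches || [];
}

// Illustrative fragment standing in for the binary container contents:
var links = findRtspLinks(
  "mdat rtsp://streaming.scottishcorpus.ac.uk/v2/hq/1448.mp4 rmda"
);
```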
I don’t know if this is a recent change (and Googling hasn’t enlightened me) but Windows Firewall by default blocks port 554, meaning any RTSP request brings up a Firewall warning. At least now we know why the warning was appearing. And I think most PC users (those who can approve Firewall exceptions) should only get the warning once. We could perhaps put a notice up on the site about this, but I don’t think we’ll be able to ‘fix’ it as doing so requires the user’s PC settings to be altered.
Using the options down the left-hand side you can view the metaphors related to light, plus only those that have been categorised as ‘strong’ or ‘weak’. You can also view the combined metaphors for beauty and light – either showing all, strong, weak or only those metaphors that relate to both beauty and light.
The graph itself can be scrolled around and zoomed in and out of like Google Maps – click and hold and move the mouse to scroll; use the scroll wheel to zoom in and out. Brighter lines and bigger dots indicate ‘strong’ metaphors. If you hover over a node you can see its ID plus the number of connections. Click on a node to view connection details in the right-hand column. If you click and hold on a node you can drag it about the screen – useful for grouping nodes or simply moving some out of the way to make room. Note that you can do this to the central node too, which you’ll almost certainly have to do on the ‘connections to both beauty and light’ graph.
I think there would be some pretty major benefits to using this script for the project:
2: The data used is in the JSON format, which can easily be constructed from database queries or CSV files – I made a simple PHP script to convert Ellen’s CSV files to the necessary format (an example of one of the source files can be viewed here: http://www.arts.gla.ac.uk/STELLA/briantest/mm/Jit/Examples/ForceDirected/light-strong.json)
3: It would be pretty straightforward to make the graphs more interactive by taking user input and generating JSON files based on this
4: Updating the code shouldn’t be too tricky – for example in addition to showing connections in the right-hand column when a secondary node linked to both beauty and light (e.g. ‘love’) is clicked on, we can provide options to make this node the centre of a new graph, or add it as a new ‘primary node’ to display in addition to beauty and light. Another example: users could remove nodes they are not interested in to ‘declutter’ the graph.
There are some possible downsides too:
1: People might want something that looks a bit fancier (having said that, it is possible to customise all elements of the look and feel)
2: It probably won’t scale very well if you need to include a lot more data than these examples show
3: It doesn’t appear to be possible to manually define the length of certain lines (e.g. to make ‘strong’ connections appear in one circle, ‘weak’ ones further out).
4: The appearance of the graph is random each time it loads – sometimes the layout of the nodes is much nicer than other times.
5: All processing (other than the generation of the JSON source files) takes place on the client side, so low-powered devices (e.g. tablets, netbooks, old PCs) will possibly struggle with the graphs
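To give a flavour of the conversion step mentioned above: my script is PHP, but the logic is simple enough to sketch in JavaScript. Each CSV row is assumed to be `category1,category2,strength`, and the output follows the node/adjacency structure the JIT force-directed examples use (as I understand it – treat the exact field names as assumptions):

```javascript
// Sketch of the CSV-to-JSON conversion for the force-directed graph.
// Builds one node per category and records each row as an adjacency on
// the first category, keeping the strength so the graph can colour the
// line accordingly.
function csvToJitNodes(csv) {
  var nodes = {};
  csv.trim().split("\n").forEach(function (line) {
    var parts = line.split(",");
    var from = parts[0], to = parts[1], strength = parts[2];
    [from, to].forEach(function (id) {
      if (!nodes[id]) {
        nodes[id] = { id: id, name: id, data: {}, adjacencies: [] };
      }
    });
    nodes[from].adjacencies.push({
      nodeTo: to,
      data: { strength: strength }   // drives line brightness/width
    });
  });
  return Object.keys(nodes).map(function (k) { return nodes[k]; });
}

var graph = csvToJitNodes("light,beauty,strong\nlight,love,weak");
```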
On Wednesday I completed the required updates to ARIES, specifically adding in the ‘no highlight’ script to all exercises to avoid exercise contents being highlighted when users quickly click the exercise boxes. I also added in a facility to enable users to view the correct answers in stage 2 of the monstrous ‘further punctuation’ exercises. If you check your answers once and don’t manage to get everything right, a link now appears that, when clicked, highlights all the required capital letters in bold, green text and places all the punctuation in the right places.
I spent a bit of time continuing to work on the technical plan for the Bess of Hardwick follow-on project, but I’m not making particularly good progress with it. I think it’s because the deadline for getting the bid together is now the summer and it’s more difficult to complete things when there’s no imminent deadline! I will try to get this done soon though.
I returned to the ‘Grammar’ app this week and finally managed to complete all the sections of the ‘book’. I’ve now started on the exercises, but haven’t got very far with them as yet. I also started work on the Burns Timeline, after Pauline sent me the sample content during the week. I should have something to show next week.
I was on holiday on Monday and Tuesday this week – spent a lovely couple of days at a hotel on Loch Lomondside with gloriously sunny weather. On Wednesday I worked from home as I usually do, and I spent most of the day updating Exercise 1 of the ‘New Words for Old’ page of ARIES. Previously this exercise asked the user to get a friend to read out some commonly mis-spelled words but last week I recorded Mike MacMahon reading out the words with the aim of integrating these sound clips into the exercise. I completed the reworking of the exercise, using the very handy HTML5 <audio> tag to place an audio player within the web page. The <audio> tag is wonderfully simple to use and allows sound files to be played in a web page without requiring any horrible plugin such as Quicktime. It really is a massive leap forwards. Of course different browsers support (or I should say don’t support) different sound formats, so it does mean sound files need to be stored in multiple formats (MP3 and OGG cover all major browsers) but as we only have 12 very short sound clips this duplication is inconsequential.
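In the page itself the fallback is handled declaratively by listing both formats as <source> elements inside the <audio> tag, but the decision the browser makes can be sketched as a function: given what the browser reports it can play, use the first usable file. (In real browsers canPlayType() returns "", "maybe" or "probably"; everything else here is illustrative.)

```javascript
// Pick the first audio source the browser claims it can play.
// "files" is a list of {src, type} pairs; "canPlayType" stands in for
// the browser's HTMLMediaElement.canPlayType() method.
function pickSource(files, canPlayType) {
  for (var i = 0; i < files.length; i++) {
    if (canPlayType(files[i].type) !== "") {
      return files[i].src;
    }
  }
  return null;   // no supported format
}

// A browser that supports MP3 but not OGG:
var src = pickSource(
  [{ src: "word1.ogg", type: "audio/ogg" },
   { src: "word1.mp3", type: "audio/mpeg" }],
  function (type) { return type === "audio/mpeg" ? "probably" : ""; }
);
```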
Originally I had intended for the exercise to have a sound player and then a simple text box where users could enter their spelling using their device’s default keyboard. However, I realised that this wouldn’t work as web browsers and smartphone onscreen keyboards tend to have inbuilt spell-checkers that would auto-correct or highlight any mis-spelled words, thus defeating the purpose of the exercise. Instead I created my own onscreen keyboard for the exercise. Users have to press on a letter and then it appears in the ‘answer’ section of the page. It’s not as swish as a smartphone’s inbuilt onscreen keyboard and it is a bit slow at registering key presses, but I think for the exercise it should be sufficient. You can try out the ‘app’ version of the exercise here: http://www.arts.gla.ac.uk/STELLA/briantest/aries/spelling-5-new-words-for-old.html
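The keyboard logic itself is simple once the button rendering is set aside: presses append letters to the answer, a delete key removes the last one, and checking compares against the target spelling. A rough sketch (names are illustrative, not the actual code):

```javascript
// Minimal onscreen-keyboard state for the spelling exercise: build up an
// answer string letter by letter and compare it (case-insensitively)
// against the target word. Rendering the letter buttons is omitted.
function makeKeyboard(target) {
  var answer = "";
  return {
    press: function (letter) { answer += letter; },
    del: function () { answer = answer.slice(0, -1); },
    answer: function () { return answer; },
    isCorrect: function () {
      return answer.toLowerCase() === target.toLowerCase();
    }
  };
}

// e.g. a commonly mis-spelled word, typed wrongly and then corrected:
var kb = makeKeyboard("liaise");
"liaize".split("").forEach(kb.press);
kb.del(); kb.del();          // remove "ze"
kb.press("s"); kb.press("e");
```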
On Thursday morning I attended a symposium on ‘Video Games and Learning’ (see http://gameslearning.eventbrite.co.uk/) that my HATII colleague Matthew Barr had organised. It was a really excellent event, featuring three engaging speakers with quite different backgrounds and perspectives on the use of video game technology to motivate and educate learners. I managed to pick up quite a few good pieces of advice for developing interactive educational tools that could be very useful when developing future STELLA applications.
For the rest of Thursday I had a brief look at the Mapping Metaphor data that Ellen had sent me, and I emailed Mike Pidd at Sheffield about making my ‘mobile Bess’ interface available through the main Bess of Hardwick site as Sheffield begin the final push towards launching it. I spent the remainder of Thursday and a fair amount of Friday working on the technical plan for the bid for the follow-on Bess of Hardwick project. Writing the plan has been quite slow going, as in writing it I am having to think through a lot of the technical issues that will affect the project as a whole. I made some good progress though and I hope to have a first draft completed next week.
My final task of the week was to try and figure out why certain computers are giving Firewall warnings when users attempt to play the SCOTS Corpus sound clips (for example this one: http://www.scottishcorpus.ac.uk/corpus/search/document.php?documentid=1448). Marc encountered the problem on a PC in a lecture room and as he didn’t have admin rights on the PC he couldn’t accept the Firewall exception and therefore couldn’t play the sound clips. I’ve discovered that there must be an issue with Quicktime or the .mov files themselves as the Firewall warning still pops up even when you save the sound file to the desktop and play it directly through Quicktime rather than through the browser.
Rather strangely I downloaded a sample .mov file from somewhere else and it works fine, which does lead me to believe there may be an issue with a codec. I’ve asked Arts Support to check whether Quicktime on the PC needs an update, although its version number suggests that this isn’t the case. I’ve also looked through the SCOTS documentation to see if there is any mention of codecs but there’s no indication that anything unusual was used. I will continue to investigate this next week.