I was on holiday last week – I forgot to mention that in my previous post. Returning after a week off, I had quite a mountain of things to get through, for example fixing the ICOS2014 website, which had lost a lot of its formatting and layout during a server migration and site upgrade. I also began setting up the website for another project for Carole (although there is still a lot to do for this).
I prepared for and attended a few meetings this week, including a Mapping Metaphor project meeting, a SAMUELS meeting and the first meeting of the ‘Local Advisory Group’ for the newly started ‘Historical Thesaurus of Scots’ project. I also provided some help and advice for Vivien Williams regarding sound files and website matters for the new ‘Robert Burns Choral Settings’ project.
Thursday was another strike day, but most of the week was spent researching and writing the Technical Plan for a new place-names survey project involving Carole, Thomas Clancy and Simon Taylor. I attended a further meeting with the people involved in putting the bid together on Tuesday, where lots of my questions were answered and most of the outstanding technical matters were cleared up. I had a very useful e-mail conversation with Chris Fleet of the NLS about using their historical maps for this project, and I also had a detailed and very worthwhile phone call with Shaun Hare, the technical developer of the ‘Digital Exposure of English Place-names’ (DEEP) project, about the technical underpinnings of their online resource, which is in many ways similar to the resource that our project wants to make available. I spent the bulk of Friday writing the Technical Plan and managed to email it to the other project people before the end of the day. There’s not much more I can report about the contents of the bid or the Technical Plan at this stage, but here’s hoping it’s successful.
I got stuck into the re-importing of the Historical Thesaurus data this week, a massive undertaking with lots of different steps involved. Thankfully the last time I performed this task I documented the steps that were involved, which made things easier, although a number of these steps needed to be updated. We’d previously noticed a strange situation whereby Old English words that had initial ashes and thorns were losing these characters somewhere between the export from Access and the import into MySQL. I managed to track down what was causing this, which turned out to be a bug in PHP itself (see https://bugs.php.net/bug.php?id=55507). The function used to process CSV files doesn’t like unusual characters at the beginning of fields and these characters vanish. This was a bit of a problem as I was relying on this function to get access to the data. I fixed things by a simple hack – I added some extra characters to the beginning of the necessary fields, then stripped these out again after PHP had done its processing.
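The workaround can be sketched generically – here in Python rather than the PHP the actual import scripts use – with an arbitrary ASCII sentinel standing in for the extra characters:

```python
import csv
from io import StringIO

SENTINEL = "@@"  # any plain ASCII marker the parser handles safely

# Simulated export step: the sentinel is prepended to every field
# before the file reaches the CSV parser, so the ash/thorn characters
# are never field-initial.
rows = [["æfre", "þing"], ["word", "sense"]]
exported = "\n".join(",".join(SENTINEL + field for field in row) for row in rows)

parsed = []
for row in csv.reader(StringIO(exported)):
    # Strip the sentinel back off after parsing.
    parsed.append([field[len(SENTINEL):] for field in row])

print(parsed)  # [['æfre', 'þing'], ['word', 'sense']]
```

Python's csv module doesn't suffer from the PHP bug, so this only illustrates the shape of the fix, not the failure itself.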
I also tackled and completed most of the other data-related tasks this week, such as stripping the initial full stops from subcategory names, ensuring search term variants were generated both with and without apostrophes, and generating variants for hyphenated words (additional forms were created with a space in place of the hyphen and with the hyphen removed entirely – e.g. fire-place, fire place, fireplace).
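As a rough illustration (in Python, though the real scripts are PHP), the apostrophe and hyphen variant generation amounts to something like:

```python
def search_variants(word):
    """Generate extra search-term forms for apostrophes and hyphens."""
    forms = {word}
    if "'" in word:
        forms.add(word.replace("'", ""))   # o'clock -> oclock
    if "-" in word:
        forms.add(word.replace("-", " "))  # fire-place -> fire place
        forms.add(word.replace("-", ""))   # fire-place -> fireplace
    return forms

print(sorted(search_variants("fire-place")))
# ['fire place', 'fire-place', 'fireplace']
```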
I also managed to get a bit of XSLT working with the XML representation of the HT data in order to extract an up-to-date list of categories that have no words. These are not present in the Access database but are needed for the website in order to allow users to browse up and down the hierarchy. The XSLT worked well and I managed to extract the more than 10,000 empty categories that exist. Running these through a little PHP script added the categories to the database.
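The same check can be sketched outside XSLT too. Here is a Python equivalent with invented element and attribute names (the real HT XML schema will differ), flagging categories that have no lexemes of their own:

```python
import xml.etree.ElementTree as ET

# Element and attribute names here are illustrative only.
sample = """
<thesaurus>
  <category number="01.01.06" heading="Regions of the earth">
    <category number="01.01.06.01" heading="Europe">
      <lexeme>Europe</lexeme>
    </category>
  </category>
</thesaurus>
"""

root = ET.fromstring(sample)
# A category is 'empty' if it has no direct lexeme children.
empty = [c.get("number") for c in root.iter("category")
         if c.find("lexeme") is None]
print(empty)  # ['01.01.06']
```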
I also did quite a bit of further work with search term variants. I ensured the full word appears in the search term table, in addition to any forms split off from it. I also completely reworked the scripts that deal with brackets so that every permutation of bracketed letters gets saved as a search term – not just ‘all bracketed letters’ and ‘no bracketed letters’. It was pretty tricky to develop this script but it appears to work well. I also reworked the script that deals with hyphens in words and managed to automatically build up full versions of words where only partial sections were given – e.g. wood-sear/-seer/-sere is split up to give three variants: wood-sear, wood-seer and wood-sere. There are still just over 300 hyphenated words that will need to be manually fixed, but that’s not too bad considering there are almost 100,000 hyphenated words in the system.
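The bracket permutation logic can be sketched like so (Python standing in for the actual PHP): each bracketed group independently contributes a kept-or-dropped choice, and the cross product of those choices gives every variant:

```python
import re
from itertools import product

def bracket_variants(word):
    """Expand every combination of optional bracketed letters,
    e.g. 'hono(u)r' -> ['honor', 'honour']."""
    parts = re.split(r"\(([^)]*)\)", word)
    # Odd-indexed parts were inside brackets: each may be kept or dropped.
    choices = [[p] if i % 2 == 0 else ["", p] for i, p in enumerate(parts)]
    return sorted({"".join(combo) for combo in product(*choices)})

print(bracket_variants("hono(u)r"))        # ['honor', 'honour']
print(len(bracket_variants("a(b)c(d)e")))  # 4 (two groups -> 2 x 2 variants)
```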
I also began looking into the stop words and I created a script that can process these – removing words such as ‘the’ and storing a new variant without it. However, I need feedback from Marc and Christian as to how this should work before I execute it. I also looked into updating the date fields, but again I need to speak to Marc about these before I proceed. I did make a start updating the layout of the date search boxes, but I haven’t finished this yet.
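A first sketch of the stop-word processing might look like this (Python rather than PHP, and with a placeholder stop list – the real set still needs agreeing with Marc and Christian):

```python
STOP_WORDS = {"the", "a", "an"}  # placeholder list, not the agreed one

def stopword_variant(phrase):
    """Return an extra search form with leading stop words removed,
    or None if nothing would change."""
    words = phrase.split()
    while words and words[0].lower() in STOP_WORDS:
        words = words[1:]
    stripped = " ".join(words)
    return stripped if stripped and stripped != phrase else None

print(stopword_variant("the midnight sun"))  # 'midnight sun'
print(stopword_variant("midnight sun"))      # None
```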
Some non-HT things I did this week included attending a Burns seminar, which was hugely interesting, very enjoyable and a good opportunity to catch up with the Burns people. I also had a meeting with Charlotte Methuen about a possible project she is putting together. I had an email conversation with Nigel Leask about a project he is developing and I made a couple of minor tweaks to the ICOS 2014 website for Daria too.
As with last week, I spent most of this week working on the Historical Thesaurus redevelopment. The focus this week was on the search options, firstly generating scripts that would be able to extract all individual word variants and store these as separate entries in a database for search purposes, and secondly working on the search front end.
In addition to extracting forms separated by a slash the script also looks for brackets and generates versions of words with and without brackets – so for example hono(u)r results in two variants – honour and honor. This would then allow exact words to be matched as well as allow for wildcard searches. The script works well in most instances, but there are some situations where the way in which the information has been stored makes automated extraction difficult, for example ‘weather-gleam/glim’, ‘champian/-ion’, ‘(ge)hawian (on/to)’. In these cases the full version of the word / phrase is not repeated after the slash, and it would be very difficult to establish rules to determine what the script would do with the part after the slash. Christian, Marc and I met on Thursday to discuss what might be done about this, including using a list of ‘stop words’ that the search script would ignore (e.g. prepositions). I will also look into situations where hyphens appear after a slash to see if there is a way to automate what happens to these words. It is looking like at least some manual editing of words will be required at some point, however.
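One possible automation for the hyphens-after-a-slash cases, sketched in Python (the real implementation would be PHP, and this only covers the simple pattern where a ‘-xxx’ alternative replaces the base word's final hyphenated section):

```python
def slash_forms(entry):
    """Split an entry on slashes into full forms, grafting partial
    '-xxx' alternatives onto the base word where possible."""
    parts = [p.strip() for p in entry.split("/")]
    forms = [parts[0]]
    for alt in parts[1:]:
        if alt.startswith("-") and "-" in forms[0]:
            # e.g. 'wood-sear' + '-seer' -> 'wood-seer'
            stem = forms[0].rsplit("-", 1)[0]
            forms.append(stem + alt)
        else:
            forms.append(alt)
    return forms

print(slash_forms("wood-sear/-seer/-sere"))
# ['wood-sear', 'wood-seer', 'wood-sere']
print(slash_forms("honour/honor"))
# ['honour', 'honor']
```

Cases like ‘champian/-ion’ or ‘(ge)hawian (on/to)’ would still fall through to manual editing.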
During the week I ran my script to generate search terms, resulting in 855,810 forms. The majority of these will have been extracted successfully, and I estimate that maybe 3,000-4,000 words might need to be manually fixed at some point. However, even with these words it is likely that a wildcard search would still successfully retrieve the word in question.
I spent most of my remaining time on HT matters working on the category selection page and the quick search. I have now managed to get a quick search up and running that searches words and category headings and uses asterisks for wildcards at the beginning and end of a search term. The quick search leads to the category selection page, which pulls out all matching categories and lexemes. It creates a ‘recommended’ section containing lexemes where the search term appears in both the lexeme and the category heading, with a long list of all other returned hits underneath. I have also added pagination for results. Marc and Christian want the results list split into a section where the search term appears in the lexeme and then one where it appears only in the category, which I will do next week. The search is still a bit slow at the moment and I’ll need to look into optimising it soon, either by adding more indexes or by generating cached versions of search results.
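The wildcard handling is essentially a translation from user-facing asterisks to SQL LIKE syntax. A minimal sketch (table and column names in the commented query are invented for illustration):

```python
def to_like_pattern(term):
    """Convert a search term with leading/trailing asterisks into a
    SQL LIKE pattern, escaping LIKE's own wildcard characters first."""
    escaped = term.replace("%", r"\%").replace("_", r"\_")
    return escaped.replace("*", "%")

print(to_like_pattern("*ship"))  # '%ship'
print(to_like_pattern("over*"))  # 'over%'

# The pattern would then feed a parameterised query, e.g.:
# cursor.execute("SELECT word FROM search_terms WHERE searchterm LIKE %s",
#                (to_like_pattern(user_input),))
```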
In addition to this I responded to a query about developing a project website that was sent to me by Charlotte Methuen in Theology and I provided some advice to someone in another part of the university who wanted to develop a Google Maps style interface similar to the one I made for Archive Services. I also made some further updates to the ICOS 2014 website, adding in the banner and logo images and making a few other visual tweaks. My input into this website is now pretty much complete. I also arranged to meet Jean to discuss finalising the Digital Humanities Network website, and I signed up as a ‘five minute speaker’ for the Digital Humanities website launch. I’ll be talking about the redevelopment of the STELLA Teaching resources.
This week was predominantly spent working on the Historical Thesaurus redesign, both the database and the page design for the front end. For the database I created a bunch of upload and data processing scripts to get the almost 800,000 rows of data from the Access database into the new MySQL database that will be used to power the website. Despite stating last week that I wouldn’t change the structure of the data, this week I decided to do just that by moving the 13 fields that make up category information to a dedicated category table rather than having this information as part of the lexeme table. Splitting the information up reduces the amount of needlessly repeated data – for example there are up to 50 lexemes in each category and previously all 13 category fields were being repeated up to 50 times whereas now the information is stored once and then linked to the lexeme table, which is much neater.
By the end of the week I had all of the data migrated and moved into the new database structure, with a number of indices in place to make data retrieval speedier too. One slight issue with the data was that ‘empty’ categories in the hierarchy (i.e. ones that don’t have any associated lexemes) are not present in the Access database. This makes sense when you’re focussing on lexemes, but in order to develop a browse option or present breadcrumbs the full hierarchy is needed. For example 01.01.06.01 is ‘Europe’ and its parent category is ’01.01.06’, regions of the earth. But as this category has no lexemes of its own it isn’t represented in the database. I met with Marc on Thursday and he managed to get a complete list of the categories to me, including the ‘empty’ ones and I spent some time working on a script that would pull out the empty ones and add them to my new ‘category’ database. While doing this I came across a few errors in the data, where the full combination of headings and part of speech was not unique. I also noticed that I had somehow made an error in my database structure, missing out three parts of speech types. Rectifying this will mean reuploading all the data, which I will do next week.
In terms of front end work, I made some further possible interface designs, all of which are ‘responsive’ designs (they automatically change with screen size, meaning no separate mobile / tablet interface needs to be developed). It was a good opportunity to learn more about responsive web design. My second possible interface can be found here http://historicalthesaurus.arts.gla.ac.uk/new-design-2/ and possibly looks a bit too ‘bloggy’. I further adapted this design to use a horizontal navigation section, which you can view here: http://historicalthesaurus.arts.gla.ac.uk/new-design-3/. At the meeting with Marc on Thursday I received some feedback from him and the other people involved with the project regarding colour schemes and fonts, and as a result of this I came up with a fourth design, which will probably end up being used and can be viewed here: http://historicalthesaurus.arts.gla.ac.uk/new-design-4/. This combines the horizontal navigation of the previous design with the left-hand navigation of design number 2, and I think it looks quite appropriate.
Also this week I helped to set up the domain and provided some feedback to Daria for the ICOS2014 conference website and did some Digital Humanities Network related tasks.
Last weekend was Easter, and I took a few additional days off to recharge the old batteries. Because of this there was no weekly update last week, and this week’s is going to be relatively short too, as I only worked Wednesday to Friday. My Easter Egg count was four this year, with only one still intact.
I spent quite a bit of time this week working on the requirements for the Mapping Metaphor website, in preparation for next week’s team meeting. Wendy emailed a first draft of a requirements document to the team last week and on Wednesday I went through this in quite some detail, picking out suggestions and questions. This took up most of the day and my document had more than 50 questions in it, which I hoped wasn’t too overwhelming or disheartening. I emailed the document and arranged to meet Wendy the following day to discuss things. We had a really useful meeting where we went through each of my questions / suggestions. In a lot of cases Wendy was able to clarify things very well and my understanding of the project increased considerably. In other cases Wendy decided that my questions needed further discussion amongst the wider group and these questions will be emailed to the team before next week’s meeting. Our meeting took about two hours and was pretty exhausting but was hugely useful. It will probably take another few meetings like this with various people before we get a more concrete set of requirements that can be used as the basis for the development of the online tool.
Also this week I revisited the website I’ve set up for Carole’s ICOS2014 conference. Daria emailed me with some further suggestions and updates and I managed to get them all implemented. An interesting one was to add multilingual support to the site. I ended up using qTranslate (http://www.qianqin.de/qtranslate/) which was really easy to set up and works very well – you just select the languages you want to be represented and then your pages / posts have different title and content boxes for each language. A simple flag based language picker works well in the front end and loads the appropriate content, adding the two letter language abbreviation to the page URL too. It’s a very nice solution.
I was also contacted this week by Patricia Iolana, who is putting together an AHRC bid. She wanted me to give feedback on the Technical Plan for the bid, and I spent some time going through the bid information and commenting on the plan.
I seemed to work a little bit on many different projects this week. For Burns I wrote up my notes from last week’s meeting and did a bit more investigation into timelines and Timeglider in particular. I also noticed that there is already a Burns timeline on the new ‘Robert Burns Birthplace Museum’ website: http://www.burnsmuseum.org.uk/collections/timeline. It looks very nice, with a sort of ‘parallax scrolling’ effect in the background. It is however just a nice looking web page rather than being a more interactive timeline allowing users to search or focus on specific themes.
I spent a bit more time this week working on the technical plan for the follow-on project for Bess of Hardwick, although Alison is now wanting to submit this in July rather than ASAP so we have the luxury of time in which to really think some ideas through. I’m still hoping to get an initial version of the plan completed by the end of next week, however. I also spent a little time going over the mobile Bess interface I made as it looks like Sheffield might be about ready to implement my updates as they launch the main Bess site.
I also worked a little bit more on the ICOS2014 website and responsive web design. I’ve got a design I’m pretty happy with now that works on a wide variety of different screen sizes. I still need some banner images to play around with but things are looking promising.
Once I realised what was causing the problem I could replicate it on my PC and tackle the issues. Even though only 1% of web users still use IE7 I wanted to ensure ARIES worked on this older browser. It took some time but I managed to update all the exercises so they work in both old and new browsers.
Also for ARIES this week I recorded Mike MacMahon speaking some words that I will use in an exercise in the spelling section of ARIES. Users will be able to play sound clips to hear a word being spoken and then they will be asked to type the word as they think it should be spelled. It was my first experience of the Sound Studio and Rachel Smith very kindly offered to show me how everything worked. On Friday we did the recordings and everything went pretty smoothly. Now I need to make the exercise and embed the sound files!
Also this week I attended a HATII developers meeting. This was a chance for the techy people involved in digital humanities projects to get together to discuss their projects and the technologies they are using. Chris from Arts Support was also there and it was really useful to hear from the other developers. It is hoped that these meetings will become a regular occurrence and will be expanded out to all developers working in DH across the university. We should also be getting a mailing list set up for DH developers, and also a wiki or other such collaborative environment. I also pointed people in the direction of the new DH at Glasgow website and asked people to sign up to this as technical experts.
Finally this week I did a little bit more work with the data Susan Rennie sent me for the Scots Glossary project that we are putting together. I made some updates to the technical documentation I had previously sent her and mapped out in more detail a data schema for the project.
I’m on holiday on Monday and Tuesday next week but will be back to it next Wednesday.
I had an afternoon of meetings on Friday so it’s another Monday morning blog post from me. It was another busy week for me, more so because my son was ill and I had to take Tuesday off as holiday to look after him. This meant trying to squeeze into four days what I had hoped to tackle in five, which led to me spending a bit less time than I would otherwise have liked on the STELLA app development this week. I did manage to spend a few hours continuing to migrate the Grammar book to HTML5 but there are still a couple of sections to do. I’m currently at the beginning of Section 8.
I did have a very useful meeting with Christian Kay regarding the ARIES app on Monday, however. Christian has been experiencing some rather odd behaviour with some of the ARIES exercises in the web browser on her office PC and I offered to pop over and investigate. It all centres on the most complicated exercise of all – the dreaded ‘Test yourself’ exercise in the ‘Further Punctuation’ section (see how it works for you here: http://www.arts.gla.ac.uk/STELLA/briantest/aries/further-punctuation-6-test-yourself.html). In stage 2 of the exercise, clicking on words fails to capitalise them, while in stage 3 adding an apostrophe also makes ‘undefined’ appear alongside the apostrophe. Of course these problems are only occurring in Internet Explorer, but very strangely I am unable to replicate them in IE9 on Windows 7, IE9 on Windows Vista or IE8 on Windows XP! Christian is using IE8 on Windows 7, and it looks like I may have to commandeer her computer to try and fix the issue. As I am unable to replicate it on the three Windows machines I have access to, it’s not really possible to tackle the issue any other way.
Christian also noted that clicking quickly multiple times to get apostrophes or other punctuation to appear was causing the text to highlight, which is a bit disconcerting. I’ve implemented a fix for this that blocks the default ‘double click to highlight’ functionality for the exercise text. It’s considered bad practice to do such a thing (jQuery UI used to provide a handy function that did this very easily but they removed it – see http://api.jqueryui.com/disableSelection/ ) but in the context of the ARIES exercise its use is justifiable.
I also spent a little bit of time this week reworking the layout for the ICOS2014 conference website, although there is still some work to do with this. I’ve been experimenting with responsive web design, whereby the interface automatically updates to be more suitable on smaller screens (e.g. mobile devices). This is currently a big thing in interface design so it’s good for me to get a bit of experience with the concepts.
Following on from my meeting with Susan Rennie last week I created a three page technical specification document for the project that she is hoping to get funding for. This should hopefully include sufficient detail for the bid she is putting together and gives us a decent amount of information about how the technology used for the project will operate. Susan has also sent me some sample data and I will begin working with this to get some further, more concrete ideas for the project.
I also began work on the technical materials for the bid for the follow-on project for Bess of Hardwick. This is my first experience with the AHRC’s ‘Technical Plan’, which replaced the previous ‘Technical Appendix’ towards the end of last year. In addition to the supporting materials found on the AHRC’s website, I’m also using the Digital Curation Centre’s Data Management Planning Tool (https://dmponline.dcc.ac.uk/) which provides additional technical guidance tailored to many different funding applications, including the AHRC.
On Thursday I had a meeting with the Burns people about the choice of timeline software for the Burns Timeline that I will be putting together for them. In last week’s post I listed a few of the pieces of timeline software that I had been looking at as possibilities and at the meeting we went through the features the project requires. More than six categories are required, and the ability to search is a must, so the rather nice-looking VeriteCo Timeline was ruled out. It was also decided that integration with WordPress would not be a good thing as they don’t want the Timeline to be too tightly coupled with the WordPress infrastructure, thus enabling it to have an independent existence in future if required. We decided that Timeglider would be a good solution to investigate further and the team is going to put together a sample of about 20 entries over two categories in the next couple of weeks so I can see how Timeglider may work. I think it’s going to work really well.
On Friday I met with Mark Herraghty to discuss some possibilities for further work for him and also for follow-on funding for Cullen. After that I met with Marc Alexander to discuss the bid we’re going to put together for the Chancellors’ fund to get someone to work on migrating the STELLA corpora to the Open Corpus Workbench. We also had a brief chat about the required thesaurus work and the STELLA apps. Following this meeting I had a conference call with Marc, Jeffrey Robinson and Karen Jacobs at Colorado University about Jeffrey’s Wordsworth project. It was a really useful call and Jeffrey and Karen are going to create a ‘wishlist’ of interactive audio-visual ideas for the project, to which I will then give technical input, in preparation for a face-to-face meeting in May.
Back to work after a thoroughly enjoyable two weeks off for Christmas, and still the Christmas chocolate mountain has yet to be completely devoured. It is at least getting smaller, as my waistline gets bigger.
I spent the start of this week getting back into the swing of things after the holidays, completing such tasks as getting through the email backlog and planning what to focus on in the New Year. After that was sorted the bulk of the rest of the week was spent continuing with the development of the ARIES app, which I had begun in the run-up to Christmas.
As I mentioned in my previous post, I had implemented a nice little drag and drop feature for placing words in sentences, which I managed to put to good use in the ‘Apparent Problems’ page of the ‘Apostrophe’ section. I was really happy with how this was working, but then over Christmas I managed to get my hands on an Android-based tablet running the Chrome browser. In this browser my drag and drop failed to work properly: drag an element down and the whole page starts scrolling up, making it impossible to complete the exercise. The annoying thing is that in Chrome (made by Google) the drag and drop is broken, whereas in Android’s built-in browser (also made by Google) on the same device it works! This was most frustrating and it was back to the drawing board for this section yet again.
I don’t have regular access to all the major smartphone and tablet operating systems, which makes testing things rather tricky. I probably could have found a way to fix the Chrome drag and drop issue but lack access to the hardware to do so. It was primarily for this reason that I decided to abandon drag and drop entirely and revert to a more old-fashioned approach for adding text to a box: simply tap the text to make it appear in the box rather than dragging and dropping. It’s less satisfying to use, but works just as well (and is probably quicker). You can try out the new version here: http://www.arts.gla.ac.uk/STELLA/briantest/aries/the-apostrophe-3-apparent-problems.html and if you’re interested in trying out the ‘broken’ draggable version you can see it here: http://www.arts.gla.ac.uk/STELLA/briantest/aries/the-apostrophe-3-apparent-problems-draggable.html.
By the end of the week I had added all the content to the ARIES site, although I’m still in the middle of developing the biggest and most complex exercise (see below). All the exercises (apart from the big one) are now complete, although I’m not altogether happy with one or two of them. For example the exercise on this page: http://www.arts.gla.ac.uk/STELLA/briantest/aries/spelling-2-confusing-pairs.html would really have benefitted from the drag and drop functionality. Instead you have to first select the word you want to use, then click on the gap to place it. It works perfectly well, but it’s slightly clunky.
The big exercise that I’m still working on is the ‘Test Yourself’ exercise in the ‘Further Punctuation’ section. In the original site two paragraphs of text are presented without any punctuation and users have to update the text by editing it in a text box (see http://www.arts.gla.ac.uk:8180/aries/test_yourself.jsp). This approach isn’t suitable for mobile devices as it would be a real chore to tap on parts of existing text and then find the right character to add in. Instead I decided to break each exercise into three stages, using techniques I’d already developed for previous, shorter exercises.
In stage 1 the user adds in quotation marks by tapping once on a word to add quotes to the start, twice to add quotes to the end, three times to have quotes at the beginning and end, and a fourth time to remove quotes. Users can attempt this as many times as required until they get it right, or they can skip to the second stage.
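The tap cycle is effectively a four-state machine. A sketch of the logic (the actual exercise is JavaScript; this Python is just to show the cycle):

```python
STATES = ["none", "open", "close", "both"]  # a fourth tap returns to 'none'

def next_state(state):
    """Advance to the next quote state on each tap."""
    return STATES[(STATES.index(state) + 1) % len(STATES)]

def render(word, state):
    """Show the word with quotes according to the current tap state."""
    open_q = "'" if state in ("open", "both") else ""
    close_q = "'" if state in ("close", "both") else ""
    return open_q + word + close_q

state = "none"
for tap in range(3):  # three taps: start quote, end quote, then both
    state = next_state(state)
print(render("hello", state))  # the word wrapped in quotes at both ends
```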
In stage 2 the user must add in full stops, capital letters, commas, exclamation marks and question marks using the same method as I developed for the ‘Basic Punctuation’ section. Users can tap on a word to make the initial letter upper or lower case and by tapping on the dotted square following a word the user can cycle through the punctuation options. As with the first stage, users can try this section as many times as they wish or can skip to the final stage. Note that the correct placement of quotation marks is displayed in the text in Stage 2 whether the user got them all correct in Stage 1 or not.
In the final stage the user must add apostrophes where applicable, again using functionality I developed in an earlier section. The text is presented at double size as in this stage users need to tap on individual letters rather than words. Tapping on a letter adds (or removes) an apostrophe after the letter. The correct placement of punctuation and capital letters is used in Stage 3 no matter what the user supplies in Stage 2.
I’ve just about finished the three stages for the first exercise and you can try it here: http://www.arts.gla.ac.uk/STELLA/briantest/aries/further-punctuation-6-test-yourself.html. I still need to do some final validation in Stage 3 and present the user with suitable feedback, and remove the ‘proceed to next stage’ button which isn’t needed. I then need to ensure the code works for the second exercise, which will take a little bit of time. After that the mobile version of ARIES will be completed and I will move on to the web version.
Also this week I gave some feedback to Carole Hough and Daria Izdebska regarding a website for the ICOS 2014 conference, which I hope was useful.
That was all for this week. Next week I will continue with ARIES and will implement the required changes to the Burns website, if the required content is provided next week.
Lots more meeting people and discussing projects this week. I spent some time reading through the project documentation for the Mapping Metaphor project, and also looked at some of the ongoing research, which is being compiled in Access databases and Excel spreadsheets. I participated in two meetings for this project, one a more general project meeting and the other more technical in nature. It seems like the technical aspects of the project are progressing nicely at this stage. They are still very much at the data input and analysis stage, and discussions about how to visualise the connections between words over time will not be focussed on until later.
I also had a very interesting meeting with Alison Wiggins about the Bess of Hardwick project and was given a preview of the website through which the letters will be made publicly available. The website, which is being produced by the University of Sheffield’s Humanities Research Institute, is looking really great, with lots of search and browse options available. I spent a little bit of time this week trying out the site and providing feedback to Alison about the functionality of the website.
Another meeting I had this week was with Carole Hough to discuss a couple of upcoming conferences that will require web presences. Nothing is imminently required for these conferences but I was able to provide some advice on how to manage the paper submission process and how the front ends of the websites could be created and managed. For handling the logistics of paper submission I recommended easychair.org, a free online conference management system that I have used for previous conferences. It’s a really handy system for keeping track of paper submissions, editorial groups and the peer review process. For the front ends I recommended setting up a WordPress instance for each site, and I spent some time looking into WordPress and the customisation options for this. There are so many modules, themes and plugins for WordPress that there really is no reason to create a conference website from scratch as everything is ready and waiting to be tweaked and configured. I’m still not sure at this stage whether I should personally be setting up these instances or just advising on which solution to use; I’ll raise it at the next DROG meeting.
A further meeting I had this week was with Marc Alexander and Stephen Barrett, who is currently creating a Gaelic corpus within the School of Humanities. There were a lot of connections between the work Stephen is carrying out and the corpora held within SCS and we are hoping to work together to create one big corpus (with many subsets) for use by the College of Arts as a whole. We are hoping to use the Open Corpus Workbench and are attempting to get some server space set up for test purposes. I spent some time this week investigating the Open Corpus Workbench and corpus software issues in general.
My final meeting of the week was with Jean Anderson, the previous head of STELLA and one of the major driving forces in Literary and Linguistic Computing projects at the University of Glasgow. We had a hugely useful chat about my role, STELLA, projects and the School and I received lots of helpful advice. Jean should be able to continue to provide advice and maybe participate in future DROG meetings, which I think would be very useful. She also proposed that the Digital Resources Owners Group should have the acronym DROOG, a reference to the slang term meaning ‘friend’ in Burgess’s ‘A Clockwork Orange’, which I think is really rather good. We just need to think what the second ‘o’ could stand for… Digital Resources Owners and Operators Group, perhaps?
Through the meeting with Jean I have a clearer picture of which STELLA teaching tools should maybe be prioritised and I’ll run this by Marc. After that I should hopefully be able to get started updating them.
After the meeting with Jean we went to look at the work being carried out in 13 University Gardens. It’s all looking really good, but things are definitely not as far advanced as I had hoped. The previous estimate of October for moving in looks more and more likely. Here’s hoping the HATII people don’t turf me out before then.
In addition to meetings I also attended the first lecture of the Literary and Linguistic Computing course and I am planning on attending a few more of these over the course of the academic year. It’s useful to see what is currently being taught on this course; although I took the course myself as an undergraduate, that was a fair number of years ago, and it’s interesting to get an up-to-date overview of the subject.