Week Beginning 20th January 2020

I spent quite a bit of time this week continuing to work on the systems for the Place-names of Mull and Ulva project.  The first thing I did was to figure out how WordPress sets the language code.  It has a function called ‘get_locale()’ that brings back the code (e.g. ‘En’ or ‘Gd’).  Once I knew this I could update the site’s footer to display a different logo and text depending on the language the page is in.  So now if the page is in English the regular UoG logo and English text crediting the map and photo are displayed, whereas if the page is in Gaelic the Gaelic UoG logo and credit text are displayed.  I think this is working rather well.
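
As a rough illustration, the footer logic works along these lines (a minimal sketch: it assumes the Gaelic locale code returned by get_locale() begins with ‘gd’, and the image filenames and credit strings are placeholders):

```php
<?php
// In the theme's footer template: a sketch of switching the logo and credit
// text based on the current locale. Assumes the Gaelic locale code returned
// by get_locale() begins with 'gd'; filenames and credit strings are placeholders.
$locale    = get_locale();
$is_gaelic = ( stripos( $locale, 'gd' ) === 0 );

$logo   = $is_gaelic ? 'uog-logo-gaelic.png' : 'uog-logo-english.png';
$credit = $is_gaelic ? 'Gaelic text crediting the map and photo'
                     : 'English text crediting the map and photo';
?>
<img src="<?php echo esc_url( get_template_directory_uri() . '/images/' . $logo ); ?>" alt="University of Glasgow">
<p class="credits"><?php echo esc_html( $credit ); ?></p>
```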

With this in place I began adding in the Gaelic fields to the Content Management System.  This was a very big job, as basically wherever an English field appears (e.g. place-name, island, analysis, research notes) a Gaelic version was also required.  I did consider updating the CMS to make it fully multilingual, thus enabling any language to be plugged into it and for all fields for that language to automatically appear, but this would have been a much larger job with a much greater risk of things not working out or bugs being introduced.  As I have very limited time on this project, and as it seems unlikely that our place-names CMS will be used for any other languages, I went with the simpler but less technologically pleasing option of just duplicating fields for Gaelic.  Even this proved to be time consuming, as I had to update the underlying database, add the new Gaelic fields to the forms in the CMS, update the logic that processes all of these forms and update any pages that display the data.  This also included updating some forms that are loaded in via JavaScript (e.g. when adding multiple sources to a historical form) and updating some forms that get pre-populated via Ajax calls (e.g. when typing in an element and selecting an existing one from the drop-down list, all of the element’s fields (including the new Gaelic ones) must be loaded into the form).

I managed to get all of the new Gaelic fields added into the CMS and fully tested by Thursday and asked Alasdair to test things out.  I also had a discussion with Rachel Opitz in Archaeology about incorporating LIDAR data into the maps and started to look at how to incorporate data from the GB1900 project for the parishes we are covering.  GB1900 (http://www.gb1900.org/) was a crowdsourced project to transcribe every place-name that appears on OS maps from 1888-1914, which resulted in more than 2.5 million transcriptions.  The dataset is available to download as a massive CSV file (more than 600MB).  It includes place-names for the three parishes on Mull and Ulva and Alasdair wanted to populate the CMS with this data as a starting point.  On Friday I started to investigate how to access the information.  Extracting the data manually from such a large CSV file wasn’t feasible, so instead I created a MySQL database and wrote a little PHP script that iterates through each line of the CSV and adds it to the database.  I left this running over the weekend and will continue to work with it next week.
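
The import script itself is very simple; something along these lines is all that’s needed (a sketch: the database credentials, table name, column mapping and CSV filename are placeholders rather than the real GB1900 schema):

```php
<?php
// Sketch of the GB1900 import: step through the CSV one line at a time and
// insert each row into a MySQL table. Credentials, table name and column
// mapping are hypothetical placeholders, not the real schema.
$pdo  = new PDO( 'mysql:host=localhost;dbname=gb1900;charset=utf8mb4', 'user', 'password' );
$stmt = $pdo->prepare( 'INSERT INTO gb1900_names (pin_id, transcription, easting, northing) VALUES (?, ?, ?, ?)' );

$handle = fopen( 'gb1900_gazetteer.csv', 'r' );
$header = fgetcsv( $handle ); // discard the header row
while ( ( $row = fgetcsv( $handle ) ) !== false ) {
    // Only the first four columns are used in this sketch.
    $stmt->execute( array_slice( $row, 0, 4 ) );
}
fclose( $handle );
```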

Also this week I continued to add new project records to the new Digital Humanities at Glasgow site.  I only have about 30 more sites to add now, and I think it’s shaping up to be a really great resource that we will hopefully be able to launch in the next month or so.

I also spent a bit of further time on the SCOSYA project.  I’d asked the university’s research data management people whether they had any advice on how we could share our audio recording data with other researchers around the world.  The dataset we have is about 117GB, and originally we’d planned to use the University’s file transfer system to share the files.  However, this can only handle files that are up to 20GB in size, which meant splitting things up.  And it turned out to take an awfully long time to upload the files, a process we would have to repeat each time the data was requested.  The RDM people suggested we use the University’s OneDrive system instead.  This is part of Office365 and gives each member of staff 1TB of space, and it’s possible to share uploaded files with others.  I tried this out and the upload process was very swift.  It was also possible to share the files with users based on their email addresses, and to set expiration dates and passwords for file access.  It looks like this new method is going to be much better for the project and for any researchers who want to access our data.  We also set up a record about the dataset in the Enlighten Research Data repository: http://researchdata.gla.ac.uk/951/ which should help people find the data.

Also for SCOSYA we ran into some difficulties with Google’s reCAPTCHA service, which we were using to protect the contact forms on our site from spam submissions.  There was an issue with version 3 of Google’s reCAPTCHA system when integrated with the contact form plugin.  It works fine if Google thinks you’re not a spammer, but if you somehow fail its checks it doesn’t give you the option of proving you’re a real person, it just blocks the submission of the form.  I haven’t been able to find a solution for this using v3, but thankfully there is a plugin that allows the contact form plugin to revert back to using reCAPTCHA v2 (the ‘I am not a robot’ tickbox).  I got this working and have applied it to both the contact form and the spoken corpus form.  It works both for me as someone Google apparently trusts, and when I access the site using IE via remote desktop, where Google makes me select features in images before the form submits.

Also this week I met with Marc and Fraser to discuss further developments for the Historical Thesaurus.  We’re going to look at implementing the new way of storing and managing dates that I originally mapped out last summer and so we met on Friday to discuss some of the implications of this.  I’m hoping to find some time next week to start looking into this.

We received the reviews for the Iona place-name project this week and I spent some time during the week and over the weekend going through them, responding to any technical matters that were raised and helping Thomas Clancy with the overall response, which needed to be submitted the following Monday.  I also spoke to Ronnie Young about the Burns Paper Database, which we may now be able to make publicly available, and made some updates to the NME digital ode site for Bryony Randall.

Week Beginning 6th January 2020

This was my first week back after a two-week break for Christmas.  On Monday I replied to a few emails, looked at the stats for SCOSYA (which had another couple of sizeable peaks over the holidays) and upgraded all of the WordPress sites that I manage to the latest version of both WordPress and the various plugins that I use.  I did encounter a strange issue with the Jetpack plugin, which caused scary error messages about the site being broken during installation, but thankfully this was just a minor issue caused by the plugin having changed its file locations.  I also arranged for a new subdomain to be set up for the new Mull and Ulva place-names project I’m setting up the technical infrastructure for.  Once this had been set up later in the week I created an initial WordPress site for the project, migrated the content management system I’d made for the Berwickshire place-names project across and configured it for the new project.  The new project needs to be entirely bilingual, with all content available in both English and Gaelic, so I spent some time experimenting with WordPress plugins to find a solution that would allow a bilingual site with a URL structure that would also work for the non-WordPress pages where the place-name data will be presented.  I used a plugin called Polylang, which provides a language switcher widget and facilities for every page to have multiple versions in different languages.  It provides the option to do all of this without changing the URL structure (e.g. no /EN/ for English in the URL), which is what I needed for my non-WordPress pages to work.  It took a bit of tweaking to get the plugin working properly, as for some reason the Gaelic version of the blog post page didn’t work to start with and there were some issues with setting up a different menu for the Gaelic site, but I managed to get it all working as I’d hoped.  I then started looking into how to make the CMS multilingual.  This is going to be rather tricky as both content and labels need to be stored in both English and Gaelic, with cross-references from one to the other.  I contacted the project PI Alasdair Whyte with some questions about this and I’ll continue with the development once I hear back from him.

Other than dealing with some minor managerial issues with SCOSYA and responding to a few other emails I spent the rest of the week developing the interactive map feature for Gerry McKeever’s Regional Romanticism project.  Gerry wanted a map where sections of a novel (Paul Jones by Allan Cunningham) that relate to specific geographical areas can be associated with these locations, and a trail from one location to another throughout the novel can be visualised.  Before Christmas I’d experimented with the Storymap tool that I’d previously used for the Romantic National Song Network project (e.g. https://rnsn.glasgow.ac.uk/english-scots-and-irishmen/), but using a geographical map rather than an image.  I’d come up with a working version using the 40 entries Gerry had compiled, as the following screenshot demonstrates:

However, this version was not really going to work as Gerry wanted people to be able to control the zoom level, which Storymap doesn’t allow out of the box.  Plus we wanted the layout to be very different, to be able to categorise entries and to handle the joining lines differently.  For these reasons I decided to create my own interface, based on the interactive stories I’d recently made for the SCOSYA project, for example: https://scotssyntaxatlas.ac.uk/atlas/?j=y#6.25/57.929/-4.448/0/0/0/1

We’d been given approval by Chris Fleet of NLS Maps to use one of their georeferenced historical maps (Arrowsmith, 1807), so I created an initial version using this map as an overlay and a freely available tileset called ‘Thunderforest Pioneer’ as the basemap.  Here’s a screenshot of this version:

In this version the map markers were just Leaflet’s default blue markers with tooltips when you hover over them, and there were no connecting lines or arrows.  The story pane in the top right of the map displays the slide content and an opacity slider that allows you to see the base map through the historical map.  You can navigate between slides using the ‘Next’ and ‘Previous’ buttons.  In this version the zoom transitions between slides needed refinement and there was no highlighting of the marker that the slide corresponds to.  There was also no differentiation between marker types (‘real’, ‘fictional’ etc.) and book volumes.

I also realised that there were some limitations with the Arrowsmith map.  Firstly, it is not possible to zoom into it any further than the zoom level shown in the above screenshot.  The NLS simply doesn’t provide tiles for any greater zoom level, so although I can allow users to zoom in further, the overlay then disappears.  At the maximum zoom level available for the Arrowsmith map there’s not much distance between many of the markers, so the connecting lines and arrows are possibly not going to work very well.  Also, the section of the Arrowsmith map where all the action happens is unfortunately not at all accurate.  If you make the overlay more transparent the area around Sandyhills is really very wrong, with markers currently identified as ‘shoreline’ appearing massively inland, for example.  I also realised that what happens when someone clicks on a marker needs further thought, as many locations are associated with multiple entries, so it can’t just be a case of ‘click on a marker and view a specific entry’.  In the current map there is a marker for each entry, even when this means multiple markers appearing at the same point on the map.

After some discussions with Gerry we decided to try an alternative historical map, the OS six-inch map from 1843-1882.  We also decided that entries should be numbered and locations should have a one-to-many relationship with entries, i.e. a location is stored separately from entries in the JSON data and any number of entries may connect to a specific location by referencing the location’s ID.  This meant I had to manually reorganise the data, but this was both necessary and worth it.

With the new data and the new historical map ready to use I set about creating some further prototypes, using different Leaflet plugins to get nicely formatted lines and arrows.  One version featured straight grey lines between locations with arrows all the way along the line to show direction.  Where a location is connected in both directions there are arrows in both directions.  The line colour, number of arrows and arrow style can be changed.  Below is a screenshot of this version:

I created another version that had curved, dashed lines, which I think look nicer than the straight lines.  Unfortunately it’s not possible to add arrows to these lines, or at least not without spending a long time redeveloping the plugin, and even then it might not work.  I also added in different icons here to denote location type (book for fictional, globe for real, question mark for approximate).  Icons also have different colours, as the following screenshot demonstrates:

After discussions with Gerry I decided to use the version with curved lines, and created a further version that included a full-width panel for the introduction, entry IDs appearing in the slides and also information about the location type and the volume the entry appears in.  For the final version I replaced the default map markers with circular markers featuring the icons (retaining the colours).  This is because the mapping library Leaflet doesn’t include a method of associating IDs with markers, which makes doing things to markers programmatically rather tricky (e.g. highlighting a marker when a new slide loads or triggering the loading of data when a marker is selected).  Thankfully it is possible to pass IDs when using HTML-based markers, which is what the new version features.  When you click on a map marker now one of two things may happen.  If the location is associated with multiple entries (e.g. The Mermaid Bay) then the story pane lists all of the entries, featuring their volume and number, the first 100 characters of the entry and a ‘view’ button that, when pressed, displays the entry.  If the location is associated with a single entry (e.g. Ford) then the entry loads immediately.  Map markers are now highlighted with a yellow border either when they’re clicked on or when navigating through the entries, and the line joining the previous slide to the current slide now turns yellow when navigating between slides, as the following screenshot demonstrates:

With all this in place my work on the interactive map is now complete until Gerry does some further work on the data, which will probably be in a month or so.

Week Beginning 16th December 2019

This week the Scots Syntax Atlas project was officially launched, and it’s now available here: https://scotssyntaxatlas.ac.uk/ for all to use.  We actually made the website live on Friday last week so by the time of the official launch on Tuesday this week there wasn’t much left for me to do.  I spent a bit of time on Monday embedding the various videos of the atlas within the ‘Video Tour’ page and I updated the data download form.  I also ensured that North Queensferry appeared in our grouping for Fife, as it had been omitted and wasn’t appearing as part of any group.  I also created a ‘how to cite’ page in addition to the information that is already embedded in the Atlas.

The Atlas uses MapBox for its base maps, a commercial service that allows you to apply your own styles to maps.  For SCOSYA we wanted a very minimalistic map, and this service enabled us to create such an effect.  MapBox allows up to 200,000 map tile loads for free each month, but we figured this might not be sufficient for the launch period, so we arranged to apply some credit to the account to cover the extra users that the launch might attract.  The launch itself went pretty well, with some radio interviews and some pieces in newspapers such as the Scotsman.  We had several thousand unique users to the site on the day it launched and more than 150,000 map tile loads during the day, so the extra credit is definitely going to be used up.  It’s great that people are using the resource and it appears to be getting some very positive feedback, which is excellent.

I spent some of the remainder of the week going through my outstanding ‘to do’ items for the place-names of Kirkcudbrightshire project.  This is another project that is getting close to completion and there were a few things relating to the website that I needed to tweak before we do so, namely:

I completely removed all references to the Berwickshire place-names project from the site (the system was based on the one I created for Berwickshire and there were some references to this project throughout the existing pages).  I also updated all of the examples in the new site’s API to display results for the KCB data.  Thomas and Gilbert didn’t want to use the icons that I’d created for the classification codes for Berwickshire, so I replaced them with simpler coloured circular markers instead.  I also added in the parish boundaries for the KCB parishes.  I’d forgotten how I’d done this for Berwickshire, but thankfully I’d documented the process.  There is an API through which the geoJSON shapes for the parishes can be grabbed: http://sedsh127.sedsh.gov.uk/arcgis/rest/services/ScotGov/AreaManagement/MapServer/1/query and through this I entered the name of the parish into the ‘text’ field and selected ‘polygon’ for ‘geometry type’ and ‘geoJSON’ for the ‘format’, which gave me exactly what I needed.  I also needed the coordinates for where the parish acronym should appear, and I grabbed these via an NLS map that displays all of the parish boundaries (https://maps.nls.uk/geo/boundaries/#zoom=10.671666666666667&lat=55.8481&lon=-2.5155&point=0,0), finding the centre of a parish by positioning my cursor over the appropriate position and noting the latitude and longitude values in the bottom right corner of the map (the order of these needed to be reversed to be used in Leaflet).
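
For reference, the same query can be made programmatically, something like the following sketch (the parameter names mirror the fields in the query form as I understand the ArcGIS REST interface, so treat them as an assumption; ‘Anwoth’ is just an example parish and the service may need further parameters depending on its configuration):

```php
<?php
// Sketch of fetching a parish boundary as GeoJSON from the ArcGIS REST
// service mentioned above. Parameter names are an assumption based on the
// standard ArcGIS REST query interface; 'Anwoth' is an example parish.
$base   = 'http://sedsh127.sedsh.gov.uk/arcgis/rest/services/ScotGov/AreaManagement/MapServer/1/query';
$params = http_build_query( array(
    'text'         => 'Anwoth',              // the parish name
    'geometryType' => 'esriGeometryPolygon', // 'polygon' in the form
    'f'            => 'geojson',             // output format
) );

$geojson = file_get_contents( $base . '?' . $params );
file_put_contents( 'anwoth-boundary.geojson', $geojson );
```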

I also updated the ‘cite’ text, rearranged the place-name record page to ensure that most data appears above the map and that the extended elements view (including element certainty) appeared.  I also changed the default position and zoom level of the results map to ensure that all data (apart from a couple of outliers) are visible by default and rearranged the advanced search page, including fixing the ‘parish’ part of the search.  I also added the ‘download data for print’ facility to the CMS.

Also this week I met with Alasdair Whyte from Celtic and Gaelic to discuss his place-names of Mull project.  I’m going to be adapting my place-names system for this, and we discussed some of the further updates to the system that would be required.  The largest of these will be making the entire site multilingual.  This is going to be a big job as every aspect of the site will need to be available in both Gaelic and English, including search boxes, site text, multilingual place-names, sources etc.  I’ll probably get started on this in the new year.

Also this week I fixed a couple of further issues regarding certificates for Stuart Gillespie’s NRECT site and set up a conference website that is vaguely connected to Critical Studies (https://spheres-of-singing.gla.ac.uk/).

I spent the rest of the week continuing with the redevelopment of the Digital Humanities Network site.  I completed all of the development of this (adding in a variety of browse options for projects) and then started migrating projects over to the new system.  This involved creating new icons, screenshots and banner images for each project, checking over the existing data and replacing it where necessary and ensuring all staff details and links are correct.  By the end of the week I’d migrated 25 projects over, but there are still about 75 to look over.  I think the new site is looking pretty great and is an excellent showcase of the DH projects that have been set up at Glasgow.  It will be excellent once the new site is ready to go live and can replace the existing outdated site.

This is the last week of work for me before the Christmas holidays so there will be no further posts until the New Year.  If anyone happens to be reading this I wish you a very merry Christmas.

Week Beginning 9th December 2019

After the disruption of the recent strike, this was my first full week back at work, and it was mostly spent making final updates to the SCOSYA website ahead of next Tuesday’s official launch.  We decided to make the full resource publicly available on Friday as a soft launch, so if you would like to try the public and linguists’ atlases, look at the interactive stories, view data in tables or even connect to the API you can now do it all at https://scotssyntaxatlas.ac.uk/.  I’m really pleased with how it’s all looking and functioning, and my only real worry is that we get lots of users and a big bill from MapBox for supplying the base tiles for the map.  We’ll see what happens about that next week.  But for now I’ll go into a bit more detail about the tasks I completed this week.

We had a team meeting on Tuesday morning where we finalised the menu structure for the site, the arrangement of the pages, the wording of various sections and such matters.  Frankie MacLeod, the project RA, had produced some walkthrough videos for the atlas that are very nicely done, and we looked at those and discussed some minor tweaks and how they might be integrated with the site.  We did wonder about embedding a video in the homepage, but decided against it, as E reckons the ‘pivot to video’ idea rested on assumptions about what users want that have since proved to be false.  I personally never bother with videos so I kind of agree with this, but I know from previous experience with other projects such as Mapping Metaphor that other users do really appreciate videos, so I’m sure they will be very useful.

We still had the online consent form for accessing the data to finalise so we discussed the elements that would need to be included and how it would operate.  Later in the week I implemented the form, which will allow researchers to request all of the audio files, or ones for specific areas.  We also decided to remove the apostrophe from “Linguists’ Atlas” and to just call it “Linguist Atlas” throughout the site as the dangling apostrophe looks a bit messy.  Hopefully our linguist users won’t object too much!  I also added in a little feature to allow images of places to appear in the pop-ups for the ‘How do people speak in…’ map.  These can now be managed via the project’s Content Management System and I think having the photos in the map really helps to make it feel more alive.

During the week I arranged with Raymond Brasas of Arts IT Support to migrate the site to a new server in preparation for the launch, as the current server hosts many different sites, is low on storage and is occasionally a bit flaky.  The migration went smoothly and after some testing we updated the DNS records and the version of the site on the new server went live on Thursday.  On Friday I went live with all of the updates, which is when I made changes to the menu structure and did things such as update all internal URLs in the site so that any links to the ‘beta’ version of the atlases now point to the final versions.  This includes things such as the ‘cite this page’ links.  I also updated the API to remove the warning asking people to not use it until the launch as we’re now ready to go.  There are still a few tweaks to make next week, but pretty much all my work on the project is now complete.

In addition to SCOSYA I worked on a few other projects this week, and also attended the English Language & Linguistics Christmas lunch on Tuesday afternoon.  I had a chat with Thomas Clancy about the outstanding tasks I still need to do for the Place-names of Kirkcudbrightshire project.  I’m hoping to find the time to get all these done next week.  I made a few updates to Stuart Gillespie’s Newly Recovered English Classical Translations annexe (https://nrect.gla.ac.uk/) site, namely adding in a translation of Sappho’s Ode and fixing some issues with security certificates.  I also arranged for Carole Hough’s CogTop site to have its domain renewed for a year, as it was due to expire this month, and had a chat with Alasdair Whyte from Celtic and Gaelic about his new place-names of Mull project.  I’m going to be adapting the system I created for the Berwickshire Place-names project for Alasdair, which will include adding in multilingual support and other changes.  I’m meeting with him next week to discuss the details.  I also managed to dig out the Scots School Dictionary Google Developer account details for Ann Ferguson at DSL, as she didn’t have a record of these and needed to look at the stats.

Other than the above, I continued to work on a new version of the Digital Humanities Network site.  I completed work on the new Content Management System, created a few new records (well, mostly overhauling some of the old data), and started work on the front-end.  This included creating the page where project details will be viewed and starting work on the browse facilities that will let people access the projects.  There’s still some work to be done but I’m pretty happy with how the new site is coming on.  It’s certainly a massive improvement on the old, outdated resource.  Below is a screenshot of one of the new pages:

Next week we will officially launch the SCOSYA resource and I will hopefully have enough time to tie up a few loose ends before finishing for the Christmas holidays.

Week Beginning 18th November 2019

On Monday I’d arranged another Arts Developer Coffee meeting, and for the first time all four of the current developers in the College of Arts were able to attend (that’s me, Luca Guariento, Stevie Barrett and David Wilson).  It was a great opportunity to catch up with them all and discuss our work and some of the issues we’re currently dealing with.

I divided my time between several projects this week.  I spent some time on DSL duties, which involved making updates to the new version of the API that I developed a few months ago, which uses the most up-to-date version of the DSL data and a new instance of Apache Solr that I set up.  Ann had gone through the search facilities and had noted some discrepancies between this new version and the live site, so I addressed these.  Firstly, in the new version of the quick search, if the ‘autocomplete’ list was ignored and the search form was submitted directly, the search ran a ‘match the characters anywhere in a headword’ query, meaning lots of results were being returned that were possibly not very relevant.  I updated this to ensure an exact match is executed in such circumstances instead.  The other change I made was to the advanced search, which was previously ordering results by entry ID.  This meant that for a search that returned a lot of results only DOST results would be returned before the ‘maximum results’ limit was reached.  I changed the Solr query to order by Solr’s ‘relevance’ score instead, which uses various methods to rank results.  Strangely the returned result-set was still in a different order to the ‘live’ site, which also uses Solr’s ‘relevance’ score; I can only assume the methods by which this score is generated have changed between versions of Solr.
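
To illustrate the difference (a sketch with a placeholder Solr core name and field name; only the query patterns and the ordering by relevance score reflect the actual changes):

```php
<?php
// Illustration of the quick-search and advanced-search changes as Solr
// queries. The core ('dsl') and field ('headword') names are placeholders.
$solr = 'http://localhost:8983/solr/dsl/select';
$term = 'dreich';

// Old quick-search behaviour: match the characters anywhere in a headword.
$anywhere = $solr . '?' . http_build_query( array( 'q' => 'headword:*' . $term . '*' ) );

// New behaviour when the autocomplete list is bypassed: an exact headword match.
$exact = $solr . '?' . http_build_query( array( 'q' => 'headword:"' . $term . '"' ) );

// Advanced search: order results by Solr's relevance score rather than entry ID.
$advanced = $solr . '?' . http_build_query( array(
    'q'    => 'headword:"' . $term . '"',
    'sort' => 'score desc',
) );
```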

I also got stuck into the creation of a new means of ordering SND and DOST entries for browse purposes.  The original browse lists supplemental entries after the main entries and doesn’t disregard non-alphanumeric characters when working out browse order, which leads to some rather odd placements of entries.  After a bit of toing and froing with Ann about how the new browse should work I spent some time on Friday writing a script that would order the data-sets.  The output consists of a table with one row per entry, listing the ID, Headword and POS of each, and also the ‘stripped’ version of the headword (i.e. no non-alphanumeric characters), the generated browse order and the generated POS Num, which is based on a table that Thomas Widmann had created.

In order for yogh to get positioned correctly after ‘Y’ and before ‘Z’ I’ve replaced the character with ‘yz’ in the stripped column (the stripped column will never be seen so this doesn’t really matter).  There was also an ash character in DOST, which I’ve converted to ‘ae’.  The ordering algorithm reorganises entries by the ‘stripped’ column initially.  If this column is the same as another then the ‘POS Num’ column is used to order things.  If this number is also identical then the IDX column (the numerical part of the ID column) is used.  This seems to work pretty well.  E.g. for ‘aiker’ in the SND output there are three rows, for ‘n.1’, ‘n.2’ and ‘n.3’.  These are all given the ‘POS Num’ of ‘1’, so the IDX field is checked and the entries are ordered correctly, as by IDX order ‘n.1’ comes before ‘n.2’, which comes before ‘n.3’.
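
In outline, the ordering logic works something like this (a sketch with illustrative field names rather than the script’s real ones):

```php
<?php
// Sketch of the browse ordering described above. Yogh is replaced with 'yz'
// so it sorts after 'y' and before 'z', ash becomes 'ae', and any remaining
// non-alphanumeric characters are discarded before comparison.
function stripped_headword( $headword ) {
    $headword = str_replace( array( 'Ȝ', 'ȝ' ), 'yz', $headword );
    $headword = str_replace( array( 'Æ', 'æ' ), 'ae', $headword );
    return strtolower( preg_replace( '/[^a-zA-Z0-9]/', '', $headword ) );
}

// Order by stripped headword, then POS Num, then the numeric part of the ID.
usort( $entries, function ( $a, $b ) {
    return array( stripped_headword( $a['headword'] ), $a['posnum'], $a['idx'] )
       <=> array( stripped_headword( $b['headword'] ), $b['posnum'], $b['idx'] );
} );
```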

Also this week I met with Thomas Clancy and Gilbert Markus to discuss the Place-names of Kirkcudbrightshire project.  I’m going to be putting the finishing touches to the front-end for this project soon and we met to discuss what updates might need to be made.  I also added some new parishes to the underlying database.

On Thursday this week there were some issues with the server that hosts the majority of our project websites, which meant I couldn’t work on the projects I had intended to work on.  Instead I continued to look into overhauling the Digital Humanities Network website.  I began to set up the content management system for the data and migrated some of the underlying tables across to work with.  I also spent some time thinking about what new data regarding projects would need to be recorded and how this might be stored.

I spent the rest of the week continuing to make updates to the SCOSYA website.  We had received feedback from a user testing session on Monday and I spent some time reading through all of this and thinking about the various issues that had cropped up.  Generally the feedback was hugely positive, but there were some usability issues that recurred.  I met with Jennifer on Tuesday to discuss these, and implemented a number of changes after that.  This included making the community voices popups narrower on smaller screens and larger with a bigger font size on bigger screens, and making the atlas area narrower so it’s easier to scroll up and down the page.  I also needed to change the way the sidebar opens and closes.  There was an issue where pointer events were not being passed to the map in the area underneath the sidebar, meaning you couldn’t open popups in this space, and on mobile devices this space was basically the whole map.

I had to update the JavaScript to get this to work (rather than just updating the stylesheet, which I did initially but which introduced new problems).  Previously the left-hand menu would only get resized if the contents ended up bigger than the height of the atlas.  In such cases the menu would be set to a bit less than the height of the atlas, with a scrollbar used to reach the parts that didn’t fit.  If a different menu item was clicked on that resulted in a shorter menu, the menu area was not resized, as the contents would automatically take up less space anyway.  However, this left the menu as a transparent area taking up the full height of the atlas, an area where any map-based actions (e.g. opening markers) would not function, as the menu area was set to intercept the pointer events in order for scrolling to work.

What happens now is whenever the page loads or a section of the menu is opened or closed the JavaScript tallies up the height of all of the visible menu items (the headers plus the open body).  If this total is greater than the atlas height then the menu is set to be a bit smaller than the atlas height and a scrollbar appears as before.  But if the total is less than the atlas height then the menu is now resized to the height of all the visible elements combined, so that it no longer takes up the full height of the atlas as a transparent area that intercepts pointer events.  It’s a much more elegant solution and works very well.

I also updated the base map for the atlas to add in place-names, something that we had made a decision to exclude previously.  We had some feedback that suggested our bare, minimalist map made it difficult for people to understand where places were, so I reinstated the names.  Everyone is actually very happy with how the names look, so we’re going to leave them in.  I also updated a number of marker locations that didn’t have quite the right name (e.g. Boness instead of Bo’ness).

In addition I updated the atlas so that the choice of age group and markers or points is no longer remembered, but always defaults to ‘all’ and ‘markers’ when moving between examples.  I added a ‘choose another story’ link that now appears at the top of every slide in a story, rather than just the first, and I added a button to the top of the first story slide that opens the left-hand menu and reloads the ‘how do people speak in…?’ map.  I also added more explanatory text above the atlas, as feedback suggested people were not understanding how to use the interactive map: there is now text explaining how to show and hide the left-hand menu, how to move about the map and how to zoom in and out.  I changed the ‘show transcription’ links in the community voices popups into buttons rather than links and fixed the links in the tabular dialect samples page, which were pointing to an older version of the atlas that no longer exists.  I made the border round the atlas blue to make it clearer where the atlas area ends and the rest of the page begins, as the previous light grey border was a bit hard to see, and fixed an issue with the map popups when viewing a story pane that displays data for ‘young’ or ‘old’, whereby the sentence for ‘all’ and the other age group was still appearing when it shouldn’t.  I also applied a number of these fixes to the linguists’ atlas as well.  Here’s a screenshot of how the public atlas now looks:

The UCU strike begins on Monday next week so I will not be back into work until Thursday the 5th of December, unless the situation changes.

Week Beginning 11th November 2019

It was another mostly SCOSYA week this week, ahead of the launch of the project that was planned for the end of next week.  However, on Friday this week I bumped into Jennifer, who said that the launch will now be pushed back into December.  This is because our intended launch date was the last working day before the UCU strike action begins, which would be a bad time to launch the project, for reasons of publicity, engaging with other scholars and the risk that technical issues might crop up which couldn’t be sorted until after the strike.  As there’s a general election soon after the strike is due to end, it looks like the launch is going to be pushed back until closer to Christmas.  But as none of this transpired until Friday I still spent most of the week until then making what I thought were last-minute tweaks to the website and fixing bugs that had cropped up during user testing.

This included going through all of the points raised by Gary following the testing session he had arranged with his students in New York the week before, and meeting with Jennifer, E and Frankie to discuss how we intended to act on the feedback, which was all very positive but did raise a few issues relating to the user interface, the data and the explanatory text.

After the meeting I made such tweaks as removing most of the intro text from above the public atlas, but adding in a sentence about how to make the atlas full screen, as this is a feature that I think most users overlook.  I also overhauled the footer, adding in the new AHRC logo and logos for Edinburgh and QMUL and rearranging everything.  I also updated the privacy policy based on feedback from the University’s data protection people, updated the styling of the atlas’s menu headers to make them bolder on Macs, added in links to the project’s API from the Linguists’ atlas and extended the height of the example selection area in the Linguists’ atlas too.  I also slightly tweaked the menu header text (e.g. ‘Search Examples’ is now ‘Search the examples’ to make it clearer that the tab isn’t just a few example searches) and updated the rating selection option to make unselected ratings appear in a very faded grey colour, to hopefully make it more obvious what is selected and what isn’t.  I also updated the legend so that the square grey boxes that previously said ‘no data’ now say ‘Example not tested’ instead, and updated the pop-ups accordingly.

I also sent the URL for the public and linguists’ atlases to the other developers in the College of Arts for feedback.  Luca Guariento found a way to break the map, which was good, as after some investigation I figured out what was causing the issue and fixed it.  Basically, if you press the ‘top’ button in the footer it jumps to the div with ID ‘masthead’ using a plain HTML anchor link to that hash.  But then if you do a full reload of the page the JavaScript grabs ‘masthead’ from the URL, tries to convert it to a float to pass to Leaflet, and things break.  By ensuring that the ‘jump to masthead’ link is handled in JavaScript rather than HTML I stopped this situation arising.  Stevie Barrett also noted that in Internet Explorer the HTML5 audio player in the pop-ups was too large for the pop-up area; thankfully adding in a bit of CSS to set the width of the audio player resolved this issue.

Also this week I had a further chat with Luca about the API he’s building and a DMP request that came his way, and arranged for the App and Play store account administration to be moved over to Central IT Services.  I also helped Jane Roberts with an issue with the Thesaurus of Old English and had a chat with Thomas Clancy and Gilbert Markus about the Place-names of Kirkcudbrightshire project, which I set the systems up for last year and which is now nearing completion and requiring some further work to develop the front-end.

I also completed an initial version of a WordPress site for Corey Gibson’s bibliography project and spoke to Eleanor Capaldi about how to get some images for her website that I recently set up.  I also spent a bit of time upgrading all of the WordPress sites I manage to the latest version.  Also this week I had a chat with Heather Pagan about the Anglo-Norman Dictionary data.  She now has access to the data that powers the current website and gave me access to this.  It’s great to finally know that the data has been retrieved and to get a copy of it to work with.  I spent a bit of time looking through the XML files, but we need to get some sort of agreement about how Glasgow will be involved in the project before I do much more with it.

I had a bit of an email chat with the DSL people about adding a new ‘history’ field to their entries, something that will happen through the new editing interface that has been set up for them by another company, but will have implications for the website once we reach the point of adding the newly edited data from their new system to the online dictionary.  I also arranged for the web space for Rachel Smith and Ewa Wanat’s project to be set up and spent a bit of time playing around with a new interface and design for the Digital Humanities Network website (https://digital-humanities.glasgow.ac.uk/) which is in desperate need of a makeover.

Week Beginning 4th November 2019

I spent much of my time this week continuing with updates to the SCOSYA website ahead of the launch later this month.  This included adding in introductory text in several places, creating new buttons and a new homepage, writing a privacy page for the site and adding in a cookie banner.  I also created a new ‘contact us’ form using the ‘Contact Form 7’ plugin for WordPress.  I used this as we wanted to ensure the form had a ‘Captcha’, a test that must be completed before the form is submitted in order to separate out spam bots from real people.  The plugin uses Google’s ‘reCAPTCHA’ service, which previously presented users with an ‘I am not a robot’ button and additional tests if the user wasn’t already somehow known to Google.  However, a new version of reCAPTCHA has now been released which does all of the checking in the background and isn’t in any way visible to the user.  This seemed ideal, but what it means is that Google now checks the user’s interactions on every single page of the site in order to ascertain whether they are a valid user if they choose to fill out the contact form.  Worse still, Google automatically adds a ‘reCAPTCHA’ logo that hovers over the site in the bottom right corner, obscuring anything else that is put there.  It’s all horribly intrusive.  I tried reverting back to the earlier version of reCAPTCHA with its ‘I am not a robot’ button, but this no longer works with the Contact Form 7 plugin and ends up just breaking the form submission.  Instead I reluctantly returned to using the current version, and found a way to hide the badge but keep the service running (see https://stackoverflow.com/a/53986985), which apparently Google now allows you to do, so long as links to its terms of service and privacy policy are clearly displayed, which they are on the SCOSYA site.  I also met with Jennifer to discuss changes to the site and the impending launch, and spoke to Gary again about some issues that he is experiencing on his computer that no-one else is able to replicate.  I also made changes to the public atlas, renaming some of the atlas menus and updating all references to these.  In addition, I updated the Google Analytics access to allow Jennifer to access the stats and changed the way attribute names are managed, allowing HTML tags to be included in the CMS and the display of the names to no longer all be bold in the various front-ends, thus allowing certain words in the names to be made bold.  On Friday Gary ran a user testing session, so no doubt I’ll have some further things to change next week.

Also this week I met with Corey Gibson from Scottish Literature to discuss an online bibliographical resource he would like me to help him put together for a Carnegie-funded project that he’s currently running.  It was a useful meeting and we made some decisions about how the resource will function, and later in the week I put in a request to set up a subdomain for the resource.  Next week I’ll create an initial version for him.  Also this week I created a new project website for Eleanor Capaldi’s RSE-funded project, which involves English Literature.  I created an initial interface and page structure and got all of the basics in place, and will update this further once Eleanor gets back to me, if further changes are required.

I had a meeting with Rachel Smith about an interactive website that she is putting together with Ewa Wanat.  I’d met with Ewa about this in May but hadn’t heard anything since, but since then they have been speaking to Alistair Beith, a PhD student in Psychology, who is going to do the development work for them.  Alistair was at the meeting too and we discussed the requirements of the project, the technologies that will be used and some of the implications relating to access and file formats.  It was good to speak to another person with web development skills, and once the team has decided on a suitable subdomain for the project I’ll get things set up and give Alistair access to create the site.

I also met with my old friend and colleague Yunhyong Kim from Information Studies to try and get access to an ancient Apple computer that I had left in the building when I moved to Critical Studies.  Thankfully I could still remember the password and we managed to get it to boot up so I could set her up with a user account.  I also had some further communication with Brian McKenna from Central IT Services about the University’s App and Play Store accounts.  I have been managing these for many years now, mainly because Critical Studies was prepared to pay the subscription fees when no-one else was.  It looks like responsibility for this is now going to be taken over by IT Services, which makes sense.  I just hope it’s not going to make the process of publishing apps even more torturous than it already is, though.

Finally, on Friday I attended an ArtsLab session on research integrity and data management plans.  This was a course that we’d tried to run several times before, but hadn’t managed to get sufficient numbers to sign up for.  This time we reduced the length of the session and we got a decent number of attendees.  I had previously spoken at such sessions, but as there was less time I suggested that it made sense to split the hour between Nigel Leask, who gave a very interesting talk about research integrity, and Matt from Research Data Management, who gave a great overview of data management and what researchers are required to do.  I provided some sample data management plans for the attendees to look at in their own time and it was a very useful session.

Week Beginning 28th October 2019

I split most of my time this week between the SCOSYA project and the Historical Thesaurus.  The launch of the SCOSYA atlases is scheduled to take place in November and I had suggested to Jennifer that it might be good to provide access to the project’s data via tables rather than through the atlas interfaces.  This is because although the atlases look great and are a nice interactive way of accessing and visualising the data, some people prefer looking at tables of data instead, and other people may struggle to use the interactive atlases due to accessibility issues, but may still want to be able to view the project’s data.  We will of course provide free access to the project’s API, through which all of the data can be accessed as CSV or JSON files, or can even be incorporated into a completely new interface, but I thought it might be useful if we provided text-based access to the data through the project’s front-end as well.  Jennifer agreed that this would be useful, so I spent some time writing a specification document for the new features, sending it to the team for feedback and developing the new features.

I created four new features.  First was a table of dialect samples, which lists all of the locations that have dialect sample recordings and provides access to these recordings and the text that accompanies them, replicating the data as found on the ‘home’ map of the public atlas.  The second feature provides a tabular list of all of the locations that have community voice recordings.  Clicking on a location then displays the recordings and the transcriptions of each, as the following screenshot shows:

The third new feature lists all of the examples that can be browsed for through the public atlas.  You can then click on one of these examples to listen to its sound clips and to view a table of results for all of the questionnaire locations.  Users can also click through to view the example on the atlas itself, as I figured that some people might want to view the results as a table but then see how these look on the atlas too.  The following screenshot shows the ‘explore’ feature for a particular example:

The fourth new feature replicates the full list of examples as found in the linguists’ atlas.  There are many examples nested within parent and sub-parent categories and it can be a bit difficult to get a clear picture of what is available through the nested menus in the atlas, so this new feature provides access to a complete list of the examples that is fully expanded and easier to view, as the following screenshot demonstrates:

It’s then possible to click on an example to view the results of this example for every location in a table, again with a link through to the result on the atlas, which then enables the user to customise the display of results further, for example focussing on older or younger speakers or limiting the display to particular rating levels.

I reckon these new features are going to complement the atlases very well and will hopefully prove very useful to researchers.  Also this week I received some feedback on the atlases from the project team and I spent some time going through this, adding in some features that had been requested (e.g. adding in buttons to scroll the user’s page down so that the full atlas is on screen) and investigating some bugs and other issues that had been reported, including some issues with the full-screen view of the atlas when using Safari in MacOS that Gary reported that I have so far been unable to replicate.  I also implemented a new way of handling links to other parts of the atlas from the stories, as new project RA Frankie had alerted me to an issue with the old way.  Handling internal links is rather tricky as we’re not really loading a new page, it’s just the JavaScript in the user’s browser processing and displaying some different data.  As a new page is never requested pressing the ‘back’ button doesn’t load what you might expect to be the previous page, but instead displays the last full web page that was loaded.  The pages also don’t open properly in a new tab because the reference in the link is not to an actual page, but is instead intended to be picked up by JavaScript in the page when the link is clicked on and then processed to change the map contents.  When you open the link in a new tab the JavaScript doesn’t get to run and the browser tries to load the reference in the link, which ends up as a broken link.

It’s not a great situation and the alternative should work a bit better.  Rather than handling the link in JavaScript, the links are now full page requests that get sent to the server.  A script on the server then picks up the link and processes a full page reload of the relevant section of the atlas.  Unfortunately if the user is on the last slide of a story, clicks a link there and then presses the back button, they’ll end up back at the first slide of the story rather than the last, as there isn’t a way to reference a specific slide in a story, but setting the links to open in a new tab by default gets round this problem.

Finally for the project this week I met with Jennifer and E to discuss the ancillary pages and text that need to be put in place before the launch, and we discussed the launch itself and what this would involve.

For the HT I generated some datasets that an external researcher had requested from the Thesaurus of Old English data, and I generated some further datasets from the main HT database for another request.  I also started to implement a system to generate the new dates table.  I created the necessary table and wrote a function that takes a lexeme and goes through all of the 19 date fields to generate the rows that would need to be created for the lexeme.  As yet I haven’t set this running on the whole dataset; instead I’ve created a test script that allows you to pass a catid and view all of the date rows that would be created for each lexeme in the category, so that I (and Marc and Fraser) can test things out.  I’ve tested it out with categories that have some complicated date structures and so far I’ve not encountered any unexpected behaviour, apart from one thing: some lexemes have a full date such as ‘1623 Dict. + 1642–1711’.  The script doesn’t analyse the ‘fulldate’ field but instead looks at each of the actual date fields.  There is only one ‘label’ field so it’s not possible to ascertain that in this case the label is associated with the first date.  Instead the script always associates the label with the last date that a lexeme has.  I’m not sure how common it is for a label to appear in the middle of a full date, but it definitely crops up fairly regularly when I load a random category on the HT homepage, always as ‘Dict.’ so far.  We’ll need to see what we can do about this, if it turns out to be important, which I guess it probably will.
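
In outline, the row generation works something like the following sketch (the field and key names are hypothetical stand-ins for the real 19 date fields):

```php
<?php
// Very rough sketch of the date-row generation, using hypothetical field and
// key names. The single 'label' value is attached to the last date row
// generated, which is the behaviour (and the limitation with dates like
// '1623 Dict. + 1642-1711') described above.
function generate_date_rows( $lexeme ) {
    $rows = array();
    foreach ( $lexeme['date_spans'] as $span ) { // e.g. array( array( 'first' => 1642, 'last' => 1711 ) )
        $rows[] = array(
            'lexeme_id'  => $lexeme['id'],
            'first_date' => $span['first'],
            'last_date'  => isset( $span['last'] ) ? $span['last'] : $span['first'],
            'label'      => null,
        );
    }
    if ( ! empty( $rows ) && ! empty( $lexeme['label'] ) ) {
        // Always attach the label to the final date row.
        $rows[ count( $rows ) - 1 ]['label'] = $lexeme['label']; // e.g. 'Dict.'
    }
    return $rows;
}
```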

Also this week I performed some App Manager duties, had a conversation with Dauvit Broun, Nigel Leask and the RDM team about the ArtsLab session on data management plans next week, and spoke to Ann Ferguson of the DSL about how changes to the XML structure of entries will be reflected in the front-end of the DSL.


Week Beginning 21st October 2019

After a wonderful three weeks’ holiday I returned to work on Monday this week.  I’d been keeping track of my emails whilst I’d been away so although I had a number of things waiting for me to tackle on my return at least I knew what they were, so returning to work wasn’t as much of a shock as it might otherwise have been.  The biggest item waiting for me to get started on was a request from Gerry Carruthers to write a Data Management Plan for an AHRC proposal he’s putting together.  He’d sent me all of the bid documentation so I read through that and began to think about the technical aspects of the project, which would mainly revolve around the creation of TEI-XML digital editions.  I had an email conversation with Gerry over the course of the week where I asked him questions and he got back to me with answers.  I’d also arranged to meet with my fellow developer Luca Guariento on Wednesday as he has been tasked with writing a DMP and wanted some advice.  This was a good opportunity for me to ask him some details about the technology behind the digital editions that had been created for the Curious Travellers project, as it seemed like a good idea to reuse a lot of this for Gerry’s project.  I finished a first version of the plan on Wednesday and sent it to Gerry, and after a few further tweaks based on feedback a final version was submitted on Thursday.

Also this week I met with Head of School Alice Jenkins to discuss my role in the School, a couple of projects that have cropped up that need my input and the state of my office.  It was a really useful meeting, and it was good to discuss the work I’ve done for staff in the School and to think about how my role might be developed in future.  I spent a bit of time after the meeting investigating some technology that Alice was hoping might exist, and I also compiled a list of all of the current Critical Studies Research and Teaching staff that I’ve worked with over the years.  Out of 104 members of staff I have worked with 50 of them, which I think is pretty good going, considering not every member of staff is engaged in research, or if they are may not be involved with anything digital.

I spent some more time this week working on the pilot website for 18th Century Borrowers for Matthew Sangster.  We met on Wednesday morning and had a useful meeting, discussing the new version of the data that Matt is working on, how my import script might be updated to incorporate some changes, and why some of the error rows that were outputted during my last data import were generated and how these could be addressed.  We also went through the website I’d created, as Matt had uncovered a couple of bugs, such as the order of the records in the tabular view of the page not matching up with the order on the scanned image.  This turned out to have been caused by the tabular order depending on an imported column that was set to hold general character data rather than numbers, meaning the database arranged all of the ones (1, 10, 11 etc.) before all of the twos (2, 21, 22 etc.) rather than arranging things in proper numerical order.  I also realised that I hadn’t created indexes for a lot of the columns in the database that were used in the queries, which was making the queries rather slow and inefficient.  After generating these indexes the various browses are now much speedier.
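
For reference, the two database fixes amount to something like this (a sketch with hypothetical table and column names):

```php
<?php
// Sketch of the two fixes: store the order-on-page value as a number so that
// '2' no longer sorts after '10', and index the columns used by the browse
// queries. Table and column names are hypothetical placeholders.
$pdo = new PDO( 'mysql:host=localhost;dbname=borrowings;charset=utf8mb4', 'user', 'password' );

// Change the imported character column to an integer type.
$pdo->exec( 'ALTER TABLE borrowing MODIFY order_on_page INT UNSIGNED NOT NULL DEFAULT 0' );

// Add indexes to the columns used when browsing and searching.
$pdo->exec( 'CREATE INDEX idx_borrowing_book ON borrowing (book_id)' );
$pdo->exec( 'CREATE INDEX idx_borrowing_student ON borrowing (student_id)' );
```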

I also added authors under book titles in the various browse lists, which helps to identify specific books, and created a new section of the website for frequency lists.  There are now three ‘top 20’ lists, which show the most frequently borrowed books and authors, and the student borrowers who borrowed the most books.  Finally, I created the search facility for the site, allowing any combination of book title, author, student, professor and date of lending to be searched for and the results to be displayed.  This took a fair amount of time to implement, but I managed to get the URL for the page to Matt before the end of the week.

Also this week I investigated and fixed a bug that the Glasgow Medical Humanities Network RA Cris Sarg was encountering when creating new people records and adding these to the site, I responded to a query from Bryony Randall about the digital edition we had made for the New Modernist Editing project, spoke to Corey Gibson about a new project he’s set up that will be starting soon and that I’ll be creating the website for, had a chat with Eleanor Capaldi about a project website I’ll be setting up for her, responded to a query from Fraser about access to data from the Thesaurus of Old English and attended the Historical Thesaurus birthday drinks.  I also read through the REF digital guidelines that Jennifer Smith had sent on to me and spoke to her about the implications for the SCOSYA project, helped the SCOSYA RA Frankie MacLeod with some issues she was encountering with map stories and read through some feedback on the SCOSYA interfaces that had been sent back from the wider project team.  Next week I intend to focus on the SCOSYA project, acting on the feedback and possibly creating some non-map based ways of accessing the data.

Week Beginning 23rd September 2019

On Monday this week we had another Arts Developer coffee meeting, which as always was a good opportunity to catch up with my fellow developers in the College of Arts and talk about our work.  On Tuesday I attended a team meeting for the SCOSYA project, where we discussed some of the final things that needed to be done before the online resource would be ready for the user testing sessions that will take place in the next few weeks.  I spent quite a bit of time implementing these final tweaks during the week.  This included adding in the full map attribution and copyright information in a pop-up that’s linked to from the bottom of the atlas; I added it to the API as well.  After this I changed a number of colours that were used for markers and menu items on both the public and experts atlases and added in some links to help pages and some actual text to the atlas menus to replace the placeholder text.

I also realised that highlighting wasn’t working on the experts’ ‘home’ map, which was probably a bit confusing.  Implementing this turned out to be rather tricky, as highlighting depended on grabbing the location name from the pop-up and then comparing this with the location names in a group.  The ‘Home’ map has no pop-ups, so highlighting wouldn’t work; instead I had to change things so that the location is grabbed from the tooltip text.  Also, the markers on the ‘Home’ map were actually different types of markers (HTML elements styled by CSS as opposed to SVG shapes), so even though they look the same the highlighting code wasn’t working for them.  I’ve now switched them to SVG shapes and highlighting seems to be working.  It’s even possible to create a group on the ‘Home’ page too.

I also added in a new ‘cite’ menu item to the experts atlas, which allows users to grab a link to their specific map view, formatted in a variety of citation styles.  This updates every time the ‘cite’ menu is opened, so if the user has changed the zoom level or map centre the citation link always reflects this.  Finally, I created new versions of the atlases (now called ‘atlas’ and ‘linguists atlas’) that will be used for beta testing.

I also spent some time working for the DSL, fixing the ‘sienna’ test version of the website and changing how the quick search works on both test versions of the website.  If the user selects an item from the autocomplete list, the search then performs an exact search for this word, whereas previously it was just matching the characters anywhere in the headword, which didn’t really make much sense.  I also spent quite a bit of time looking through the old DSL editor server to try and track down some files for Rhona.

Also this week I had a chat with Gavin Miller about publicising his new Glasgow Medical Humanities site, set up a researcher in Psychology with an account to create an iOS app, fixed a couple of broken links on the Seeing Speech website and had a lengthy email chat with Heather Pagan about the Anglo-Norman Dictionary data.  We have now managed to access the server and begin to analyse the contents to try and track down the data, and by the end of the week it looked like we might actually have found the full dataset, which is encouraging.  I finished off the week by creating a final ‘Song Story’ for the RNSN project, which took a few hours to implement but is looking pretty good.

I’m going to be out of the office for the next three weeks on a holiday in Australia so there will be no further updates from me for a while.