Week Beginning 20th June 2022

I completed an initial version of the Chambers Library map for the Books and Borrowing project this week.  It took quite a lot of time and effort to implement the subscription period range slider.  Searching for a range when the data itself also covers a range of dates rather than a single date meant we needed to make a decision about what data gets returned and what doesn’t.  This is because the two ranges (the one chosen as a filter by the user and the one denoting the start and end of each borrower’s subscription) can overlap in many different ways.  For example, say the period chosen by the user is 05 1828 to 06 1829.  Which of the following borrowers should be returned?

  1. Borrower’s range is 06 1828 to 02 1829: the range is fully within the selected period so should definitely be included.
  2. Borrower’s range is 01 1828 to 07 1828: the range begins before the selected period and ends within it.  Presumably should be included.
  3. Borrower’s range is 01 1828 to 09 1829: the range extends beyond the selected period in both directions.  Presumably should be included.
  4. Borrower’s range is 05 1829 to 09 1829: the range begins during the selected period and ends beyond it.  Presumably should be included.
  5. Borrower’s range is 01 1828 to 04 1828: the range is entirely before the selected period.  Should not be included.
  6. Borrower’s range is 07 1829 to 10 1829: the range is entirely after the selected period.  Should not be included.

Basically if there is any overlap between the selected period and the borrower’s subscription period the borrower will be returned.  But this means most borrowers will be returned a lot of the time.  It’s a very different sort of filter to one that focuses purely on a single date – e.g. filtering the data to only those borrowers whose subscription period *begins* between 05 1828 and 06 1829.

Based on the above assumptions I began to write the logic that would decide which borrowers to include when the range slider is altered.  It was further complicated by having to deal with months as well as years.  Here’s the logic in full if you fancy getting a headache:

if(
    ((mapData[i].sYear>startYear || (mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth))
        && ((mapData[i].eYear==endYear && mapData[i].eMonth<=endMonth) || mapData[i].eYear<endYear))
    || ((mapData[i].sYear<startYear || (mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth))
        && ((mapData[i].eYear==endYear && mapData[i].eMonth>=endMonth) || mapData[i].eYear>endYear))
    || (((mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth) || mapData[i].sYear>startYear)
        && ((mapData[i].eYear==endYear && mapData[i].eMonth<=endMonth) || mapData[i].eYear<endYear)
        && ((mapData[i].eYear==startYear && mapData[i].eMonth>=startMonth) || mapData[i].eYear>startYear))
    || (((mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth) || mapData[i].sYear>startYear)
        && ((mapData[i].sYear==endYear && mapData[i].sMonth<=endMonth) || mapData[i].sYear<endYear)
        && ((mapData[i].eYear==endYear && mapData[i].eMonth>=endMonth) || mapData[i].eYear>endYear))
    || ((mapData[i].sYear<startYear || (mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth))
        && ((mapData[i].eYear==startYear && mapData[i].eMonth>=startMonth) || mapData[i].eYear>startYear))
)
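As an aside, the check can be expressed far more compactly if each year and month pair is first collapsed into a single month count, since any two ranges overlap as long as each one starts before the other ends.  Here’s a sketch of the equivalent logic (the ‘toIndex’ helper is just something I’ve made up for illustration rather than code that’s actually in the map):

// Collapse a year and month into a single comparable number of months
function toIndex(year, month){
    return (year * 12) + (month - 1);
}

var selStart = toIndex(startYear, startMonth);
var selEnd = toIndex(endYear, endMonth);
var borStart = toIndex(mapData[i].sYear, mapData[i].sMonth);
var borEnd = toIndex(mapData[i].eYear, mapData[i].eMonth);

// Two ranges overlap as long as each one starts before (or when) the other ends
if(borStart <= selEnd && selStart <= borEnd){
    // include this borrower
}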

I also added the subscription period to the popups.  The only downside to the range slider is that the occupation marker colours change depending on how many occupations are present during a period, so you can’t always tell an occupation by its colour. I might see if I can fix the colours in place, but it might not be possible.

I also noticed that the jQuery UI sliders weren’t working very well on touchscreens so installed the jQuery TouchPunch library to fix that (https://github.com/furf/jquery-ui-touch-punch).  I also made the library marker bigger and gave it a white border to more easily differentiate it from the borrower markers.

I then moved on to incorporating page images in the resource too.  Where a borrower has borrowing records, the relevant pages on which these records are found now appear as thumbnails in the borrower popup.  These are generated by the IIIF server based on dimensions passed to it, which is much nicer than having to generate and store thumbnails directly.  I also updated the popup to make it wider when required to give more space for the thumbnails.  Here’s a screenshot of the new thumbnails in action:
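For anyone curious, the thumbnails are just standard IIIF Image API requests, so nothing needs to be generated or stored on our side.  The server address and image identifier below are made up for illustration, but the pattern is along these lines:

// IIIF Image API request for a thumbnail that fits within 120x120 pixels
// (the server URL and image identifier are hypothetical)
var iiifBase = 'https://iiif.example.ac.uk/iiif/2/register-5-page-32';
var thumbUrl = iiifBase + '/full/!120,120/0/default.jpg';
// and a full-size version for the zoomable page image popup
var fullUrl = iiifBase + '/full/max/0/default.jpg';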

Clicking on a thumbnail opens a further popup containing a zoomable / pannable image of the page.  This proved to be rather tricky to implement.  Initially I was going to open a popup in the page (outside of the map container) using a jQuery UI Dialog.  However, I realised that this wouldn’t work when the map was being viewed in full-screen mode, as nothing beyond the map container is visible in such circumstances.  I then considered opening the image in the borrower popup but this wasn’t really big enough.  I then wondered about extending the ‘Map options’ section and replacing the contents of this with the image, but this then caused issues for the contents of the ‘Map options’ section, which didn’t reinitialise properly when the contents were reinstated.  I then found a plugin for the Leaflet mapping library that provides a popup within the map interface (https://github.com/w8r/Leaflet.Modal) and decided to use this.  However, it’s all a little complex as the popup then has to include another mapping library called OpenLayers that enables the zooming and panning of the page image, all within the framework of the overall interactive map.  It is all working and I think it works pretty well, although I guess the map interface is a little cluttered, what with the ‘Map Options’ section, the map legend, the borrower popup and then the page image popup as well.  Here’s a screenshot with the page image open:
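In case it’s useful to anyone, the rough structure is: the thumbnail click opens a Leaflet.Modal popup containing an empty div, and an OpenLayers map with a static image source is then initialised inside that div.  This is only a simplified sketch (the element ID, image URL and dimensions are hypothetical, and the exact Leaflet.Modal options may differ slightly from what I’ve written here):

// Open a modal inside the Leaflet map and put a zoomable OpenLayers image viewer in it
map.openModal({ content: '<div id="page-image-viewer"></div>' });

var extent = [0, 0, imgWidth, imgHeight]; // pixel dimensions of the page image
var projection = new ol.proj.Projection({ code: 'page-image', units: 'pixels', extent: extent });

new ol.Map({
    target: 'page-image-viewer',
    layers: [ new ol.layer.Image({
        source: new ol.source.ImageStatic({ url: pageImageUrl, projection: projection, imageExtent: extent })
    }) ],
    view: new ol.View({ projection: projection, center: ol.extent.getCenter(extent), zoom: 1, maxZoom: 6 })
});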

All that’s left to do now is add in the introductory text once Alex has prepared it and then make the map live.  We might need to rearrange the site’s menu to add in a link to the Chambers Map as it’s already a bit cluttered.

Also for the project I downloaded images for two further library registers for St Andrews that had previously been missed.  However, there are already records for the registers and pages in the CMS so we’re going to have to figure out which image corresponds to which page in the CMS.  One register has a different number of pages in the CMS compared to the image files so we need to work out how to align the start and end and whether there are any gaps or issues in the middle.  The other register is more complicated because the images are double pages whereas it looks like the page records in the CMS are for individual pages.  I’m not sure how best to handle this.  I could either try to batch process the images to chop them up or batch process the page records to join them together.  I’ll need to discuss this further with Gerry, who is dealing with the data for St Andrews.

Also this week I prepared for and gave a talk to a group of students from Michigan State University who were learning about digital humanities.  I talked to them for about an hour about a number of projects, such as the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/), the digital edition I’d created for New Modernist Editing (https://nme-digital-ode.glasgow.ac.uk/), the Historical Thesaurus (https://ht.ac.uk/), Books and Borrowing (https://borrowing.stir.ac.uk/) and TheGlasgowStory (https://theglasgowstory.com/).  It went pretty well and it was nice to be able to talk about some of the projects I’ve been involved with for a change.

I also made some further tweaks to the Gentle Shepherd Performances page which is now ready to launch, and helped Geert out with a few changes to the WordPress pages of the Anglo-Norman Dictionary.  I also made a few tweaks to the WordPress pages of the DSL website and finally managed to get a hotel room booked for the DHC conference in Sheffield in September.  I also made a couple of changes to the new Gaelic Tongues section of the Seeing Speech website and had a discussion with Eleanor about the filters for Speech Star.  Fraser had been in touch about 500 Historical Thesaurus categories that had been newly matched to OED categories so I created a little script to add these connections to the online database.

I also had a Zoom call with the Speak For Yersel team.  They had been testing out the resource at secondary schools in the North East and have come away with lots of suggested changes to the content and structure of the resource.  We discussed all of these and agreed that I would work on implementing the changes the week after next.

Next week I’m going to be on holiday, which I have to say I’m quite looking forward to.

Week Beginning 13th June 2022

I worked on several different projects this week.  For the Books and Borrowing project I processed and imported a further register for the Advocates Library that had been digitised by the NLS.  I also continued with the interactive map of Chambers library borrowers, although I couldn’t spend as much time on this as I’d hoped as my access to Stirling University’s VPN had stopped working and without VPN access I can’t connect to the database and the project server.  It took a while to resolve the issue as access needs to be approved by some manager or other, but once it was sorted I got to work on some updates.

One thing I’d noticed last week was that when zooming and panning, the historical map layer was throwing out hundreds of 403 Forbidden errors to the browser console.  This wasn’t having any impact on the user experience, but it was still a bit messy and I wanted to get to the bottom of the issue.  I had a very helpful (as always) chat with Chris Fleet at NLS Maps, who provided the historical map layer, and he reckoned it was because the historical map only covers a certain area and moving beyond this was still sending requests for map tiles that don’t exist.  Thankfully an option exists in Leaflet that allows you to set the boundaries for a map layer (https://leafletjs.com/reference.html#latlngbounds) and I updated the code to do just that, which seems to have stopped the errors.
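In practice this just means passing a ‘bounds’ option to the tile layer so that Leaflet never requests tiles outside the area the historical map covers.  The corner coordinates and tile URL below are placeholders rather than the actual extent of the NLS layer:

// Only request tiles that fall within the area the historical map actually covers
var historicalBounds = L.latLngBounds([55.55, -3.50], [56.05, -1.95]); // hypothetical corner points
var historicalLayer = L.tileLayer('https://maps-tiles.example/nls/{z}/{x}/{y}.png', {
    bounds: historicalBounds,
    maxZoom: 18
});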

I then returned to the occupations categorisation, which was including far too many options.  I therefore streamlined the occupations, displaying the top-level occupation only.  I think this works a lot better (although I need to change the icon colour for ‘unknown’).  Full occupation information is still available for each borrower via the popup.

I also had to change the range slider for opacity as standard HTML range sliders don’t allow for double-ended ranges.  We require a double-ended range for the subscription period and I didn’t want to have two range sliders that looked different on one page.  I therefore switched to a range slider offered by the jQuery UI interface library (https://jqueryui.com/slider/#range).  The opacity slider still works as before, it just looks a little different.  Actually, it works better than before, as the opacity now changes as you slide rather than only updating after you mouse-up.
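For reference, the rebuilt opacity slider is only a few lines of jQuery UI code; it’s the ‘slide’ callback that makes the opacity update as you drag rather than on mouse-up (the element ID, the initial value and the ‘historicalLayer’ variable are hypothetical):

// Opacity slider: 'slide' fires continuously as the handle is dragged
$('#opacity-slider').slider({
    min: 0, max: 100, value: 75,
    slide: function(event, ui){
        historicalLayer.setOpacity(ui.value / 100);
    }
});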

I then began to implement the subscription period slider, although it does not yet actually filter the data.  It’s been pretty tricky to implement.  The range needs to be dynamically generated based on the earliest and latest dates in the data, and the dates consist of both a year and a month, which need to be converted into plain integers for the slider and then reinterpreted as years and months when the user moves the end positions.  I think I’ve got this working as it should, though.  When you update the ends of the slider the text above, which lists the months and years, updates to reflect this.  The next step will be to actually filter the data based on the chosen period.  Here’s a screenshot of the map featuring data categorised by the new streamlined occupations with the new sliders displayed:
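The conversion works by treating each slider position as a count of months since the earliest date in the data.  Here’s a simplified sketch of the idea, with the helper and element names invented for illustration:

// Treat each slider position as 'months since the earliest date in the data'
function dateToValue(year, month){
    return (year - minYear) * 12 + (month - minMonth);
}
function valueToDate(value){
    var months = (minMonth - 1) + value;
    return { year: minYear + Math.floor(months / 12), month: (months % 12) + 1 };
}

$('#period-slider').slider({
    range: true,
    min: 0,
    max: dateToValue(maxYear, maxMonth),
    values: [0, dateToValue(maxYear, maxMonth)],
    slide: function(event, ui){
        var from = valueToDate(ui.values[0]);
        var to = valueToDate(ui.values[1]);
        // update the text above the slider, e.g. "05 1828 to 06 1829"
        $('#period-text').text(from.month + ' ' + from.year + ' to ' + to.month + ' ' + to.year);
    }
});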

For the Speak For Yersel project I made a number of tweaks to the resource, which Jennifer and Mary are piloting with school children in the North East this week.  I added in a new grammatical question and seven grammatical quiz questions.  I tweaked the homepage text and updated the structure of questions 27-29 of the ‘sounds about right’ activity.  I ensured that ‘Dumfries’ always appears as ‘Dumfries and Galloway’ in the ‘clever’ activity and follow-on, and updated the ‘clever’ activity to remove the stereotype questions.  These were the ones where users had to rate the speakers from a region without first listening to any audio clips and Jennifer reckoned these were taking too long to complete.  I also updated the ‘clever’ follow-on to hide the stereotype options and switched the order of the listener and speaker options in the other follow-on activity for this type.

For the Speech Star project I replaced the data for the child speech error database with a new, expanded dataset and added in ‘Speaker Code’ as a filter option.  I also replicated the child speech and normalised speech databases from the clinical website we’re creating on the more academic teaching site we’re creating and also pulled in the IPA chart from Seeing Speech into this resource too.  Here’s a screenshot of how the child speech error database looks with the new ‘speaker code’ filter with ‘vowel disorder’ selected:

I also responded to Craig Lamont in Scottish literature with some further feedback on the structure of his Burns Manuscript Database spreadsheet, which is now shaping up nicely.  Craig had also sent me an updated spreadsheet with data for the Ramsay Gentle Shepherd performances project.  I’d set this up (interactive map, timeline and filterable tabular data) a few weeks ago, migrating it to the University’s T4 website management system.  All had worked then, but when I logged into T4 and previewed the page I previously created I discovered it no longer worked.  The page hadn’t been updated since the end of May and I had no idea what had gone wrong.  I can only assume that the linked content (i.e. the links to the JavaScript files) had somehow become unlinked.  I decided, therefore, that it would be easier to just host the JavaScript files on another server I have direct access to rather than having to shoehorn it all into T4.  I made an updated version with the new dataset and this is working well.

I also made a couple of tweaks to the DSL this week, installing the TablePress plugin for the ancillary pages and creating a further alternative logo for the DSL’s Facebook posts.  I also returned to doing some work for the Anglo-Norman Dictionary, offering some advice to the editor Geert about incorporating publications and overhauling how cross references are displayed in the Dictionary Management System.

I updated the ‘View Entry’ page in the DMS.  Previously it only included cross references FROM the entry you’re looking at TO any other entries.  I.e. it only displayed content when the entry was of type ‘xref’ rather than ‘main’.  Now in addition to this there’s a further section listing all cross references TO the entry you’re looking at from any entry of type ‘xref’ that links to it.

In addition there is a button allowing you to view all entries that include a cross reference to the current entry anywhere in their XML – i.e. where an <xref> tag that features the current entry’s slug is found at any level in any other main entry’s XML.  This code is hugely memory intensive to run, as basically all 27,464 main entries need to be pulled into the script, with the full XML contents of each checked for matching xrefs.  For this reason the page doesn’t run the code each time the ‘view entry’ page is loaded but instead only runs when you actively press the button.  It takes a few seconds for the script to process, but after it does the cross references are listed in the same manner as the ‘pure’ xrefs in the preceding sections.

Finally I participated in a Zoom-based focus group for the AHRC about the role of technicians in research projects this week.  It was great to be able to share my views on my role and to hear from other people with similar roles at other organisations.

Week Beginning 23rd May 2022

I’d completed all of the outstanding tasks for ‘Speak For Yersel’ last week so this week I turned my attention to several other projects.  For the Books and Borrowing project I wrote a script to strip out duplicate author records from the data and reassign any books associated with the duplicates to the genuine author records.  The script iterated through each author in the ‘duplicates’ spreadsheet, found all rows where the ‘AID’ did not match the ‘AID to keep’ column, reassigned any book author records from the former to the latter and then deleted the author record.  The script deleted 310 duplicate authors and reassigned 735 books to other authors, making the data in the content management system a lot cleaner.

I then migrated the Uist Saints website to a server at Glasgow and got everything working at a temporary URL.  All looked fine to me, although there was an issue with the homepage that needed investigating.  This issue was present on the live site too, resulting in the page content cutting off and displaying a lot of blank space and no footer, with lots of errors being displayed in the console.  I did some investigation into the errors and discovered that these were being caused by some JavaScript embedded in the homepage that had been treated like HTML by WordPress, which had added HTML line breaks (<br>) wherever there was a line break in the code, thereby breaking the JavaScript.  I updated the page to strip out all of the <br> tags and it now loads without any errors in the console, but whatever the JavaScript is supposed to be doing still isn’t working and there’s still a huge expanse of empty space and then no footer.

The JavaScript appears to be attempting to display a map using the Leaflet mapping library, but using some sort of WordPress plugin to do so.  There are over 3000 lines of JavaScript code in the page, which is really crazy.  Every single marker on the map (e.g. “Cladh Choinnich (burial ground and site of chapel)” at [57.157715,-7.301283]) has its own script comprising around 70 lines of code.  Sofia, the project RA, looked at the page and decided to try deleting the blocks of JavaScript, and this then seemed to solve the problem, which was great, as I was thinking I’d need to create a new map after somehow extracting all of the data.

I then moved on to the Ramsay ‘Gentle Shepherd’ data, and this week tackled the issue of importing the code I’d written into the University website’s T4 content management system.  I created a ‘one file’ version of the page that has everything incorporated in one single file – all the scripts, the data and the styles.  I was hoping I’d then be able to just upload this to T4 but I ran into a problem:

I selected the ‘Standard plain’ content type as I did for the Enlightenment map I created in T4 many years ago, but the ‘content’ box can only accept a maximum of 80,000 characters.  My ‘one file’ approach is around 404,000 characters so I can’t upload it.  I then wondered about using separate files, as I had done with the Enlightenment map, but the JSON data for the performances on its own is over 227,000 characters.  This data needs to be a single thing and can’t be split up into smaller chunks (at least not without then having to stitch the data back together in the JavaScript before it can be used every time someone loads the page, which would have an impact on the speed of the page).

I noticed that the Enlightenment map has a further content type called ‘_blank’ that isn’t available to me where the performance data is to go.  This type allows up to 150,000 characters.  Unfortunately this is still not big enough.  The Leaflet JavaScript library, which I also need to upload, is 141,000 characters, so it currently can’t be uploaded either.  I then looked into uploading the JSON data as a media file and I managed to upload it, but apparently media files only become active in the system when they are linked to from a T4 page using T4’s method of linking to a file.  The JSON file would only ever be loaded in via an AJAX call from the JavaScript code so would never work.  However, I did realise that I could upload the JavaScript file with the JSON data stored directly within it as a media file and then link to this (and also the Leaflet JavaScript file and the CSS files) from the T4 HTML file.  However, this wouldn’t work when using regular HTML tags to link to scripts and CSS files as T4 only activates media files when linked to using its own special way of inserting links.

A helpful guy called Rick in the Web Team suggested using the ‘standard’ content type and T4’s way of linking to files to get things working, and this did sort of work, but while the ‘standard’ content type allows you to manually edit the HTML, T4 then processes any HTML you enter, which included stripping out a lot of tags my code needed and overwriting other HTML tags, which was very frustrating.

However, I was able to view the source for the embedded media files in this template and then copy this into my ‘standard plain’ section and this seems to have worked.  There were other issues, though, such as that T4 applies its CSS styles AFTER any locally created styles meaning a lot of my custom styles were being overwritten.  I managed to find a way around this and the section of the page is now working if you preview it in T4.

Unfortunately to get this to work the JSON data needed to be embedded in the JavaScript file rather than loaded in as a separate file.  This is going to make it more difficult for non-technical people to edit the data directly in T4.  In order to do so someone would need to:

  1. Download the ‘gspCode’ file from the Media Library, which T4 unhelpfully converts into a .txt file.
  2. Rename the file to remove the .txt extension (so it ends in .js instead).
  3. Find the data array in the file and make the changes to it.
  4. Validate the data in the handy JSON validator https://jsonlint.com/.
  5. Save the JS file and upload it as a replacement for the item in the Media Library.

With all of this out of the way I was hoping to begin work on the API and front-end for the Books and Borrowing project, and I did manage to make a start on this.  However, many further tweaks and updates came through from Jennifer Smith for the Speak For Yersel system, which we’re intending to send out to selected people next week, and I ended up spending most of the rest of the week on this project instead.  This included several Zoom calls and implementing countless minor tweaks to the website content, including homepage text, updating quiz questions and answer options, help text, summary text, replacing images, changing styles and other such things.  I also updated the maps to set their height dynamically based on the height of the browser window, ensuring that the map and the button beneath it are visible without scrolling (but also including a minimum height so the map never gets too small).  I also made the maps wider and the question area narrower as there was previously quite a lot of wasted space when there was a 50/50 split between the two.
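The height calculation itself is nothing fancy, something along these lines (the selector and the offset reserved for the button are hypothetical):

// Size the map so that it and the button beneath it fit within the browser window, with a sensible minimum height
function resizeMap(){
    var available = $(window).height() - $('#exercise-map').offset().top - 100; // 100px reserved for the button and spacing
    $('#exercise-map').height(Math.max(400, available));
}
$(window).on('resize', resizeMap);
resizeMap();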

I also fixed a bug with the slider-based questions that was only affecting Safari and prevented the ‘next’ button from activating.  This was because the code that listened for the slider changing was set to fire when a slider was clicked on, but for it to work in Safari the event needed to be ‘change’ rather than ‘click’.  I also added in the new dictionary-based question type and added in the questions, although we then took these out again for now as we’d promised the DSL that the embedded school dictionary would only be used by the school children in our pilot.  I also added in a question about whether the user has been to university to the registration page and then cleared out all of the sample data and users that we’d created during our testing before actual users begin using the resource next week.
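The fix itself was tiny, just listening for a different event (the selectors here are hypothetical):

// 'click' never fired reliably for the range input in Safari; 'change' works across browsers
$('.answer-slider').on('change', function(){
    $('.next-button').prop('disabled', false);
});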

Week Beginning 16th May 2022

This week I finished off all of the outstanding work for the Speak For Yersel project. The other members of the team (Jennifer and Mary) are both on holiday so I worked through all of the tasks I had on my ‘to do’ list, although there will certainly be more to do once they are both back at work again.  The tasks I completed were a mixture of small tweaks and larger implementations.  I made tweaks to the ‘About’ page text and changed the intro text to the ‘more give your word’ exercise.  I then updated the age maps for this exercise, which proved to be pretty tricky and time-consuming to implement as I needed to pull apart a lot of the existing code.  Previously these maps showed ‘60+’ and ‘under 19’ data for a question, with different colour markers for each age group showing those who would say a term (e.g. ‘Scunnered’) and grey markers for each age group showing those who didn’t say the term.  We have completely changed the approach now.  The maps now default to showing ‘under 19’ data only, with different colours for each different term.  There is now an option in the map legend to switch to viewing the ‘60+’ data instead.  I added in the text ‘press to view’ to try and make it clearer that you can change the map.  Here’s a screenshot:

I also updated the ‘give your word’ follow-on questions so that they are now rated in a new final page that works the same way as the main quiz.  In the main ‘give your word’ exercise I updated the quiz intro text and I ensured that the ‘darker dots’ explanatory text has now been removed for all maps.  I tweaked a few questions to change their text or the number of answers that are selectable and I changed the ‘sounds about right’ follow-on ‘rule’ text and made all of the ‘rule’ words lower case.  I also made it so that when the user presses ‘check answers’ for this exercise a score is displayed to the right and the user is able to proceed directly to the next section without having to correct their answers.  They still can correct their answers if they want.

I then made some changes to the ‘She sounds really clever’ follow-on.  The index for this is now split into two sections, one for ‘stereotype’ data and one for ‘rating speaker’ data and you can view the speaker and speaker/listener results for both types of data.  I added in the option of having different explanatory text for each of the four perception pages (or maybe just two – one for stereotype data, one for speaker ratings) and when viewing the speaker rating data the speaker sound clips now appear beneath the map.  When viewing the speaker rating data the titles above the sliders are slightly different.  Currently when selecting the ‘speaker’ view the title is “This speaker from X sounds…” as opposed to “People from X sound…”.  When selecting the ‘speaker/listener’ view the title is “People from Y think this speaker from X sounds…” as opposed to “People from Y think people from X sound…”.  I also added a ‘back’ button to these perception follow-on pages so it’s easier to choose a different page.  Finally, I added some missing HTML <title> tags to pages (e.g. ‘Register’ and ‘Privacy’) and fixed a bug whereby the ‘explore more’ map sound clips weren’t working.

With my ‘Speak For Yersel’ tasks out of the way I could spend some time looking at other projects that I’d put on hold for a while.  A while back Eleanor Lawson contacted me about adding a new section to the Seeing Speech website where Gaelic speaker videos and data will be accessible, and I completed a first version this week.  I replicated the Speech Star layout rather than the /r/ & /l/ page layout as it seemed more suitable: the latter only really works for a limited number of records while the former works well with lots more (there are about 150 Gaelic records).  What this means is the data has a tabular layout and filter options.  As with Speech Star you can apply multiple filters and you can order the table by a column by clicking on its header (clicking a second time reverses the order).  I’ve also included the option to open multiple videos in the same window.  I haven’t included the playback speed options as the videos already include the clip at different speeds.  Here’s a screenshot of how the feature looks:

On Thursday I had a Zoom call with Laura Rattray and Ailsa Boyd to discuss a new digital edition project they are in the process of planning.  We had a really great meeting and their project has a lot of potential.  I’ve offered to give technical advice and write any technical aspects of the proposal as and when required, and their plan is to submit the proposal in the autumn.

My final major task for the week was to continue to work on the Ramsay ‘Gentle Shepherd’ data.  I overhauled the filter options that I implemented last week so they now work in a less confusing way when multiple types are selected.  I’ve also imported the updated spreadsheet, taking the opportunity to trim whitespace to cut down on strange duplicates in the filter options.  There are some typos in the spreadsheet that still need to be fixed, though (e.g. we have ‘Glagsgow’ and ‘Glagsow’), plus some dates that still need attention.

I then created an interactive map for the project and have incorporated the data for which there are latitude and longitude values.  As with the Edinburgh Gazetteer map of reform societies (https://edinburghgazetteer.glasgow.ac.uk/map-of-reform-societies/) the number of performances at a venue is displayed in the map marker.  Hover over a marker to see info about the venue.  Click on it to open a list of performances.  Note that when zoomed out it can be difficult to make out individual markers, but we can’t really use clustering as on the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/) because this would get confusing:  we’d have clustered numbers representing the number of markers in a cluster and then individual markers with a number representing the number of performances.  I guess we could remove the number of performances from the marker and just have this in the tooltip and / or popup, but it is quite useful to see all the numbers on the map.  Here’s a screenshot of how the map currently looks:
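The number in each marker is simply a Leaflet divIcon whose HTML is the performance count, roughly as in the sketch below (the class name, data fields and the function that builds the popup content are all hypothetical):

// A circular marker displaying the number of performances at a venue
var marker = L.marker([venue.lat, venue.lng], {
    icon: L.divIcon({
        className: 'performance-marker',
        html: String(venue.performances.length),
        iconSize: [30, 30]
    })
});
marker.bindTooltip(venue.name + ', ' + venue.location); // venue info on hover
marker.bindPopup(buildPerformanceList(venue)); // hypothetical function returning the list of performances
marker.addTo(map);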

I still need to migrate all of this to the University’s T4 system, which I aim to tackle next week.

Also this week I had discussions about migrating an externally hosted project website to Glasgow for Thomas Clancy.  I received a copy of the files and database for the website and have checked over things and all is looking good.  I also submitted a request for a temporary domain and I should be able to get a version of the site up and running next week.  I also regenerated a list of possible duplicate authors in the Books and Borrowing system after the team had carried out some work to remove duplicates.  I will be able to use the spreadsheet I have now to amalgamate duplicate authors, a task which I will tackle next week.

Week Beginning 9th May 2022

I spent most of the week continuing with the Speak For Yersel website, which is now nearing completion.  A lot of my time was spent tweaking things that were already in place, and we had a Zoom call on Wednesday to discuss various matters too.  I updated the ‘explore more’ age maps so they now include markers for young and old who didn’t select ‘scunnered’, meaning people can get an idea of the totals.  I also changed the labels slightly and the new data types have been given two shades of grey and smaller markers, so the data is there but doesn’t catch the eye as much as the data for the selected term.  I’ve updated the lexical ‘explore more’ maps so they now actually have labels and the ‘darker dots’ text (which didn’t make much sense for many maps) has been removed.  Kinship terms now allow for two answers rather than one, which took some time to implement in order to differentiate this question type from the existing ‘up to 3 terms’ option.  I also updated some of the pictures that are used and added in an ‘other’ option to some questions.  I also updated the ‘Sounds about right’ quiz maps so that they display different legends that match the question words rather than the original questionnaire options.  I needed to add in some manual overrides to the scripts that generate the data for use in the site for this to work.

I also added in proper text to the homepage and ‘about’ page.  The former included a series of quotes above some paragraphs of text and I wrote a little script that highlighted each quote in turn, which looked rather nice.  This then led onto the idea of having the quotes positioned on a map on the homepage instead, with different quotes in different places around Scotland.  I therefore created an animated GIF based on some static map images that Mary had created and this looks pretty good.

I then spent some time researching geographical word clouds, which we had been hoping to incorporate into the site.  After much Googling it would appear that there is no existing solution that does what we want, i.e. take a geographical area and use this as the boundaries for a word cloud, featuring different coloured words arranged at various angles and sizes to cover the area.  One potential solution that I was pinning my hopes on was this one: https://github.com/JohnHenryEden/MapToWordCloud which promisingly states “Turn GeoJson polygon data into wordcloud picture of similar shape.”.  I managed to get the demo code to run, but I can’t get it to actually display a word cloud, even though the specifications for one are in the code.  I’ve tried investigating the code but I can’t figure out what’s going wrong.  No errors are thrown and there’s very little documentation.  All that happens is a map with a polygon area is displayed – no word cloud.

The word cloud aspects of the above are based on another package here: https://npm.io/package/wordcloud and this package allows you to specify a shape to use as an outline for the cloud, and one of the examples shows words taking up the shape of Taiwan: https://wordcloud2-js.timdream.org/#taiwan  However, this is a static image not an interactive map – you can’t zoom into it or pan around it.  One possible solution may be to create images of our regions, generate static word cloud images as with the above and then stitch the images together to form a single static map of Scotland.  This would be a static image, though, and not comparable to the interactive maps we use elsewhere in the website.  Programmatically stitching the individual region images together might also be quite tricky.  I guess another option would be to just allow users to select an individual region and view the static word cloud (dynamically generated based on the data available when the user selects to view it) for the selected region, rather than joining them all together.

I also looked at some further options that Mary had tracked down.  The word cloud on a leaflet map (http://hourann.com/2014/js-devs-dont-get-lost/leaflet-wordcloud.html?sydney) only uses a circle for the boundaries of the word cloud.  All of the code is written around the use of a circle (e.g. using diameters to work out placement) so couldn’t really be adapted to work with a complex polygon.  We could work out a central point for each region and have a circular word cloud positioned at that point, but we wouldn’t be able to make the words fill the entire region.  The second of Mary’s links (https://www.jasondavies.com/wordcloud/) as far as I can tell is just a standard word cloud generator with no geographical options.  The third option (https://github.com/peterschretlen/leaflet-wordcloud) has no demo or screenshot or much information about it and I’m afraid I can’t get it to work.

The final option (https://dagjomar.github.io/Leaflet.ParallaxMarker/) is pretty cool but it’s not really a word cloud as such.  Instead it’s a bunch of labels set to specific lat/lng points and given different levels which sets their size and behaviour on scroll.  We could use this to set the highest rated words to the largest level with lower rated words at lower level and position each randomly in a region, but it’s not really a word cloud and it would be likely that words would spill over into neighbouring regions.

Based on the limited options that appear to be out there, I think creating a working, interactive map-based word cloud would be a research project in itself and would take far more time than we have available.

Later on in the week Mary sent me the spreadsheet she’d been working on to list settlements found in postcode areas and to link these areas to the larger geographical regions we use.  This is exactly what we needed to fill in the missing piece in our system and I wrote a script that successfully imported the data.  For our 411 areas we now have 957 postcode records and 1638 settlement records.  After that I needed to make some major updates to the system.  Currently a person is associated with an area (e.g. ‘Aberdeen Southwest’) but I need to update this so that a person is associated with a specific settlement (e.g. ‘Ferryhill, Aberdeen’), which is then connected to the area and from the area to one of our 14 regions (e.g. ‘North East (Aberdeen)’).

I updated the system to make these changes and updated the ‘register’ form, which now features an autocomplete for the location – start typing a place and all matches appear.  Behind the scenes the location is saved and connected up to areas and regions, meaning we can now start generating real data, rather than a person being assigned a random area.  The perception follow-on now connects the respondent up with the larger region when selecting ‘listener is from’, although for now some of this data is not working.
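The autocomplete is jQuery UI’s, wired up to a lookup script that returns matching settlements.  The endpoint and field names in this sketch are hypothetical:

// Location autocomplete on the registration form
$('#location').autocomplete({
    minLength: 2,
    source: '/speak-for-yersel/api/settlements', // hypothetical endpoint returning [{label, value, areaId}, ...]
    select: function(event, ui){
        $('#area-id').val(ui.item.areaId); // store the linked area behind the scenes
    }
});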

I then needed to further update the registration page to add in an ‘outside Scotland’ option so people who did not grow up in Scotland can use the site.  Adding in this option actually broke much of the site:  registration requires an area with a geoJSON shape associated with the selected location otherwise it fails and the submission of answers requires this shape in order to generate a random marker point and this then failed when the shape wasn’t present.  I updated the scripts to fix these issues, meaning an answer submitted by an ‘outside’ person has a zero for both latitude and longitude, but then I also needed to update the script that gets the map data to ensure that none of these ‘outside’ answers were returned in any of the data used in the site (both for maps and for non-map visualisations such as the sliders).  So, much has changed and hopefully I haven’t broken anything whilst implementing these changes.  It does now mean that ‘outside’ people can now be included and we can export and use their data in future, even though it is not used in the current site.

Further tweaks I implemented this week included: changing the font sizes of some headings and buttons; renaming the ‘activities’ and ‘more’ pages as requested; adding ‘back’ buttons from all ‘activity’ and ‘more’ pages back to the index pages; adding an intro page to the click exercise as previously it just launched into the exercise whereas all others have an intro.  I also added summary pages to the end of the click and perception activities with links through to the ‘more’ pages and removed the temporary ‘skip to quiz’ option.  I also added progress bars to the click and perception activities.  Finally, I switched the location of the map legend from top right to top left as I realised when it was in the top right it was always obscuring Shetland whereas there’s nothing in the top left.  This has meant I’ve had to move the region label to the top right instead.

Also this week I continued to work on the Allan Ramsay ‘Gentle Shepherd’ performance data.  I added in faceted browsing to the tabular view, adding in a series of filter options for location, venue, adaptor and such things.  You can select any combination of filters (e.g. multiple locations and multiple years in combination).  When you select an item of one sort the limit options of other sorts update to only display those relevant to the limited data.  However, the display of limiting options can get a bit confusing once multiple limiting types have been selected.  I will try and sort this out next week.  There are also multiple occurrences of items in the limiting options (e.g. two Glasgows) because the data has spaces in some rows (‘Glasgow’ vs ‘Glasgow ‘) and I’ll need to see about trimming these out next time I import the data.

Also this week I arranged for the old DSL server to be taken offline, as the new website has now been operating successfully for two weeks.  I also had a chat with Katie Halsey about timescales for the development of the Books and Borrowing front-end.  I imported a new disordered paediatric speech dataset into the Speech Star website, which included around double the number of records, new video files and a new ‘speaker code’ column.  Finally, I participated in a Zoom call for the Scottish Place-Names database where we discussed the various place-name surveys that are in progress and the possibility of creating an overarching search across all systems.

Week Beginning 11th June 2018

I met with Matthew Creasey from English Literature this week to discuss a project website for his recently funded ‘Decadence and Translation Network’ project.  The project website is going to be a fairly straightforward WordPress site, but there will also be a digital edition hosted through it, which will be sort of similar to what I did for the Woolf short story for the New Modernist Editing project (https://nme-digital-ode.glasgow.ac.uk/).  I set up an initial site for Matthew and will work on it further once he receives the images he’d like to use in the site design.

I also gave Craig Lamont some further help with getting access to Google Analytics for the Ramsay project, and spoke to Quintin Cutts in Computing Science about publishing an iOS app they have created.  I also met with Graeme Cannon to discuss AHRC Data Management Plans, as he’s been asked to contribute to one and hasn’t worked with a plan yet.  I also made a couple of minor fixes to the RNSN timeline and storymap pages and updated the ‘attribution’ text on the REELS map.  There’s quite a lot of text relating to map attribution and copyright so instead of cluttering up the bottom of the maps I’ve moved everything into a new pop-up window.  In addition to the statements about the map tilesets I’ve also added in a statement about our place-name data, the copyright statement that’s required for the parish boundaries, a note about Leaflet and attribution for the map icons too.  I think it works a lot better.

Other than these issues I mainly focussed on three projects this week.  For the SCOSYA project I tackled an issue with the ‘or’ search, which was causing the search to not display results in a properly categorised manner when the ‘rated by’ option was set to more than one.  It took a while to work through the code, and my brain hurt a bit by the end of it, but thankfully I managed to figure out what the problem was.  Basically when ‘rated by’ was set to 1 the code only needed to match a single result for a location.  If it found one that matched then the code stopped looking any further.  However, when multiple results need to be found that match, the code didn’t stop looking, but instead had to cycle through all other results for the location, including those for other codes.  So if it found two matches that met the criteria for ‘A1’ it would still go on looking through the ‘A2’ results as well, would realise these didn’t match and set the flag to ‘N’.  I was keeping a count of the number of matches but this part of the code was never reached if the ‘N’ flag was set.  I’ve now updated how the checking for matches works and thankfully the ‘or’ search now works when you set ‘rated by’ to be more than 1.
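I can’t really reproduce the actual atlas code here, but the general shape of the fix was to keep counting matching results for the code in question across all of a location’s data and only decide at the end, ignoring results for other codes rather than treating them as failures.  A much-simplified sketch of the pattern, with all of the names invented for illustration:

// Does this location have at least 'ratedBy' results matching the given code and rating threshold?
function codeMatches(locationResults, code, minRating, ratedBy){
    var matches = 0;
    for(var i = 0; i < locationResults.length; i++){
        // results for other codes are simply skipped rather than marking the location as a non-match
        if(locationResults[i].code === code && locationResults[i].rating >= minRating){
            matches++;
        }
    }
    return matches >= ratedBy;
}

// for an 'or' search the location is included if any of the selected codes matches
var include = selectedCodes.some(function(code){
    return codeMatches(locationResults, code, minRating, ratedBy);
});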

For the reworking of the Seeing Speech and Dynamic Dialects websites I decided to focus on the accent map and accent chart features of Dynamic Dialects.  For the map I switched to using the Leaflet.js mapping library rather than Google Maps.  This is mainly because I prefer Leaflet: you can use it with lots of different map tilesets, data doesn’t have to be posted to Google for the map to work, and there are other benefits, such as the fact that you can zoom in and out with the scrollwheel of a mouse without having to also press ‘ctrl’, which gets really annoying with the existing map.  I’ve removed the option to switch from map to satellite and streetview as these didn’t really seem to serve much purpose.  The new base map is a free map supplied by Esri (a big GIS company).  It isn’t cluttered up with commercial map markers when zoomed in, unlike Google.

You can now hover over a map marker to view the location and area details.  Clicking on a marker opens up a pop-up containing all of the information about the speaker and links to the videos as ‘play’ buttons.  Note that unlike the existing map, buttons for sounds only appear if there are actually videos for them.  E.g. on the existing map for Oregon there are links for every video type, but only one (spontaneous) actually works.

Clicking on a ‘play’ button brings down the video overlay, as with the other pages I’ve redeveloped.  As with other pages, the URL is updated to allow direct linking to the video.  Note that any map pop-up you have open does not remain open when you follow such a link, but as the location appears in the video overlay header it should be easy for a user to figure out where the relevant marker is when they close the overlay.

For the Accent Chart page I’ve added in some filter options, allowing you to limit the display of data to a particular area, age range and / or gender.  These options can be combined, and also bookmarked / shared / cited (e.g. so you can follow a link to view only those rows where the area is ‘Scotland’ the age range is ’18-24’ and the gender is ‘F’).  I’ve also added a row hover-over colour to help you keep your eye on a row.  As with other pages, click on the ‘play’ button and a video overlay drops down.  You can also cite / bookmark specific videos.

I’ve made the table columns on this page as narrow as possible, but it’s still a lot of columns and unless you have a very wide monitor you’re going to have to scroll to see everything.  There are two ways I can set this up.  Firstly the table area of the page itself can be set to scroll horizontally.  This keeps the table within the boundaries of the page structure and looks more tidy, but it means you have to vertically scroll to the bottom of the table before you see the scrollbar, which is probably going to get annoying and may be confusing.  The alternative is to allow the table to break out of the boundaries of the page.  This looks messier, but the advantage is the horizontal scrollbar then appears at the bottom of your browser window and is always visible, even if you’re looking at the top section of the table.  I’ve asked Jane and Eleanor how they would prefer the page to work.

My final project of the week was the Historical Thesaurus.  I spent some time working on the new domain names we’re setting up for the thesaurus, and on Thursday I attended the lectures for the new lectureship post for the Thesaurus.  It was very interesting to hear the speakers and their potential plans for the Thesaurus in future, but obviously I can’t say much more about the lectures here.  I also attended the retirement do for Flora Edmonds on Thursday afternoon.  Flora has been a huge part of the thesaurus team since the early days of its switch to digital and I think she had a wonderful send-off from the people in Critical Studies she’s worked closely with over the years.

On Friday I spent some time adding the mini timelines to the search results page.  I haven’t updated the ‘live’ page yet but here’s an image showing how they will look:

It’s been a little tricky to add the mini-timelines in as the search results page is structured rather differently to the ‘browse’ page.  However, they’re in place now, both for general ‘word’ results  and for words within the ‘Recommended Categories’ section.  Note that if you’ve turned mini-timelines off in the ‘browse’ page they stay off on this page too.

We will probably want to add a few more things in before we make this page live.  We could add in the full timeline visualisation pop-up, that I could set up to feature all search results, or at least the results for the current page of search results.  If I did this I would need to redevelop the visualisation to try and squeeze in at least some of the category information and the pos, otherwise the listed words might all be the same.  I will probably try to add in each word’s category and pos, which should provide just enough context, although subcat names like ‘pertaining to’ aren’t going to be very helpful.

We will also need to consider adding in some sorting options.  Currently the results are ordered by ‘Tier’ number, but I could add in options to order results by ‘first attested date’, ‘alphabetically’ and ‘length of attestation’.  ‘Alphabetically’ isn’t going to be hugely useful if you’re looking at a page of ‘sausage’ results, but will be useful for wildcard searches (e.g. ‘*sage’) and other searches like dates.  I would imagine ordering results by ‘length of attestation’ is going to be rather useful in picking out ‘important’ words.  I’ll hopefully have some time to look into these options next week.

Week Beginning 4th June 2018

I’d taken Friday off as a holiday this week, and I was also off on Monday afternoon to attend a funeral.  Despite being off for a day and a half I still managed to achieve quite a lot this week.  Over the weekend Thomas Clancy had alerted me to another excellent resource that has been developed by the NLS Maps people, which plots the boundaries of all parishes in Scotland and which you can access here:  http://maps.nls.uk/geo/boundaries/#zoom=10.671666666666667&lat=55.8481&lon=-2.5155&point=0,0.  For REELS we had been hoping to incorporate parish boundaries into our Berwickshire map but didn’t know where to get the coordinates from, and there wasn’t enough time in the project for us to manually create the data.  I emailed Chris Fleet at the NLS to ask where they’d got their data from, and whether we might be able to access the Berwickshire bits of it.  Chris very helpfully replied to say the boundaries were created by the James Hutton Institute and are hosted on the Scottish government’s Scottish Spatial Data Infrastructure Metadata Portal (see https://www.spatialdata.gov.scot/geonetwork/srv/eng/catalog.search#/metadata/c1d34a5d-28a7-4944-9892-196ca6b3be0c).  The data is free to use, so long as a copyright statement is displayed, and there’s even an API through which the data can be grabbed (see here: http://sedsh127.sedsh.gov.uk/arcgis/rest/services/ScotGov/AreaManagement/MapServer/1/query).  The data can even be outputted in a variety of formats, including shape files, JSON and GeoJSON.  I decided to go for GeoJSON, as this seemed like a pretty good fit for the Leaflet mapping library we use.

Initially I used the latitude and longitude coordinates for one parish (Abbey St Bathans) and added this to the map.  Unfortunately the polygon shape didn’t appear on the map, even though no errors were returned.  This was rather confusing until I realised that whereas Leaflet tends to use latitude and then longitude as the order of the input data, GeoJSON is set to have longitude first and then latitude.  This meant my polygon boundaries had been added to my map, just in a completely different part of the world!  It turns out that in order to use GeoJSON data in Leaflet it’s better to use Leaflet’s in-built ‘L.geoJSON’ functions (See https://leafletjs.com/examples/geojson/).  With this in place, Leaflet very straightforwardly plotted out the boundaries of my sample parish.
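Once the coordinate order penny dropped, plotting a parish really is just a couple of lines (the styling below is just an example rather than what the site actually uses):

// GeoJSON stores coordinates as [longitude, latitude]; L.geoJSON handles that ordering for you
var parishLayer = L.geoJSON(abbeyStBathansGeoJSON, {
    style: { color: '#e8820c', weight: 2, fill: false } // example styling only
}).addTo(map);
map.fitBounds(parishLayer.getBounds());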

I had intended to write a little script that would then grab the GeoJSON data for each of the parishes in our system from the API mentioned above.  However, I noticed that when passing a text string to the API it does a partial match, and can return multiple parishes.  For example, our parish ‘Duns’ also brings back the data for ‘Dunscore’ and ‘Dunsyre’.  I figured therefore that it would be safer if I just manually grabbed the data and inserted it directly into our ‘parishes’ database.  This all worked perfectly, other than for the parish of Coldingham, which is a lot bigger than the rest, meaning the JSON data was also a lot larger.  The data was bigger than a server setting allowed me to upload to MySQL, but thankfully Chris McGlashan was able to sort that out for me.

With all of the parish data in place I styled the lines a sort of orange colour that would show up fairly well on all of our base maps.  I also updated the ‘Display options’ to add in facilities to turn the boundary lines on or off.  This also meant updating the citation, bookmarking and page reloading code too.  I also wanted to add in the three-letter acronyms for each parish.  It turns out that adding plain text directly to a Leaflet map is not actually possible, or at least not easily.  Instead the text needs to be added as a tooltip on an invisible marker, and the tooltip then has to be set as permanently visible, and then styled to remove the bubble around the text.  This still left the little arrow pointing to the marker, but a bit of Googling informed me that if I set the tooltip’s ‘direction’ to ‘center’ the arrowheads aren’t shown.  It all feels like a bit of a hack, and I hope that in future it’s a lot easier to just add text to a map in a more direct manner.  However, I was glad to figure a solution out, and once I had manually grabbed the coordinates where I wanted the parish labels to appear I was all set.  Here’s an example of how the map looks with parish boundaries and labels turned on:
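For the record, the label ‘hack’ boils down to something like this (the coordinates and class name are hypothetical, and the CSS class is what strips out the tooltip’s background and border):

// Add a text-only label by attaching a permanent tooltip to an invisible marker
L.marker([55.86, -2.38], { opacity: 0 }) // hypothetical label position for a parish
    .bindTooltip('ABB', {
        permanent: true,
        direction: 'center', // suppresses the pointer arrow
        className: 'parish-label' // CSS removes the tooltip's background and border
    })
    .addTo(map);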

I had some other place-name related things to do this week.  On Wednesday afternoon I met with Carole, Simon and Thomas to discuss the Scottish Survey of Place-names, which I will be involved with in some capacity.  We talked for a couple of hours about how the approach taken for REELS might be adapted for other surveys, and how we might connect up multiple surveys to provide Scotland-wide search and browse facilities.  I can’t really say much more about it for now, but it’s good that such issues are being considered.

I spent about a day this week continuing to work on the new pages and videos for the Seeing Speech project.  I fixed a formatting issue with the ‘Other Symbols’ table in the IPA Charts that was occurring in Internet Explorer, which Eleanor had noticed last week.  I also uploaded the 16 new videos for /l/ and /r/ sounds that Eleanor had sent me, and created a new page for accessing these.  As with the IPA Charts page I worked on last week, the videos on this page open in an overlay, which I think works pretty well.  I also noticed that the videos kept on playing if you closed an overlay before the video finished, so I updated the code to ensure that the videos stop when the overlay is closed.

Other than these projects, I investigated an issue relating to Google Analytics that Craig Lamont was encountering for the Ramsay project, and I spent the rest of my time returning to the SCOSYA project.  I’d met with Gary last week and he’d suggested some further updates to the staff Atlas page.  It took a bit of time to get back into how the atlas works as it’s been a long time since I last worked on it, but once I’d got used to it again, and had created a new test version of the atlas for me to play with without messing up Gary’s access, I decided to try and figure out whether it would be possible to add in a ‘save map as image’ feature.  I had included this before, but as the atlas uses a mixture of image types (bitmap, SVG, HTML elements) for base layers and markers the method I’d previously used wasn’t saving everything.

However, I found a plugin called ‘easyPrint’ (https://github.com/rowanwins/leaflet-easyPrint) that does seem to be able to save everything.  By default it prints the map to a printer (or to PDF), but it can also be set up to ‘print’ to a PNG image.  It is a bit clunky, sometimes does weird things and only works in Chrome and Firefox (and possibly Safari, I haven’t tried, but definitely not MS IE or Edge).  It’s not going to be suitable for inclusion on the public atlas for these reasons, but it might be useful to the project team as a means of grabbing screenshots.

With the plugin added a new ‘download’ icon appears above the zoom controls in the bottom right.  If you move your mouse over this, some options appear that allow you to save an image at a variety of sizes (current, A4 portrait, A4 landscape and A3 portrait).  The ‘current’ size should work without any weirdness, but the other ones have to reload the page to bring in map tiles that are beyond what you currently see.  This is where the weirdness comes in, as follows:

  1. The page will display a big white area instead of the map while the saving of the image takes place.  This can take a few seconds.
  2. Occasionally the map tiles don’t load successfully and you get white areas in the image instead of the map.  If this happens, pan around the map a bit to load in the tiles and then try saving the image again.
  3. Very occasionally the map will have completely repositioned itself when it reloads, and the saved image will be of this new location.  I’m not sure why this happens, but if it does, reposition the map and try again and things should work.

Once the processing is complete the image will be saved as a PNG.  If you select the ‘A3’ option the image will actually cover a much larger area than you see on your screen.  I think this will prove useful for getting higher-resolution images and also for including Shetland, two issues Gary had been struggling with.  Here’s a large image with Shetland in place:

That’s all for this week.


Week Beginning 30th April 2018

I continued to work on the REELS website for a lot of this week, and attended a team meeting for the project on Wednesday afternoon.  In the run-up to the meeting I worked towards finalising the interface for the map.  Previously I’d just been using colour schemes and layouts I’d taken from previous projects I’d worked on, but I needed to develop an interface that was right for the current project.  I played around with some different colour schemes before settling on one that’s sort of green and blue, with red as a hover-over.  I also updated the layout of the textual list of records to make the buttons display a bit more nicely, and updated the layout of the record page to place the description text above the map.  Navigation links and buttons also now appear as buttons across the top of pages, whereas previously they were all over the place.  Here’s an example of the record page:

The team meeting was really useful, as Simon had some helpful feedback on the CMS and we all went through the front-end and discussed some of the outstanding issues.  By the end of the meeting I had accumulated quite a number of items to add to my ‘to do’ list, and I worked my way through these during the rest of the week.  These included:

  1. Unique record IDs now appear in the cross reference system in the CMS, so the team can more easily figure out which place-name to select if there is more than one with the same name.  I’ve also added this unique record ID to the top of the ‘edit place’ page.
  2. I’ve added cross references to the front-end record page, as I’d forgotten to add these in before.
  3. I’ve replaced the ‘export’ menu item in the CMS with a new ‘Tools’ menu item.  This page includes a link to the ‘export’ page plus links to the new pages I’m adding in.
  4. I’ve created a script, linked to from the ‘tools’ page, that lists all duplicate elements within each language.  Each duplicate is listed together with its unique ID, the number of current and historical names it is associated with, and a link through to the ‘edit element’ page.
  5. The ‘edit element’ page now lists all place-names and historical forms that the selected element is associated with.  These are links leading to the ‘manage elements’ page for the item.
  6. When adding a new element the element ID appears in the autocomplete in addition to the element and language, hopefully making it easier to ensure you link to the correct element.
  7. ‘Description’ has been changed to ‘analysis’ in both the CMS and the API (for the CSV / JSON downloads).
  8. ‘Proper name’ language has been changed to ‘Personal name’.
  9. The new roles ‘affixed name’ and ‘simplex’ have been added.
  10. The new part of speech ‘Numeral’ has been added.
  11. I’ve created a script that lists all elements that have a role of ‘other’, linked to from the ‘tools’ menu in the CMS.  The page lists the element that has this role, its language, the ID and name of the place-name this appears in, and a link to the ‘manage elements’ page for the item.  For historical forms the historical form name also appears.
  12. I’ve fixed the colour of the highlighted item in the elements glossary when reached via a link on the record page.
  13. I’ve changed the text in the legend for grey dots from ‘Other place-names’ to ‘unselected’.  We had decided on ‘Unselected place-names’ but this made the box too wide and I figured ‘unselected’ worked just as well (we don’t say ‘Settlement place-names’, after all, but just ‘Settlement’).
  14. I’ve removed place-name data from the API that doesn’t appear in the front-end.  This is basically just the additional element fields.
  15. I’ve checked that records that are marked as ‘on website’ but don’t appear on landranger maps are set to appear on the website.  They weren’t, but they are now.
  16. I’ve also made the map on the record page use the base map you had selected on the main map, rather than always loading the default view.  Similarly, if you change the base map on the record page and then return to the map using the ‘return’ button, the main map now uses that base map too (there’s a rough sketch of the approach after this list).
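
As item 16 mentions, the base-map choice now carries over between the main map and the record page.  The sketch below shows one way of doing this with localStorage and Leaflet’s ‘baselayerchange’ event; the tile URLs, layer names and storage key are all placeholders, and the real site may well pass the choice around differently (e.g. via a URL parameter):

// Remember the chosen base map and restore it on page load.
// Tile URLs, layer names and the storage key are placeholders.
var defaultLayer = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png');
var otherLayer = L.tileLayer('https://tiles.example.org/{z}/{x}/{y}.png');
var baseMaps = { 'Default': defaultLayer, 'Other': otherLayer };

var map = L.map('map', { center: [55.75, -2.4], zoom: 10 });

// Restore the previously selected base map, falling back to the default.
var saved = localStorage.getItem('baseMapChoice');
(baseMaps[saved] || defaultLayer).addTo(map);

L.control.layers(baseMaps).addTo(map);

// Store the choice whenever the user switches base layers.
map.on('baselayerchange', function (e) {
  localStorage.setItem('baseMapChoice', e.name);
});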

I also investigated some issues with the Export script that Daibhidh had reported.  It turned out that these were being caused by Excel.  The output file is a comma separated value file encoded in UTF-8.  I’d included instructions on how to import the file into Excel to allow UTF-8 characters to display properly, but for some reason this method was causing some of the description fields to be incorrectly split up.  If, instead of following the import instructions, the file was opened directly in Excel, the fields were split into their proper columns correctly, but you ended up with a bunch of garbled UTF-8 characters.

After a bit of research I figured out a way for the CSV file to be opened directly in Excel with the UTF-8 characters intact (and with the columns not getting split up where they shouldn’t).  Once I set my script to include a byte order mark (BOM) at the top of the file, Excel knew to render the UTF-8 characters properly.
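
The export script itself isn’t reproduced here, but the fix amounts to writing the three BOM bytes (EF BB BF) before the CSV content.  Here’s a minimal sketch of the idea in Node-style JavaScript; the filename and CSV content are placeholders:

// Prepend the UTF-8 byte order mark so Excel detects the encoding when the
// CSV is opened directly.  Filename and content are placeholders.
const fs = require('fs');

const csv = 'ID,Place-name,Analysis\n1,Example place,Example analysis\n';
fs.writeFileSync('export.csv', '\uFEFF' + csv, { encoding: 'utf8' });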

In addition to the REELS project, I attended an IT Services meeting on Wednesday morning.  It was billed as a ‘Review of IT Support for Researchers’ meeting, but in reality the focus of pretty much the whole meeting was on the proposal for the high performance compute cluster, with most of the discussion being about the sorts of hardware setup it should feature.  This is obviously very important for researchers dealing with petabytes and exabytes of data, and there were heated debates about whether there were too many GPUs when CPUs would be more useful (and vice versa), but really this isn’t particularly important for anything I’m involved with.  The other sections of the agenda (training, staff support etc.) were also entirely focussed on HPC and running intensive computing jobs, not on things like web servers and online resources.  I’m afraid there wasn’t really anything I could contribute to the discussions.

I did learn a few interesting things, though.  IT Services are going to start offering a training course in R, which might be useful.  Machine Learning is very much considered the next big thing and is already being used quite heavily in other parts of the University; it works better on GPUs than on CPUs, and there are apparently some quite easy-to-use Machine Learning packages out there now.  Google has an online tool called Colaboratory (https://colab.research.google.com) for Machine Learning education and research, which might be useful to investigate.  IT Services also offer Unix tutorials here: http://nyx.cent.gla.ac.uk/unix/ and other help documentation about HPC, R and other software here: http://nyx.cent.gla.ac.uk/unix/ These don’t seem to be publicised anywhere, but might be useful.

I also worked on a number of other projects this week, including creating a timeline feature based on data about the Burns song ‘Afton Water’ that Brianna had sent me for the RNSN project.  I created this using the timeline.js library (https://timeline.knightlab.com/), which is a great library and really easy to use.  I also responded to a query about some maps for the Ramsay AHRC project, which is now underway.  Also, Jane and Eleanor got back to me with some feedback on my mock-up designs for the new Seeing Speech website.  They have decided on a version that is very similar in layout to the old site, and they suggested several further tweaks.  I created a new mock-up with these tweaks in place, which they both seem happy with.  Once they have worked a bit more on the content of the site I will be able to begin the full migration to the new design.
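
For reference, the basic shape of a timeline.js (TimelineJS3) setup looks something like the sketch below, assuming the library’s CSS and JavaScript are already loaded and the page contains a ‘timeline-embed’ div.  The event data here is a made-up placeholder rather than the real ‘Afton Water’ data Brianna supplied:

// Build a timeline from a JavaScript data object.  The title and event below
// are illustrative placeholders.
var timelineData = {
  title: {
    text: { headline: 'Afton Water', text: 'Timeline title text goes here' }
  },
  events: [
    {
      start_date: { year: 1791 },
      text: { headline: 'Example event', text: 'Placeholder description.' }
    }
  ]
};
var timeline = new TL.Timeline('timeline-embed', timelineData);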