Week Beginning 11th June 2018

I met with Matthew Creasey from English Literature this week to discuss a project website for his recently funded ‘Decadence and Translation Network’ project.  The project website is going to be a fairly straightforward WordPress site, but there will also be a digital edition hosted through it, which will be sort of similar to what I did for the Woolf short story for the New Modernist Editing project (https://nme-digital-ode.glasgow.ac.uk/).  I set up an initial site for Matthew and will work on it further once he receives the images he’d like to use in the site design.

I also gave Craig Lamont some further help with getting access to Google Analytics for the Ramsay project, and spoke to Quintin Cutts in Computing Science about publishing an iOS app they have created.  I also met with Graeme Cannon to discuss AHRC Data Management Plans, as he’s been asked to contribute to one and hasn’t worked with a plan before.  I also made a couple of minor fixes to the RNSN timeline and storymap pages and updated the ‘attribution’ text on the REELS map.  There’s quite a lot of text relating to map attribution and copyright, so instead of cluttering up the bottom of the maps I’ve moved everything into a new pop-up window.  In addition to the statements about the map tilesets I’ve also added a statement about our place-name data, the copyright statement that’s required for the parish boundaries, a note about Leaflet and attribution for the map icons.  I think it works a lot better.

Other than these issues I mainly focussed on three projects this week.  For the SCOSYA project I tackled an issue with the ‘or’ search that was causing results not to be categorised properly when the ‘rated by’ option was set to more than one.  It took a while to work through the code, and my brain hurt a bit by the end of it, but thankfully I managed to figure out what the problem was.  Basically, when ‘rated by’ was set to 1 the code only needed to match a single result for a location; if it found one that matched, it stopped looking any further.  However, when multiple matching results needed to be found, the code couldn’t stop early and instead cycled through all of the other results for the location, including those for other codes.  So if it found two matches that met the criteria for ‘A1’ it would still go on looking through the ‘A2’ results as well, realise these didn’t match and set the flag to ‘N’.  I was keeping a count of the number of matches, but that part of the code was never reached once the ‘N’ flag had been set.  I’ve now updated how the checking for matches works and thankfully the ‘or’ search now works when ‘rated by’ is set to more than 1.
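For anyone curious, the shape of the fix boils down to something like the sketch below.  This is a TypeScript illustration with made-up names (Rating, code, score, ratedBy and so on), not the project’s actual code: the idea is simply to keep a count of matches per code, so that ratings for other codes can never flip a matching location back to ‘no match’.

```typescript
interface Rating {
  location: string;
  code: string;      // e.g. 'A1', 'A2'
  score: number;
}

// Count matches per code rather than bailing out with a single yes/no flag,
// so results for codes outside the query can't reset the outcome to 'N'.
function locationMatchesOr(
  ratings: Rating[],
  codes: string[],
  minScore: number,
  ratedBy: number
): boolean {
  const counts = new Map<string, number>();
  for (const r of ratings) {
    if (codes.includes(r.code) && r.score >= minScore) {
      counts.set(r.code, (counts.get(r.code) ?? 0) + 1);
    }
    // Ratings for other codes are simply ignored rather than treated as failures.
  }
  // 'Or' search: at least one requested code must reach the 'rated by' threshold.
  return codes.some(code => (counts.get(code) ?? 0) >= ratedBy);
}

// Example: a location matches an A1-or-A2 search rated by at least 2 speakers.
const match = locationMatchesOr(
  [
    { location: 'Ayr', code: 'A1', score: 4 },
    { location: 'Ayr', code: 'A1', score: 5 },
    { location: 'Ayr', code: 'A2', score: 1 },
  ],
  ['A1', 'A2'], 3, 2
);
console.log(match); // true
```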

For the reworking of the Seeing Speech and Dynamic Dialects websites I decided to focus on the accent map and accent chart features of Dynamic Dialects.  For the map I switched to using the Leaflet.js mapping library rather than Google Maps.  This is mainly because I prefer Leaflet: you can use it with lots of different map tilesets, data doesn’t have to be posted to Google for the map to work, and you can zoom in and out with the scroll wheel of a mouse without also having to press ‘ctrl’, which gets really annoying with the existing map.  I’ve also removed the option to switch from map to satellite and streetview, as these didn’t really seem to serve much purpose.  The new base map is a free map supplied by Esri (a big GIS company), and unlike Google it isn’t cluttered up with commercial map markers when zoomed in.
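Setting this up in Leaflet only takes a few lines.  Here’s a sketch; the div id and the particular free Esri tileset are assumptions on my part, and the attribution text would need to match whichever tileset is actually used:

```typescript
import * as L from 'leaflet';

// Scroll-wheel zoom is built into Leaflet; no 'ctrl' key required.
const map = L.map('accent-map', { scrollWheelZoom: true }).setView([55.95, -3.19], 5);

// One of Esri's freely available tilesets; attribution is still required.
L.tileLayer(
  'https://server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer/tile/{z}/{y}/{x}',
  { attribution: 'Tiles &copy; Esri', maxZoom: 18 }
).addTo(map);
```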

You can now hover over a map marker to view the location and area details.  Clicking on a marker opens up a pop-up containing all of the information about the speaker and links to the videos as ‘play’ buttons.  Note that unlike the existing map, buttons for sounds only appear if there are actually videos for them.  E.g. on the existing map for Oregon there are links for every video type, but only one (spontaneous) actually works.
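Roughly, the markers are wired up as in the sketch below; the Speaker shape and field names are illustrative rather than the site’s actual data structure, but they show the hover tooltip, the click pop-up and the ‘only show buttons for videos that exist’ behaviour:

```typescript
import * as L from 'leaflet';

// Illustrative speaker record; the real data model will differ.
interface Speaker {
  lat: number;
  lng: number;
  location: string;
  area: string;
  videos: { type: string; url: string }[];
}

function addSpeakerMarker(map: L.Map, s: Speaker): void {
  const marker = L.marker([s.lat, s.lng]).addTo(map);

  // Hover: location and area details.
  marker.bindTooltip(`${s.location} (${s.area})`);

  // Click: pop-up with a 'play' button only for videos that actually exist.
  const buttons = s.videos
    .map(v => `<button class="play" data-video="${v.url}">${v.type}</button>`)
    .join(' ');
  marker.bindPopup(`<strong>${s.location}</strong><br>${buttons}`);
}
```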

Clicking on a ‘play’ button brings down the video overlay, as with the other pages I’ve redeveloped.  As with other pages, the URL is updated to allow direct linking to the video.  Note that any map pop-up you have open does not remain open when you follow such a link, but as the location appears in the video overlay header it should be easy for a user to figure out where the relevant marker is when they close the overlay.
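The direct linking just comes down to keeping the current video in the URL.  A sketch, assuming a ‘video’ query parameter (the parameter name and function are mine for illustration, not the site’s actual code):

```typescript
// Put the current video in the address bar so the overlay can be bookmarked or shared.
function showVideoOverlay(videoId: string, updateUrl = true): void {
  if (updateUrl) {
    const url = new URL(window.location.href);
    url.searchParams.set('video', videoId);
    history.pushState({ videoId }, '', url.toString());
  }
  // ...reveal the overlay element and start the requested video here...
}

// On page load, re-open the overlay if the page was reached via a direct link.
const linked = new URLSearchParams(window.location.search).get('video');
if (linked) {
  showVideoOverlay(linked, false);
}
```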

For the Accent Chart page I’ve added in some filter options, allowing you to limit the display of data to a particular area, age range and / or gender.  These options can be combined, and also bookmarked / shared / cited (e.g. so you can follow a link to view only those rows where the area is ‘Scotland’, the age range is ’18-24’ and the gender is ‘F’).  I’ve also added a row hover-over colour to help you keep your eye on a row.  As with other pages, clicking on the ‘play’ button drops down a video overlay.  You can also cite / bookmark specific videos.
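Reading the filters from the query string is what makes the filtered views bookmarkable.  A sketch with assumed parameter names (area, age, gender) and an illustrative row shape:

```typescript
// Illustrative row shape for the accent chart; field names are assumptions.
interface ChartRow {
  area: string;
  ageRange: string;
  gender: string;
}

// Read the combinable filter options from the query string so that filtered
// views can be bookmarked, shared or cited.
function applyFilters(rows: ChartRow[]): ChartRow[] {
  const params = new URLSearchParams(window.location.search);
  const area = params.get('area');
  const age = params.get('age');
  const gender = params.get('gender');
  return rows.filter(r =>
    (!area || r.area === area) &&
    (!age || r.ageRange === age) &&
    (!gender || r.gender === gender)
  );
}
```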

I’ve made the table columns on this page as narrow as possible, but it’s still a lot of columns, and unless you have a very wide monitor you’re going to have to scroll to see everything.  There are two ways I can set this up.  Firstly, the table area of the page itself can be set to scroll horizontally.  This keeps the table within the boundaries of the page structure and looks tidier, but it means you have to scroll vertically to the bottom of the table before you see the scrollbar, which is probably going to get annoying and may be confusing.  The alternative is to allow the table to break out of the boundaries of the page.  This looks messier, but the advantage is that the horizontal scrollbar then appears at the bottom of your browser window and is always visible, even if you’re looking at the top section of the table.  I’ve asked Jane and Eleanor how they would prefer the page to work.

My final project of the week was the Historical Thesaurus.  I spent some time working on the new domain names we’re setting up for the thesaurus, and on Thursday I attended the lectures for the new lectureship post for the Thesaurus.  It was very interesting to hear the speakers and their potential plans for the Thesaurus in future, but obviously I can’t say much more about the lectures here.  I also attended the retirement do for Flora Edmonds on Thursday afternoon.  Flora has been a huge part of the thesaurus team since the early days of its switch to digital and I think she had a wonderful send-off from the people in Critical Studies she’s worked closely with over the years.

On Friday I spent some time adding the mini timelines to the search results page.  I haven’t updated the ‘live’ page yet but here’s an image showing how they will look:

It’s been a little tricky to add the mini-timelines in as the search results page is structured rather differently to the ‘browse’ page.  However, they’re in place now, both for general ‘word’ results  and for words within the ‘Recommended Categories’ section.  Note that if you’ve turned mini-timelines off in the ‘browse’ page they stay off on this page too.

We will probably want to add a few more things in before we make this page live.  We could add in the full timeline visualisation pop-up, which I could set up to feature all search results, or at least the results for the current page of search results.  If I did this I would need to redevelop the visualisation to try and squeeze in at least some of the category information and the pos, otherwise the listed words might all look the same.  I will probably try to add in each word’s category and pos, which should provide just enough context, although subcat names like ‘pertaining to’ aren’t going to be very helpful.

We will also need to consider adding in some sorting options.  Currently the results are ordered by ‘Tier’ number, but I could add in options to order results by ‘first attested date’, ‘alphabetically’ and ‘length of attestation’.  ‘Alphabetically’ isn’t going to be hugely useful if you’re looking at a page of ‘sausage’ results, but will be useful for wildcard searches (e.g. ‘*sage’) and other searches like dates.  I would imagine ordering results by ‘length of attestation’ is going to be rather useful in picking out ‘important’ words.  I’ll hopefully have some time to look into these options next week.
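If the sorting ends up being handled client-side, the options would just be different comparators over the result set.  A sketch with an assumed result shape; the field names, and treating ‘tier’ as the current default, are my own illustration rather than the Thesaurus code:

```typescript
// Illustrative search-result shape; 'tier' stands in for the current default ordering.
interface Result {
  word: string;
  tier: number;
  firstDate: number;  // first attested date
  lastDate: number;   // last attested date (or the current year if still in use)
}

type SortOption = 'tier' | 'firstDate' | 'alphabetical' | 'attestationLength';

// Possible comparators for the extra sorting options discussed above.
function sortResults(results: Result[], by: SortOption): Result[] {
  const cmp: Record<SortOption, (a: Result, b: Result) => number> = {
    tier: (a, b) => a.tier - b.tier,
    firstDate: (a, b) => a.firstDate - b.firstDate,
    alphabetical: (a, b) => a.word.localeCompare(b.word),
    // Longest span of attestation first, to help surface 'important' words.
    attestationLength: (a, b) =>
      (b.lastDate - b.firstDate) - (a.lastDate - a.firstDate),
  };
  return [...results].sort(cmp[by]);
}
```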

Week Beginning 4th June 2018

I’d taken Friday off as a holiday this week, and I was also off on Monday afternoon to attend a funeral.  Despite being off for a day and a half I still managed to achieve quite a lot this week.  Over the weekend Thomas Clancy had alerted me to another excellent resource developed by the NLS Maps people that plots the boundaries of all parishes in Scotland, which you can access here: http://maps.nls.uk/geo/boundaries/#zoom=10.671666666666667&lat=55.8481&lon=-2.5155&point=0,0.  For REELS we had been hoping to incorporate parish boundaries into our Berwickshire map but didn’t know where to get the coordinates from, and there wasn’t enough time in the project for us to manually create the data.  I emailed Chris Fleet at the NLS to ask where they’d got their data from, and whether we might be able to access the Berwickshire bits of it.  Chris very helpfully replied to say the boundaries were created by the James Hutton Institute and are hosted on the Scottish government’s Scottish Spatial Data Infrastructure Metadata Portal (see https://www.spatialdata.gov.scot/geonetwork/srv/eng/catalog.search#/metadata/c1d34a5d-28a7-4944-9892-196ca6b3be0c).  The data is free to use, so long as a copyright statement is displayed, and there’s even an API through which the data can be grabbed (see here: http://sedsh127.sedsh.gov.uk/arcgis/rest/services/ScotGov/AreaManagement/MapServer/1/query).  The data can be output in a variety of formats, including shape files, JSON and GeoJSON.  I decided to go for GeoJSON, as this seemed like a pretty good fit for the Leaflet mapping library we use.
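As a rough illustration, grabbing one parish’s boundary from that query endpoint as GeoJSON could look like the sketch below.  The query parameters are standard ArcGIS REST options, but the attribute name (‘name’) is a guess at the layer’s field schema, so treat the whole thing as an assumption rather than the actual calls I made:

```typescript
const ENDPOINT =
  'http://sedsh127.sedsh.gov.uk/arcgis/rest/services/ScotGov/AreaManagement/MapServer/1/query';

// Fetch one parish's boundary as GeoJSON from the ArcGIS REST query endpoint.
// 'name' is a guess at the layer's attribute field; a where clause with '='
// gives an exact match rather than the partial matching mentioned below.
async function fetchParishBoundary(parish: string) {
  const params = new URLSearchParams({
    where: `name = '${parish}'`,
    outFields: '*',
    returnGeometry: 'true',
    f: 'geojson',
  });
  const response = await fetch(`${ENDPOINT}?${params.toString()}`);
  return response.json();
}

// e.g. fetchParishBoundary('Abbey St Bathans').then(geojson => console.log(geojson));
```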

Initially I used the latitude and longitude coordinates for one parish (Abbey St Bathans) and added this to the map.  Unfortunately the polygon shape didn’t appear on the map, even though no errors were returned.  This was rather confusing until I realised that whereas Leaflet tends to take latitude first and then longitude as the order of its input data, GeoJSON has longitude first and then latitude.  This meant my polygon boundaries had been added to my map, just in a completely different part of the world!  It turns out that in order to use GeoJSON data in Leaflet it’s better to use Leaflet’s in-built ‘L.geoJSON’ functions (see https://leafletjs.com/examples/geojson/).  With this in place, Leaflet very straightforwardly plotted out the boundaries of my sample parish.
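Here’s a minimal sketch of that approach, assuming a div id of ‘reels-map’ and using a made-up rectangle rather than the real parish boundary; the styling is only indicative of the orange lines described below:

```typescript
import * as L from 'leaflet';
import type { Feature } from 'geojson'; // type-only import; these typings ship alongside Leaflet's

const map = L.map('reels-map').setView([55.77, -2.34], 10); // roughly central Berwickshire

// GeoJSON stores coordinates as [longitude, latitude], the reverse of the
// [lat, lng] order Leaflet uses elsewhere. L.geoJSON handles the conversion,
// which is why the boundary finally appeared in the right place.
// The polygon below is a made-up rectangle, not the real parish boundary.
const abbeyStBathans: Feature = {
  type: 'Feature',
  properties: { name: 'Abbey St Bathans' },
  geometry: {
    type: 'Polygon',
    coordinates: [[
      [-2.40, 55.84], [-2.35, 55.84], [-2.35, 55.87], [-2.40, 55.87], [-2.40, 55.84],
    ]],
  },
};

L.geoJSON(abbeyStBathans, {
  style: { color: '#d9822b', weight: 2, fillOpacity: 0.05 }, // orange-ish boundary lines
}).addTo(map);
```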

I had intended to write a little script that would then grab the GeoJSON data for each of the parishes in our system from the API mentioned above.  However, I noticed that when passing a text string to the API it does a partial match and can return multiple parishes: our parish ‘Duns’, for example, also brings back the data for ‘Dunscore’ and ‘Dunsyre’.  I figured it would therefore be safer to grab the data manually and insert it directly into our ‘parishes’ database.  This all worked perfectly, other than for the parish of Coldingham, which is a lot bigger than the rest, meaning the JSON data was also a lot larger.  The data was larger than a server setting allowed me to upload to MySQL, but thankfully Chris McGlashan was able to sort that out for me.

With all of the parish data in place I styled the lines a sort of orange colour that shows up fairly well on all of our base maps.  I also updated the ‘Display options’ to add facilities for turning the boundary lines on or off, which meant updating the citation, bookmarking and page reloading code too.  I also wanted to add in the three-letter acronyms for each parish.  It turns out that adding plain text directly to a Leaflet map is not actually possible, or at least not easily.  Instead the text needs to be added as a tooltip on an invisible marker, the tooltip then has to be set as permanently visible, and it then needs to be styled to remove the bubble around the text.  This still left the little arrow pointing to the marker, but a bit of Googling informed me that if I set the tooltip’s ‘direction’ to ‘center’ the arrowheads aren’t shown.  It all feels like a bit of a hack, and I hope that in future it becomes easier to add text to a map in a more direct manner.  However, I was glad to figure out a solution, and once I had manually grabbed the coordinates where I wanted the parish labels to appear I was all set.  Here’s an example of how the map looks with parish boundaries and labels turned on:
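For reference, the invisible-marker labelling hack boils down to something like the sketch below; the class names, coordinates and acronym are illustrative, and the tooltip bubble itself still has to be hidden with a little CSS on the tooltip’s class:

```typescript
import * as L from 'leaflet';

// Add a plain-text parish label by hanging a permanent tooltip off an
// invisible marker, then styling away the tooltip bubble in CSS.
function addParishLabel(map: L.Map, lat: number, lng: number, acronym: string): void {
  const invisibleIcon = L.divIcon({ className: 'parish-label-anchor', html: '', iconSize: [0, 0] });
  L.marker([lat, lng], { icon: invisibleIcon, interactive: false })
    .addTo(map)
    .bindTooltip(acronym, {
      permanent: true,
      direction: 'center',       // 'center' suppresses the little arrowhead
      className: 'parish-label', // CSS removes the background, border and shadow
    });
}

// e.g. addParishLabel(map, 55.86, -2.38, 'ABB');
```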

I had some other place-name related things to do this week.  On Wednesday afternoon I met with Carole, Simon and Thomas to discuss the Scottish Survey of Place-names, which I will be involved with in some capacity.  We talked for a couple of hours about how the approach taken for REELS might be adapted for other surveys, and how we might connect up multiple surveys to provide Scotland-wide search and browse facilities.  I can’t really say much more about it for now, but it’s good that such issues are being considered.

I spent about a day this week continuing to work on the new pages and videos for the Seeing Speech project.  I fixed a formatting issue with the ‘Other Symbols’ table in the IPA Charts that was occurring in Internet Explorer, which Eleanor had noticed last week.  I also uploaded the 16 new videos for /l/ and /r/ sounds that Eleanor had sent me, and created a new page for accessing these.  As with the IPA Charts page I worked on last week, the videos on this page open in an overlay, which I think works pretty well.  I also noticed that the videos kept on playing if you closed an overlay before the video finished, so I updated the code to ensure that the videos stop when the overlay is closed.
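Stopping the playback is just a matter of pausing and rewinding the video element in the overlay’s close handler.  A small sketch, with the element ids being assumptions rather than the site’s actual markup:

```typescript
// Pause and rewind the video whenever its overlay is closed, so audio doesn't
// keep playing in the background.
const overlay = document.getElementById('video-overlay') as HTMLElement;
const video = overlay.querySelector('video') as HTMLVideoElement;

document.getElementById('overlay-close')?.addEventListener('click', () => {
  video.pause();
  video.currentTime = 0;  // rewind, ready for the next viewing
  overlay.classList.remove('open');
});
```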

Other than these projects, I investigated an issue relating to Google Analytics that Craig Lamont was encountering for the Ramsay project, and I spent the rest of my time returning to the SCOSYA project.  I’d met with Gary last week and he’d suggested some further updates to the staff Atlas page.  It took a bit of time to get back into how the atlas works, as it’s been a long time since I last worked on it.  Once I’d got used to it again, and had created a new test version of the atlas that I could play with without messing up Gary’s access, I decided to try and figure out whether it would be possible to add in a ‘save map as image’ feature.  I had included this before, but as the atlas uses a mixture of image types (bitmap, SVG, HTML elements) for base layers and markers, the method I’d previously used wasn’t saving everything.

However, I found a plugin called ‘easyPrint’ (https://github.com/rowanwins/leaflet-easyPrint) that does seem to be able to save everything.  By default it prints the map to a printer (or to PDF), but it can also be set up to ‘print’ to a PNG image.  It is a bit clunky, sometimes does weird things and only works in Chrome and Firefox (and possibly Safari, I haven’t tried, but definitely not MS IE or Edge).  It’s not going to be suitable for inclusion on the public atlas for these reasons, but it might be useful to the project team as a means of grabbing screenshots.
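Wiring the plugin up only takes a few lines.  Here’s a sketch based on the plugin’s documented options; the npm package name, the option values and the div id are assumptions on my part, so check them against the plugin’s README rather than treating this as the atlas code:

```typescript
import * as L from 'leaflet';
import 'leaflet-easyprint'; // assumed npm package name; the plugin can also be loaded via a script tag

const map = L.map('atlas-map').setView([56.5, -4.2], 6);

// The plugin has no bundled TypeScript typings, hence the cast.
(L as any).easyPrint({
  position: 'bottomright',                              // sits above the zoom controls
  sizeModes: ['Current', 'A4Portrait', 'A4Landscape'],  // the A3 option mentioned below would be a custom size mode
  exportOnly: true,                                     // 'print' straight to a PNG download rather than the print dialog
}).addTo(map);
```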

With the plugin added, a new ‘download’ icon appears above the zoom controls in the bottom right.  If you move your mouse over this, some options appear that allow you to save an image at a variety of sizes (current, A4 portrait, A4 landscape and A3 portrait).  The ‘current’ size should work without any weirdness, but the other ones have to reload the page, bringing in map tiles that are beyond what you currently see.  This is where the weirdness comes in, as follows:

  1. The page will display a big white area instead of the map while the saving of the image takes place.  This can take a few seconds.
  2. Occasionally the map tiles don’t load successfully and you get white areas in the image instead of the map.  If this happens pan around the map a bit to load in the tiles and then try saving the image again.
  3. Very occasionally when the map reloads it will have completely repositioned itself, and the map image will be of this location too.  Not sure why this is happening.  If it does happen, reposition the map and try again and things seem to work.

Once the processing is complete the image will be saved as a PNG.  If you select the ‘A3’ option the image will actually cover a much larger area than you see on your screen.  I think this will prove useful to the team for getting higher resolution images and also for including Shetland, two issues Gary was struggling with.  Here’s a large image with Shetland in place:

That’s all for this week.


Week Beginning 30th April 2018

I continued to work on the REELS website for a lot of this week, and attended a team meeting for the project on Wednesday afternoon.  In the run-up to the meeting I worked towards finalising the interface for the map.  Previously I’d just been using colour schemes and layouts I’d taken from previous projects I’d worked on, but I needed to develop an interface that was right for the current project.  I played around with some different colour schemes before settling on one that’s sort of green and blue, with red as a hover-over.  I also updated the layout of the textual list of records to make the buttons display a bit more nicely, and updated the layout of the record page to place the description text above the map.  Navigation links and buttons also now appear as buttons across the top of pages, whereas previously they were all over the place.  Here’s an example of the record page:

The team meeting was really useful, as Simon had some useful feedback on the CMS and we all went through the front-end and discussed some of the outstanding issues.  By the end of the meeting I had accumulated quite a number of items to add to my ‘to do’ list, and I worked my way through these during the rest of the week.  These included:

  1. Unique record IDs now appear in the cross reference system in the CMS, so the team can more easily figure out which place-name to select if there is more than one with the same name.  I’ve also added this unique record ID to the top of the ‘edit place’ page.
  2. I’ve added cross references to the front-end record page, as I’d forgotten to add these in before.
  3. I’ve replaced the ‘export’ menu item in the CMS with a new ‘Tools’ menu item.  This page includes a link to the ‘export’ page plus links to the new pages I’m adding in.
  4. I’ve created a script that lists all duplicate elements within each language.  It is linked to from the ‘tools’ page.  Each duplicate is listed, together with its unique ID, the number of current and historical names each is associated with and a link through to the ‘edit element’ page.
  5. The ‘edit element’ page now lists all place-names and historical forms that the selected element is associated with.  These are links leading to the ‘manage elements’ page for the item.
  6. When adding a new element the element ID appears in the autocomplete in addition to the element and language, hopefully making it easier to ensure you link to the correct element.
  7. ‘Description’ has been changed to ‘analysis’ in both the CMS and in the API (for the CSV / JSON downloads).
  8. ‘Proper name’ language has been changed to ‘Personal name’.
  9. The new roles ‘affixed name’ and ‘simplex’ have been added.
  10. The new part of speech ‘Numeral’ has been added.
  11. I’ve created a script that lists all elements that have a role of ‘other’, linked to from the ‘tools’ menu in the CMS.  The page lists the element that has this role, its language, the ID and name of the place-name this appears in, and a link to the ‘manage elements’ page for the item.  For historical forms the historical form name also appears.
  12. I’ve fixed the colour of the highlighted item in the elements glossary when reached via a link on the record page
  13. I’ve changed the text in the legend for grey dots from ‘Other place-names’ to ‘unselected’.  We had decided on ‘Unselected place-names’ but this made the box too wide and I figured ‘unselected’ worked just as well (we don’t say ‘Settlement place-names’, after all, but just ‘Settlement’).
  14. I’ve removed place-name data from the API that doesn’t appear in the front-end.  This is basically just the additional element fields.
  15. I’ve checked that records that are marked as ‘on website’ but don’t appear on landranger maps are set to appear on the website.  They weren’t, but they are now.
  16. I’ve also made the map on the record page use the base map you had selected on the main map, rather than always loading the default view.  Similarly, if you change the base map on the record page and then return to the map using the ‘return’ button, the main map now uses the base map you switched to.

I also investigated some issues with the Export script that Daibhidh had reported.  It turned out that these were being caused by Excel.  The output file is a comma separated value file encoded in UTF-8.  I’d included instructions on how to import the file into Excel so that UTF-8 characters display properly, but for some reason this method was causing some of the description fields to be incorrectly split up.  If, instead of following the import instructions, the file was opened directly in Excel, the fields were split into their proper columns correctly, but you ended up with a bunch of garbled UTF-8 characters.

After a bit of research I figured out a way for the CSV file to be directly opened in Excel with the UTF-8 characters intact (and with the columns not getting split up where they shouldn’t).  By setting my script to include a ‘Byte Order Mark’ at the top of the file, Excel magically knows to render the UTF-8 characters properly.
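The export script itself isn’t shown here, but the trick is simply to prefix the file with the UTF-8 byte order mark (the bytes EF BB BF, or ‘\ufeff’ as a string).  A sketch of the same idea in TypeScript/Node, with made-up data:

```typescript
import { writeFileSync } from 'fs';

// Prefixing the file with the UTF-8 byte order mark is what lets Excel open
// the CSV directly with accented characters intact.
function writeCsvForExcel(path: string, rows: string[][]): void {
  const csv = rows
    .map(row => row.map(cell => `"${cell.replace(/"/g, '""')}"`).join(','))
    .join('\r\n');
  writeFileSync(path, '\ufeff' + csv, { encoding: 'utf8' });
}

// writeCsvForExcel('export.csv', [['Name', 'Notes'], ['Café', 'non-ASCII characters survive']]);
```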

In addition to the REELS project, I attended an IT Services meeting on Wednesday morning.  It was billed as a ‘Review of IT Support for Researchers’ meeting, but in reality the focus of pretty much the whole meeting was the proposal for the high performance compute cluster, with most of the discussion being about the sort of hardware setup it should feature.  This is obviously very important for researchers dealing with petabytes and exabytes of data, and there were heated debates about whether there were too many GPUs when CPUs would be more useful (and vice versa), but really this isn’t particularly important for anything I’m involved with.  The other sections of the agenda (training, staff support etc.) were also entirely focussed on HPC and running intensive computing jobs, not on things like web servers and online resources.  I’m afraid there wasn’t really anything I could contribute to the discussions.

I did learn a few interesting things, though.  IT Services are going to start offering a training course in R, which might be useful.  Also, Machine Learning is very much considered the next big thing and is already being used quite heavily in other parts of the University.  Machine Learning works better with GPUs than CPUs, and there are apparently some quite easy-to-use Machine Learning packages out there now.  Google has an online tool called Colaboratory (https://colab.research.google.com) for Machine Learning education and research, which might be worth investigating.  IT Services also offer Unix tutorials, along with other help documentation about HPC, R and other software, here: http://nyx.cent.gla.ac.uk/unix/.  These don’t seem to be publicised anywhere, but might be useful.

I also worked on a number of other projects this week, including creating a timeline feature based on data about the Burns song ‘Afton Water’ that Brianna had sent me for the RNSN project.  I created this using the timeline.js library (https://timeline.knightlab.com/), which is a great library and really easy to use.  I also responded to a query about some maps for the Ramsay AHRC project, which is now underway.  Also, Jane and Eleanor got back to me with some feedback on my mock-up designs for the new Seeing Speech website.  They have decided on a version that is very similar in layout to the old site, and suggested several further tweaks.  I created a new mock-up with these tweaks in place, which they both seem happy with.  Once they have worked a bit more on the content of the site I will be able to begin the full migration to the new design.