I continued to work on the REELS website for much of this week, and attended a team meeting for the project on Wednesday afternoon. In the run-up to the meeting I worked towards finalising the interface for the map. Previously I’d just been using colour schemes and layouts taken from earlier projects I’d worked on, but I needed to develop an interface that was right for the current project. I played around with some different colour schemes before settling on a green and blue scheme with red as the hover-over colour. I also updated the layout of the textual list of records so the buttons display more neatly, and updated the layout of the record page to place the description text above the map. Navigation links and buttons now appear as buttons across the top of pages, whereas previously their placement was rather inconsistent. Here’s an example of the record page:
The team meeting was really useful, as Simon had some helpful feedback on the CMS and we all went through the front-end and discussed some of the outstanding issues. By the end of the meeting I had accumulated quite a number of items for my ‘to do’ list, and I worked my way through these during the rest of the week. These included:
- Unique record IDs now appear in the cross-reference system in the CMS, so the team can more easily figure out which place-name to select if there is more than one with the same name. I’ve also added this unique record ID to the top of the ‘edit place’ page.
- I’ve added cross-references to the front-end record page, as I’d forgotten to include these before.
- I’ve replaced the ‘export’ menu item in the CMS with a new ‘Tools’ menu item. This page includes a link to the ‘export’ page plus links to the new pages I’m adding.
- I’ve created a script, linked to from the ‘tools’ page, that lists all duplicate elements within each language. Each duplicate is listed together with its unique ID, the number of current and historical names it is associated with, and a link through to the ‘edit element’ page (the query behind this is sketched after this list).
- The ‘edit element’ page now lists all place-names and historical forms that the selected element is associated with. These are links leading to the ‘manage elements’ page for the item.
- When adding a new element the element ID appears in the autocomplete in addition to the element and language, hopefully making it easier to ensure you link to the correct element.
- ‘Description’ has been changed to ‘analysis’ in both the CMS and the API (for the CSV / JSON downloads).
- ‘Proper name’ language has been changed to ‘Personal name’.
- The new roles ‘affixed name’ and ‘simplex’ have been added.
- The new part of speech ‘Numeral’ has been added.
- I’ve created a script that lists all elements that have a role of ‘other’, linked to from the ‘tools’ menu in the CMS. The page lists the element that has this role, its language, the ID and name of the place-name it appears in, and a link to the ‘manage elements’ page for the item. For historical forms, the historical form name also appears.
- I’ve fixed the colour of the highlighted item in the elements glossary when it is reached via a link on the record page.
- I’ve changed the text in the legend for grey dots from ‘Other place-names’ to ‘unselected’. We had decided on ‘Unselected place-names’ but this made the box too wide, and I figured ‘unselected’ worked just as well; we don’t say ‘Settlement place-names’, after all, but just ‘Settlement’.
- I’ve removed place-name data from the API that doesn’t appear in the front-end. This is basically just the additional element fields.
- I’ve checked that records that are marked as ‘on website’ but don’t appear on Landranger maps are set to appear on the website. They weren’t, but they are now.
- I’ve also made the map on the record page use the base map you had selected on the main map, rather than always loading the default view. Similarly, if you change the base map on the record page and then return to the main map using the ‘return’ button, your chosen base map is retained (the second sketch after this list shows one way of doing this).
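Two of the items above deserve a quick technical aside. First, the duplicate elements script: the CMS’s actual stack and schema aren’t shown in this post, but the core of this kind of report is a single grouped query. Here’s a minimal sketch in TypeScript (Node, using the mysql2 library), assuming a hypothetical `elements` table with `id`, `element` and `language_id` columns; the counts of associated current and historical names would come from joins onto the name tables, which I’ve omitted:

```typescript
import { createConnection } from 'mysql2/promise';

// Assumed table: elements(id, element, language_id). This is an
// illustrative schema, not the project's actual one.
async function listDuplicateElements(): Promise<void> {
  const conn = await createConnection({
    host: 'localhost',
    user: 'reels',
    password: process.env.DB_PASS,
    database: 'reels',
  });

  // Group element names within each language; HAVING keeps only names
  // that occur more than once, and GROUP_CONCAT gathers their IDs so the
  // page can link each one to its 'edit element' form.
  const [rows] = await conn.query(
    `SELECT element, language_id, COUNT(*) AS occurrences,
            GROUP_CONCAT(id) AS element_ids
       FROM elements
      GROUP BY element, language_id
     HAVING COUNT(*) > 1
      ORDER BY language_id, element`
  );

  console.table(rows);
  await conn.end();
}

listDuplicateElements();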
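Second, carrying the selected base map between the main map and the record page: one way to do this is to store the user’s choice in localStorage whenever the base layer changes, and read it back when each page initialises its map. A minimal sketch assuming a Leaflet map; the tile URLs, layer names and coordinates here are placeholders rather than the site’s real configuration:

```typescript
import * as L from 'leaflet';

// Placeholder base layers; the real site's layers and tile URLs will differ.
const baseMaps: Record<string, L.TileLayer> = {
  'Map': L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png'),
  'Satellite': L.tileLayer('https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}'),
};

// Placeholder start view (roughly Berwickshire).
const map = L.map('map').setView([55.77, -2.34], 10);

// Restore whichever base map was last selected, falling back to the default.
const saved = localStorage.getItem('baseMap');
(baseMaps[saved ?? ''] ?? baseMaps['Map']).addTo(map);

L.control.layers(baseMaps).addTo(map);

// Remember the selection whenever the user switches base layer, so the
// record page (and the 'return' button) can pick it up.
map.on('baselayerchange', (e: L.LayersControlEvent) => {
  localStorage.setItem('baseMap', e.name);
});
```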
I also investigated some issues with the Export script that Daibhidh had reported. It turned out that these were being caused by Excel. The output file is a comma-separated value file encoded in UTF-8. I’d included instructions on how to import the file into Excel so that UTF-8 characters display properly, but for some reason this method was causing some of the description fields to be split incorrectly across columns. If instead of following the import instructions the file was opened directly in Excel, the fields were split into their proper columns correctly, but the UTF-8 characters came out garbled.
After a bit of research I figured out a way for the CSV file to be opened directly in Excel with the UTF-8 characters intact (and with the columns not being split where they shouldn’t be). By setting my script to include a ‘Byte Order Mark’ at the top of the file, Excel magically knows to render the UTF-8 characters properly.
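The real export script runs server-side in whatever language the CMS uses, but the technique is the same anywhere: prepend the BOM character U+FEFF, which in a UTF-8 file becomes the bytes EF BB BF. Here’s a minimal sketch in TypeScript (Node) with made-up rows; the field-quoting also shows how commas inside description text are kept from splitting a column:

```typescript
import { writeFileSync } from 'fs';

// Made-up rows standing in for the real export data.
const rows: string[][] = [
  ['id', 'place_name', 'analysis'],
  ['123', 'Example Burn', 'A description with UTF-8 characters: þ, ō, ā'],
];

// Quote every field and double any embedded quotes, so commas inside
// description text don't split a field across columns.
const csv = rows
  .map(row => row.map(field => `"${field.replace(/"/g, '""')}"`).join(','))
  .join('\r\n');

// '\uFEFF' is the byte order mark; saved as UTF-8 it becomes the bytes
// EF BB BF, which tells Excel to read the file as UTF-8.
writeFileSync('export.csv', '\uFEFF' + csv, { encoding: 'utf8' });
```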
In addition to the REELS project, I attended an IT Services meeting on Wednesday morning. It was billed as a ‘Review of IT Support for Researchers’ meeting, but in reality almost the whole meeting focussed on the proposal for the high-performance compute cluster, with most of the discussion being about the sort of hardware setup it should feature. This is obviously very important for researchers dealing with petabytes and exabytes of data, and there were heated debates about whether there were too many GPUs when CPUs would be more useful (and vice versa), but really this isn’t particularly relevant to anything I’m involved with. The other sections of the agenda (training, staff support etc.) were also entirely focussed on HPC and running intensive computing jobs, not on things like web servers and online resources. I’m afraid there wasn’t really anything I could contribute to the discussions.
I did learn a few interesting things, though. IT Services are going to start offering a training course in R, which might be useful. Machine Learning is very much considered the next big thing and is already being used quite heavily in other parts of the University; it works better with GPUs than CPUs, and there are apparently some quite easy-to-use Machine Learning packages out there now. Google has an online tool called Colaboratory (https://colab.research.google.com) for Machine Learning education and research, which might be worth investigating. IT Services also offer Unix tutorials here: http://nyx.cent.gla.ac.uk/unix/ and other help documentation about HPC, R and other software here: http://nyx.cent.gla.ac.uk/unix/ (these don’t seem to be publicised anywhere, but might be useful).
I also worked on a number of other projects this week, including creating a timeline feature for the RNSN project based on data about the Burns song ‘Afton Water’ that Brianna had sent me. I created this using the timeline.js library (https://timeline.knightlab.com/), which is a great library and really easy to use. I also responded to a query about some maps for the Ramsay AHRC project, which is now underway. Jane and Eleanor also got back to me with some feedback on my mock-up designs for the new Seeing Speech website. They have decided on a version that is very similar in layout to the old site, and suggested several further tweaks. I created a new mock-up with these tweaks in place, which they both seem happy with. Once they have worked a bit more on the content of the site I’ll be able to begin the full migration to the new design.
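For anyone curious about the timeline work, TimelineJS can be fed either a Google Sheet or a JSON object in its own format. A minimal sketch of the JSON route; the dates and text below are invented placeholders, not the actual ‘Afton Water’ material Brianna supplied:

```typescript
// TimelineJS is loaded globally as TL via the Knight Lab CDN scripts,
// alongside a <div id="timeline-embed"></div> in the page.
declare const TL: any;

// Placeholder data in TimelineJS's JSON format.
const timelineData = {
  title: {
    text: { headline: 'Afton Water', text: 'A timeline of the song' },
  },
  events: [
    {
      start_date: { year: 1789 },
      text: { headline: 'Example event', text: 'Placeholder text.' },
    },
    {
      start_date: { year: 1792 },
      text: { headline: 'Another event', text: 'Placeholder text.' },
    },
  ],
};

// Render the timeline into the container div.
new TL.Timeline('timeline-embed', timelineData);
```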