On Monday this week we had another Arts Developer coffee meeting, which as always was a good opportunity to catch up with my fellow developers in the College of Arts and talk about our work. On Tuesday I attended a team meeting for the SCOSYA project, where we discussed some of the final things that needed to be done before the online resource would be ready for the user testing sessions that will take place in the next few weeks. I spent quite a bit of time implementing these final tweaks during the week. This included adding the full map attribution and copyright information in a pop-up that’s linked to from the bottom of the atlas, and I added it to the API as well. After this I changed a number of colours that were used for markers and menu items on both the public and experts atlases and added in some links to help pages and some actual text to the atlas menus to replace the placeholder text.
I also realised that highlighting wasn’t working on the experts ‘home’ map, which was probably a bit confusing. Implementing this turned out to be rather tricky, as highlighting depended on grabbing the location name from the pop-up and then comparing this with the location names in a group. The ‘Home’ map has no pop-ups, so highlighting wouldn’t work. Instead I had to change things so that the location is grabbed from the tooltip text. Also, the markers on the ‘Home’ map were actually different types of markers (HTML elements styled by CSS as opposed to SVG shapes), so even though they look the same the highlighting code wasn’t working for them. I’ve now switched them to SVG shapes and highlighting is working. It’s even possible to create a group on the ‘Home’ page too.
I also added in a new ‘cite’ menu item to the experts atlas, which allows users to grab a link to their specific map view, formatted in a variety of citation styles. This updates every time the ‘cite’ menu is opened, so if the user has changed the zoom level or map centre the citation link always reflects this. Finally, I created new versions of the atlases (now called ‘atlas’ and ‘linguists atlas’) that will be used for beta testing.
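The ‘cite’ link idea can be sketched as a small function that rebuilds the URL from the current map state each time the menu is opened; the function name, URL format and parameter names here are illustrative, not the actual SCOSYA code.

```javascript
// Hypothetical sketch: regenerate the citation link from the current
// map centre and zoom so it always reflects what the user is viewing.
function buildCitationUrl(baseUrl, lat, lng, zoom) {
  // Round coordinates so the link stays short but precise enough
  const la = lat.toFixed(4);
  const ln = lng.toFixed(4);
  return `${baseUrl}#lat=${la}&lng=${ln}&zoom=${zoom}`;
}
```

Calling this whenever the menu opens (rather than once at page load) is what keeps the citation in sync with any panning or zooming.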
I also spent some time working for the DSL, fixing the ‘sienna’ test version of the website and changing how the quick search works on both test versions of the website. If the user selects an item from the autocomplete list, the search then performs an exact search for this word, whereas previously it was just matching the characters anywhere in the headword, which didn’t really make much sense. I also spent quite a bit of time looking through the old DSL editor server to try and track down some files for Rhona.
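The quick-search change boils down to two matching modes, which can be sketched like this (the function and sample headwords are illustrative only, not the real DSL search code):

```javascript
// Hedged sketch: a selection from the autocomplete list triggers an
// exact headword match, while free-typed text still matches the
// characters anywhere in the headword, as before.
function quickSearch(headwords, term, fromAutocomplete) {
  const t = term.toLowerCase();
  return headwords.filter(h =>
    fromAutocomplete ? h.toLowerCase() === t
                     : h.toLowerCase().includes(t)
  );
}
```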
Also this week I had a chat with Gavin Miller about publicising his new Glasgow Medical Humanities site, set up a researcher in Psychology with an account to create an iOS app, fixed a couple of broken links on the Seeing Speech website and had a lengthy email chat with Heather Pagan about the Anglo-Norman Dictionary data. We have now managed to access the server and begin to analyse the contents to try and track down the data, and by the end of the week it looked like we might actually have found the full dataset, which is encouraging. I finished off the week by creating a final ‘Song Story’ for the RNSN project, which took a few hours to implement but is looking pretty good.
I’m going to be out of the office for the next three weeks on a holiday in Australia so there will be no further updates from me for a while.
I spent some time this week investigating the final part of the SCOSYA online resource that I needed to implement: a system whereby researchers could request access to the full audio dataset and a member of the team could approve the request and grant the person access to a facility where the required data could be downloaded. Downloads would be a series of large ZIP files containing WAV files and accompanying textual data. As we wanted to restrict access to legitimate users only, I needed to ensure that the ZIP files were not directly web accessible, but were passed through to a web accessible location on request by a PHP script.
I created a test version using a 7.5Gb ZIP file that had been created a couple of months ago for the project’s ‘data hack’ event. This version can be set up to store the ZIP files in a non-web accessible directory and then grab a file and pass it through to the browser on request. It will be possible to add user authentication to the script to ensure that it can only be executed by a registered user. The actual location of the ZIP files is never divulged so neither registered nor unregistered users will ever be able to directly link to or download the files (other than via the authenticated script).
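The actual script is PHP, but the core safety idea can be sketched language-neutrally: the user never supplies a path, only a key into a fixed whitelist, and unauthenticated or unknown requests are rejected. All names and paths below are illustrative.

```javascript
// Illustrative whitelist: requested keys map to files in a
// non-web-accessible directory; the real paths are never exposed.
const AVAILABLE_ZIPS = {
  'set1': '/non-web-accessible/scosya/audio_set1.zip',
  'set2': '/non-web-accessible/scosya/audio_set2.zip'
};

function resolveDownload(requestedKey, userIsAuthenticated) {
  if (!userIsAuthenticated) return null;        // reject anonymous requests
  // Unknown keys (including path-traversal attempts) resolve to null
  return AVAILABLE_ZIPS[requestedKey] || null;
}
```

Because the mapping is server-side, neither registered nor unregistered users can construct a direct link to the files themselves.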
This all sounds promising, but I realised that there are some serious issues with this approach. HTTP is not really intended for transferring huge files, and using a web-based method to download massive ZIP files is just not going to work very well. The test ZIP file I used was about 7.5Gb in size (roughly the size of a DVD), but the actual ZIP files are likely to be much larger than this – with the full dataset taking up about 180Gb. Even using my desktop PC on the University network it took roughly 30 minutes to download the 7.5Gb file. Using an external network would likely take a lot longer, and bigger files are likely to be pretty unmanageable for people to download.
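A quick back-of-envelope check makes the problem concrete, assuming throughput stays constant (which it rarely does over an external connection):

```javascript
// Extrapolate download time from an observed transfer: 7.5Gb took
// ~30 minutes on the University network, so at the same rate the
// full 180Gb dataset would take 720 minutes, i.e. 12 hours.
function estimateMinutes(sizeGb, observedGb, observedMinutes) {
  const gbPerMinute = observedGb / observedMinutes;  // ~0.25 Gb/min here
  return sizeGb / gbPerMinute;
}
```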
It’s also likely that a pretty small number of researchers will be requesting the data, and if this is the case then perhaps it’s not such a good idea to take up 180Gb of web server space (plus the overheads of backups) to store data that is seldom going to be accessed, especially if this is simply replicating data that is already taking up a considerable amount of space on the shared network drive. 180Gb is probably more web space than is used by most other Critical Studies websites combined. After discussing this issue with the team, we decided that we would not set up such a web-based resource to access the data, but would instead send ZIP files on request to researchers using the University’s transfer service, which allows files of up to 20Gb to be sent to both internal and external email addresses. We’ll need to see how this approach works out, but I think it’s a better starting point than setting up our own online system.
I also spent some further time on the SCOSYA project this week implementing some changes to both the experts and the public atlases based on feedback from the team. This included changing the default map position and zoom level, replacing some of the colours used for map markers and menu items, tweaking the layout of the transcriptions, ensuring certain words in story titles can appear in bold (as opposed to the whole title being bold as was previously the case) and removing descriptions from the list of features found in the ‘Explore’ menu in the public atlas. I also added a bit of code to ensure that internal links from story pages to other parts of the public atlas would work (previously they weren’t doing anything because only the part after the hash was changing). I also ensured that the experts atlas side panel resizes to fit the content whenever an additional attribute is added or removed.
Also this week I finally found a bit of time to fix the map on the advanced search page of the SCOTS Corpus website. This map was previously powered by Google Maps, but they have now removed free access to the Google Maps service (you now need to provide a credit card and get billed if your usage goes over a certain number of free hits a month). As we hadn’t updated the map or provided such details Google broke the map, covering it with warning messages and removing our custom map styles. I have now replaced the Google Maps version with a map created using the free-to-use Leaflet.js mapping library (which I’m using for SCOSYA) and a free map tileset from OpenStreetMap. Other than that it works in exactly the same way as the old Google Map. The new version is now live here: https://www.scottishcorpus.ac.uk/advanced-search/.
Also this week I upgraded all of the WordPress sites I manage, engaged in some App Store duties and had a further email conversation with Marc Alexander about how dates may be handled in the Historical Thesaurus. I also engaged in a long email conversation with Heather Pagan of the Anglo-Norman Dictionary about accessing the dictionary data. Heather has now managed to access one of the servers that the dictionary website runs on and we’re now trying to figure out exactly where the ‘live’ data is located so that I can work with it. I also fixed a couple of issues with the widgets I’d created last week for the GlasgowMedHums project (some test data was getting pulled into them) and tweaked a couple of pages. The project website is launching tomorrow so if anyone wants to access it they can do so here: https://glasgowmedhums.ac.uk/
Finally, I continued to work on the new API for the Dictionary of the Scots Language, implementing the bibliography search for the ‘v2’ API. This version of the API uses data extracted from the original API, and the test website I’ve set up to connect to it should be identical to the live site, but it connects to the ‘v2’ API for all of its data and in no way connects to the old, undocumented API. API calls to search the bibliographies (both a predictive search used for displaying the auto-complete results and a search used to populate a full search results page) and to display an individual bibliography are now available, and I’ve connected the test site to these API calls, so staff can search for bibliographies here.
Whilst investigating how to replicate the original API I realised that the bibliography search on the live site is actually a bit broken. The ‘Full Text’ search simply doesn’t work; it just does the same as a search for authors and titles (in fact the original API doesn’t even include a ‘full text’ option). Also, results only display authors, so for records with no author you get some pretty unhelpful results. I did consider adding in a full-text search, but as bibliographies contain little other than authors and titles there didn’t seem much point, so instead I’ve removed the option. The search is primarily set up as an auto-complete, which matches words in authors or titles that begin with the characters being typed (i.e. a wildcard search such as ‘wild*’), and the full search results page only gets displayed if someone ignores the auto-complete list of results and manually presses the ‘Search’ button. I’ve therefore made the full search results page always work as a ‘wild*’ search too. So typing ‘aber’ into the search box and pressing ‘Search’ will bring up a list of all bibliographies with titles / authors featuring a word beginning with these characters. With the previous version this wasn’t the case – you had to add a ‘*’ after ‘aber’, otherwise the full search results page would match ‘aber’ exactly and find nothing. I’ve updated the help text on the bibliography search page to explain this a bit.
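The ‘wild*’ behaviour amounts to a word-prefix match, which can be sketched like this (illustrative only, not the actual API code):

```javascript
// A term matches a bibliography if any word in the author or title
// text begins with the typed characters, case-insensitively.
function matchesWildcard(text, term) {
  const t = term.toLowerCase();
  return text.toLowerCase().split(/\s+/).some(w => w.startsWith(t));
}
```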
The full search results (and the results side panel) in the new version now include titles as well as authors, which makes things clearer and I’ve also made the search results numbering appear at the top of the corresponding result text rather than on the last line. This is also the case for entry searches too. Once the test site has been fully tested and approved we should be able to replace the live site with the new site (ensuring all WordPress content from the live site is carried over, of course). Doing so will mean the old server containing the original API can (once we’re confident all is well) be switched off. There is still the matter of implementing the bibliography search for the V3 data, but as mentioned previously this will probably be best tackled once we have sorted out the issues with the data and we are getting ready to launch the new version.
I spent most of this week continuing to work on the Experts Atlas for the SCOSYA project, focussing primarily on implementing an alternative means of selecting attributes that allows attributes to be nested in two levels rather than just one. Implementing this has been a major undertaking, as basically the entire attribute search was built around the use of drop-down lists (HTML <select> lists with <optgroup> items for parents and <option> items for attributes). Replacing this meant replacing how the code figures out what is selected, how the markers and legend are generated, how the history feature works and how the reloading of an atlas based on variables passed in the URL works. It took the best part of a day and a half to implement and it’s been pretty hellish.
The two levels of nesting rely on how the code parent is recorded in the CMS. Basically if you have two parts of a code parent separated by a space, a dash and a space ‘ – ‘ then the Experts Atlas will treat the part before the dash as the highest level in the hierarchy and the part after as another level down. So ‘AFTER-PREFECT’ is just one level whereas ‘AGREEMENT – MEASURES’ is two levels – ‘AGREEMENT’ as the top level and ‘MEASURES’ as the second.
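A minimal sketch of that parsing rule (the function name is mine, not the project’s):

```javascript
// Split a code parent on ' – ' (space, dash, space): one part means a
// single level in the hierarchy, two parts mean a top-level heading
// plus a second level beneath it.
function parseCodeParent(parent) {
  const parts = parent.split(' – ');
  return parts.length === 2
    ? { top: parts[0], sub: parts[1] }
    : { top: parent, sub: null };
}
```

Note that an ordinary hyphen with no surrounding spaces, as in ‘AFTER-PREFECT’, is deliberately not treated as a separator.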
In the Atlas I’ve replaced the drop-down list with a scrollable area that contains all of the top level code parents, each with a ‘+’ icon. Press on one of these and an area will scroll down containing either the attributes or the second-level parent headings. If it’s the latter, press on one of these to open it and see the attributes. You can press on a heading a second time to hide the contents.
Each attribute is listed with its code, title and description (if available). Clicking on an attribute will select it, and the details will appear in a ‘Selected Attribute’ section underneath the scrollable area. In this initial version everything else about the attribute search was the same as before, including the option to add another and supply limit options. I also decided against replacing the ‘Age’ and ‘Rated by’ drop-down lists with buttons as I’m a bit worried about having too many similar looking buttons. E.g. if the ‘rated by’ drop-down list was replaced by buttons 1-4 it would look very similar to the ‘score’ option’s 1-5 buttons, which could get confusing. Also, although the new attribute selection allows nesting of attributes, allows descriptions to be more easily read and allows us to control the look of the section (as opposed to an HTML select element which has its behaviour defined by the web browser), it does take up more space and might actually be less easy to use. The fixed height of it feels quite claustrophobic when using it, it’s more difficult to just see all of the attributes, and once the desired attribute has been selected it feels like the scrollable area is taking up a lot of unnecessary space. We’ll maybe need to see what the feedback is like when people use the atlas.
After a bit of further playing around with the interface I decided to make the limit options always visible, which I think works better and doesn’t take up much more vertical space. I’ve moved the ‘Show’ button to be in line with the ‘add another’ button and have relabelled them ‘Show on map’ (with a magnifying glass icon) and ‘Add another attribute’. I’ve also moved the ‘remove’ button when multiple attributes are selected to the same line as the Boolean choices, which saves space and works better. Here’s a screenshot of the new attribute select feature:
I then started working on the final Experts Atlas feature: the group statistics. I added in an option for loading in the set of default groups that E had created previously. If you perform an attribute search and then go to the Group Statistics menu item you can now press one or more ‘Highlight’ buttons to highlight the groups. There are 5 different highlight colours (I can change these or add more as required) and they cycle through, so if you don’t like one you can keep clicking to find one you like. The group row is given the highlight colour as a background to help keep track of which group is which on the map. Highlighting works for all attribute search types but doesn’t work for the ‘Home’ map, as it depends on the contents of the pop-ups to function and the ‘Home’ map has no pop-ups. Pressing on the ‘Statistics’ button opens up a further sliding panel and functions in the same way as the stats in the CMS Atlas, listing the attributes, giving stats for each and displaying a graph of individual score frequency. The following screenshot shows a map with highlighting and the stats panel open:
I noticed there were some issues with the stats panel staying open and displaying data that was no longer relevant when other menus were opened. I therefore fixed things to ensure that statistics and highlighting are removed when navigating between menus (but highlighting remains when changing an attribute search so you can more easily see the effect on your highlighted locations). I also fixed a bug in the statistics when ‘young’ or ‘old’ were selected that meant the count of ratings was not entirely accurate. In doing so I uncovered an issue with 11 questionnaires that were from people too young to be classed as ‘old’, which was causing problems for the display of data. Changing the year criteria in the API has fixed that.
I spent the rest of my time on the project working on the facilities to allow users to make their own groups. This should now be fully operational. This uses HTML5 LocalStorage, which is a way of storing data in the user’s browser. No data relating to the user’s chosen locations or group name is stored on the server and no logging in or anything is required to use the feature. However, data is stored in the user’s browser so if they use a different browser they won’t see their groups. Also, they need to have HTML5 LocalStorage enabled and supported in their browser. It is supported by default in all modern browsers so this shouldn’t be an issue for most people. If their browser doesn’t support it they simply won’t see the options to save their own groups.
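The saved-groups mechanism can be sketched roughly as follows; the storage key and group shape are illustrative, and for testing outside a browser localStorage is stubbed with a plain object.

```javascript
// Hedged sketch of saving user-defined groups with HTML5 localStorage.
// In a browser the real localStorage is used; elsewhere we stub it so
// the logic can run anywhere.
const storage = typeof localStorage !== 'undefined'
  ? localStorage
  : { _d: {},
      getItem(k) { return k in this._d ? this._d[k] : null; },
      setItem(k, v) { this._d[k] = String(v); } };

function saveGroup(name, locationIds) {
  // Groups live entirely in the user's browser; nothing goes to the server
  const groups = JSON.parse(storage.getItem('scosyaGroups') || '[]');
  groups.push({ name: name, locationIds: locationIds });
  storage.setItem('scosyaGroups', JSON.stringify(groups));
}

function loadGroups() {
  return JSON.parse(storage.getItem('scosyaGroups') || '[]');
}
```

Because the data is serialised to a string per browser profile, switching browser (or clearing site data) means the groups are gone, exactly as described above.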
If you open the group statistics menu, above the default groups is a section for ‘My saved groups’. If you click on the ‘create a group’ button you can create a group in the same way as you could in the CMS atlas – entering a name for your group and clicking on map markers to add a location (or clicking a second time to remove one). Once the ‘Save’ button is pressed your group will appear in the list of saved groups, and you will be able to highlight the markers and view statistics for it as you can with the default groups. There are also options to delete your group (there is currently no confirmation for this – if you press the button your group is immediately deleted) and edit your group, which returns you to the ‘create’ view but pre-populated with your data. You can rename your group or change the selected markers via this feature. I think that’s pretty much everything I needed to implement for the Experts Atlas, so next week I’ll press on with the facility to allow certain users to download the full dataset. Here’s a screenshot of the ‘My Groups’ feature:
Also this week I spent a bit of time on the Glasgow Medical Humanities website for Gavin Miller. This is due to go live in the next few weeks so I focussed on the last things that needed to be implemented. This included migrating the blog data from the old MHRC blog, which I managed to do via WordPress’s own import / export facilities. This worked pretty well, importing all of the text, the author details and the categories. The only thing it didn’t do was migrate the media, but Gavin has said this can be done manually as required, so it’s not such a big deal. I also met with the project administrator on Friday to talk through the site and the CMS and discuss some of the issues she’d been encountering in accessing the site. I also found out a way of exporting the blog subscribers from the MHRC site so Gavin can let them know about the new site. I also created new versions of the ‘Spotlight on…’ and ‘Featured images’ features from the old Medical Humanities site. I created these as WordPress widgets, meaning they can be added to any WordPress page simply by adding in a shortcode. I based the featured image carousel on Bootstrap, which includes a very nice carousel. Currently I’ve added both features to the bottom of the homepage, but these can easily be moved elsewhere. Here’s an example of how they look:
Other than the above I got into a discussion with various people across the University about a policy for publishing apps, responded to a request for help with audio files from Rob Maslen, had a chat with Gerry McKeever about the interactive map he wants to create for his project, spoke to Heather Pagan about the Anglo-Norman Dictionary data, helped Craig Lamont make some visual changes to the Editing Burns blog, replied to a query from Marc Alexander about how dates might be handled differently in the Historical Thesaurus, and had an email conversation with Ann Ferguson about the DSL bibliographical data.
I had my PDR session this week, so I needed to spend some time preparing for it, attending it, and reworking some of my PDR sections after it. I think it all went pretty well, though, and it’s good to get it over with for another year. I had one other meeting this week, with Sophie Vlacos from English Literature. She is putting a proposal together and I gave her some advice on setting up a website and other technical matters.
My main project of the week once again was SCOSYA, and this week I was able to really get stuck into the Experts Atlas interface, which I began work on last week. I’ve set up the Experts Atlas to use the same grey map as the Public Atlas, but it currently retains the red to yellow markers of the CMS Atlas. The side panel is slightly wider than the Public Atlas and uses different colours, taken from the logo. The fractional zoom from the Public Atlas is also included, as is the left-hand menu style (i.e. not taking the full height of the Atlas). The ‘Home’ map shows the interview locations, with each appearing as a red circle. There are no pop-ups on this map, but the location name appears as a tooltip when hovered over.
The ‘Search Attributes’ option is mostly the same as the ‘Attribute Search’ option in the CMS Atlas. I’ve not yet updated the display of the attributes to allow grouping at three as opposed to two levels, probably using a tree-based approach. This is something I’ll need to tackle next week. I have removed the ‘interviewed by’ option, but as yet I haven’t changed the Boolean display. At a team meeting we had discussed making the joining of multiple attributes default to ‘And’ and hiding ‘Or’ and ‘Not’, but I just can’t think of a way of doing this without ending up with more clutter and complexity. ‘And’ is already the default option and I personally don’t think it’s too bad to just see the other options, even if they’re not used.
The searches all work in the same way as in the CMS Atlas, but I did need to change the API a little, as when multiple attributes were selected these weren’t being ordered by location (e.g. all the D3 data would display, then all the D4 data, rather than all the data for both attributes for Aberdeen, etc.). This meant the full information was not getting displayed in the pop-ups. I’ve also completely changed the content of the pop-ups so as to present the data in a tabular format. The average rating appears in a circle to the right of the pop-up, with a background colour reflecting the average rating. The individual ratings also appear in coloured circles, which I personally think works rather well. Changing the layout of the popup was a fairly major undertaking as I had to change the way in which the data was processed, but I’d say it’s a marked improvement on the popups in the CMS Atlas. I removed the descriptions from the popups as these were taking up a lot of space and they can be viewed in the left-hand menu anyway. Currently if a location doesn’t meet the search criteria and is given a grey marker the popup still lists all of the data that is found for the selected attributes at that location. I did try removing this and just displaying the ‘did not meet criteria’ message, but figured it would be more interesting for users to see what data there is and how it doesn’t meet the criteria. Below is a screenshot of the Experts Atlas with an ‘AND’ search selected:
Popups for ‘Or’ and ‘Not’ searches are identical, but for an ‘Or’ search I’ve updated the legend to try and make it more obvious what the different colours and shapes refer to. In the CMS Atlas the combinations appear as ‘Y/N’ values. E.g. if you have selected ‘D3 ratings 3-5’ OR ‘Q14 ratings 3-5’ then locations where neither are found were identified as ‘NN’, locations where D3 was present at these ratings but Q14 wasn’t were identified as ‘YN’, locations without D3 but with Q14 were ‘NY’ and locations with both were ‘YY’. This wasn’t very easy to understand, so now the legend includes the codes, as the following screenshot demonstrates:
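Generating those legend labels from the attribute codes rather than ‘Y/N’ strings can be sketched like this (function name and label format are illustrative):

```javascript
// Build a legend label for an 'Or' search combination: list the codes
// of the attributes present at a location instead of a 'YN' string.
function comboLabel(codes, present) {
  // codes: e.g. ['D3', 'Q14']; present: e.g. [true, false]
  const found = codes.filter((c, i) => present[i]);
  return found.length ? found.join(' + ') : 'None';
}
```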
I think this works a lot better, but there is a slight issue in that if someone chooses the same code but with different criteria (e.g. ‘D3 rated 4-5 by Older speakers’ OR ‘D3 rated 4-5 by Younger speakers’) the legend doesn’t differentiate between the different ‘D3’s, but hopefully anyone doing such a search would realise that the first ‘D3’ relates to their first search selection while the second refers to their second.
I have omitted the ‘spurious’ tags from the ratings in the popups and also the comments. I wasn’t sure whether these should be included, and if so how best to incorporate them. I’ve also not included the animated dropping down of markers in the Experts Atlas as firstly it’s supposed to be more serious and secondly the drop down effect won’t work with the types of markers used for the ‘Or’ search. I have also not currently incorporated the areas. We had originally decided to include these, but they’ve fallen out of favour somewhat, plus they won’t work with ‘Or’ searches, which rely on differently shaped markers as well as colour, and they don’t work so well with group highlighting either.
The next menu item is ‘My search log’, which is what I’ve renamed the ‘History’ feature from the CMS Atlas. This now appears in the general menu structure rather than replacing the left-hand menu contents. Previously the rating levels just ran together (e.g. 1234), which wasn’t very clear, so I’ve split them up so that the description reads something like:
“D3: I’m just after, age: all, rated by 1 or more people giving it a score of 3, 4 or 5 Or Q14: Baremeasurepint, age: all, rated by 1 or more people giving it a score of 3, 4 or 5, viewed at 15:41:00”
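The splitting of the run-together rating levels can be sketched as a tiny formatting helper (illustrative, not the project’s actual code):

```javascript
// Turn raw rating levels such as '345' into the readable
// '3, 4 or 5' used in the search log descriptions.
function formatRatings(levels) {
  const parts = levels.split('');
  if (parts.length === 1) return parts[0];
  return parts.slice(0, -1).join(', ') + ' or ' + parts[parts.length - 1];
}
```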
As with the CMS Atlas, pressing on a ‘Load’ button loads the search back into the map. The data download option has also been given its own menu item, and pressing on this downloads the CSV version of the data that’s displayed on the map. And that’s as far as I’ve got. The main things still to do are replacing the attribute drop-down list with a three-level tree-based approach and adding in the group statistics feature. Plus I still need to create the facility for managing users who have been authorised to download the full dataset and creating the login / download options for this.
Also this week I made some changes to the still-to-launch Glasgow Medical Humanities Network website for Gavin Miller. I made some minor tweaks, such as adding in the Twitter feed and links to subscribe to the blog, and updated the site text on pages that are not part of the WordPress interface. Gavin also wanted me to grab a copy of all the blogs on another of his sites (http://mhrc.academicblogs.co.uk/) and migrate this to the new site. However, getting access to this site has proved to be tricky. Gavin reckoned the domain was set up by UoG, but I submitted a Helpdesk query about it and no-one in IT knows anything about the site. Eventually someone in the Web Team got back to me to say that the site had been set up by someone in Research Strategy and Innovation and they’d try to get me access, but despite the best efforts of a number of people I spoke to I haven’t managed to get access yet. Hopefully next week, though.
Also this week I continued to work on the 18th Century Borrowing site for Matthew Sangster. I have now fixed the issue with landscape zoomable images being displayed on their side, as demonstrated in last week’s post. All zoomable images should now display properly, although there are a few missing images at the start or end of the registers. I also developed all of the ‘browse’ options for the site. It’s now possible to browse a list of all student borrower names. This page displays a list of all initial letters of the surnames, with a count of the number of students with surnames beginning with each letter. Clicking on a letter displays a list of all students with surnames beginning with that letter, and a count of the number of records associated with each student. Clicking on a student brings up the results page, which lists all of the associated records in a tabular format. This is pretty much identical to the tabular view offered when looking at a page, only the records can come from any page. As such there is an additional column displaying the register and page number of each record, and clicking on this takes you to the page view, so you can see the record in context and view the record in the zoomable image if you so wish. There are links back to the results page, and also links back from the results page to the student page. Here’s an example of the list of students with surnames beginning with ‘C’:
The ‘browse professors’ page does something similar, only all professors are listed on one page rather than being split into different pages for each initial letter of the surname, as there are far fewer professors. Note that there are some issues with the data, which is why we have professors listed with names like ‘, &’. There are what look like duplicates listed as separate professors (e.g. ‘Traill, Dr’) because the surname and / or title fields must have contained additional spaces or carriage returns, so the scripts considered the contents to be different. Clicking on a professor loads the results page in the same way as the students page. Note that currently there is no pagination of results, so for example clicking on ‘Mr Anderson’ will display all 1034 associated records in one long table. I might split this up, although in these days of touchscreens people tend to prefer scrolling through long pages rather than clicking links to browse through multiple smaller pages.
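The apparent duplicates almost certainly come from stray whitespace, so the kind of normalisation that would merge them can be sketched like this (a hypothetical helper, not something currently in the site):

```javascript
// Collapse carriage returns, newlines and runs of spaces so that
// 'Traill,  Dr' and 'Traill, Dr\r\n' compare as the same professor.
function normaliseName(raw) {
  return raw.replace(/[\r\n]+/g, ' ')   // strip carriage returns / newlines
            .replace(/\s+/g, ' ')       // collapse runs of whitespace
            .trim();
}
```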
‘Browse Classes’ does the same for classes. I also created two new related tables to hold details of the classes, which enables me to pass a numerical ‘class ID’ in the URL rather than the full class text, which is tidier and easier to control. Again, there are issues with the data that result in multiple entries for what is presumably the same class – e.g. ‘Anat, Anat., Anat:, Anato., Anatom, Anatomy’. Matthew is still working on the data and it might be that creating a ‘normalised’ text field for class is something that we should do.
‘Book Names’ does the same thing for book names. Again, I’ve written a script that extracts all of the unique book names and stores them once, allowing me to pass a ‘book name ID’ in the URL rather than the full text. As with ‘students’, an alphabetical list of book names is presented initially due to the number of different books. And as with other data types, a normalised book name should ideally be recorded as there are countless duplicates with slight variations here, making the browse feature pretty unhelpful as it currently stands. I’ve taken the same approach with book titles, although surprisingly there is less variation here, even though the titles are considerably longer. One thing to note is that any book with a title that doesn’t start with an a-z character is currently not included. There are several that start with ‘….’ and some with ‘[’ that are therefore omitted. This is because the initial letter is passed in the URL and for security reasons there are checks in place to stop characters other than a-z being passed. ‘Browse Authors’ works in the same way, and generally there don’t appear to be too many duplicate variants, although there are some (e.g. ‘Aeschylus’ and ‘Aeschylus.’), and finally, there is browse by lending date, which groups records by month of lending.
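The security check on the initial-letter parameter amounts to something like the following (an illustrative sketch of the validation rule, not the actual site code):

```javascript
// Only a single a-z character is accepted as the initial-letter URL
// parameter, so titles beginning with '…' or '[' are never reachable
// via the browse URLs.
function isValidInitial(letter) {
  return /^[a-z]$/i.test(letter);
}
```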
Also this week I added a new section to Bryony Randall’s New Modernist Editing site for her AHRC follow-on funding project: https://newmodernistediting.glasgow.ac.uk/the-imprints-of-the-new-modernist-editing/ and I spent a bit of time on DSL duties too. I responded to a long email from Rhona Alcorn about the data and scripts that Thomas Widmann had been working on before he left, and I looked at some bibliographical data that Ann Ferguson had sent me last week, investigating what the files contained and how the data might be used.
Next week I will continue to focus on the SCOSYA project and try to get the Experts Atlas finished.
I focussed on the SCOSYA project for the first few days of this week. I need to get everything ready to launch by the end of September and there is an awful lot still left to do, so this is really my priority at the moment. I’d noticed over the weekend that the story pane wasn’t scrolling properly on my iPad when the length of the slide was longer than the height of the atlas. In such cases the content was just getting cut off and you couldn’t scroll down to view the rest or press the navigation buttons. This was weird as I thought I’d fixed this issue before. I spent quite a bit of time on Monday investigating the issue, which has resulted in me having to rewrite a lot of the slide code. After much investigation I reckoned that this was an intermittent fault caused by the code returning a negative value for the height of the story pane instead of its real height. When the user presses the button to load a new slide the code pulls the HTML content of the slide in and immediately displays it. After that another part of the code then checks the height of the slide to see if the new contents make the area taller than the atlas, and if so the story area is then resized. The loading of the HTML using jQuery’s html() function should be ‘synchronous’ – i.e. the following parts of code should not execute before the loading of the HTML is completed. But sometimes this wasn’t the case – the new slide contents weren’t being displayed before the check for the new slide height was being run, meaning the slide height check was giving a negative value (no contents minus the padding round the slide). The slide contents then displayed but as the code thought the slide height was less than the atlas it was not resizing the slide, even when it needed to. It is a bit of a weird situation as according to the documentation it shouldn’t ever happen. 
I’ve had to put a short ‘timeout’ into the script as a work-around – after the slide loads the code waits for half a second before checking for the slide height and resizing, if necessary. This seems to be working but it’s still annoying to have to do this. I tested this out on my Android phone and on my desktop Windows PC with the browser set to a narrow height and all seemed to be working. However, when I got home I tested the updated site out on my iPad and it still wasn’t working, which was infuriating as it was working perfectly on other touchscreens.
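The work-around boils down to deferring the height check rather than running it immediately after the html() call. Roughly like this (the element IDs and the resize function are invented for illustration, not the actual SCOSYA code):

```javascript
// Decide whether the story pane's new contents overflow the atlas area.
function needsResize(slideHeight, atlasHeight) {
  return slideHeight > atlasHeight;
}

// Load the new slide's HTML, then wait half a second before measuring,
// as the content is sometimes not yet rendered when the check runs
// immediately (giving a negative height: no contents minus padding).
function loadSlide(slideHtml) {
  $('#story-pane').html(slideHtml);
  setTimeout(function () {
    if (needsResize($('#story-pane').height(), $('#atlas').height())) {
      resizeStoryPane($('#story-pane').height());
    }
  }, 500);
}
```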
In order to fix the issue I needed to entirely change how the story pane works. Previously the story pane was just an HTML area that I’d added to the page and then styled to position within the map, but there were clearly some conflicts with the mapping library Leaflet when using this approach. The story pane was positioned within the map area, and the mouse actions that Leaflet picks up (scrolling and clicking for zoom and pan) were interfering with regular mouse actions in the HTML story area (clicking on links, scrolling HTML areas). I realised that scrolling within the menu on the left of the map was working fine on the iPad, so I investigated how this differed from the story pane on the right. It turned out that the menu wasn’t just a plain HTML area but was instead created by a plugin for Leaflet that extends Leaflet’s ‘Control’ options (used for buttons like ‘+/-’ and the legend). Leaflet automatically prevents the map’s mouse actions from working within its control areas, which is why scrolling in the left-hand menu worked. I therefore created my own Leaflet plugin for the story pane, based on the menu plugin. Using this method to create the story area thankfully worked on my iPad, but it unfortunately took several hours to get things working, which was time I should ideally have been spending on the Experts interface. It needed to be done, though, as we could hardly launch an interface that didn’t work on iPads.
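The plugin essentially wraps the story pane in a Leaflet control so that Leaflet’s own event handling keeps map interactions out of it. A bare-bones sketch (the names are my own and the real plugin does considerably more; Leaflet is passed in as a parameter here so the sketch stands alone, whereas a real plugin would normally be defined directly on the global L object):

```javascript
// Factory for a minimal story-pane control. Leaflet suppresses map mouse
// events inside controls, so scrolling and clicking behave normally here.
function createStoryPaneControl(L) {
  return L.Control.extend({
    options: { position: 'topright' },

    onAdd: function (map) {
      var container = L.DomUtil.create('div', 'story-pane');
      // Stop clicks and scroll-wheel events from reaching the map,
      // so links and scrolling work inside the pane.
      L.DomEvent.disableClickPropagation(container);
      L.DomEvent.disableScrollPropagation(container);
      return container;
    }
  });
}
```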
I also had to spend some further time this week making more tweaks to the story interface that the team had suggested, such as changing the marker colour for the ‘Home’ maps, updating some of the explanatory text and changing the pop-up text on the ‘Home’ map to add in buttons linking through to the stories. The team also wanted to be able to have blank maps in the stories, to make users focus on the text in the story pane rather than getting confused by all of the markers. Having blank maps for a story slide wasn’t something the script was set up to expect, and although it sort of worked, if you navigated from a map with markers to a blank map and then back again the script would break, so I spent some time fixing this. I also managed to find a bit of time to start on the experts interface, although less than I had hoped. For this I’ve needed to take elements from the atlas I’d created for staff use, but adapt them to incorporate changes that I’d introduced for the public atlas. This has basically meant starting from scratch and introducing new features one by one. So far I have the basic ‘Home’ map showing locations and the menu working. There is still a lot left to do.
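The blank-map fix amounted to guarding the marker-handling code so that a slide with no marker data doesn’t break navigation. Something along these lines (entirely illustrative; the real script is more involved):

```javascript
// A slide is 'blank' when it has no marker data; the navigation code
// checks this before trying to clear or redraw the marker layer.
function isBlankSlide(slide) {
  return !slide.markers || slide.markers.length === 0;
}

// Remove the previous slide's marker layer, if one exists, and return
// null so the stored reference is reset. Navigating from a marker map
// to a blank map and back then works without trying to remove a layer
// that was never created.
function clearMarkerLayer(map, markerLayer) {
  if (markerLayer) {
    map.removeLayer(markerLayer);
  }
  return null;
}
```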
I spent the best part of two days this week working on the front-end for the 18th Century Borrowing pilot project for Matthew Sangster. I wrote a little document that detailed all of the features I was intending to develop and sent this to Matt so he could check whether what I was doing met his expectations. I spent the rest of the time working on the interface, and made some pretty good progress. So far I’ve made an initial interface for the website (which is just temporary, and any aspect of which can be changed as required), I’ve written scripts to generate the student forename / surname and professor title / surname columns to enable searching by surname, and I’ve created thumbnails of the images. The latter was a bit of a nightmare, as previously I’d batch rotated the images 90 degrees clockwise because the manuscripts (as far as I could tell) were written in landscape format but the digitised images were portrait, meaning everything was on its side.
However, I did this using the Windows image viewer, which gives the option of applying the rotation to all images in a folder. What I didn’t realise is that the image viewer doesn’t update the metadata embedded in the images, and this information is used by browsers to decide which way round to display the images. I ended up in a rather strange situation where the images looked perfect on my Windows PC, and also when opened directly in the browser, but when embedded in an HTML page they appeared on their side. It took a while to figure out why this was happening, but once I did I regenerated the thumbnails using the command-line ImageMagick tool instead, setting it to wipe the image metadata as well as rotate the images, which seemed to work. That is, until I realised that Manuscript 6 was written in portrait, not landscape, so I had to repeat the process, this time missing out Manuscript 6. I have since realised that all the batch processing of images I did to generate tiles for the zooming and panning interface is also now going to be wrong for all landscape images, and I’m going to have to redo all of this too.
Anyway, I also built the facility whereby a user can browse the pages of the manuscripts, enabling them to select a register, view the thumbnails of each page contained therein and then click through to view all of the records on the page. This ‘view records’ page has both a text and an image view. The former displays all of the information about each record on the page in a tabular manner, including links through to the GUL catalogue and the ESTC. The latter presents the image in a zoomable / pannable manner, but as mentioned earlier, the bloody image is on its side for any manuscript written in landscape format, and I still need to fix this, as the following screenshot demonstrates:
Also this week I spent a further bit of time preparing for my PDR session that I will be having next week, spoke to Wendy Anderson about updates to the SCOTS Corpus advanced search map that I need to fix, fixed an issue with the Medical Humanities Network website, made some further tweaks to the RNSN song stories and spoke to Ann Ferguson at the DSL about the bibliographical data that needs to be incorporated into the new APIs. Another pretty busy week, all things considered.