This week was the second week of the UCU strike action, meaning I only worked on Thursday and Friday. Things were further complicated by the heavy snow, which meant the University was officially closed from Wednesday to Friday. However, I usually work from home on Thursdays anyway, so I just worked as I normally would. And on Friday I travelled into work without too much difficulty in order to participate in some meetings that had been scheduled.
I spent most of Thursday working on the REELS project, making tweaks to the database and content management system and working on the front end. I updated the ‘parts of speech’ list that’s used for elements, adding ‘definite article’ and ‘preposition’, and also added the full text alongside the abbreviations to avoid any confusion. Last week I added ‘unknown’ to the elements database, with ‘na’ for the language. Carole pointed out that ‘na’ was appearing as the language when ‘unknown’ was selected, which it really shouldn’t do, so I updated the CMS and the front end to ensure that this is hidden. I also wrote a blog post about the technical development of the front end. It hasn’t gone live yet, but once it has I’ll link through to it. I also updated the quick search so that it only searches current place-names, elements and grid references, and fixed the ‘altitude’ field in the advanced search so that you can enter more than four characters into it.
In addition to this I spent some of the day catching up with emails, and I also gave Megan Coyer detailed instructions on how to use Google Docs to perform OCR on an image-based PDF file. This is a handy trick to know and it works very well, even on older printed documents (so long as the print quality is reasonably good). Here’s how you go about it:
1. Go to Google Drive (https://drive.google.com) and drag and drop the PDF into it. At this stage the file is simply stored as a PDF.
2. Right-click the thumbnail of the PDF, select ‘Open with’ and choose Google Docs. Google converts the PDF into text, a process which can take a while depending on the size of the file.
3. You can then save the resulting document, download it as a Word file, and so on.
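The same trick can also be scripted. Below is a rough sketch using the Google Drive v3 API via the third-party google-api-python-client package: asking Drive to store an upload as a native Google Doc is what triggers the conversion (and, for image-only PDFs, the OCR pass). The function names here are my own illustrative choices, and you would need your own OAuth credentials to actually run the upload.

```python
# Hypothetical sketch of the browser-based OCR trick above, done through
# the Google Drive v3 API instead (requires the third-party
# google-api-python-client package and OAuth credentials).

# Requesting this MIME type for the uploaded file is what tells Drive to
# convert it to a Google Doc, which includes an OCR pass for scanned PDFs.
GDOC_MIME = "application/vnd.google-apps.document"


def gdoc_metadata(filename):
    """Metadata for files.create that requests conversion to a Google Doc."""
    return {"name": filename, "mimeType": GDOC_MIME}


def ocr_pdf_via_drive(service, pdf_path, language="en"):
    """Upload pdf_path, let Drive convert/OCR it, and return the new file id.

    `service` is an authenticated Drive v3 service object, e.g. from
    googleapiclient.discovery.build("drive", "v3", credentials=creds).
    """
    from googleapiclient.http import MediaFileUpload  # third-party dependency

    media = MediaFileUpload(pdf_path, mimetype="application/pdf")
    created = service.files().create(
        body=gdoc_metadata(pdf_path.rsplit("/", 1)[-1]),
        media_body=media,
        ocrLanguage=language,  # hint for the OCR engine
        fields="id",
    ).execute()
    return created["id"]
```

The resulting Google Doc can then be exported in Word format via the API’s `files.export` method, mirroring the manual ‘download as a Word file’ step.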
After trudging through the snow on Friday morning I managed to get into my office for 9am, and worked through until 5 without a lunch break as I had so much to try and do. At 10:30 I had a meeting with Jane Stuart-Smith and Eleanor Lawson about revamping the Seeing Speech website. I spent about an hour before this meeting going through the website and writing down a list of initial things I’d like to improve, and during our very useful two-hour meeting we went through this list, and discussed some other issues as well. It was all very helpful and I think we all have a good idea of how to proceed with the developments. Jane is going to try and apply for some funding to do the work, so it’s not something that will be tackled straight away, but I should be able to make good progress with it once I get the go-ahead.
I went straight from this meeting to another one with Marc and Fraser about updates to the Historical Thesaurus and work on the Linguistic DNA project. This was another long but useful meeting, lasting at least two hours. I can’t really go into much detail about what was discussed here, but I now have a clearer idea of what needs to be done for LDNA in order to get frequency data from the EEBO texts, and we have a bit of a roadmap for future Historical Thesaurus updates, which is good.
After these meetings I spent the rest of the day working on an updated ‘Storymap’ for Kirsteen’s RNSN project. This involved stitching together four images of sheet music to use as a ‘map’ for the story, updating the positions of all of the ‘pins’ so they appeared in the right places, updating the images used in the pop-ups, embedding some MP3 files in the pop-ups, and other such things. Previously I was using the ‘make a storymap’ tools found here: https://storymap.knightlab.com/, which meant all our data was stored on a Google server and referenced files on the Knightlab servers. This isn’t ideal for longevity: if anything changes at either Google or Knightlab, our feature breaks. Also, I wanted to be able to tweak the code and the data. For these reasons I instead downloaded the source code and added it to our server, and grabbed the JSON datafile generated by the ‘make a storymap’ tool and added this to our server too. This allowed me to update the JSON file to make an HTML5 audio player work in the pop-ups, and it will hopefully allow me to update the code to make images in the pop-ups clickable too.
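To give a flavour of the JSON edit: a slide’s text field in the exported datafile accepts HTML, so an HTML5 `<audio>` element can be dropped straight in. The fragment below is an illustrative sketch rather than the definitive StoryMapJS schema (field names may differ between exports, and the headline and MP3 path are placeholders of my own):

```json
{
  "text": {
    "headline": "Example song",
    "text": "<p>Some narrative about the song.</p><audio controls src=\"/rnsn/audio/example-song.mp3\">Your browser does not support HTML5 audio.</audio>"
  }
}
```

Because the quotes inside the embedded HTML must be escaped for JSON, it’s worth running the edited datafile through a JSON validator before deploying it.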