It was a four-day week this week due to the Queen’s funeral on Monday. I divided my time for the remaining four days over several projects. For Speak For Yersel I finally tackled the issue of the way maps are loaded. The system had been developed for a map to be loaded afresh every time data is requested, with any existing map destroyed in the process. This worked fine when the maps didn’t contain demographic filters, as generally each map only needed to be loaded once and then never changed until an entirely new map was needed (e.g. for the next survey question). However, I was then asked to incorporate demographic filters (age groups, gender, education level), with new data requested based on the option the user selected. This all went through the same map loading function, which still destroyed and reinitiated the entire map on each request. This worked, but wasn’t ideal: the map reset to its default view and zoom level whenever you changed an option, map tiles were reloaded from the server unnecessarily, and if the user was in ‘full screen’ mode they were booted out of it as the full-screen map no longer existed. For some time I’ve been meaning to redevelop this to address these issues, but I’ve held off as there were always other things to tackle and I was worried about essentially ripping apart the code and having to rebuild fundamental aspects of it. This week I finally plucked up the courage to delve into the code.
I created a test version of the site so as not to risk messing up the live version and managed to develop an updated method of loading the maps. This method initiates the map only once when a page is first loaded rather than destroying and regenerating the map every time a new question is loaded or demographic data is changed. This means the number of map tile loads is greatly reduced as the base map doesn’t change until the user zooms or pans. It also means the location and zoom level a user has left the map on stay the same when the data is changed. For example, if they’re interested in Glasgow and are zoomed in on it they can quickly flick between different demographic settings and the map will stay zoomed in on Glasgow rather than resetting each time. Also, if you’re viewing the map in full-screen mode you can now change the demographic settings without the resource exiting out of full-screen mode.
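The new approach boils down to a simple “initialise once, swap data layers” pattern. A minimal sketch of the idea is below, with hypothetical function names and the Leaflet map object abstracted away (the real site’s code obviously does far more):

```javascript
// Sketch of the "initialise once, swap data layers" pattern.
// Hypothetical names; the map factory stands in for Leaflet's L.map().
let map = null;        // created once per page load, then reused
let dataLayer = null;  // the only thing that changes per question/filter

function initMap(createMap) {
  if (map === null) {
    map = createMap(); // e.g. L.map('map').setView([56.5, -4.2], 7)
  }
  return map;          // zoom/pan state survives every data change
}

function showData(m, makeLayer) {
  if (dataLayer !== null) {
    m.removeLayer(dataLayer); // drop the old answers; base tiles stay put
  }
  dataLayer = makeLayer();    // e.g. L.geoJSON(newData)
  m.addLayer(dataLayer);
}
```

Because `initMap` only ever creates the map on the first call, changing a demographic filter just swaps the data layer, so the base tiles, viewport and full-screen state are untouched.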
All worked very well, with the only issue being that the transitions between survey questions and quiz questions weren’t as smooth as with the older method. Previously the map scrolled up and was then destroyed; a new map was then created and the data loaded into the area before it smoothly scrolled down again. For various technical reasons this no longer worked quite as well. The map area still scrolls up and down, but the new data only populates the map as the map area scrolls down, meaning for a brief second you can still see the data and legend for the previous question before it switches to the new data. However, I spent some further time investigating this issue and managed to fix it, with different fixes required for the survey and the quiz. I also noticed a bug whereby the map would increase in size to fit the available space but the map layers and data were not extending properly into the newly expanded area. This is a known issue with Leaflet maps that have their size changed dynamically and there’s actually a Leaflet function that sorts it – I just needed to call map.invalidateSize(); and the map worked properly again. Of course it took a bit of time to figure this simple fix out.
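The fix is a one-liner once you know it exists: Leaflet caches its container’s dimensions, so after resizing the map area you have to tell it to re-measure. A hedged sketch, assuming a hypothetical resize handler:

```javascript
// Leaflet caches the container size at initialisation. After the map
// area is resized dynamically, call invalidateSize() so Leaflet
// re-reads the dimensions and redraws tiles/layers into the new space.
function onMapAreaResized(map) {
  map.invalidateSize();
}
```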
I also made some further updates to the site. Based on feedback about the difficulty some people are having keeping track of which surveys they’ve done, I updated the site to log when a user completes a survey. Now when the user goes to the survey index page a count of the number of surveys they’ve completed is displayed in the top right and a green tick has been added to the button of each survey they have completed. Also, when they reach the ‘what next’ page for a survey a count of their completed surveys is also shown. This should make it much easier for people to track what they’ve done. I also made a few small tweaks to the data at the request of Jennifer, and created a new version of the animated GIF that has speech bubbles, as the bubble for Shetland needed its text changed. As I didn’t have the files available I took the opportunity to regenerate the GIF using a larger map, as the older version looked quite fuzzy on a high-definition screen like an iPad. I kept the region outlines on as well to tie it in better with our interactive maps. Also, the font used in the new version is now the ‘Baloo’ font we use for the site. I stored all of the individual frames both as images and as PowerPoint slides so I can change them if required. For future reference, I created the animated GIF using https://ezgif.com/maker with a delay setting of 150 between slides, crossfade on and a fader delay of 8.
Also this week I researched an issue with the Scots Thesaurus that was causing the site to fail to load. The WordPress options table had become corrupted and unreadable and needed to be replaced with a version from the backups, which thankfully fixed things. I also did my expenses from the DHC in Sheffield, which took longer than I thought it would, and made some further tweaks to the Kozeluch mini-site on the Burns C21 website. This included regenerating the data from a spreadsheet via a script I’d written and tweaking the introductory text. I also responded to a request from Fraser Dallachy to regenerate some data that a script I’d previously written had outputted. I also began writing a requirements document for the redevelopment of the place-names project front-ends to make them more ‘map first’.
I also did a bit more work for Speech Star, making some changes to the database of non-disordered speech and moving the ‘child speech error database’ to a new location. I also met with Luca to have a chat about the BOSLIT project, its data, the interface and future plans. We had a great chat and I then spent a lot of Friday thinking about the project and formulating some feedback that I sent in a lengthy email to Luca, Lorna Hughes and Kirsteen McCue on Friday afternoon.
I spent a bit of time this week going through my notes from the Digital Humanities Congress last week and writing last week’s lengthy post. I also had my PDR session on Friday and I needed to spend some time preparing for this, writing all of the necessary text and then attending the session. It was all very positive and it was a good opportunity to talk to my line manager about my role. I’ve been in this job for ten years this month and have been writing these blog posts every working week for those ten years, which I think is quite an achievement.
In terms of actual work on projects, it was rather a bitty week, with my time spread across lots of different projects. On Monday I had a Zoom call for the VariCS project, a phonetics project in collaboration with Strathclyde that I’m involved with. The project is just starting up and this was the first time the team had all met. We mainly discussed setting up a web presence for the project and I gave some advice on how we could set up the website, the URL and such things. In the coming weeks I’ll probably get something set up for the project.
I then moved onto another Burns-related mini-project that I worked on with Kirsteen McCue many months ago – a digital edition of Koželuch’s settings of Robert Burns’s Songs for George Thomson. We’re almost ready to launch this now and this week I created a page for an introductory essay, migrated a Word document to WordPress to fill the page, including adding in links and tweaking the layout to ensure things like quotes displayed properly. There are still some further tweaks that I’ll need to implement next week, but we’re almost there.
I also spent some time tweaking the Speak For Yersel website, which is now publicly accessible (https://speakforyersel.ac.uk/) but still not quite finished. I created a page for a video tour of the resource and made a few tweaks to the layout, such as checking the consistency of font sizes used throughout the site. I also made some updates to the site text and added some lengthy static content to the site in the form of a teachers’ FAQ and a ‘more information’ page. I also changed the order of some of the buttons shown after a survey is completed to hopefully make it clearer that other surveys are available.
I also did a bit of work for the Speech Star project. There had been some issues with the Central Scottish Phonetic Features MP4s playing audio only on some operating systems, and the replacements that Eleanor had generated worked for her but not for me. I therefore tried uploading them to and re-downloading them from YouTube, which thankfully seemed to fix the issue for everyone. I then made some tweaks to the interfaces of the two project websites. For the public site I made some updates to ensure the interface looked better on narrow screens, including changing the appearance of the ‘menu’ button and making the logo and site header font smaller so they take up less space. I also added an introductory video to the homepage.
For the Books and Borrowing project I processed the images for another library register. This didn’t go entirely smoothly. I had been sent 73 images and these were all upside down so needed rotating. It then transpired that I should have been sent 273 images so needed to chase up the missing ones. Once I’d been sent the full set I was then able to generate the page images for the register, upload the images and associate them with the records.
I then moved on to setting up the front-end for the Ayr Place-names website. In the process of doing so I became aware that one of the NLS map layers that all of our place-name projects use had stopped working. It turned out that the NLS had migrated this map layer to a third-party map tile service (https://www.maptiler.com/nls/) and the old URLs these sites were still using no longer worked. I had a very helpful chat with Chris Fleet at NLS Maps about this and he explained the situation. I was able to set up a free account with the MapTiler service and update the URLs in the four place-names websites that referenced the layer (https://berwickshire-placenames.glasgow.ac.uk/, https://kcb-placenames.glasgow.ac.uk/, https://ayr-placenames.glasgow.ac.uk and https://comparative-kingship.glasgow.ac.uk/scotland/). I’ll need to ensure this is also done for the two further place-names projects that are still in development (https://mull-ulva-placenames.glasgow.ac.uk and https://iona-placenames.glasgow.ac.uk/).
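For reference, the change amounts to pointing the Leaflet tile layer at a MapTiler-hosted URL template instead of the retired NLS one. The sketch below uses placeholder values throughout – the actual tile path comes from the MapTiler NLS service and the key from each site’s free account:

```javascript
// Hypothetical sketch of the URL swap. Both arguments are placeholders:
// the real tile path is given by the MapTiler NLS service, and the key
// comes from a (free) MapTiler account.
function maptilerNlsUrl(tilePath, key) {
  return `https://api.maptiler.com/tiles/${tilePath}/{z}/{x}/{y}.jpg?key=${key}`;
}
// In Leaflet, roughly:
// L.tileLayer(maptilerNlsUrl('<nls-layer-id>', '<your-key>')).addTo(map);
```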
I managed to complete the work on the front-end for the Ayr project, which was mostly straightforward as it was just adapting what I’d previously developed for other projects. The thing that took the longest was getting the parish data and the locations where the parish three-letter acronyms should appear, but I was able to get this working thanks to the notes I’d made the last time I needed to deal with parish boundaries (as documented here: https://digital-humanities.glasgow.ac.uk/2021-07-05/). After discussions with Thomas Clancy about the front-end I decided that it would be a good idea to redevelop the map-based interface to display all of the data on the map by default and to incorporate all of the search and browse options within the map itself. This would be a big change, and it’s one I had been thinking of implementing anyway for the Iona project, but I’ll try and find some time to work on this for all of the place-name sites over the coming months.
Finally, I had a chat with Kirsteen McCue and Luca Guariento about the BOSLIT project. This project is taking the existing data for the Bibliography of Scottish Literature in Translation (available on the NLS website here: https://data.nls.uk/data/metadata-collections/boslit/) and creating a new resource from it, including visualisations. I offered to help out with this and will be meeting with Luca to discuss things further, probably next week.
I attended the Digital Humanities Congress in Sheffield this week (https://www.dhi.ac.uk/dhc2022/), which meant travelling down on the Wednesday and attending the event on the Thursday and Friday (there were also some sessions on Saturday but I was unable to stay for those). It was an excellent conference featuring some great speakers and plenty of exciting research and I’ll give an overview of the sessions I attended here. The event kicked off with a plenary session by Marc Alexander, who as always was an insightful and entertaining speaker. His talk was about the analysis of meaning at different scales, using the Hansard corpus as his main example and thinking about looking at the data from a distance (macro), close up (micro), but also the stuff in the middle that often gets overlooked, which he called the meso. The Hansard corpus is a record of what has been said in parliament; it began in 1803 and currently runs up to 2003/5 and consists of 7.6 million speeches and 1.6 billion words, all of which have been tagged for part of speech and also semantics, and it can be accessed at https://www.english-corpora.org/hansard/. Marc pointed out that the corpus is not a linguistic transcript as it can be a summary rather than the exact words – it’s not verbatim but substantially so and doesn’t include things like interruptions and hesitations. The corpus was semantically tagged using the SAMUELS tagger, which annotates the texts using data from the Historical Thesaurus.
Marc gave some examples of analysis at different scales. For micro analysis he looked at the use of ‘draconian’ and how this word does not appear much in the corpus until the 1970s. He stated that we can use word vectors and collocates at this level of analysis, for example looking at the collocates of ‘privatisation’ after the 1970s, showing that the words that appear most frequently are things like rail, electricity, British etc., but there are also words such as ‘proceeds’, ‘proposals’, ‘opposed’ and ‘botched’. Marc pointed out that ‘botched’ is a word we mostly all know but would not use ourselves. This is where semantic collocates come in useful – grouping words by their meaning and being able to search for the meanings of words rather than individual forms. For example, it’s possible to look at speeches by women MPs in the 1990s and find the most common concepts they spoke about, which were things like ‘child’, ‘mother’ and ‘parent’. Male MPs on the other hand talked about things like ‘peace treaties’ and ‘weapons’. Hansard doesn’t explicitly state the sex of the person, so this is based on the titles that are used.
At the Macro level Marc discussed words for ‘uncivilised’ and 2046 references to ‘uncivil’. At different periods in time there are different numbers of terms available to mean this concept. The number of words that are available for a concept can show how significant a concept is at different time periods. It’s possible with Hansard to identify places and also whether a term is used to refer to the past or present, so we can see what places appear near an ‘uncivilised’ term (Ireland, Northern Ireland, India, Russia and Scotland most often). Also in the past ‘uncivilised’ was more likely to be used to refer to some past time whereas in more recent years it tends to be used to refer to the present.
Marc then discussed some of the limitations of the Hansard corpus. It is not homogenous but is discontinuous and messy. It’s also a lot bigger in recent times than historically – 30 million words a year now but much less in the past. Also until 1892 it was written in the third person.
Marc then discussed the ‘meso’ level. He discussed how the corpus was tagged by meaning using a hierarchical system with just 26 categories at the top level, so it’s possible to aggregate results. We can use this to find which concepts are discussed the least often in Hansard, such as the supernatural, textiles and clothing, and plants. We can also compare this with other semantically tagged corpora such as SEEBO and compare the distributions of concepts. There is a similar distribution but a different order. Marc discussed concepts that are ‘semantic scaffolds’ vs ‘semantic content’. He concluded by discussing sortable tables and how we tend to focus on the stuff at the top and the bottom and ignore the middle, but that it is here that some of the important things may reside.
The second session I attended featured four short papers. The first discussed linked ancient world data and research into the creation of and use of this data. Linked data is of course RDF triples, consisting of a subject, predicate and object, for example ‘Medea was written by Euripides’. It’s a means of modelling complex relationships and aiding disambiguation by linking using URIs (e.g. to make it clear we’re talking about ‘Medea’ the play rather than a person). However, there are barriers for use. There are lots of authority files and vocabularies and some modelling approaches are incomplete. Also, modelling uncertainty can be difficult and there is a reliance on external resources. The speaker discussed LOUD data (Linked, Open, Usable Data) and conducted a survey of linked data use in ancient world studies, consisting of 212 participants and 16 in-depth interviews. The speaker came up with five principles:
Transparency (openness, availability of export options, documentation)
Extensibility (current and future integration based on existing infrastructure)
Intuitiveness (making it easy for users to do what they need to do)
Reliability (the tool / data does what it says it does consistently – this is a particular problem as SPARQL endpoints for RDF data can become unreachable as servers get overloaded)
Sustainability (continued functionality of the resource)
The speaker concluded by stating that human factors are also important in the use of the data, such as collaboration and training, and also building and maintaining a community that can lead to new collaborations and sustainability.
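As a concrete illustration of the triple from the first paper, ‘Medea was written by Euripides’ might look like this in Turtle, with invented example URIs making clear we mean the play rather than a person:

```turtle
# Illustrative only – these URIs are made up; a real dataset would use
# established authority files rather than example.org identifiers.
@prefix ex: <http://example.org/> .

ex:Medea_play  ex:writtenBy  ex:Euripides .
```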
The second speaker discussed mimesis and the importance of female characters in Dutch literary fiction, comparing female characters in novels from the 1960s with those from the 2010s to see if literature is reflecting changes in society (mimesis). The speaker developed software to enable the automatic extraction of social networks from literary texts and wanted to investigate how each character’s social network changed in novels as the role of women changed in society. The idea was that female characters would be stronger and more central in the second period. The data used a corpus of 170 Dutch novels from 2013 and 152 Dutch novels from the 1960s. Demographic information on 2136 characters was manually compiled and a comprehensive network analysis was semi-automatically generated, with character identification and gender resolution based on first names and pronouns. Centrality scores were computed from the network diagrams to demonstrate how central a character was. The results show that the data for the two time periods was the same on various metrics, with a 60/40 split of male to female characters in both periods. The speaker referred to this as ‘the golden mean of patriarchy’, where there are two male characters for every female one. The speaker stated that only one metric had a statistically significant result and that was network centrality, which for all characters regardless of gender increased between the time periods. The speaker stated that this was due to a broader cultural trend towards more ‘relational’ novels with a greater focus on relationships.
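As a toy illustration of the kind of centrality score involved – plain degree centrality here; the study’s actual measures may well differ – given edges linking characters that interact in a novel:

```javascript
// Degree centrality: a node's degree divided by (n - 1), the maximum
// possible degree in a graph with n nodes. Edges are [a, b] pairs of
// character names.
function degreeCentrality(edges) {
  const degree = {};
  for (const [a, b] of edges) {
    degree[a] = (degree[a] || 0) + 1;
    degree[b] = (degree[b] || 0) + 1;
  }
  const n = Object.keys(degree).length;
  const scores = {};
  for (const node of Object.keys(degree)) {
    scores[node] = degree[node] / (n - 1);
  }
  return scores; // higher score = more central character
}
```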
The third speaker discussed a quantitative analysis of digital scholarly editions carried out as part of the ‘C21 Editions’ project. The research engaged with 50 scholars who have produced digital editions and produced a white paper on the state of the art of digital editions, and also generated a visualisation of a catalogue of digital editions. The research brought together two existing catalogues of digital editions: one site (https://v3.digitale-edition.de/) contains 714 digital editions whereas the other (https://dig-ed-cat.acdh.oeaw.ac.at/) contains 316.
The fourth speaker presented the results of a case study of big data using a learner corpus. The speaker pointed out that language-based research is changing due to the scale of the data, for example in digital communication such as Twitter, the digitisation of information such as Google Books and the capabilities of analytical tools such as Python and R. The speaker used a corpus of essays written by non-native English speakers as they were learning English. It contains more than 1 million texts by more than 100,000 learners from more than 100 countries, with many proficiency levels. The speaker was interested in lexical diversity in different tasks. He created a sub-corpus as only 20% of nationalities have more than 100 learners. He also had to strip out non-English text and remove duplicate texts. He then used topic modelling to identify texts that were about the same topic, picking out keywords such as cities, weather and sports. The corpus is available here: https://corpus.mml.cam.ac.uk/
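Lexical diversity is often measured as a simple type/token ratio (distinct words over total words); the talk didn’t specify the exact metric used, so the sketch below is purely illustrative:

```javascript
// Type/token ratio: one simple, common measure of lexical diversity.
// A crude regex tokeniser is used here purely for illustration.
function lexicalDiversity(text) {
  const tokens = text.toLowerCase().match(/[a-z']+/g) || [];
  if (tokens.length === 0) return 0;
  const types = new Set(tokens); // distinct word forms
  return types.size / tokens.length;
}
```

A repetitive text scores low (many tokens, few types) while a varied one scores close to 1.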
After the break I attended a session about mapping and GIS, consisting of three speakers. The first was about the production of a ‘deep map’ of the European Grand Tour, looking specifically at the islands of Sicily and Cyprus and the identity of places mentioned in the tours. These were tours by mostly Northern European aristocrats to Southern European countries beginning at the end of the 17th Century. Richard Lassels’ Voyage of Italy in 1670 was one of the first. Surviving data the project analysed included diaries and letters full of details of places and the reasons for the visit, which might have been to learn about art or music, observe the political systems, visit universities and see the traces of ancient cultures. The data included descriptions of places and of routes taken, people met, plus opinions and emotions. The speaker stated that the subjectivity in the data is an area previously neglected but is important in shaping the identity of a place. The speaker stated that deep mapping (see for example http://wp.lancs.ac.uk/lakesdeepmap/the-project/gis-deep-mapping/) incorporates all of this in a holistic approach. The speaker’s project was interested in creating deep maps of Sicily and Cyprus to look at the development of a European identity forged by Northern Europeans visiting Southern Europe – what did the travellers bring back? And what influence did they leave behind? Sicily and Cyprus were chosen because they were less visited and are both islands with significant Greek and Roman histories. They also had a different political situation at the time, with Cyprus under the control of the Ottoman empire. The speaker discussed the project’s methodology, consisting of the selection of documents (18th century diaries of travellers interested in Classical times), looking at discussions of architecture, churches, food and accommodation. Adjectives were coded and the data was plotted using ArcGIS.
Itineraries were plotted on a map, with different coloured lines showing routes and places marked. Eventually the project will produce an interactive web-based map but for now it just runs in ArcGIS.
The second paper in the session discussed using GIS to illustrate and understand the influence of St Æthelthryth of Ely, a 7th century saint whose cult was one of the most enduring of the Middle Ages. The speaker was interested in plotting the geographical reach of the cult, looking at why it lasted so long, what its impact was and how DH tools could help with the research. The speaker stated that GIS tools have been increasingly used since the mid-2000s but are still not used much in medieval studies. The speaker created a GIS database consisting of 600 datapoints for things like texts, calendars, images and decorations and looked at how the cult expanded throughout the 10th and 11th centuries. This was due to reforming bishops arriving from France after the Viking pillages, bringing Benedictine rule. Local saints were used as role models and Ely was transformed. The speaker stated that one problem with using GIS for historical data is that time is not easy to represent. He created a sequence of maps to show the increases in land holding from 950 to 1066 and further development in the later middle ages as influence was moving to parish churches. He mapped parish churches that were dedicated to the saint or had images of her, showing the change in distribution over time. Clusters and patterns emerged showing four areas. The speaker plotted these in different layers that could be turned on and off, and also overlaid the Gough Map (one of the earliest maps of Britain – http://www.goughmap.org/map/) as a vector layer. He also overlaid the itineraries of early kings to show different routes, and possible pilgrimage routes emerged.
The final paper looked at plotting the history of the holocaust through holocaust literature and mapping, looking to narrate history through topography, noting the time and place of events specifically in the Warsaw ghetto and creating an atlas of holocaust literature (https://nplp.pl/kolekcja/atlas-zaglady/). The resource consists of three interconnected modules focussing on places, people and events, with data taken from the diaries of 17 notable individuals, such as the composer who was the subject of the film ‘The Pianist’. The resource features maps of the ghetto with areas highlighted depending on the data the user selects. Places are sometimes vague – they can be an exact address, a street or an area. There were also major mapping challenges, as modern Warsaw is completely different to how it was during the war and the boundaries of the ghetto changed massively during the course of the war. The memoirs also sometimes gave false addresses, such as intersections of streets that never crossed. At the moment the resource is still a pilot, which took a year to develop, but it will be broadened out (with about 10 times more data) to include memoirs written after the events and translations into English.
The final session of the day was another plenary, given by the CEO of ‘In the room’ (see https://hereintheroom.com/) who discussed the fascinating resource the company has created. It presents interactive video encounters using AI to enable users to ask spoken questions and for the system to pick out and play video clips that closely match the user’s topic. It began as a project at the National Holocaust Centre and Museum near Nottingham, which organised events where school children could meet survivors of the holocaust. The question arose as to how to keep these encounters going after the last survivors are no longer with us. The initial project recorded hundreds of answers to questions with videos responding to actual questions by users. Users reacted as if they were encountering a real person. The company was then set up to make a web-enabled version of the tool and to make it scalable. The tool responds to the ‘power of parasocial relationships’ (one sided relationships) and the desire for personalised experiences.
This led to the development of conversational encounters with 11 famous people, where the user can ask questions by voice and the AI matches the intent and plays the appropriate video clips. One output was an interview with Nile Rodgers in collaboration with the National Portrait Gallery to create a new type of interactive portrait experience. Nile answered about 350 questions over two days and the result (see the link above) was a big success, with fans reporting feeling nervous when engaging with the resource. There are also other possible uses for the technology in education, retail and healthcare (for example a database of 12,000 answers to questions about mental health).
The system can suggest follow-up questions to help the educational learning experience and when analysing a question the AI uses confidence levels. If the level is too low then a default response is presented. The system can work in different languages, with the company currently working with Munich holocaust survivors. The company is also working with universities to broaden access to lecturers. Students felt they got to know the lecturer better as if really interacting with them. A user survey suggested that 91% of 18-23 year olds believed the tool would be useful for learning. As a tool it can help to immediately identify which content is relevant and as it is asynchronous the answers can be found at any time. The speaker stated that conversational AI is growing and is not going away – audiences will increasingly expect such interactions in their lives.
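The confidence-threshold behaviour described above might look something like this – the threshold value, field names and fallback string are all invented for illustration:

```javascript
// Hypothetical sketch: if the intent-matcher's confidence in its best
// clip falls below a threshold, fall back to a default response rather
// than risk playing an irrelevant clip.
function pickClip(match, threshold = 0.6) {
  return match.confidence >= threshold ? match.clip : 'default-response';
}
```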
The second day began with another parallel session. The first speaker in the session I chose discussed how a ‘National Collection’ could be located through audience research. The speaker discussed a project that used geographical information such as where objects were made, where they reside and the places they depict or describe, and brought all this together on maps of locations where participants live. Data was taken from GLAMs (Galleries, Libraries, Archives, Museums) and the Historic Environment sector and the project looked at how connections could be drawn between these. The project looked at a user-centred approach when creating a map interface – looking at the importance of local identity in understanding audience motivations. The project conducted quantitative research (a user survey) and qualitative research (focus groups) and devised ‘pretotypes’ as focus group stimulus.
The idea was to create a map of local cultural heritage objects similar to https://astreetnearyou.org, which displays war records related to a local neighbourhood. Objects that might have no interest to a person become interesting due to their location in their neighbourhood. The system created was based on the Pelagios methodology (https://pelagios.org/) and used a tool called Locolligo (https://github.com/docuracy/Locolligo) to convert CSV data into JSON-LD.
The second speaker was a developer of DH resources who has worked at Sheffield’s DHI for 20 years. His talk discussed how best to manage the technical aspects of DH projects in future. He pointed out that his main task as a developer is to get data online for the public to use and that the interfaces are essentially the same. He pointed out that these days we mostly get our online content through ‘platforms’ such as Twitter, TikTok and Instagram. There has been much ‘web consolidation’ away from individual websites to these platforms. However, this hasn’t happened in DH, which is still very much about individual websites, discrete interfaces and individual voices. But this leads to a problem with maintenance of the resources. The speaker mentioned the AHDS service that used to be a repository for Arts and Humanities data, but this closed in 2008. The speaker also talked about FAIR data (Findable, Accessible, Interoperable, Reusable) and how depositing data in an institutional repository doesn’t really fit into this. Generally data is just a dataset. Project websites generally contain a lot of static ancillary pages and these can be migrated to a modern CMS such as WordPress, but what about the record-level data? Generally all DH websites have a search form, search results and records. The underlying structure is generally the same too – a data store, a back end and a front end. These days at DHI the front end is often built using React.js or Angular, with Elasticsearch as the data store and Symfony as the back end. The speaker is interested in how to automatically generate a DH interface for a project’s data, such as generating the search form by indexing the data. The generated front-end can then be customised but the data should need minimal interpretation by the front-end.
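The idea of generating a search form by indexing the data could be sketched like this – a toy version with invented names; a real system would be far more involved:

```javascript
// Toy sketch: derive search-form fields automatically from the records
// themselves. Field names are just whatever the dataset contains;
// numeric fields become range filters, everything else free-text search.
function inferSearchFields(records) {
  const fields = {};
  for (const rec of records) {
    for (const [name, value] of Object.entries(rec)) {
      fields[name] = fields[name] ||
        (typeof value === 'number' ? 'range' : 'text');
    }
  }
  return fields;
}
```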
The third speaker in the session discussed exploratory notebooks for cultural heritage datasets, specifically the Jupyter notebooks used with datasets at the NLS, which can be found here: https://data.nls.uk/tools/jupyter-notebooks/. The speaker stated that the NLS aims to have digitised a third of its 31 million objects by 2025 and has developed a data foundry to make data available to researchers. Data have to be open, transparent (i.e. including provenance) and practical (i.e. in usable file formats). The Jupyter notebooks allow people to explore and analyse the data without requiring any coding ability. Collections can be accessed as data and there are tutorials on things like text analysis. The notebooks use Python and the NLTK (https://www.nltk.org/), and the data has been cleaned and standardised and is available in various forms, such as lemmatised, normalised and stemmed versions. The notebooks allow for data analysis, summaries and statistics, such as lexical diversity in the novels of Lewis Grassic Gibbon over time. The notebooks launched in September 2020 and can also be run in the online service Binder (https://mybinder.org/).
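Lexical diversity, the measure mentioned above, is simply the ratio of unique tokens to total tokens. As a rough illustration, using naive whitespace tokenisation rather than the NLTK tokenisers the NLS notebooks actually use:

```python
def lexical_diversity(text):
    """Type-token ratio: unique tokens divided by total tokens."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# 'the' appears twice in nine tokens, so 8 unique / 9 total
sample = "the quick brown fox jumps over the lazy dog"
print(round(lexical_diversity(sample), 2))  # 0.89
```

In practice this would be computed per novel or per chapter and plotted over time, as in the Grassic Gibbon example.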
After the break there was another parallel session, and the one I attended mostly focussed on crowdsourcing. The first talk was given remotely by a speaker based in Belgium and discussed the Europeana photography collection, which currently holds many millions of items and includes numerous virtual exhibitions, for example one on migration (https://www.europeana.eu/en/collections/topic/128-migration). This allows you to share your own migration story, including adding family photos. The photo collection’s search options include a visual similarity search that uses AI to perform pattern matching, though with mixed results so far. Users can also create their own galleries, and the project organised a ‘subtitle-a-thon’ which encouraged users to create subtitles for videos in their own languages. There is also a related project, Citizen Heritage (https://www.citizenheritage.eu/), to engage with people.
The second speaker discussed ‘computer vision and the history of printing’, covering the amazing work of the Visual Geometry Group at the University of Oxford (https://www.robots.ox.ac.uk/~vgg/). The speaker described a ‘computer vision pipeline’ through which images were extracted from a corpus and clustered by similarity and uniqueness. The first step was to extract illustrations from pages of text using an object detection model. This used the EfficientDet object detector (https://towardsdatascience.com/a-thorough-breakdown-of-efficientdet-for-object-detection-dc6a15788b73), which was trained on the Microsoft Common Objects in Context (COCO) dataset, which has labelled objects for 328,000 images. Some 3,609 illustrated pages were extracted, although there were some false positives, such as bleed-through, printers’ marks and turned-up pages. Images were then passed through image segmentation, where every pixel was annotated to identify text blocks, initials etc. The segmentation model used was Mask R-CNN (https://github.com/matterport/Mask_RCNN), and a study of image pretraining for historical document image analysis can be found here: https://arxiv.org/abs/1905.09113.
The speaker discussed image matching versus image classification, and the VGG Image Search Engine (VISE, https://www.robots.ox.ac.uk/~vgg/software/vise/), which performs image matching and can search for and identify geometric features, matching them regardless of rotation and skewing (though it breaks with flipping or warping).
All of this was used to perform a visual analysis of chapbooks printed in Scotland to identify illustrations that are ‘the same’. There is variation in printing, corrections in pen and so on, so the threshold for ‘the same’ depends on the purpose. The speaker noted that image classification using deep learning is a different task – it can be used to differentiate images of cats and dogs, for example.
The final speaker in the session followed on very nicely from the previous speaker, as his research used many of the same tools. This project was looking at image recognition using images from the Protestant Reformation, to discover how and where illustrations were used by both Protestants and Catholics during the period, looking specifically at printing, counterfeiting and illustrations of Martin Luther. The speaker discussed his previous project, called Ornamento, which looked at 160,000 distinct editions – some 70 million pages – and extracted 5.7 million illustrations. It used Google Books and PDFs as source material, identified illustrations and their coordinates on the page, and then classified the illustrations, for example as borders, devices, head pieces or music. These were preprocessed and the results were put in a database, making it possible to say, for example, that a particular letter appeared in 86 books in five different places. There was also a nice comparison tool for comparing images, such as using a slider.
For the current project the researcher aimed to identify anonymous books by their use of illustrated letters. For example, the tool was able to identify 16 books that were all produced in the same workshop in Leipzig. The project looked at printing in the Holy Roman Empire from 1450 to 1600, covering only religious books with illustrations, so a much smaller scope than the previous project.
The final parallel session of the day had two speakers. The first discussed how historical text collections can be unlocked through the use of AI. This project looked at the 80 editions of the Encyclopaedia Britannica that have been digitised by the NLS, using machine learning to group similar articles and detect how articles have changed. The process included information extraction, the creation of ontologies and knowledge graphs, and deep transfer learning (see https://towardsdatascience.com/what-is-deep-transfer-learning-and-why-is-it-becoming-so-popular-91acdcc2717a). The plan was to detect, classify and extract all of the ‘terms’ in the data; terms could be either articles (1-2 paragraphs in length) or topics (several pages). The project used the defoe Python library (https://github.com/alan-turing-institute/defoe) to read the XML, ingest the text and perform NLP preprocessing. The system was set up to detect when articles and topics began and ended and to store page coordinates for these breaks, although headers changed over the editions, which made this trickier. The project then created an EB ontology and knowledge graph, which is available at https://francesnlp.github.io/EB-ontology/doc/index-en.html. The EB knowledge graph RDF then allows querying, such as taking ‘science’ as a node and seeing how it connects across all editions. The graph at the above URL contains the data from the first 8 editions.
The second paper discussed a crowdsourcing project called ‘Operation War Diary’, a collaboration between The National Archives, Zooniverse and the Imperial War Museum (https://www.operationwardiary.org/). The presenter had been tasked with working with the crowdsourced data in order to produce something from it, but the data was very messy. The paper discussed how to deal with uncertainty in crowdsourced data, looking at ontological, aleatory and epistemic uncertainty. The speaker discussed the difference between accuracy and precision – how a cluster of results can be precise (grouped closely together) but wrong. The researcher used OpenRefine (https://openrefine.org/) to work with the data and produce clusters of placenames, resulting in 26,910 clusters from 500,000 datapoints. She also looked at using ‘nearest neighbour’ matching and Levenshtein distance, but there were issues with false positives (e.g. ‘Trench A’ and ‘Trench B’ are only one character apart but are clearly not the same place). The researcher also discussed the outcomes of the crowdsourcing project, stating that only 10% of the 900,000 records were completed. Many pages were skipped, with people stopping at the ‘boring bits’. The speaker stated that ‘History will never be certain, but we can make it worse’, which I thought was a good quote. She suggested that crowdsourcing outputs should be weighted in favour of the volunteers who did the most. The speaker also pointed out that there are currently no available outputs from the project, which was hampered by being an early Zooniverse project before the tool was well established. During the discussion after the talk someone suggested that data could have different levels of acceptability, like food nutrition labels. It was also mentioned that representing uncertainty in visualisations is an important research area, and that visualisations can help identify anomalies.
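The ‘Trench A’/‘Trench B’ problem is easy to reproduce. Here is a hand-rolled Levenshtein distance in Python (OpenRefine has its own clustering implementations; this sketch simply illustrates why a small edit distance doesn’t imply the same place):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Only one edit apart, yet clearly different trenches: any clustering
# threshold of 1 or more would wrongly merge these placenames.
print(levenshtein("Trench A", "Trench B"))  # 1
```

This is why purely distance-based clustering of the diary placenames produced false positives and still needed human judgement.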
Another comment was that crowdsourcing doesn’t save money and time – managers and staff are still needed, and in many cases the work could be done better by a paid team in the same time. The important reason to choose crowdsourcing is to democratise data, not to save money.
The final session of the day was another plenary. This discussed different scales of analysis in digital projects and was given by the PI of the ‘Living with Machines’ project (https://livingwithmachines.ac.uk/). The speaker stated that English Literature has mostly focussed on close reading while DH has mostly looked at distant reading. She suggested that scalable reading was like archaeology – starting with an aerial photo at the large scale to observe patterns, then moving to excavation of a specific area, then iterating again. The speaker had previously worked on the Tudor Networks of Power project (https://tudornetworks.net/), which has a lovely high-level visualisation of the project’s data and dealt with around 130,000 letters. Next came the Networking Archives project, which doesn’t appear to be online but has some information here: https://networkingarchives.github.io/blog/about/. This project dealt with 450,000 letters. Then came ‘Living with Machines’, which is looking at even larger corpora. How to move through different scales of analysis is an interesting research question. The Tudor project used the Tudor State Papers from 1509 to 1603 and dealt with 130,000 letters and 20,000 people. The top-level interface facilitated discovery rather than analysis. The archive is dominated by a small number of important people, who can be identified via centrality and betweenness – the number of times a person’s node sits on the paths between others. When looking at the network for a person you can then compare it to people with similar network profiles, like a fingerprint. By doing so it is possible to identify one spy and then see whether others with a similar profile may also have been spies. But actual deduction requires close reading, so iteration is crucial. The speaker also mentioned the ‘meso scale’ – the data in the middle. The resource enables researchers to identify who was at the same location at the same time – people who never corresponded with each other but may have interacted in person.
The Networking Archives project used the Tudor State Papers but also brought in data from EMLO. The distribution was very similar, with one or two highly connected people and most people having only a few connections. The speaker discussed the impact of missing data. We can’t tell how much data is already missing from the archive, but we can tell what impact it might have by progressively removing more data from what we do have. Patterns in the data are surprisingly robust even when 60-70% of the data has been removed, and when removing different types of data such as folios or years. The speaker also discussed ‘ego networks’ that show the shared connections two people have – the people in the middle between two figures.
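The ‘ego network’ idea – finding the people in the middle between two figures – boils down to intersecting contact sets. A toy sketch with hypothetical correspondents (not the project’s actual data or code):

```python
from collections import defaultdict

def shared_connections(letters, a, b):
    """People who corresponded with both a and b: the 'middle' of an ego network."""
    contacts = defaultdict(set)
    for sender, recipient in letters:
        contacts[sender].add(recipient)
        contacts[recipient].add(sender)
    return (contacts[a] & contacts[b]) - {a, b}

# Hypothetical toy correspondence: (sender, recipient) pairs
letters = [("Cecil", "Walsingham"), ("Cecil", "Elizabeth"),
           ("Walsingham", "Elizabeth"), ("Cecil", "Drake")]
print(shared_connections(letters, "Cecil", "Walsingham"))  # {'Elizabeth'}
```

On 450,000 letters the same intersection logic applies, just over much larger contact sets.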
The speaker then discussed the ‘Living with Machines’ project, which is looking at the effects of mechanisation from 1780 to 1920, examining newspapers, maps, books, census records and journals. It is a big project, with 28 people in the project team. The project is looking at language model predictions using BERT (https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270) and using Word2Vec to cluster words into topics (https://www.tensorflow.org/tutorials/text/word2vec). One example looked at the use of the terms ‘man’, ‘machine’ and ‘slave’ to see where they are used interchangeably.
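Word2Vec’s notion of interchangeability rests on cosine similarity between word vectors. A minimal illustration with made-up three-dimensional vectors (real embeddings have hundreds of dimensions, and a library such as gensim or TensorFlow would normally handle this):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical toy embeddings for the terms mentioned in the talk
vec = {"man": [0.9, 0.1, 0.3], "machine": [0.8, 0.2, 0.4], "slave": [0.2, 0.9, 0.1]}
print(cosine_similarity(vec["man"], vec["machine"]))
print(cosine_similarity(vec["man"], vec["slave"]))
```

Words whose vectors point in similar directions in many contexts are candidates for being used interchangeably in the corpus.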
The speaker ended with a discussion of how many different types of data can come together for analysis. She mentioned a map reader system that could take a map, cut it into squares and then be trained to recognise rail infrastructure in the squares. Rail network data can then be analysed alongside census data to see how the proximity to stations affects the mean number of servants per household, which was fascinating to hear about.
And that was the end of the conference for me, as I was unable to attend the sessions on the Saturday. It was an excellent event, I learnt a great deal about new research and technologies and I’m really glad I was given the opportunity to attend.
Before travelling to the event on Wednesday this was just a regular week for me, so I’ll give a summary of what I worked on. For the SpeechStar project I updated the database of normative speech on the Seeing Speech version of the site to include the child speech videos, as previously these had only been added to the other site. I also changed all occurrences of ‘Normative’ to ‘non-disordered’ throughout the sites and added video playback speed options to the new Central Scottish phonetic features videos.
I also continued to process library registers and their images for the Books and Borrowing project. I processed three registers from the Royal High School, each of which required different amounts of processing and different methods. This included renaming images, adding in missing page records, creating entire new runs of page records, uploading hundreds of images and changing the order of certain page records. I also wrote a script to identify which page records still did not have associated image files after the upload, as each of the registers is missing some images.
For the Speak For Yersel project I arranged for more credit to be added to our mapping account in case we get lots of hits when the site goes live, and I made various hopefully final tweaks to text throughout the site. I also gave some advice to students working on the migration of the Scottish Theatre Studies journal, spoke to Thomas Clancy about further work on the Ayr Place-names project and fixed a minor issue with the DSL.
I divided my time between a number of different projects this week. For Speak For Yersel I replaced the ‘click’ transcripts with new versions that incorporate shorter segments and more highlighted words. As the segments were now different I also needed to delete all existing responses to the ‘click’ activity. I then completed the activity once for each speaker to test things out, and all seems to work fine with the new data. I also changed the pop-up ‘percentage clicks’ text to ‘% clicks occurred here’, which is more accurate than the previous text, which suggested it was the percentage of respondents. I also fixed an issue with the map height being too small on the ‘where do you think this speaker is from’ quiz and ensured the page scrolls to the correct place when a new question is loaded. I also removed the ‘tip’ text from the quiz intros and renamed the ‘where do you think this speaker is from’ map buttons on the map intro page. I’d also been asked to trim down the number of ‘translation’ questions in the ‘I would never say that’ activity, so I removed some of those. Finally, I changed and relocated the ‘heard in films and TV’ explanatory text and removed the question mark from the ‘where do you think the speaker is from’ quiz intro page.
Mary had encountered a glitch with the transcription popups, whereby the page would flicker and jump about when certain popups were hovered over. This was caused by the page height increasing to accommodate the pop-up, causing a scrollbar to appear in the browser; this changed the position of the cursor and made the pop-up disappear, which removed the scrollbar again and caused a glitchy loop. I increased the height of the page for this activity so the scrollbar issue is no longer encountered, and I also made the popups a bit wider so they don’t need to be as long. Mary also noticed that some of the ‘all over Scotland’ dynamically generated map answers seemed to be incorrect. After some investigation I realised that this was a bug that had been introduced when I added in the ‘I would never say that’ quizzes on Friday: a typo in the code meant that the fixed 60% threshold for correct answers in each region was being used rather than ‘100 divided by the number of answer options’. Thankfully, once identified this was easy to fix.
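The bug boils down to which threshold counts an answer as dominant in a region. A quick Python sketch of the logic (function and variable names are hypothetical; the site’s actual code differs):

```python
def majority_threshold(num_options):
    """An answer dominates a region if its share beats an even split of the options."""
    return 100 / num_options

# With four answer options, a 30% share can still be the top answer in a
# region, but the buggy fixed 60% cut-off would wrongly reject it.
share = 30
num_options = 4
print(share > majority_threshold(num_options))  # correct check: True
print(share > 60)                               # buggy check: False
```

With two options the two thresholds happen to be close (50% vs 60%), which is presumably why the typo went unnoticed until questions with more options were checked.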
I also participated in a Zoom call for the project this week to discuss the launch of the resource with the University’s media people. It was agreed that the launch will be pushed back to the beginning of October as this should be a good time to get publicity. Finally for the project this week I updated the structure of the site so that the ‘About’ menu item could become a drop-down menu, and I created placeholder pages for three new pages that will be added to this menu for things like FAQs.
I also continued to work on the Books and Borrowing project this week. On Friday last week I didn’t quite get to finish a script to merge page records for one of the St Andrews registers, as it needed further testing on my local PC before I ran it on the live data. I tackled this first thing on Monday, and it was a task I had hoped would only take half an hour or so. Unfortunately things did not go well and it took most of the morning to sort out. I initially attempted to run things on my local PC to test everything out, but forgot to update the database connection details. Usually this wouldn’t be an issue, as the databases I work with generally use ‘localhost’ as the connection URL, so the Stirling credentials would have been wrong for my local DB and the script would simply have quit. However, Stirling (where the system is hosted) uses a full URL instead of ‘localhost’, which meant that even though I had a local copy of the database on my PC, and the scripts were running on a local server set up on my PC, they were in fact connecting to the real database at Stirling and changing the live data. I didn’t realise this as the script was running and, as it was taking some time, I cancelled it, meaning the update quit halfway through changing borrowing records and deleting page records in the CMS.
I then had to write a further script to delete all of the page and borrowing records for this register from the Stirling server and reinstate the data from my local database. Thankfully this worked ok. I then ran my test script on the actual local database on my PC and the script did exactly what I wanted it to do, namely:
Iterate through the pages and, for each odd-numbered page, move its records to the preceding even-numbered page, at the same time regenerating the ‘page order’ for each record so they follow on from the existing records. The even page then needs its folio number updated to incorporate the odd number (e.g. folio number 2 becomes ‘2-3’) and an image reference generated based on this (e.g. UYLY207-2_2-3). Then the odd page record is deleted and, after all that is done, the ‘next’ and ‘previous’ page links for all pages are regenerated.
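Those steps can be sketched as follows. This is a simplified, hypothetical model of the data – the real script works directly on the CMS database – and, crucially, it assumes folio numbers start on an even number and increment by one each time:

```python
def merge_pages(pages, register="UYLY207-2"):
    """Merge each odd folio's records onto the preceding even folio.

    'pages' is a list of dicts like {"folio": 2, "records": [...]} -
    a simplified stand-in for the real CMS page and borrowing tables.
    Assumes folio numbers are consecutive integers starting on an even one.
    """
    merged = []
    for page in pages:
        if page["folio"] % 2 == 0:
            # Even folio: keep the page as the merge target
            merged.append({"folio": str(page["folio"]),
                           "records": list(page["records"])})
        else:
            # Odd folio: append its records (preserving page order) and
            # fold its number into the preceding even page's folio
            prev = merged[-1]
            prev["records"].extend(page["records"])
            prev["folio"] = f'{prev["folio"]}-{page["folio"]}'
        # Regenerate the image reference from the (possibly combined) folio
        merged[-1]["image"] = f'{register}_{merged[-1]["folio"]}'
    return merged

pages = [{"folio": 2, "records": ["r1"]}, {"folio": 3, "records": ["r2", "r3"]}]
print(merge_pages(pages))
```

The consecutive-numbering assumption is exactly what went wrong on the live data, as described below.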
This all worked, so I ran the script on the server and updated the live data. However, I then noticed that there are gaps in the folio numbers, and this messed everything up. For example, folio number 314 isn’t followed by 315 but by 320. As 320 isn’t an odd number it doesn’t get joined to 314, and all subsequent page joins are then thrown off. There are also two ‘350’ pages in the CMS and two images that reference 350: we have UYLY207-2_349-350 and also UYLY207-2_350-351. There might be other places where the data isn’t uniform too.
I therefore had to use my ‘delete and reinsert’ script again to revert to the data prior to the update, as my script wasn’t set up to work with pages that don’t simply increment their folio number by one each time. After some discussion with the RA I updated the script so that it would work with the non-uniform data, and thankfully all worked fine after that. Later in the week I also found time to process two further St Andrews registers that needed their pages and records merged, and thankfully these went much more smoothly.
I also worked on the Speech Star project this week. I created a new page on both of the project’s sites (which are not live yet) for viewing videos of Central Scottish phonetic features. I also replaced the temporary logos used on the sites with the finalised logos designed by a graphic designer. However, the new logo only really works well on a white background, as the white cut-out around the speech bubble in the star takes on the background colour of the header, and the blue we’re currently using for the site header doesn’t work so well with the logo colours. The graphic designer had also proposed using a different font for the site, so I decided to make a new interface, which you can see below. I’m still waiting for feedback to see whether the team prefer this to the old interface (a screenshot of which you can see on this page: https://digital-humanities.glasgow.ac.uk/2022-01-17/) but I personally think it looks a lot better.
I also returned to the Burns Manuscript database that I’d begun last week. I added a ‘view record’ icon to each row which, when pressed, opens a ‘card’ view of the record on its own page. I also added in the search options, which appear in a section above the table. By default the section is hidden, and you can show or hide it by pressing a button. Above this I’ve also added a placeholder where some introductory text can go. If you open the ‘Search options’ section you’ll find text boxes where you can enter text for year, content, properties and notes. For year you can enter either a specific year or a range. The other text fields are purely free-text at the moment, with no wildcards; I could add these in, but I think it would just complicate things unnecessarily. On the second row are checkboxes for type, location name and condition, and you can select one or more of each of these.
The search options are linked by AND, while the checkbox options are linked internally by OR. For example, filling in ‘1780-1783’ for year and ‘wrapper’ for properties will find all rows with a date between 1780 and 1783 that also have ‘wrapper’ somewhere in their properties. If you enter ‘work’ in content and select ‘Deed’ and ‘Fragment’ as types, you will find all rows that are of type either ‘Deed’ or ‘Fragment’ and have ‘work’ in their content.
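That AND-between-fields, OR-within-a-group logic can be sketched in Python (field names and data here are hypothetical; the real site queries its database directly):

```python
def matches(row, text_filters, checkbox_filters):
    """Text fields are ANDed (substring match); each checkbox group is
    ORed internally, then ANDed with everything else."""
    for field, term in text_filters.items():
        if term and term.lower() not in row.get(field, "").lower():
            return False
    for field, options in checkbox_filters.items():
        if options and row.get(field) not in options:
            return False
    return True

# Hypothetical rows loosely mirroring the manuscript table
rows = [
    {"content": "poetic work", "type": "Deed"},
    {"content": "letter draft", "type": "Fragment"},
    {"content": "poetic work", "type": "Wrapper"},
]
hits = [r for r in rows if matches(r, {"content": "work"}, {"type": {"Deed", "Fragment"}})]
print(hits)  # only the 'Deed' row matches both criteria
```

Leaving a field empty simply skips that criterion, which matches how the search form behaves when options are left blank.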
If a search option is entered and you press the ‘Search’ button the page will reload with the search options open, and the page will scroll down to this section. Any rows matching your criteria will be displayed in the table below this. You can also clear the search by pressing on the ‘Clear search options’ button. In addition, if you’re looking at search results and you press on the ‘view record’ button the ‘Return to table’ button on the ‘card’ view will reload the search results. That’s this mini-site completed now, pending feedback from the project team, and you can see a screenshot of the site with the search box open below:
Also this week I’d arranged an in-person coffee and catch up with the other College of Arts developers. We used to have these meetings regularly before Covid but this was the first time since then that we’d all met up. It was really great to chat with Luca Guariento, Stevie Barrett and David Wilson again and to share the work we’d been doing since we last met. Hopefully we can meet again soon.
Finally this week I helped out with a few WordPress questions from a couple of projects and I also had a chance to update all of the WordPress sites I manage (more than 50) to the most recent version.