I attended the Digital Humanities Congress in Sheffield this week (https://www.dhi.ac.uk/dhc2022/), which meant travelling down on the Wednesday and attending the event on the Thursday and Friday (there were also some sessions on the Saturday but I was unable to stay for those). It was an excellent conference featuring some great speakers and plenty of exciting research, and I’ll give an overview of the sessions I attended here. The event kicked off with a plenary session by Marc Alexander, who as always was an insightful and entertaining speaker. His talk was about the analysis of meaning at different scales, using the Hansard corpus as his main example and considering the data from a distance (macro), close up (micro), and also the often overlooked stuff in the middle, which he called the meso. The Hansard corpus is a record of what has been said in parliament; it begins in 1803, currently runs up to 2003/5, and consists of 7.6 million speeches and 1.6 billion words, all of which have been tagged for part of speech and semantics. It can be accessed at https://www.english-corpora.org/hansard/. Marc pointed out that the corpus is not a linguistic transcript as it can be a summary rather than the exact words – it’s not verbatim, but substantially so, and doesn’t include things like interruptions and hesitations. The corpus was semantically tagged using the SAMUELS tagger, which annotates the texts using data from the Historical Thesaurus.
Marc gave some examples of analysis at different scales. For micro analysis he looked at the use of ‘draconian’ and how this word does not appear much in the corpus until the 1970s. We can use word vectors and collocates at this level of analysis, for example looking at the collocates of ‘privatisation’ after the 1970s: the words that appear most frequently are things like ‘rail’, ‘electricity’ and ‘British’, but there are also words such as ‘proceeds’, ‘proposals’, ‘opposed’ and ‘botched’. Marc pointed out that ‘botched’ is a word most of us know but would not use ourselves. This is where semantic collocates come in useful – grouping words by their meaning and being able to search for the meanings of words rather than individual forms. For example, it’s possible to look at speeches by women MPs in the 1990s and find the most common concepts they spoke about, which were things like ‘child’, ‘mother’ and ‘parent’. Male MPs, on the other hand, talked about things like ‘peace treaties’ and ‘weapons’. Hansard doesn’t explicitly state the sex of the speaker, so this is based on the titles that are used.
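As an aside, collocate extraction of this kind is simple to sketch. Here’s a minimal, purely illustrative Python example – the toy sentence and the ±5-token window are my own choices, not how the Hansard interface computes its figures:

```
# Minimal collocate counter: words appearing within a +/-5 token window of a
# target term, ranked by frequency. Toy illustration only - corpus interfaces
# like Hansard's use far more sophisticated association measures.
from collections import Counter

def collocates(tokens, target, window=5):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == target:
            context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            counts.update(w.lower() for w in context)
    return counts

tokens = "the botched privatisation of rail and the privatisation of electricity".split()
print(collocates(tokens, "privatisation").most_common(5))
```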
At the macro level Marc discussed words for ‘uncivilised’, with 2046 references to ‘uncivil’ terms in the corpus. At different periods in time there are different numbers of terms available to express this concept, and the number of words available for a concept can show how significant it was at a given time. It’s possible with Hansard to identify places and also whether a term is used to refer to the past or the present, so we can see which places appear near an ‘uncivilised’ term (Ireland, Northern Ireland, India, Russia and Scotland most often). Also, in the past ‘uncivilised’ was more likely to be used to refer to some earlier time, whereas in more recent years it tends to be used to refer to the present.
Marc then discussed some of the limitations of the Hansard corpus. It is not homogeneous but discontinuous and messy. It is also a lot bigger in recent times than historically – 30 million words a year now but much less in the past. Also, until 1892 it was written in the third person.
Marc then moved on to the ‘meso’ level. The corpus was tagged by meaning using a hierarchical system with just 26 categories at the top level, so it’s possible to aggregate results. We can use this to find which concepts are discussed least often in Hansard, such as the supernatural, textiles and clothing, and plants. We can also compare this with other semantically tagged corpora such as SEEBO and compare the distributions of concepts: the distribution is similar but the order differs. Marc discussed concepts that are ‘semantic scaffolds’ versus ‘semantic content’. He concluded by discussing sortable tables and how we tend to focus on the stuff at the top and the bottom and ignore the middle, but it is here that some of the important things may reside.
The second session I attended featured four short papers. The first discussed linked ancient world data and research into the creation and use of this data. Linked data is of course RDF triples, consisting of a subject, predicate and object, for example ‘Medea was written by Euripides’. It’s a means of modelling complex relationships and of disambiguating entities by linking through URIs (e.g. to make it clear we’re talking about ‘Medea’ the play rather than a person). However, there are barriers to use: there are lots of authority files and vocabularies, some modelling approaches are incomplete, modelling uncertainty can be difficult, and there is a reliance on external resources. The speaker discussed LOUD data (Linked, Open, Usable Data) and conducted a survey of linked data use in ancient world studies, consisting of 212 participants and 16 in-depth interviews. The speaker came up with five principles:
Transparency (openness, availability of export options, documentation); Extensibility (current and future integration based on existing infrastructure); Intuitiveness (making it easy for users to do what they need to do); Reliability (the tool / data does what it says it does consistently – this is a particular problem as SPARQL endpoints for RDF data can become unreachable as servers get overloaded); Sustainability (continued functionality of the resource).
The speaker concluded by stating that human factors are also important in the use of the data, such as collaboration and training, and also building and maintaining a community that can lead to new collaborations and sustainability.
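To make the ‘Medea was written by Euripides’ example above concrete, here’s how such a triple can be expressed in a few lines of Python using rdflib – the URIs are invented placeholders rather than real authority-file identifiers:

```
# Toy RDF example of the 'Medea was written by Euripides' triple using rdflib.
# The namespace and URIs are illustrative placeholders, not real authority files.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Medea_play, EX.writtenBy, EX.Euripides))

# Iterate over the graph and print each subject, predicate, object triple
for s, p, o in g:
    print(s, p, o)
```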
The second speaker discussed mimesis and the importance of female characters in Dutch literary fiction, comparing female characters in novels from the 1960s with those from the 2010s to see whether literature reflects changes in society (mimesis). The speaker developed software to enable the automatic extraction of social networks from literary texts and wanted to investigate how characters’ social networks changed in novels as the role of women changed in society. The expectation was that female characters would be stronger and more central in the second period. The data used a corpus of 170 Dutch novels from 2013 and 152 Dutch novels from the 1960s. Demographic information on 2136 characters was manually compiled and a comprehensive network analysis was semi-automatically generated, with character and gender resolution based on first names and pronouns. Centrality scores were computed from the network diagrams to demonstrate how central a character was. The results showed that the data for the two time periods was much the same on various metrics, with a 60/40 split of male to female characters in both periods. The speaker referred to this as ‘the golden mean of patriarchy’, where there are roughly two male characters for every female one. Only one metric had a statistically significant result and that was network centrality, which increased between the time periods for all characters regardless of gender. The speaker attributed this to a broader cultural trend towards more ‘relational’ novels with a greater focus on relationships.
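As a rough illustration of the kind of centrality measure involved (not the speaker’s actual software), here’s a toy character co-occurrence network in Python using networkx – the character names and links are invented:

```
# Build a small character co-occurrence network and compute degree centrality,
# the sort of score used to judge how 'central' a character is in a novel.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("Anna", "Jan"), ("Anna", "Piet"), ("Jan", "Piet"), ("Anna", "Marie")])

centrality = nx.degree_centrality(G)
# Rank characters from most to least central
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True))
```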
The third speaker discussed a quantitative analysis of digital scholarly editions carried out as part of the ‘C21 Editions’ project. The research engaged with 50 scholars who have produced digital editions, produced a white paper on the state of the art of digital editions, and generated a visualisation of a catalogue of digital editions. The research brought together two existing catalogues of digital editions: one site (https://v3.digitale-edition.de/) contains 714 digital editions while the other (https://dig-ed-cat.acdh.oeaw.ac.at/) contains 316.
The fourth speaker presented the results of a case study of big data using a learner corpus. The speaker pointed out that language-based research is changing due to the scale of the data, for example in digital communication such as Twitter, the digitisation of information such as Google Books, and the capabilities of analytical tools such as Python and R. The speaker used a corpus of essays written by non-native English speakers as they were learning English. It contains more than 1 million texts by more than 100,000 learners from more than 100 countries, across many proficiency levels. The speaker was interested in lexical diversity in different tasks. He created a sub-corpus, as only 20% of nationalities have more than 100 learners, and also had to strip out non-English text and remove duplicate texts. He then identified texts that were about the same topic using topic modelling, identifying keywords such as cities, weather and sports. The corpus is available here: https://corpus.mml.cam.ac.uk/
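As a sketch of what topic modelling over such essays might look like (not the speaker’s actual pipeline), here’s a toy LDA example in Python with scikit-learn – the three ‘essays’ are dummy strings:

```
# Fit a small LDA topic model over a bag-of-words matrix and list the top
# keywords per topic. Real learner-corpus topic modelling works over far
# larger collections; this just shows the mechanics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

essays = ["my city has many parks and museums",
          "the weather today is rainy and cold",
          "I play sports like football every weekend"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(essays)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    # print the three highest-weighted keywords for each topic
    print(i, [terms[j] for j in topic.argsort()[-3:]])
```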
After the break I attended a session about mapping and GIS, consisting of three speakers. The first was about the production of a ‘deep map’ of the European Grand Tour, looking specifically at the islands of Sicily and Cyprus and the identity of the places mentioned in the tours. These were tours of Southern European countries by mostly Northern European aristocrats, beginning at the end of the 17th century; Richard Lassels’ Voyage of Italy in 1670 was one of the first. Surviving data the project analysed included diaries and letters full of details of places and the reasons for the visit, which might have been to learn about art or music, observe the political systems, visit universities and see the traces of ancient cultures. The data included descriptions of places and of routes taken, people met, plus opinions and emotions. The speaker stated that the subjectivity in the data is an area previously neglected but is important in shaping the identity of a place, and that deep mapping (see for example http://wp.lancs.ac.uk/lakesdeepmap/the-project/gis-deep-mapping/) incorporates all of this in a holistic approach. The speaker’s project was interested in creating deep maps of Sicily and Cyprus to look at the development of a European identity forged by Northern Europeans visiting Southern Europe – what did the travellers bring back, and what influence did they leave behind? Sicily and Cyprus were chosen because they were less visited and are both islands with significant Greek and Roman histories. They also had different political situations at the time, with Cyprus under the control of the Ottoman Empire. The speaker discussed the project’s methodology, consisting of the selection of documents (18th century diaries of travellers interested in Classical times) and looking at discussions of architecture, churches, food and accommodation. Adjectives were coded and the data was plotted using ArcGIS, with itineraries plotted on a map using different coloured lines for routes and markers for places. Eventually the project will produce an interactive web-based map, but for now it just runs in ArcGIS.
The second paper in the session discussed using GIS to illustrate and understand the influence of St Æthelthryth of Ely, a 7th century saint whose cult was one of the most enduring of the Middle Ages. The speaker was interested in plotting the geographical reach of the cult, looking at why it lasted so long, what its impact was and how DH tools could help with the research. The speaker stated that GIS tools have been increasingly used since the mid-2000s but are still not used much in medieval studies. The speaker created a GIS database consisting of 600 datapoints for things like texts, calendars, images and decorations, and looked at how the cult expanded throughout the 10th and 11th centuries. This was due to reforming bishops arriving from France after the Viking pillages, bringing Benedictine rule; local saints were used as role models and Ely was transformed. The speaker stated that one problem with using GIS for historical data is that time is not easy to represent. He created a sequence of maps to show the increases in land holding from 950 to 1066 and further development in the later Middle Ages as influence moved to parish churches, mapping parish churches that were dedicated to the saint or had images of her to show the change in distribution over time. Clusters and patterns emerged showing four areas. The speaker plotted these in different layers that could be turned on and off, and also overlaid the Gough Map (one of the earliest maps of Britain – http://www.goughmap.org/map/) as a vector layer. He also overlaid the itineraries of early kings to show different routes, and possible pilgrimage routes emerged.
The final paper looked at plotting the history of the Holocaust through Holocaust literature and mapping, aiming to narrate history through topography by noting the time and place of events, specifically in the Warsaw ghetto, and creating an atlas of Holocaust literature (https://nplp.pl/kolekcja/atlas-zaglady/). The resource consists of three interconnected modules focussing on places, people and events, with data taken from the diaries of 17 notable individuals, such as the composer who was the subject of the film ‘The Pianist’. The resource features maps of the ghetto with areas highlighted depending on the data the user selects. Places are sometimes vague – they can be an exact address, a street or an area. There were also major mapping challenges, as modern Warsaw is completely different from the wartime city and the boundaries of the ghetto changed massively during the course of the war. The memoirs also sometimes gave false addresses, such as intersections of streets that never crossed. At the moment the resource is still a pilot, which took a year to develop, but it will be broadened out (with about 10 times more data) to include memoirs written after the events and translations into English.
The final session of the day was another plenary, given by the CEO of ‘In the Room’ (see https://hereintheroom.com/), who discussed the fascinating resource the company has created. It presents interactive video encounters, using AI to enable users to ask spoken questions and have the system pick out and play video clips that closely match the user’s topic. It began as a project at the National Holocaust Centre and Museum near Nottingham, which organised events where school children could meet survivors of the Holocaust. The question arose as to how to keep these encounters going after the last survivors are no longer with us. The initial project recorded hundreds of video answers to questions, with the system responding to actual questions from users, and users reacted as if they were encountering a real person. The company was then set up to make a web-enabled version of the tool and to make it scalable. The tool draws on the ‘power of parasocial relationships’ (one-sided relationships) and the desire for personalised experiences.
This led to the development of conversational encounters with 11 famous people, where the user can ask questions by voice and the AI matches the intent and plays the appropriate video clips. One output was an interview with Nile Rodgers in collaboration with the National Portrait Gallery to create a new type of interactive portrait experience. Nile answered about 350 questions over two days and the result (see the link above) was a big success, with fans reporting feeling nervous when engaging with the resource. There are also other possible uses for the technology in education, retail and healthcare (for example a database of 12,000 answers to questions about mental health).
The system can suggest follow-up questions to support the learning experience, and when analysing a question the AI uses confidence levels: if the confidence level is too low then a default response is presented. The system can work in different languages, with the company currently working with Holocaust survivors in Munich. The company is also working with universities to broaden access to lecturers, and students felt they got to know the lecturer better, as if really interacting with them. A user survey suggested that 91% of 18-23 year olds believed the tool would be useful for learning. As a tool it can help to immediately identify which content is relevant, and as it is asynchronous the answers can be found at any time. The speaker stated that conversational AI is growing and is not going away – audiences will increasingly expect such interactions in their lives.
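I don’t know the details of the company’s system, but the confidence-threshold idea can be sketched roughly like this in Python, with a crude string-similarity score standing in for the real intent matching and invented clip names:

```
# Guess at the general pattern only, not the company's actual system: score a
# question against pre-recorded answers and fall back to a default clip when
# the best score is below a threshold.
import difflib

answers = {"how did you start in music": "clip_music_start.mp4",
           "what was your first hit": "clip_first_hit.mp4"}

def pick_clip(question, threshold=0.6):
    best, score = None, 0.0
    for key, clip in answers.items():
        s = difflib.SequenceMatcher(None, question.lower(), key).ratio()
        if s > score:
            best, score = clip, s
    # low-confidence matches get a generic fallback response
    return best if score >= threshold else "clip_default_response.mp4"

print(pick_clip("How did you get started in music?"))
```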
The second day began with another parallel session. The first speaker in the session I chose discussed how a ‘National Collection’ could be located through audience research. The speaker discussed a project that used geographical information such as where objects were made, where they reside and the places they depict or describe, and brought all this together on maps of the locations where participants live. Data was taken from GLAMs (Galleries, Libraries, Archives, Museums) and Historic Environment records, and the project looked at how connections could be drawn between these. The project took a user-centred approach to creating a map interface, looking at the importance of local identity in understanding audience motivations. It conducted quantitative research (a user survey) and qualitative research (focus groups) and devised ‘pretotypes’ as focus group stimulus.
The idea was to create a map of local cultural heritage objects similar to https://astreetnearyou.org, which displays war records related to a local neighbourhood. Objects that might hold no interest for a person become interesting because of their location in that person’s neighbourhood. The system created was based on the Pelagios methodology (https://pelagios.org/) and used a tool called Locolligo (https://github.com/docuracy/Locolligo) to convert CSV data into JSON-LD data.
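As a hedged sketch of that CSV-to-JSON-LD step: Locolligo targets the Linked Places format, and the structure below is a simplified approximation of that kind of output rather than its exact schema, with an invented place and placeholder URI:

```
# Turn rows of place data from a CSV into a simple GeoJSON-style
# FeatureCollection - an approximation of Linked Places output, not
# Locolligo's actual schema.
import csv, json, io

csv_data = "title,lat,lon\nSheffield War Memorial,53.3811,-1.4701\n"

features = []
for row in csv.DictReader(io.StringIO(csv_data)):
    features.append({
        "@id": "https://example.org/place/1",  # placeholder URI
        "type": "Feature",
        "properties": {"title": row["title"]},
        "geometry": {"type": "Point",
                     "coordinates": [float(row["lon"]), float(row["lat"])]},
    })

print(json.dumps({"type": "FeatureCollection", "features": features}, indent=2))
```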
The second speaker was a developer of DH resources who has worked at Sheffield’s DHI for 20 years. His talk discussed how best to manage the technical aspects of DH projects in future. He pointed out that his main task as a developer is to get data online for the public to use and that the interfaces are essentially the same. He pointed out that these days we mostly get our online content through ‘platforms’ such as Twitter, TikTok and Instagram; there has been much ‘web consolidation’ away from individual websites to these platforms. However, this hasn’t happened in DH, which is still very much about individual websites, discrete interfaces and individual voices, and this leads to a problem with maintenance of the resources. The speaker mentioned the AHDS service that used to be a repository for Arts and Humanities data, but this closed in 2008. The speaker also talked about FAIR data (Findable, Accessible, Interoperable, Reusable) and how depositing data in an institutional repository doesn’t really fit into this – generally the deposit is just a dataset. Project websites generally contain a lot of static ancillary pages and these can be migrated to a modern CMS such as WordPress, but what about the record-level data? Generally all DH websites have a search form, search results and records, and the underlying structure is generally the same too: a data store, a back end and a front end. These days at DHI the front end is often built using React.js or Angular, with Elasticsearch as the data store and Symfony as the back end. The speaker is interested in how to automatically generate a DH interface for a project’s data, such as generating the search form by indexing the data. The generated front end can then be customised, but the data should need minimal interpretation by the front end.
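As a rough sketch of the ‘generate the interface from the data’ idea (my own toy example, not the DHI’s implementation), here’s how search-form widgets might be derived from an Elasticsearch-style field mapping – a real implementation would read the mapping from the index via the Elasticsearch API:

```
# Derive search-form widgets from an Elasticsearch-style mapping: the field
# names and types here are invented for illustration.
mapping = {"properties": {
    "title":   {"type": "text"},
    "year":    {"type": "integer"},
    "library": {"type": "keyword"},
}}

def form_fields(mapping):
    # Map field types to suitable form widgets; default to a free-text box.
    widgets = {"text": "free-text box",
               "keyword": "drop-down facet",
               "integer": "range slider"}
    return {field: widgets.get(spec["type"], "free-text box")
            for field, spec in mapping["properties"].items()}

print(form_fields(mapping))
```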
The third speaker in the session discussed exploratory notebooks for cultural heritage datasets, specifically Jupyter notebooks used with datasets at the NLS, which can be found here: https://data.nls.uk/tools/jupyter-notebooks/. The speaker stated that the NLS aims to have digitised a third of its 31 million objects by 2025 and has developed a data foundry to make data available to researchers. Data have to be open, transparent (i.e. include provenance) and practical (i.e. in usable file formats). Jupyter Notebooks allow people to explore and analyse the data without requiring any coding ability. Collections can be accessed as data and there are tutorials on things like text analysis. The notebooks use Python and the NLTK (https://www.nltk.org/) and the data has been cleaned and standardised, and is available in various forms such as lemmatised, normalised, stemmed. The notebooks allow for data analysis, summary and statistics such as lexical diversity in the novels of Lewis Grassic Gibbon over time. The notebooks launched in September 2020. The notebooks can also be run in the online service Binder (https://mybinder.org/).
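As an example of the sort of measure covered, here’s a type/token lexical diversity calculation in Python with the NLTK (the text is a placeholder rather than actual Gibbon data, and this simple ratio is only one of several possible diversity measures):

```
# Type/token lexical diversity: the ratio of distinct word types to total
# tokens. Requires NLTK's 'punkt' tokeniser data to be downloaded.
from nltk.tokenize import word_tokenize

def lexical_diversity(text):
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    return len(set(tokens)) / len(tokens)

print(lexical_diversity("The quick brown fox jumps over the lazy dog the fox"))
```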
After the break there was another parallel session, and the one I attended mostly focussed on crowdsourcing. The first talk was given remotely by a speaker based in Belgium and discussed the Europeana photography collection, which currently holds many millions of items and features numerous virtual exhibitions, for example one on migration (https://www.europeana.eu/en/collections/topic/128-migration) which allows you to share your migration story, including adding family photos. The photo collection’s search options include a visual similarity search that uses AI to perform pattern matching, though with mixed results. Users can also create their own galleries, and the project organised a ‘subtitle-a-thon’ which encouraged users to create subtitles for videos in their own languages. There is also a related project (https://www.citizenheritage.eu/) to engage with the public.
The second speaker discussed ‘computer vision and the history of printing’ and the amazing work of the Visual Geometry Group at the University of Oxford (https://www.robots.ox.ac.uk/~vgg/). The speaker described a ‘computer vision pipeline’ through which images were extracted from a corpus and clustered by similarity and uniqueness. The first step was to extract illustrations from pages of text using an object detection model. This used the EfficientDet object detector (https://towardsdatascience.com/a-thorough-breakdown-of-efficientdet-for-object-detection-dc6a15788b73), which was trained on the Microsoft Common Objects in Context (COCO) dataset, which has labelled objects for 328,000 images. Some 3609 illustrated pages were extracted, although there were some false positives, such as bleed-through, printers’ marks and turned-up pages. Images were then passed through image segmentation, where every pixel was annotated to identify text blocks, initials etc. The segmentation model used was Mask R-CNN (https://github.com/matterport/Mask_RCNN) and a study of image pretraining for historical document image analysis can be found here: https://arxiv.org/abs/1905.09113.
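As a very rough sketch of the illustration-extraction step, here’s what running an off-the-shelf detector over a page image looks like in Python – this uses torchvision’s Faster R-CNN as a stand-in for the EfficientDet model the project actually used, and the file name is invented:

```
# Run a COCO-pretrained detector over a page scan and keep confident boxes.
# A stand-in illustration only: the project used EfficientDet, fine-tuned for
# historical page illustrations rather than generic COCO objects.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = torchvision.io.read_image("page.jpg") / 255.0  # scale to [0, 1] floats

with torch.no_grad():
    pred = model([img])[0]

keep = pred["scores"] > 0.7  # discard low-confidence detections
print(pred["boxes"][keep])   # bounding boxes of detected regions
print(pred["labels"][keep])  # COCO class indices for those regions
```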
The speaker discussed image matching versus image classification and the VGG Image Search Engine (VISE, https://www.robots.ox.ac.uk/~vgg/software/vise/), which performs image matching: it can search for and identify geometric features, matching them regardless of rotation and skewing (though it breaks down with flipping or warping).
All of this was used to perform a visual analysis of chapbooks printed in Scotland to identify illustrations that are ‘the same’. There is variation in printing, corrections in pen and so on, so the threshold for ‘the same’ depends on the purpose. The speaker mentioned that image classification using deep learning is different – it can be used to differentiate images of cats and dogs, for example.
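As a hedged illustration of feature-based matching of the broad kind VISE performs (using OpenCV’s ORB features and RANSAC as stand-ins – VISE’s own pipeline differs, and the file names are placeholders):

```
# Detect local keypoints in two woodcut scans, match them, and verify the
# match geometrically with RANSAC. Geometric verification is what makes this
# robust to rotation and skew.
import cv2
import numpy as np

img1 = cv2.imread("woodcut_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("woodcut_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("geometrically consistent matches:", int(mask.sum()))
```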
The final speaker in the session followed on very nicely from the previous speaker, as his research used many of the same tools. This project looked at image recognition using images from the Protestant Reformation, to discover how and where illustrations were used by both Protestants and Catholics during the period, looking specifically at printing, counterfeiting and illustrations of Martin Luther. The speaker discussed his previous project, called Ornamento, which looked at 160,000 distinct editions – some 70 million pages – and extracted 5.7 million illustrations. This used Google Books and PDFs as source material. It identified illustrations and their coordinates on the page and then classified the illustrations, for example borders, devices, head pieces and music. These were preprocessed and the results were put in a database, so it was possible to say, for example, that a particular illustrated letter appeared in 86 books in five different places. There was also a nice comparison tool for comparing images, such as using a slider.
For the current project the researcher aimed to identify anonymous books by their use of illustrated letters. For example, the tool was able to identify 16 books that were all produced in the same workshop in Leipzig. The project looked at printing in the Holy Roman Empire from 1450 to 1600, focusing on religious books and only those with illustrations, so it was a much smaller undertaking than the previous project.
The final parallel session of the day had two speakers. The first discussed how historical text collections can be unlocked by the use of AI. This project looked at the 80 editions of the Encyclopaedia Britannica that have been digitised by the NLS. AI was to be used to group similar articles and detect how articles have changed over time using machine learning. The process included information extraction, the creation of ontologies and knowledge graphs, and deep transfer learning (see https://towardsdatascience.com/what-is-deep-transfer-learning-and-why-is-it-becoming-so-popular-91acdcc2717a). The plan was to detect, classify and extract all of the ‘terms’ in the data, where a term could either be an article (1-2 paragraphs in length) or a topic (several pages). The project used the Defoe Python library (https://github.com/alan-turing-institute/defoe) to read the XML, ingest the text and perform NLP preprocessing. The system was set up to detect where articles and topics began and ended and to store page coordinates for these breaks, although headers changed over the editions, which made this trickier. The project then created an EB ontology and knowledge graph, which is available at https://francesnlp.github.io/EB-ontology/doc/index-en.html. The EB knowledge graph RDF then allowed querying, such as looking at ‘science’ as a node and seeing how it connects across all editions. The graph at the above URL contains the data from the first 8 editions.
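As a sketch of how such a knowledge graph might be queried with SPARQL via rdflib (the file name, class and property URIs below are placeholders of mine, not the real EB ontology terms – see the URL above for those):

```
# Load an RDF knowledge graph and run a SPARQL query against it. The graph
# file and the eb# terms are hypothetical stand-ins for the real EB ontology.
from rdflib import Graph

g = Graph()
g.parse("eb_knowledge_graph.ttl")  # hypothetical local copy of the RDF

q = """
SELECT ?edition ?article WHERE {
  ?article a <http://example.org/eb#Article> ;
           <http://example.org/eb#hasTerm> "science" ;
           <http://example.org/eb#inEdition> ?edition .
}
"""
for row in g.query(q):
    print(row.edition, row.article)
```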
The second paper discussed a crowdsourcing project called ‘Operation War Diary’, a collaboration between The National Archives, Zooniverse and the Imperial War Museum (https://www.operationwardiary.org/). The presenter had been tasked with working with the crowdsourced data in order to produce something from it, but the data was very messy. The paper discussed how to deal with uncertainty in crowdsourced data, looking at ontological, aleatory and epistemic uncertainty. The speaker discussed the difference between accuracy and precision – how a cluster of results can be precise (grouped closely together) but wrong. The researcher used OpenRefine (https://openrefine.org/) to work with the data in order to produce clusters of placenames, resulting in 26,910 clusters from 500,000 datapoints. She also looked at using ‘nearest neighbour’ clustering and Levenshtein distance, but there were issues with false positives (e.g. ‘Trench A’ and ‘Trench B’ are only one character apart but are clearly not the same place). The researcher also discussed the outcomes of the crowdsourcing project, stating that only 10% of the 900,000 records were completed; many pages were skipped, with people stopping at the ‘boring bits’. The speaker stated that ‘History will never be certain, but we can make it worse’, which I thought was a good quote. She suggested that crowdsourcing outputs should be weighted in favour of the volunteers who did the most. The speaker also pointed out that there are currently no available outputs from the project, which was hampered by being an early Zooniverse project before the tool was well established. During the discussion after the talk someone suggested that data could have different levels of acceptability, like food nutrition labels. It was also mentioned that representing uncertainty in visualisations is an important research area, and that visualisations can help identify anomalies. Another comment was that crowdsourcing doesn’t save money and time – managers and staff are needed and in many cases the work could be done better by a paid team in the same time. The important reason to choose crowdsourcing is to democratise data, not to save money.
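The ‘Trench A’ / ‘Trench B’ problem is easy to demonstrate with a few lines of Python (my own illustration):

```
# Edit distance alone says 'Trench A' and 'Trench B' are near-identical
# strings, so distance-based clustering will happily merge place names that
# are clearly distinct.
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

print(levenshtein("Trench A", "Trench B"))  # 1: looks like a near-duplicate
print(levenshtein("Ypres", "Wipers"))       # 4: yet 'Wipers' was soldiers' slang for Ypres
```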
The final session of the day was another plenary. This discussed different scales of analysis in digital projects and was given by the PI of the ‘Living with Machines’ project (https://livingwithmachines.ac.uk/). The speaker stated that English Literature has mostly focussed on close reading while DH has mostly looked at distant reading. She stated that scalable reading is like archaeology – starting with an aerial photo at the large scale to observe patterns, then moving to excavation of a specific area, then iterating again. The speaker had previously worked on the Tudor Networks of Power project (https://tudornetworks.net/), which has a lovely high-level visualisation of the project’s data and dealt with around 130,000 letters. Next came the Networking Archives project, which doesn’t appear to be online but has some information here: https://networkingarchives.github.io/blog/about/. This project dealt with 450,000 letters. Then came ‘Living with Machines’, which is looking at even larger corpora. How to move through different scales of analysis is an interesting research question. The Tudor project used the Tudor State Papers from 1509 to 1603 and dealt with 130,000 letters and 20,000 people. The top-level interface facilitated discovery rather than analysis. The archive is dominated by a small number of important people, who can be discovered via centrality and betweenness (roughly, how often a person sits on the paths connecting others in the network). When looking at the network for a person you can then compare this to people with similar network profiles, like a fingerprint: by doing so it is possible to identify one spy and then see whether others with a similar profile may also have been spies. But actual deduction requires close reading, so iteration is crucial. The speaker also mentioned the ‘meso’ scale – the data in the middle. The resource enables researchers to identify who was at the same location at the same time – people who never corresponded with each other but may have interacted in person.
The Networking Archives project used the Tudor State Papers but also brought in the data from EMLO. The distribution was very similar with 1-2 highly connected people and most people only having a few connections. The speaker discussed the impact of missing data. We can’t tell how much data is already missing from the archive, but we can tell what impact it might have by progressively removing more data from what we do have. Patterns in the data are surprisingly robust even when 60-70% of the data has been removed, and when removing different types of data such as folios or years. The speaker also discussed ‘ego networks’ that show the shared connections two people have – the people in the middle between two figures.
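A rough sketch of that kind of robustness experiment, on random toy data rather than the project’s archive:

```
# Repeatedly drop a fraction of the edges (letters) at random and check
# whether the same people stay at the top of the degree ranking.
import random
import networkx as nx

G = nx.gnm_random_graph(200, 800, seed=1)
top_full = {n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]}

for frac in (0.3, 0.5, 0.7):
    H = G.copy()
    H.remove_edges_from(random.sample(list(H.edges), int(frac * H.number_of_edges())))
    top = {n for n, _ in sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:10]}
    print(f"{int(frac * 100)}% of edges removed: {len(top & top_full)}/10 top nodes unchanged")
```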
The speaker then discussed the ‘Living with Machines’ project, which is looking at the effects of mechanisation from 1780 to 1920 using newspapers, maps, books, census records and journals. It is a big project with 28 people in the project team. The project is looking at language model predictions using BERT (https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270) and using Word2Vec to cluster words into topics (https://www.tensorflow.org/tutorials/text/word2vec). One example was looking at the use of the terms ‘man’, ‘machine’ and ‘slave’ to see where they are interchangeable.
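As a hedged illustration of the masked-prediction idea (an invented sentence rather than project data, and using the standard bert-base-uncased model rather than whatever the project trained):

```
# Ask a BERT model what word it expects in a masked slot, to see whether
# words like 'man', 'machine' and 'slave' surface in the same contexts.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The factory owner worked the [MASK] until it broke down.", top_k=5):
    print(pred["token_str"], round(pred["score"], 3))
```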
The speaker ended with a discussion of how many different types of data can come together for analysis. She mentioned a map reader system that could take a map, cut it into squares and then be trained to recognise rail infrastructure in the squares. Rail network data can then be analysed alongside census data to see how the proximity to stations affects the mean number of servants per household, which was fascinating to hear about.
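A simple sketch of the ‘cut the map into squares’ step (the patch size and file name are arbitrary choices of mine, not the project’s settings):

```
# Tile a scanned map sheet into fixed-size patches that a classifier could
# then label (e.g. 'contains rail infrastructure').
from PIL import Image

sheet = Image.open("os_map_sheet.png")
patch = 256
tiles = []
for top in range(0, sheet.height - patch + 1, patch):
    for left in range(0, sheet.width - patch + 1, patch):
        tiles.append(sheet.crop((left, top, left + patch, top + patch)))

print(f"{len(tiles)} patches of {patch}x{patch} pixels")
```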
And that was the end of the conference for me, as I was unable to attend the sessions on the Saturday. It was an excellent event, I learnt a great deal about new research and technologies and I’m really glad I was given the opportunity to attend.
Before travelling to the event on Wednesday this was just a regular week for me and I’ll give a summary of what I worked on now. For the SpeechStar project I updated the database of normative speech on the Seeing Speech version of the site to include the child speech videos, as previously these had only been added to the other site. I also changed all occurrences of ‘Normative’ to ‘non-disordered’ throughout the sites and added video playback speed options to the new Central Scottish phonetic features videos.
I also continued to process library registers and their images for the Books and Borrowing project. I processed three registers from the Royal High School, each of which required different amounts of processing and different methods. This included renaming images, adding in missing page records, creating entire new runs of page records, uploading hundreds of images and changing the order of certain page records. I also wrote a script to identify which page records still did not have associated image files after the upload, as each of the registers is missing some images.
For the Speak For Yersel project I arranged for more credit to be added to our mapping account in case we get lots of hits when the site goes live, and I made various hopefully final tweaks to text throughout the site. I also gave some advice to students working on the migration of the Scottish Theatre Studies journal, spoke to Thomas Clancy about further work on the Ayr Place-names project and fixed a minor issue with the DSL.