Week Beginning 5th September 2022

I attended the Digital Humanities Congress in Sheffield this week (https://www.dhi.ac.uk/dhc2022/), which meant travelling down on the Wednesday and attending the event on the Thursday and Friday (there were also some sessions on Saturday but I was unable to stay for those).  It was an excellent conference featuring some great speakers and plenty of exciting research, and I’ll give an overview of the sessions I attended here.  The event kicked off with a plenary session by Marc Alexander, who as always was an insightful and entertaining speaker.  His talk was about the analysis of meaning at different scales, using the Hansard corpus as his main example and looking at the data from a distance (macro), close up (micro) and also the stuff in the middle that often gets overlooked, which he called the meso.  The Hansard corpus is a record of what has been said in parliament; it begins in 1803, currently runs up to 2003/5, and consists of 7.6 million speeches and 1.6 billion words, all of which have been tagged for part of speech and semantics.  It can be accessed at https://www.english-corpora.org/hansard/.  Marc pointed out that the corpus is not a linguistic transcript, as it can be a summary rather than the exact words – it’s not verbatim but substantially so, and it doesn’t include things like interruptions and hesitations.  The corpus was semantically tagged using the SAMUELS tagger, which annotates the texts using data from the Historical Thesaurus.

Marc gave some examples of analysis at different scales.  For micro analysis he looked at the use of ‘draconian’ and how this word does not appear much in the corpus until the 1970s.  He stated that we can use word vectors and collocates at this level of analysis, for example looking at the collocates of ‘privatisation’ after the 1970s, showing that the words that appear most frequently are things like rail, electricity, British etc., but there are also words such as ‘proceeds’, ‘proposals’, ‘opposed’ and ‘botched’.  Marc pointed out that ‘botched’ is a word most of us know but would not use ourselves.  This is where semantic collocates come in useful – grouping words by their meaning and being able to search for the meanings of words rather than individual forms.  For example, it’s possible to look at speeches by women MPs in the 1990s and find the most common concepts they spoke about, which were things like ‘child’, ‘mother’ and ‘parent’.  Male MPs, on the other hand, talked about things like ‘peace treaties’ and ‘weapons’.  Hansard doesn’t explicitly state the sex of the speaker, so this is based on the titles that are used.

At the macro level Marc discussed words for ‘uncivilised’ and the 2,046 references to ‘uncivil’ terms in the corpus.  At different periods in time there are different numbers of terms available to express this concept, and the number of words available for a concept can show how significant it is at different periods.  It’s possible with Hansard to identify places and also whether a term is used to refer to the past or present, so we can see which places appear near an ‘uncivilised’ term (Ireland, Northern Ireland, India, Russia and Scotland most often).  Also, in the past ‘uncivilised’ was more likely to be used to refer to some earlier time, whereas in more recent years it tends to be used to refer to the present.

Marc then discussed some of the limitations of the Hansard corpus.  It is not homogeneous but is discontinuous and messy.  It’s also a lot bigger in recent times than historically – 30 million words a year now but much less in the past.  Also, until 1892 it was written in the third person.

Marc then discussed the ‘meso’ level.  The corpus was tagged by meaning using a hierarchical system with just 26 categories at the top level, so it’s possible to aggregate results.  We can use this to find which concepts are discussed least often in Hansard, such as the supernatural, textiles and clothing, and plants.  We can also compare this with other semantically tagged corpora such as SEEBO and compare the distributions of concepts – there is a similar distribution but a different order.  Marc discussed concepts that are ‘semantic scaffolds’ versus ‘semantic content’.  He concluded by discussing sortable tables and how we tend to focus on the stuff at the top and the bottom and ignore the middle, but that it is here that some of the important things may reside.

The second session I attended featured four short papers.  The first discussed linked ancient world data and research into the creation and use of this data.  Linked data is of course RDF triples, consisting of a subject, predicate and object, for example ‘Medea was written by Euripides’.  It’s a means of modelling complex relationships and reducing ambiguity by linking via URIs (e.g. to make it clear we’re talking about ‘Medea’ the play rather than a person).  However, there are barriers to use: there are lots of authority files and vocabularies, some modelling approaches are incomplete, modelling uncertainty can be difficult, and there is a reliance on external resources.  The speaker discussed LOUD data (Linked, Open, Usable Data) and conducted a survey of linked data use in ancient world studies, consisting of 212 participants and 16 in-depth interviews.  The speaker came up with five principles:

Transparency (openness, availability of export options, documentation); Extensibility (current and future integration based on existing infrastructure); Intuitiveness (making it easy for users to do what they need to do); Reliability (the tool / data does what it says it does consistently – this is a particular problem as SPARQL endpoints for RDF data can become unreachable as servers get overloaded); Sustainability (continued functionality of the resource).

The speaker concluded by stating that human factors are also important in the use of the data, such as collaboration and training, and also building and maintaining a community that can lead to new collaborations and sustainability.
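To make the RDF triple structure from this talk concrete, here is a minimal sketch using Python’s rdflib (my own illustration rather than anything shown in the session; the namespace and URIs are invented for the example):

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical namespace for the example; a real project would point at
# established authority files and vocabularies instead.
EX = Namespace("http://example.org/ancient-world/")

g = Graph()
g.bind("ex", EX)

# 'Medea (the play) was written by Euripides' as a subject-predicate-object triple.
# Using URIs rather than plain strings is what removes the ambiguity between
# Medea the play and Medea the character.
g.add((EX.Medea_play, RDF.type, EX.Play))
g.add((EX.Medea_play, EX.writtenBy, EX.Euripides))

# Serialise as Turtle to see the triples in a human-readable form.
print(g.serialize(format="turtle"))
```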

The second speaker discussed mimesis and the importance of female characters in Dutch literary fiction, comparing female characters in novels from the 1960s with those from the 2010s to see if literature reflects changes in society (mimesis).  The speaker developed software to enable the automatic extraction of social networks from literary texts and wanted to investigate how each character’s social network changed in novels as the role of women changed in society.  The idea was that female characters would be stronger and more central in the second period.  The data consisted of a corpus of 170 Dutch novels from 2013 and 152 Dutch novels from the 1960s.  Demographic information on 2,136 characters was manually compiled and a comprehensive network analysis was semi-automatically generated, with character identification and gender resolution based on first names and pronouns.  Centrality scores were computed from the network diagrams to demonstrate how central a character was.  The results showed that the data for the two time periods was the same on various metrics, with a 60/40 split of male to female characters in both periods.  The speaker referred to this as ‘the golden mean of patriarchy’, where there are two male characters for every female one.  The speaker stated that only one metric had a statistically significant result, and that was network centrality, which for all characters regardless of gender increased between the time periods.  The speaker attributed this to a broader cultural trend towards more ‘relational’ novels with a greater focus on relationships.
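The centrality scores mentioned here are straightforward to compute with standard network libraries; the toy sketch below (invented characters, not the project’s data or code) shows the general idea using networkx:

```python
import networkx as nx

# Toy character network: nodes are characters, edges are interactions between them.
# The names and edges are invented purely for illustration.
G = nx.Graph()
G.add_edges_from([
    ("Anna", "Bert"), ("Anna", "Carla"), ("Anna", "Dirk"),
    ("Bert", "Carla"), ("Dirk", "Els"),
])

degree = nx.degree_centrality(G)            # how connected a character is overall
betweenness = nx.betweenness_centrality(G)  # how often a character links otherwise separate groups

for name in G.nodes:
    print(f"{name}: degree={degree[name]:.2f}, betweenness={betweenness[name]:.2f}")
```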

The third speaker discussed a quantitative analysis of digital scholarly editions carried out as part of the ‘C21 Editions’ project.  The research engaged with 50 scholars who have produced digital editions, resulting in a white paper on the state of the art of digital editions and a visualisation of a catalogue of digital editions.  The research brought together two existing catalogues of digital editions: one (https://v3.digitale-edition.de/) contains 714 digital editions whereas the other (https://dig-ed-cat.acdh.oeaw.ac.at/) contains 316.

The fourth speaker presented the results of a case study of big data using a learner corpus.  The speaker pointed out that language-based research is changing due to the scale of the data, for example in digital communication such as Twitter, the digitisation of information such as Google Books, and the capabilities of analytical tools such as Python and R.  The speaker used a corpus of essays written by non-native English speakers as they were learning English.  It contains more than 1 million texts by more than 100,000 learners from more than 100 countries, across many proficiency levels.  The speaker was interested in lexical diversity in different tasks.  He created a sub-corpus, as only 20% of nationalities have more than 100 learners.  He also had to strip out non-English text and remove duplicate texts.  He then identified texts that were about the same topic using topic modelling, identifying keywords such as cities, weather and sports.  The corpus is available here: https://corpus.mml.cam.ac.uk/
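The clean-up steps mentioned here – stripping out non-English text and removing duplicates – could look something like the following sketch, which assumes the langdetect library; the speaker didn’t describe his actual implementation, so this is purely illustrative:

```python
import hashlib
from langdetect import detect

def clean_corpus(texts):
    """Keep English texts only and drop exact duplicates (illustrative only)."""
    seen = set()
    cleaned = []
    for text in texts:
        try:
            if detect(text) != "en":
                continue  # not English
        except Exception:
            continue  # langdetect raises if there is too little text to classify
        fingerprint = hashlib.md5(text.strip().lower().encode("utf-8")).hexdigest()
        if fingerprint in seen:
            continue  # duplicate essay
        seen.add(fingerprint)
        cleaned.append(text)
    return cleaned

essays = [
    "I like my city because the weather is always changing.",
    "I like my city because the weather is always changing.",   # duplicate
    "Me gusta mi ciudad porque el clima siempre cambia.",        # not English
]
print(len(clean_corpus(essays)))  # 1
```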

After the break I attended a session about mapping and GIS, consisting of three speakers.  The first was about the production of a ‘deep map’ of the European Grand Tour, looking specifically at the islands of Sicily and Cyprus and the identity of places mentioned in the tours.  These were tours by mostly Northern European aristocrats to Southern European countries beginning at the end of the 17th century; Richard Lassels’ Voyage of Italy in 1670 was one of the first.  Surviving data the project analysed included diaries and letters full of details of places and the reasons for the visit, which might have been to learn about art or music, observe the political systems, visit universities and see the traces of ancient cultures.  The data included descriptions of places and of routes taken, people met, plus opinions and emotions.  The speaker stated that the subjectivity in the data is an area previously neglected but is important in shaping the identity of a place, and that deep mapping (see for example http://wp.lancs.ac.uk/lakesdeepmap/the-project/gis-deep-mapping/) incorporates all of this in a holistic approach.  The speaker’s project was interested in creating deep maps of Sicily and Cyprus to look at the development of a European identity forged by Northern Europeans visiting Southern Europe – what did the travellers bring back, and what influence did they leave behind?  Sicily and Cyprus were chosen because they were less visited and are both islands with significant Greek and Roman histories.  They also had a different political situation at the time, with Cyprus under the control of the Ottoman empire.  The speaker discussed the project’s methodology, consisting of the selection of documents (18th century diaries of travellers interested in Classical times), looking at discussions of architecture, churches, food and accommodation.  Adjectives were coded and the data was plotted using ArcGIS.  Itineraries were plotted on a map, with different coloured lines showing routes and places marked.  Eventually the project will produce an interactive web-based map but for now it just runs in ArcGIS.

The second paper in the session discussed using GIS to illustrate and understand the influence of St Æthelthryth of Ely, a 7th century saint whose cult was one of the most enduring of the Middle Ages.  The speaker was interested in plotting the geographical reach of the cult, looking at why it lasted so long, what its impact was and how DH tools could help with the research.  The speaker stated that GIS tools have been increasingly used since the mid-2000s but are still not used much in medieval studies.  The speaker created a GIS database consisting of 600 datapoints for things like texts, calendars, images and decorations and looked at how the cult expanded throughout the 10th and 11th centuries.  This was due to reforming bishops arriving from France after the Viking pillages, bringing Benedictine rule; local saints were used as role models and Ely was transformed.  The speaker stated that one problem with using GIS for historical data is that time is not easy to represent.  He created a sequence of maps to show the increases in landholding from 950 to 1066 and further development in the later Middle Ages as influence moved to parish churches.  He mapped parish churches that were dedicated to the saint or had images of her, showing the change in distribution over time.  Clusters and patterns emerged showing four areas.  The speaker plotted these in different layers that could be turned on and off, and also overlaid the Gough Map (one of the earliest maps of Britain – http://www.goughmap.org/map/) as a vector layer.  He also overlaid the itineraries of early kings to show different routes, and possible pilgrimage routes emerged.

The final paper looked at plotting the history of the Holocaust through Holocaust literature and mapping, looking to narrate history through topography, noting the time and place of events specifically in the Warsaw ghetto and creating an atlas of Holocaust literature (https://nplp.pl/kolekcja/atlas-zaglady/).  The resource consists of three interconnected modules focussing on places, people and events, with data taken from the diaries of 17 notable individuals, such as the composer who was the subject of the film ‘The Pianist’.  The resource features maps of the ghetto with areas highlighted depending on the data the user selects.  Places are sometimes vague – they can be an exact address, a street or an area.  There were also major mapping challenges, as modern Warsaw is completely different to the wartime city and the boundaries of the ghetto changed massively during the course of the war.  The memoirs also sometimes gave false addresses, such as intersections of streets that never crossed.  At the moment the resource is still a pilot, which took a year to develop, but it will be broadened out (with about 10 times more data) to include memoirs written after the events, along with translations into English.

The final session of the day was another plenary, given by the CEO of ‘In the room’ (see https://hereintheroom.com/), who discussed the fascinating resource the company has created.  It presents interactive video encounters, using AI to enable users to ask spoken questions and for the system to pick out and play the video clips that most closely match the user’s question.  It began as a project at the National Holocaust Centre and Museum near Nottingham, which organised events where school children could meet survivors of the Holocaust.  The question arose as to how to keep these encounters going after the last survivors are no longer with us.  The initial project recorded hundreds of video answers to questions, with the system playing clips in response to actual questions from users, and users reacted as if they were encountering a real person.  The company was then set up to make a web-enabled version of the tool and to make it scalable.  The tool taps into the ‘power of parasocial relationships’ (one-sided relationships) and the desire for personalised experiences.

This led to the development of conversational encounters with 11 famous people, where the user can ask questions by voice and the AI matches the intent and plays the appropriate video clips.  One output was an interview with Nile Rodgers in collaboration with the National Portrait Gallery to create a new type of interactive portrait experience.  Nile answered about 350 questions over two days and the result (see the link above) was a big success, with fans reporting feeling nervous when engaging with the resource.  There are also other possible uses for the technology in education, retail and healthcare (for example a database of 12,000 answers to questions about mental health).

The system can suggest follow-up questions to support the educational learning experience, and when analysing a question the AI uses confidence levels – if the level is too low then a default response is presented.  The system can work in different languages, with the company currently working with Holocaust survivors in Munich.  The company is also working with universities to broaden access to lecturers, and students felt they got to know the lecturer better, as if really interacting with them.  A user survey suggested that 91% of 18-23 year olds believed the tool would be useful for learning.  As a tool it can help to immediately identify which content is relevant, and as it is asynchronous the answers can be found at any time.  The speaker stated that conversational AI is growing and is not going away – audiences will increasingly expect such interactions in their lives.

The second day began with another parallel session.  The first speaker in the session I chose discussed how a ‘National Collection’ could be located through audience research.  The speaker discussed a project that used geographical information such as where objects were made, where they reside and the places they depict or describe, and brought all this together on maps of the locations where participants live.  Data was taken from GLAMs (Galleries, Libraries, Archives, Museums) and Historic Environment records, and the project looked at how connections could be drawn between these.  The project took a user-centred approach when creating a map interface, looking at the importance of local identity in understanding audience motivations.  The project conducted quantitative research (a user survey) and qualitative research (focus groups), and devised ‘pretotypes’ as focus group stimulus.

The idea was to create a map of local cultural heritage objects similar to https://astreetnearyou.org, which displays war records related to a local neighbourhood.  Objects that might otherwise hold no interest for a person become interesting due to their location in their neighbourhood.  The system created was based on the Pelagios methodology (https://pelagios.org/) and used a tool called Locolligo (https://github.com/docuracy/Locolligo) to convert CSV data into JSON-LD.

The second speaker was a developer of DH resources who has worked at Sheffield’s DHI for 20 years.  His talk discussed how best to manage the technical aspects of DH projects in future.  He pointed out that his main task as a developer is to get data online for the public to use and that the interfaces are essentially the same.  He pointed out that these days we mostly get our online content through ‘platforms’ such as Twitter, TikTok and Instagram; there has been much ‘web consolidation’ away from individual websites to these platforms.  However, this hasn’t happened in DH, which is still very much about individual websites, discrete interfaces and individual voices.  But this leads to a problem with maintenance of the resources.  The speaker mentioned the AHDS service that used to be a repository for Arts and Humanities data, but this closed in 2008.  The speaker also talked about FAIR data (Findable, Accessible, Interoperable, Reusable) and how depositing data in an institutional repository doesn’t really fit into this – generally the deposit is just a dataset.  Project websites generally contain a lot of static ancillary pages and these can be migrated to a modern CMS such as WordPress, but what about the record-level data?  Generally all DH websites have a search form, search results and records.  The underlying structure is generally the same too – a data store, a back end and a front end.  These days at DHI the front end is often built using React.js or Angular, with Elasticsearch as the data store and Symfony as the back end.  The speaker is interested in how to automatically generate a DH interface for a project’s data, such as generating the search form by indexing the data.  The generated front end can then be customised, but the data should need minimal interpretation by the front end.
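The idea of generating a search interface by inspecting the data rather than hand-coding it might be sketched roughly as follows (my own illustration of the concept, not the DHI’s actual approach, and the sample records are invented):

```python
def infer_search_fields(records):
    """Guess sensible search widgets from a list of record dicts (illustrative only)."""
    values_by_field = {}
    for record in records:
        for key, value in record.items():
            values_by_field.setdefault(key, set()).add(value)
    form = {}
    for key, values in values_by_field.items():
        if all(isinstance(v, (int, float)) for v in values):
            form[key] = "range filter"      # numeric field, e.g. year
        elif len(values) <= 20:
            form[key] = "checkbox facet"    # few distinct values: treat as a facet
        else:
            form[key] = "free-text box"     # many distinct values: full-text search
    return form

records = [
    {"title": "A sermon preached at Paul's Cross", "year": 1712, "place": "Edinburgh"},
    {"title": "An essay on the nature of trade",   "year": 1745, "place": "Glasgow"},
]
print(infer_search_fields(records))
```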

The third speaker in the session discussed exploratory notebooks for cultural heritage datasets, specifically Jupyter notebooks used with datasets at the NLS, which can be found here: https://data.nls.uk/tools/jupyter-notebooks/.  The speaker stated that the NLS aims to have digitised a third of its 31 million objects by 2025 and has developed the Data Foundry to make data available to researchers.  Data have to be open, transparent (i.e. include provenance) and practical (i.e. in usable file formats).  Jupyter notebooks allow people to explore and analyse the data without requiring any coding ability.  Collections can be accessed as data and there are tutorials on things like text analysis.  The notebooks use Python and the NLTK (https://www.nltk.org/), and the data has been cleaned and standardised and is available in various forms, such as lemmatised, normalised and stemmed.  The notebooks allow for data analysis, summaries and statistics, such as lexical diversity in the novels of Lewis Grassic Gibbon over time.  The notebooks launched in September 2020 and can also be run in the online service Binder (https://mybinder.org/).
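To give a flavour of what such a notebook does, a simple type-token measure of lexical diversity over NLTK-tokenised text looks roughly like this (a sketch, not the NLS’s own notebook code, and the sample sentence is invented):

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)  # tokeniser data; newer NLTK releases may also need "punkt_tab"

def lexical_diversity(text):
    """Proportion of distinct word types to total tokens - a simple diversity measure."""
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0

sample = "It was a grey day, a long grey day, and the rain fell grey on the grey fields."
print(round(lexical_diversity(sample), 3))
```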

After the break there was another parallel session, and the one I attended mostly focussed on crowdsourcing.  The first talk was given remotely by a speaker based in Belgium and discussed the Europeana photography collection, which currently holds many millions of items and includes numerous virtual exhibitions, for example one on migration (https://www.europeana.eu/en/collections/topic/128-migration) that allows you to share your own migration story, including adding family photos.  The photo collection’s search options include a visual similarity search that uses AI to perform pattern matching, although this has had mixed results.  Users can also create their own galleries, and the project organised a ‘subtitle-a-thon’ which encouraged users to create subtitles for videos in their own languages.  There is also a related project, CitizenHeritage (https://www.citizenheritage.eu/), for engaging with the public.

The second speaker discussed ‘computer vision and the history of printing’ and the amazing work of the Visual Geometry Group at the University of Oxford (https://www.robots.ox.ac.uk/~vgg/).  The speaker discussed a ‘computer vision pipeline’ through which images were extracted from a corpus and clustered by similarity and uniqueness.  The first step was to extract illustrations from pages of text using an object detection model.  This used the EfficientDet object detector (https://towardsdatascience.com/a-thorough-breakdown-of-efficientdet-for-object-detection-dc6a15788b73), which was trained on the Microsoft Common Objects in Context (COCO) dataset, which has labelled objects for 328,000 images.  Some 3,609 illustrated pages were extracted, although there were some false positives, such as bleed-through, printers’ marks and turned-up pages.  Images were then passed through image segmentation, where every pixel was annotated to identify text blocks, initials etc.  The segmentation model used was Mask R-CNN (https://github.com/matterport/Mask_RCNN) and a study of image pretraining for historical document image analysis can be found here: https://arxiv.org/abs/1905.09113.
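For context, running a pre-trained Mask R-CNN over an image with torchvision looks roughly like the sketch below; this uses the generic COCO weights and a hypothetical page image, whereas the VGG work fine-tunes models on annotated page images, so treat it purely as an illustration of this pipeline stage:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf Mask R-CNN pre-trained on COCO (older torchvision uses pretrained=True
# instead of weights="DEFAULT"); the Oxford work fine-tunes models like this on
# annotated page images rather than natural photographs.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("page.jpg").convert("RGB")  # hypothetical scanned page image
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Each prediction contains bounding boxes, class labels, confidence scores and pixel masks.
for box, score in zip(prediction["boxes"], prediction["scores"]):
    if score > 0.8:  # keep confident detections only
        print([round(v) for v in box.tolist()], float(score))
```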

The speaker discussed image matching versus image classification and the VGG Image Search Engine (VISE, https://www.robots.ox.ac.uk/~vgg/software/vise/), which performs image matching and can search for and identify geometric features, matching features regardless of rotation and skewing (though it breaks with flipping or warping).

All of this was used to perform a visual analysis of chapbooks printed in Scotland to identify illustrations that are ‘the same’.  There is variation in printing, corrections in pen and so on, so the threshold for ‘the same’ depends on the purpose.  The speaker mentioned that image classification using deep learning is different – it can be used to differentiate images of cats and dogs, for example.

The final speaker in the session followed on very nicely from the previous speaker, as his research was using many of the same tools.  This project was looking at image recognition using images from the Protestant Reformation to discover how and where illustrations were used by both Protestants and Catholics during the period, looking specifically at printing, counterfeiting and illustrations of Martin Luther.  The speaker discussed his previous project, which was called Ornamento and looked at 160,000 distinct editions – some 70 million pages – and extracted 5.7 million illustrations.  This used Google Books and PDFs as source material.  It identified illustrations and their coordinates on the page and then classified the illustrations, for example borders, devices, head pieces and music.  These were preprocessed and the results were put in a database, so it was possible to say, for example, that a particular letter appeared in 86 books in five different places.  There was also a nice comparison tool for comparing images, such as using a slider.

For the current project the researcher aimed to identify anonymous books by use of illustrated letters.  For example, the tool was able to identify 16 books that were all produced in the same workshop in Leipzig.  The project looked at printing in the Holy Roman Empire from 1450-1600, religious books and only those with illustrations, so a much smaller project than the previous one.

The final parallel session of the day had two speakers.  The first discussed how historical text collections can be unlocked by the use of AI.  This project looked at the 80 editions of the Encyclopaedia Britannica that have been digitised by the NLS.  AI was to be used to group similar articles and detect how articles have changed using machine learning.  The process included information extraction, the creation of ontologies and knowledge graphs, and deep transfer learning (see https://towardsdatascience.com/what-is-deep-transfer-learning-and-why-is-it-becoming-so-popular-91acdcc2717a).  The plan was to detect, classify and extract all of the ‘terms’ in the data.  Terms could either be articles (1-2 paragraphs in length) or topics (several pages).  The project used the defoe Python library (https://github.com/alan-turing-institute/defoe) to read the XML, ingest the text and perform NLP preprocessing.  The system was set up to detect where articles and topics began and ended and to store page coordinates for these breaks, although headers changed over the editions, which made this trickier.  The project then created an EB ontology and knowledge graph, which is available at https://francesnlp.github.io/EB-ontology/doc/index-en.html.  The EB knowledge graph RDF then allowed querying, such as looking at ‘science’ as a node and seeing how it connects across all editions.  The graph at the above URL contains the data from the first 8 editions.

The second paper discussed a crowdsourcing project called ‘Operation War Diary’, a collaboration between The National Archives, Zooniverse and the Imperial War Museum (https://www.operationwardiary.org/).  The presenter had been tasked with working with the crowdsourced data in order to produce something from it, but the data was very messy.  The paper discussed how to deal with uncertainty in crowdsourced data, looking at ontological uncertainty, aleatory uncertainty and epistemic uncertainty.  The speaker discussed the differences between accuracy and precision – how a cluster of results can be precise (grouped closely together) but wrong.  The researcher used OpenRefine (https://openrefine.org/) to work with the data in order to produce clusters of placenames, resulting in 26,910 clusters from 500,000 datapoints.  She also looked at using ‘nearest neighbour’ and Levenshtein distance, but there were issues with false positives (e.g. ‘Trench A’ and ‘Trench B’ are only one character apart but are clearly not the same).  The researcher also discussed the outcomes of the crowdsourcing project, stating that only 10% of the 900,000 records were completed.  Many pages were skipped, with people stopping at the ‘boring bits’.  The speaker stated that ‘History will never be certain, but we can make it worse’, which I thought was a good quote.  She suggested that crowdsourcing outputs should be weighted in favour of the volunteers who did the most.  The speaker also pointed out that there are currently no available outputs from the project, and that it was hampered by being an early Zooniverse project before the tool was well established.  During the discussion after the talk someone suggested that data could have different levels of acceptability, like food nutrition labels.  It was also mentioned that representing uncertainty in visualisations is an important research area, and that visualisations can help identify anomalies.  Another comment was that crowdsourcing doesn’t save money and time – managers and staff are needed and in many cases the work could be done better by a paid team in the same time.  The important reason to choose crowdsourcing is to democratise data, not to save money.
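Going back to the Levenshtein point, the false-positive problem is easy to demonstrate in a few lines (my own illustration, not the researcher’s code):

```python
def levenshtein(a, b):
    """Minimum number of single-character edits needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Only one edit apart, yet clearly not the same place:
print(levenshtein("Trench A", "Trench B"))  # 1
# Four edits apart, yet genuinely the same place ('Wipers' was the soldiers' name for Ypres):
print(levenshtein("Ypres", "Wipers"))       # 4
```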

The final session of the day was another plenary.  This discussed different scales of analysis in digital projects and was given by the PI of the ‘Living with Machines’ project (https://livingwithmachines.ac.uk/).  The speaker stated that English Literature has mostly focussed on close reading while DH has mostly looked at distant reading.  She stated that scalable reading is like archaeology – starting with an aerial photo at the large scale to observe patterns, then moving to excavation of a specific area, then iterating again.  The speaker had previously worked on the Tudor Networks of Power project (https://tudornetworks.net/), which has a lovely high-level visualisation of the project’s data and dealt with around 130,000 letters.  Next came the Networking Archives project, which doesn’t appear to be online but has some information here: https://networkingarchives.github.io/blog/about/.  This project dealt with 450,000 letters.  Then came ‘Living with Machines’, which is looking at even larger corpora.  How to move through different scales of analysis is an interesting research question.  The Tudor project used the Tudor State Papers from 1509 to 1603 and dealt with 130,000 letters and 20,000 people.  The top-level interface facilitated discovery rather than analysis.  The archive is dominated by a small number of important people that can be discovered via centrality and betweenness – how often a person sits on the paths between others.  When looking at the network for a person you can then compare this to people with similar network profiles, like a fingerprint.  By doing so it is possible to identify one spy and then see if others with a similar profile may also have been spies.  But actual deduction requires close reading, so iteration is crucial.  The speaker also mentioned the ‘meso’ scale – the data in the middle.  The resource enables researchers to identify who was at the same location at the same time – people who never corresponded with each other but may have interacted in person.

The Networking Archives project used the Tudor State Papers but also brought in the data from EMLO.  The distribution was very similar with 1-2 highly connected people and most people only having a few connections.  The speaker discussed the impact of missing data.  We can’t tell how much data is already missing from the archive, but we can tell what impact it might have by progressively removing more data from what we do have.  Patterns in the data are surprisingly robust even when 60-70% of the data has been removed, and when removing different types of data such as folios or years.  The speaker also discussed ‘ego networks’ that show the shared connections two people have – the people in the middle between two figures.
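On the missing-data point, the experiment can be approximated very simply: remove a random slice of the network and check whether the most central figures stay the same.  A rough sketch (using a synthetic network, not the project’s data or code):

```python
import random
import networkx as nx

def top_people(G, n=10):
    """The n most central nodes by degree centrality."""
    centrality = nx.degree_centrality(G)
    return {node for node, _ in sorted(centrality.items(), key=lambda kv: -kv[1])[:n]}

# Synthetic stand-in for a correspondence network; a real test would use the archive itself.
random.seed(42)
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)
baseline = top_people(G)

for fraction in (0.3, 0.5, 0.7):
    H = G.copy()
    edges = list(H.edges)
    H.remove_edges_from(random.sample(edges, int(len(edges) * fraction)))
    overlap = len(top_people(H) & baseline) / len(baseline)
    print(f"removed {fraction:.0%} of the letters, top-10 overlap with full network: {overlap:.0%}")
```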

The speaker then discussed the ‘Living with Machines’ project, which is looking at the effects of mechanisation from 1780 to 1920, looking at newspapers, maps, books, census records and journals.  It is a big project with 28 people in the project team.  The project is looking at language model predictions using BERT (https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270) and using Word2Vec to cluster words into topics (https://www.tensorflow.org/tutorials/text/word2vec).  One example was looking at the use of the terms ‘man’, ‘machine’ and ‘slave’ to see where they are interchangeable.
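Clustering words by their contexts with Word2Vec takes only a few lines in gensim; the sketch below uses a tiny invented corpus purely to show the mechanics, whereas the project works at a vastly larger scale (and with BERT-style models as well):

```python
from gensim.models import Word2Vec

# Tiny invented corpus; Living with Machines trains on millions of newspaper sentences.
sentences = [
    ["the", "machine", "replaced", "the", "worker"],
    ["the", "engine", "replaced", "the", "horse"],
    ["the", "worker", "tended", "the", "machine"],
    ["the", "slave", "of", "the", "machine"],
]

# vector_size is the gensim 4.x parameter name (earlier releases call it 'size').
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=200, seed=1)

# Which words occur in the most similar contexts to 'machine'?
print(model.wv.most_similar("machine", topn=3))
```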

The speaker ended with a discussion of how many different types of data can come together for analysis.  She mentioned a map reader system that could take a map, cut it into squares and then be trained to recognise rail infrastructure in the squares.  Rail network data can then be analysed alongside census data to see how proximity to stations affects the mean number of servants per household, which was fascinating to hear about.

And that was the end of the conference for me, as I was unable to attend the sessions on the Saturday.  It was an excellent event, I learnt a great deal about new research and technologies and I’m really glad I was given the opportunity to attend.

Before travelling to the event on Wednesday this was just a regular week for me and I’ll give a summary of what I worked on now.  For the SpeechStar project I updated the database of normative speech on the Seeing Speech version of the site to include the child speech videos, as previously these had only been added to the other site.  I also changed all occurrences of ‘Normative’ to ‘non-disordered’ throughout the sites and added video playback speed options to the new Central Scottish phonetic features videos.

I also continued to process library registers and their images for the Books and Borrowing project.  I processed three registers from the Royal High School, each of which required different amounts of processing and different methods.  This included renaming images, adding in missing page records, creating entire new runs of page records, uploading hundreds of images and changing the order of certain page records.  I also wrote a script to identify which page records still did not have associated image files after the upload, as each of the registers is missing some images.

For the Speak For Yersel project I arranged for more credit to be added to our mapping account in case we get lots of hits when the site goes live, and I made various hopefully final tweaks to text throughout the site.  I also gave some advice to students working on the migration of the Scottish Theatre Studies journal, spoke to Thomas Clancy about further work on the Ayr Place-names project and fixed a minor issue with the DSL.


Week Beginning 29th August 2022

I divided my time between a number of different projects this week.  For Speak For Yersel I replaced the ‘click’ transcripts with new versions that incorporated shorter segments and more highlighted words.  As the segments were now different I also needed to delete all existing responses to the ‘click’ activity.  I then completed the activity once for each speaker to test things out, and all seems to work fine with the new data.  I also changed the pop-up ‘percentage clicks’ text to ‘% clicks occurred here’, which is more accurate than the previous text, which suggested it was the percentage of respondents.  I also fixed an issue with the map height being too small on the ‘where do you think this speaker is from’ quiz and ensured the page scrolls to the correct place when a new question is loaded.  I also removed the ‘tip’ text from the quiz intros and renamed the ‘where do you think this speaker is from’ map buttons on the map intro page.  I’d also been asked to trim down the number of ‘translation’ questions from the ‘I would never say that’ activity, so I removed some of those.  I then changed and relocated the ‘heard in films and TV’ explanatory text and removed the question mark from the ‘where do you think the speaker is from’ quiz intro page.

Mary had encountered a glitch with the transcription popups, whereby the page would flicker and jump about when certain popups were hovered over.  This was caused by the page height increasing to accommodate the pop-up, causing a scrollbar to appear in the browser, which changed the position of the cursor and made the pop-up disappear, which in turn removed the scrollbar and caused a glitchy loop.  I increased the height of the page for this activity so the scrollbar issue is no longer encountered, and I also made the popups a bit wider so they don’t need to be as long.  Mary also noticed that some of the ‘all over Scotland’ dynamically generated map answers seemed to be incorrect.  After some investigation I realised that this was a bug that had been introduced when I added in the ‘I would never say that’ quizzes on Friday.  A typo in the code meant that the 60% threshold for correct answers in each region was being used rather than ‘100 divided by the number of answer options’.  Thankfully once identified this was easy to fix.

I also participated in a Zoom call for the project this week to discuss the launch of the resource with the University’s media people.  It was agreed that the launch will be pushed back to the beginning of October as this should be a good time to get publicity.  Finally for the project this week I updated the structure of the site so that the ‘About’ menu item could become a drop-down menu, and I created placeholder pages for three new pages that will be added to this menu for things like FAQs.

I also continued to work on the Books and Borrowing project this week.  On Friday last week I didn’t quite get to finish a script to merge page records for one of the St Andrews registers, as it needed further testing on my local PC before I ran it on the live data.  I tackled this issue first thing on Monday and it was a task I had hoped would only take half an hour or so.  Unfortunately things did not go well and it took most of the morning to sort out.  I initially attempted to run things on my local PC to test everything out, but I forgot to update the database connection details.  Usually this wouldn’t be an issue, as the databases I work with generally use ‘localhost’ as the connection URL, so the Stirling credentials would have been wrong for my local DB and the script would have just quit, but Stirling (where the system is hosted) uses a full URL instead of ‘localhost’.  This meant that even though I had a local copy of the database on my PC and the scripts were running on a local server set up on my PC, the scripts were in fact connecting to the real database at Stirling, so the live data was being changed.  I didn’t realise this as the script was running, and as it was taking some time I cancelled it, meaning the update quit halfway through changing borrowing records and deleting page records in the CMS.


I then had to write a further script to delete all of the page and borrowing records for this register from the Stirling server and reinstate the data from my local database.  Thankfully this worked ok.  I then ran my test script on the actual local database on my PC and the script did exactly what I wanted it to do, namely:

Iterate through the pages and for each odd-numbered page move its records to the preceding even-numbered page, at the same time regenerating the ‘page order’ for each record so they follow on from the existing records.  Then the even page needs its folio number updated to add in the odd number (e.g. folio number 2 becomes ‘2-3’) and an image reference generated based on this (e.g. UYLY207-2_2-3).  Then the odd page record is deleted, and after all that is done the ‘next’ and ‘previous’ page links for all pages are regenerated.
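As a rough illustration of that logic (not the actual script, which works directly on the project database and also regenerates the page order and navigation links), the merge can be sketched like this:

```python
def merge_facing_pages(pages):
    """Merge each odd folio's records onto the preceding even folio.

    Illustrative only: 'pages' is a list of dicts like {"folio": 2, "records": [...]},
    and the sketch assumes folios run consecutively (2, 3, 4, 5 ...) - which, as it
    turned out, they don't always.  The real script also regenerates the page order
    for each record and the 'next'/'previous' links.
    """
    by_folio = {p["folio"]: p for p in pages}
    merged = []
    for folio in sorted(by_folio):
        if folio % 2 == 1:
            continue  # odd folios are merged into their even partner below
        page = by_folio[folio]
        partner = by_folio.get(folio + 1)
        if partner:
            page["records"].extend(partner["records"])        # move the records across
            page["folio_label"] = f"{folio}-{folio + 1}"       # e.g. '2-3'
            page["image"] = f"UYLY207-2_{folio}-{folio + 1}"   # matching image reference
        else:
            page["folio_label"] = str(folio)
        merged.append(page)
    return merged
```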

This all worked, so I ran the script on the server and updated the live data.  However, I then noticed that there were gaps in the folio numbers, and this messed everything up.  For example, folio number 314 isn’t followed by 315 but by 320.  320 isn’t an odd number so it doesn’t get joined to 314, and all subsequent page joins are then messed up.  There are also two ‘350’ pages in the CMS and two images that reference 350:  we have UYLY207-2_349-350 and also UYLY207-2_350-351.  There might be other situations where the data isn’t uniform too.

I therefore had to use my ‘delete and reinsert’ script again to revert to the data prior to the update, as my script wasn’t set up to work with pages that don’t just increment their folio number by 1 each time.  After some discussion with the RA I updated the script again so that it would work with the non-uniform data, and thankfully all worked fine after that.  Later in the week I also found some time to process two further St Andrews registers that needed their pages and records merged, and thankfully these went much more smoothly.

I also worked on the Speech Star project this week.  I created a new page on both of the project’s sites (which are not live yet) for viewing videos of Central Scottish phonetic features.  I also replaced the temporary logos used on the sites with the finalised logos that had been designed by a graphic designer.  However, the new logo only really works well on a white background, as the white cut-out around the speech bubble in the star takes on the background colour of the header, and the blue we’re currently using for the site header doesn’t work so well with the logo colours.  The graphic designer had also proposed using a different font for the site, so I decided to make a new interface for the site, which you can see below.  I’m still waiting for feedback to see whether the team prefer this to the old interface (a screenshot of which you can see on this page: https://digital-humanities.glasgow.ac.uk/2022-01-17/) but I personally think it looks a lot better.

I also returned to the Burns Manuscript database that I’d begun last week.  I added a ‘view record’ icon to each row which, when pressed, opens a ‘card’ view of the record on its own page.  I also added in the search options, which appear in a section above the table.  By default the section is hidden and you can show/hide it by pressing a button.  Above this I’ve also added in a placeholder where some introductory text can go.  If you open the ‘Search options’ section you’ll find text boxes where you can enter text for year, content, properties and notes.  For year you can either enter a specific year or a range.  The other text fields are purely free-text at the moment, so no wildcards.  I can add these in but I think it would just complicate things unnecessarily.  On the second row are checkboxes for type, location name and condition.  You can select one or more of each of these.

The search options are linked by AND, and the checkbox options are linked internally by OR.  For example, filling in ‘1780-1783’ for year and ‘wrapper’ for properties will find all rows with a date between 1780 and 1783 that also have ‘wrapper’ somewhere in their properties.  If you enter ‘work’ in content and select ‘Deed’ and ‘Fragment’ as types you will find all rows that are either ‘Deed’ or ‘Fragment’ and have ‘work’ in their content.

If a search option is entered and you press the ‘Search’ button the page will reload with the search options open, and the page will scroll down to this section.  Any rows matching your criteria will be displayed in the table below this.  You can also clear the search by pressing on the ‘Clear search options’ button.  In addition, if you’re looking at search results and you press on the ‘view record’ button the ‘Return to table’ button on the ‘card’ view will reload the search results.  That’s this mini-site completed now, pending feedback from the project team, and you can see a screenshot of the site with the search box open below:

Also this week I’d arranged an in-person coffee and catch up with the other College of Arts developers.  We used to have these meetings regularly before Covid but this was the first time since then that we’d all met up.  It was really great to chat with Luca Guariento, Stevie Barrett and David Wilson again and to share the work we’d been doing since we last met.  Hopefully we can meet again soon.

Finally this week I helped out with a few WordPress questions from a couple of projects and I also had a chance to update all of the WordPress sites I manage (more than 50) to the most recent version.


Week Beginning 22nd August 2022

I continued to spend a lot of my time working on the Speak For Yersel project this week.  We had a team meeting on Monday at which we discussed the outstanding tasks and particularly how I was going to tackle making the quiz answers dynamic.  Previously the quiz question answers were static, which will not work well, as the maps the users will reference in order to answer a question are dynamic, meaning the correct answer may evolve over time.  I had proposed a couple of methods that we could use to ensure that the answers were dynamically generated based on the currently available data and we finalised our approach today.

Although I’d already made quite a bit of progress with my previous test scripts, there was still a lot to do to actually update the site.  I needed to update the structure of the database, the script that outputs the data for use in the site, the scripts that handle the display of questions and the evaluation of answers, and the scripts that store a user’s selected answers.

Changes to the database allow dynamic quiz questions to be stored (non-dynamic ones have fixed ‘answer options’ but dynamic ones don’t).  They also allow a reference to be stored to the relevant answer option of the survey question the quiz question is about (e.g. that the quiz is about the ‘mother’ map and specifically about the use of ‘mam’).  I made significant updates to the script that outputs data for use in the site to integrate the functions from my earlier test script that calculated the correct answer.  I updated these functions to change the logic somewhat: they now only use ‘method 1’ as mentioned in an earlier post.  This method also now has a built-in check to filter out regions that have the highest percentage of usage but only a limited amount of data.  Currently this is set to a minimum of 10 answers for the option in question (e.g. ‘mam’) rather than the total number of answers in a region.  Regions are ordered by their percentage usage (highest first) and the script iterates down through the regions and picks as ‘correct’ the first one that has at least 10 answers.  I’ve also added in a contingency for cases where none of the regions has at least 10 answers (currently the case for the ‘rocket’ question).  In such cases the region marked as ‘correct’ will be the one that has the highest raw count of answers for the answer option rather than the highest percentage.

With the ‘correct’ region picked out, the script then picks out all other regions where the usage percentage is at least 10% lower than the correct percentage.  This is to ensure that there isn’t an ‘incorrect’ answer that is too similar to the ‘correct’ one.  If this results in fewer than three regions (as regions are only returned if they have clicks for the answer option) then the system goes through the remaining regions and adds these in with a zero percentage.  These ‘incorrect’ regions are then shuffled and three are picked out at random.  The ‘correct’ answer is then added to these three and the options are shuffled again to ensure the ‘correct’ option is randomly positioned.  The dynamically generated output is then plugged into the output script that the website uses.
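In outline, the selection logic works something like the sketch below (simplified, and not the site’s actual code, which also handles the zero-percentage padding and the final output format):

```python
import random

MIN_ANSWERS = 10  # minimum clicks for the answer option before a region can be 'correct'

def build_quiz_answers(regions):
    """regions: list of dicts like {"name": "Glasgow", "option_count": 14, "percent": 62.5}."""
    ranked = sorted(regions, key=lambda r: -r["percent"])
    # 'Correct' region: highest percentage with at least MIN_ANSWERS for the option...
    correct = next((r for r in ranked if r["option_count"] >= MIN_ANSWERS), None)
    if correct is None:
        # ...falling back to the highest raw count if no region has enough data.
        correct = max(regions, key=lambda r: r["option_count"])
    # Distractors must be at least 10 percentage points below the correct answer.
    distractors = [r for r in regions
                   if r is not correct and r["percent"] <= correct["percent"] - 10]
    random.shuffle(distractors)
    answers = distractors[:3] + [correct]
    random.shuffle(answers)  # so the correct option isn't always in the same position
    return correct, answers
```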

I then updated the front-end to work with this new data.  This also required me to create a new database table to hold the user’s answers, storing the region the user presses on and whether their selection was correct, along with the question ID and the person ID.  Non-dynamic answers store the ID of the ‘answer option’ that the user selected, but these dynamic questions don’t have static ‘answer options’ so the structure needed to be different.

I then implemented the dynamic answers for the ‘most of Scotland’ questions.  For these questions the script needs to evaluate whether a form is used throughout Scotland or not.  The algorithm gets all of the answer options for the survey question (e.g. ‘crying’ and ‘greetin’) and for each region works out the percentage of responses for each option.  The team had previously suggested a fixed percentage threshold of 60%, but I reckoned it might be better for the threshold to change depending on how many answer options there are.  Currently I’ve set the threshold to be 100 divided by the number of options.  So where there are two options the threshold is 50%.  Where there are four options (e.g. the ‘wean’ question) the threshold is 25% (i.e. if 25% or more of the answers in a region are for ‘wean’ it is classed as present in the region).  Where there are three options (e.g. ‘clap’) the threshold is 33%.  Where there are 5 options (e.g. ‘clarty’) the threshold is 20%.

The algorithm counts the number of regions that meet the threshold, and if the number is 8 or more then the term is considered to be found throughout Scotland and ‘Yes’ is the correct answer.  If not then ‘No’ is the correct answer.  I also had to update the way answers are stored in the database so these yes/no answers can be saved (as they have no associated region like the other questions).
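The ‘most of Scotland’ check itself therefore boils down to something like this (again a simplified sketch of the logic rather than the site’s actual code):

```python
def used_throughout_scotland(region_counts, option, num_options, min_regions=8):
    """region_counts: {region: {option: count, ...}}; returns 'Yes' or 'No'."""
    threshold = 100 / num_options  # e.g. 25% when there are four answer options
    qualifying = 0
    for counts in region_counts.values():
        total = sum(counts.values())
        if total and (counts.get(option, 0) / total) * 100 >= threshold:
            qualifying += 1
    return "Yes" if qualifying >= min_regions else "No"
```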

I then moved onto tackling the non-standard (in terms of structure) questions to ensure they are dynamically generated as well.  These were rather tricky to do as they each had to be handled differently, since they ask different things of the data (e.g. a question like ‘What are you likely to call the evening meal if you live in Tayside and Angus (Dundee) and didn’t go to Uni?’).  I also made the ‘Sounds about right’ quiz dynamic.

I then moved onto tackling the ‘I would never say that’ quiz, which has been somewhat tricky to get working as the structure of the survey questions and answers is very different.  Quizzes for the other surveys involved looking at a specific answer option but for this survey the answer options are different rating levels that each need to be processed and handled differently.

For this quiz for each region the system returns the number of times each rating level has been selected and works out the percentages for each.  It then adds the ‘I’ve never heard this’ and ‘people elsewhere say this’ percentages together as a ‘no’ percentage and adds the ‘people around me say this’ and ‘I’d say this myself’ percentages together as a ‘yes’ percentage.  Currently there is no weighting but we may want to consider this (e.g. ‘I’d say this’ would be worth more than ‘people around me’).

With these ratings stored, the script handles question types differently.  For the ‘select a region’ type of question the system works in a similar way to the other quizzes:  it sorts the regions by ‘yes’ percentage with the biggest first.  It then iterates through the regions and picks as the correct answer the first it comes to where the total number of responses for the region is the same as or greater than the minimum allowed (currently set to 10).  Note that this is different to the other quizzes, where this check for 10 is made against the specific answer option rather than the number of responses in the region as a whole.

If no region passes the above check then the region with the highest ‘yes’ percentage without a minimum allowed check is chosen as the correct answer.  The system then picks out all other regions with data where the ‘yes’ percentage is at least 10% lower than the correct answer, adds in regions with no data if less than three have data, shuffles the regions and picks out three.  These are then added to the ‘correct’ region and the answers are shuffled again.

I changed the questions that had an ‘all over Scotland’ answer option so that these are now ‘yes/no’ questions, e.g. ‘Is ‘Are you wanting to come with me?’ heard throughout most of Scotland?’.  For these questions the system uses 8 regions as the threshold, as with the other quizzes.  However, the percentage threshold for ‘yes’ is fixed.  I’ve currently set this to 60% (i.e. at least 60% of all answers in a region are either ‘people around me say this’ or ‘I’d say this myself’).  There is currently no minimum number of responses for this question type, so a region with one single answer that’s ‘people around me say this’ will have a 100% ‘yes’ and the region will be included.  This is also the case for the ‘most of Scotland’ questions in the other quizzes, so we may need to tweak this.

As we’re using percentages rather than exact number of dots the questions can sometimes be a bit tricky.  For example the first question currently has Glasgow as the correct answer because all but two of the markers in this region are ‘people around me say this’ or ‘I’d say this myself’.  But if you turn off the other two categories and just look at the number of dots you might surmise that the North East is the correct answer as there are more dots there, even though proportionally fewer of them are the high ratings.  I don’t know if we can make it clearer that we’re asking which region has proportionally more higher ratings without confusing people further, though.

I also spent some time this week working on the Books and Borrowing project.  I had to make a few tweaks to the Chambers map of borrowers to make the map work better on smaller screens.  I ensured that both the ‘Map options’ section on the left and the ‘map legend’ on the right are given a fixed height that is shorter than the map, with the areas becoming scrollable, as I’d noticed that on short screens both these areas could end up longer than the map and therefore their lower parts were inaccessible.  I’ve also added a ‘show/hide’ button to the map legend, enabling people to hide the area if it obscures their view of the map.

I also sent on some renamed library register files from St Andrews to Gerry for him to align with existing pages in the CMS, replaced some of the page images for the Dumfries register and renamed and uploaded images for a further St Andrews register that already existed in the CMS, ensuring the images became associated with the existing pages.

I started to work on the images for another St Andrews register that already exists in the system, but for this one the images are double page spreads, so I need to merge two pages into one in the CMS.  The script needs to find all odd-numbered pages then move the records on these to the preceding even-numbered page, and at the same time regenerate the ‘page order’ for each record so they follow on from the existing records.  Then the even page needs its folio number updated to add in the odd number (e.g. so folio number 2 becomes ‘2-3’).  Then I need to delete the odd page record and after all that is done I need to regenerate the ‘next’ and ‘previous’ page links for all pages.  I completed everything except the final task, but I really need to test the script out on a version of the database running on my local PC first, as if anything goes wrong data could very easily be lost.  I’ll need to tackle this next week as I ran out of time this week.

I also participated in our six-monthly formal review meeting for the Dictionaries of the Scots Language, where we discussed our achievements in the past six months and our plans for the next.  I also made some tweaks to the DSL website, such as splitting up the ‘Abbreviations and symbols’ buttons into two separate links, updating the text found on a couple of the old maps pages and considering future changes to the bibliography XSLT to allow links in the ‘oral sources’.

Finally this week I made a start on the Burns manuscript database for Craig Lamont.  I wrote a script that extracts the data from Craig’s spreadsheet and imports it into an online database.  We will be able to rerun this whenever I’m given a new version of the spreadsheet.  I then created an initial version of a front-end for the database within the layout for the Burns Correspondence and Poetry site. Currently the front-end only displays the data in one table with columns for type, date, content, physical properties, additional notes and locations.  The latter contains the location name, shelfmark (if applicable) and condition (if applicable) for all locations associated with a record, each on a separate line with the location name in bold.  Currently it’s possible to order the columns by clicking on them.  Clicking a second time reverses the order.  I haven’t had a chance to create any search or filter options yet but I’m intending to continue with this next week.

Week Beginning 15th August 2022

I spent the majority of the week continuing to work on the Speak For Yersel resource, working through a lengthy document of outstanding tasks that need to be completed before the site is launched in September.  First up was the output for the ‘Where do you think the speaker is from?’ click activity.  The page features some explanatory text and a drop-down through which you can select a speaker.  When a speaker is selected the user is presented with the option to play the audio file and can view the transcript.

I decided to make the transcript chunks visible with a green background that’s slightly different from the colour of the main area.  I thought it would be useful for people to be able to tell which of the ‘bigger’ words was part of which section, as it may well be that the word that caused a user to ‘click’ a section is not the word that we’ve picked for the section.  For example, in the Glasgow transcript ‘water’ is the chosen word for one section but I personally clicked this section because ‘hands’ was pronounced ‘honds’.  Another reason to make the chunks visible is that I’ve managed to set up the transcript to highlight the appropriate section as the audio plays.  Currently the section that is playing is highlighted in white, which really helps to get your eye in whilst listening to the audio.
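The highlighting itself hangs off the audio element’s ‘timeupdate’ event; here’s a minimal sketch of the approach (the element IDs and timings are made up for illustration):

// sections: one entry per transcript chunk, with start/end times in seconds.
var sections = [
  { id: 'chunk-1', start: 0, end: 3.2 },
  { id: 'chunk-2', start: 3.2, end: 6.8 }
  // ...and so on for the rest of the transcript
];

var audio = document.getElementById('transcript-audio');
audio.addEventListener('timeupdate', function() {
  var t = audio.currentTime;
  sections.forEach(function(s) {
    // Highlight the chunk currently being spoken and clear the rest.
    document.getElementById(s.id).classList.toggle('playing', t >= s.start && t < s.end);
  });
});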

In terms of resizing the ‘bigger’ words, I chose the following as a starting point (the default font size for the transcript area is currently 16pt):

Less than 5% of the clicks: word is bold but not bigger
5-9%: 20pt
10-14%: 25pt
15-19%: 30pt
20-29%: 35pt
30-49%: 40pt
50-74%: 45pt
75% or more: 50pt
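Expressed as code, that mapping from click percentage to font size looks something like the following (a sketch of the starting-point thresholds above rather than the final implementation):

// Returns the font size (in points) for a 'bigger' word based on the
// percentage of the transcript's clicks that fall in its section.
function wordSize(percent) {
  if (percent >= 75) return 50;
  if (percent >= 50) return 45;
  if (percent >= 30) return 40;
  if (percent >= 20) return 35;
  if (percent >= 15) return 30;
  if (percent >= 10) return 25;
  if (percent >= 5)  return 20;
  return 16; // below 5%: bold but default size
}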

I’ve also given the ‘bigger’ word a tooltip that displays the percentage of clicks responsible for its size, as I thought this might be useful for people to see.  We will need to change the text, though.  Currently it says something like ‘15% of respondents clicked in this section’ but what it really means is ‘15% of all clicks for this transcript were made in this section’, which is a different thing, and I’m not sure how best to phrase it.  Where there is a pop-up for a word it appears in the blue font and the pop-up text contains the text that the team has specified.  Where the pop-up word is also the ‘bigger’ word (mostly but not always the case) the percentage text also appears in the popup, below the text.  Here’s a screenshot of how the feature currently looks:

I then moved onto the ‘I would never say that’ activities.  This is a two-part activity, with the first part involving the user dragging and dropping sentences into either a ‘used in Scots’ or ‘isn’t used in Scots’ column and then checking their answers.  The second part has the user translating a Scots sentence into Standard English by dragging and dropping possible words into a sentence area.  My first task was to format the data used for the activity, which involved creating a suitable data structure in JSON and then migrating all of the data into this structure from a Word document.  With this in place I then began to create the front-end.  I’d created similar drag and drop features before (including for another section of the current resource) and therefore used the same technologies:  The jQuery UI drag and drop library (https://jqueryui.com/draggable/).  This allowed me to set up two areas where buttons could be dropped and then create a list of buttons that could be dragged.  I then had to work on the logic for evaluating the user’s answers.  This involved keeping a tally of the number of buttons that had been dropped into one or other of the boxes (which also had to take into consideration that the user can drop a button back in the original list) and when every button has been placed in a column a ‘check answers’ button appears.  On pressing, the code then fixes the draggable buttons in place and compares the user’s answers with the correct answers, adding a ‘tick’ or ‘cross’ to each button and giving an overall score in the middle.  There are multiple stages to this activity so I also had to work on the logic for loading a new set of sentences with their own introductory text, or moving onto part two of the activity if required.  Below is a screenshot of part 1 with some of the buttons dragged and dropped:
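Behind the scenes the set-up is fairly standard jQuery UI draggable/droppable code; here’s a rough sketch of the tallying and checking logic (the selectors and data attributes are illustrative rather than the actual ones used):

// Each sentence is a draggable button; the two answer columns and the
// original list are droppable targets.
var total = $('.sentence-button').length;

$('.sentence-button').draggable({ revert: 'invalid' });

$('.scots-column, .not-scots-column, .button-list').droppable({
  drop: function(event, ui) {
    ui.draggable.appendTo(this);
    // Only count buttons sitting in one of the two answer columns; dropping
    // a button back in the original list reduces the tally.
    var placed = $('.scots-column .sentence-button, .not-scots-column .sentence-button').length;
    $('#check-answers').toggle(placed === total);
  }
});

$('#check-answers').on('click', function() {
  // Fix the buttons in place and mark each one right or wrong.
  $('.sentence-button').draggable('disable').each(function() {
    var correct = $(this).data('answer'); // 'scots' or 'not-scots'
    var chosen = $(this).closest('.scots-column').length ? 'scots' : 'not-scots';
    $(this).append(chosen === correct ? ' ✓' : ' ✗');
  });
});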

Part two of the activity involved creating a sentence by choosing words to add.  The original plan was to have the user click on a word to add it to the sentence, or click on the word in the sentence to remove it if required.  I figured that using a drag and drop method, enabling the user to move words around the sentence after they have dropped them, would be more flexible and would fit in better with the other activities in the site.  I was just going to use the same drag and drop library that I’d used for part one, but then I spotted a further jQuery UI interaction called sortable that allows for connected lists (https://jqueryui.com/sortable/#connect-lists).  This allows items within a list to be reordered, but also allows items to be dragged and dropped from one list to another.  This sounded like the ideal solution, so I set about investigating its usage.

It took some time to style the activity to ensure that empty lists were still given space on the screen, and to ensure the word button layout worked properly, but after that the ‘sentence builder’ feature worked very well – the user could move words between the sentence area and the ‘list of possible words’ area and rearrange their order as required.  I set up the code to ensure a ‘check answers’ button appeared when at least one word had been added to the sentence (disappearing again if the user removes all words).  When the ‘check answers’ button is pressed the code grabs the content of the buttons in the sentence area in the order the buttons have been added and creates a sentence from the text.  It then compares this to one of the correct sentences (of which there may be more than one).  If the answer is correct a ‘tick’ is added after the sentence and if it’s wrong a ‘cross’ is added.  If there are multiple correct answers the other correct possibilities are displayed, and if the answer was wrong all correct answers are displayed.  Then it’s on to the next sentence, or the final evaluation.  Here’s a screenshot of part 2 with some words dragged and dropped:
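For reference, here’s a stripped-down sketch of the sentence builder using sortable’s connected lists (the selectors and the example answer are made up):

// The word bank and the sentence area are two connected sortable lists.
$('#word-bank, #sentence-area').sortable({
  connectWith: '#word-bank, #sentence-area',
  update: function() {
    // Show the 'check answers' button only when the sentence contains words.
    $('#check-sentence').toggle($('#sentence-area li').length > 0);
  }
}).disableSelection();

$('#check-sentence').on('click', function() {
  // Build the sentence from the words in the order they currently appear.
  var answer = $('#sentence-area li').map(function() {
    return $(this).text();
  }).get().join(' ');
  var correctAnswers = ['I was not expecting that']; // there may be more than one
  $('#sentence-result').text(correctAnswers.indexOf(answer) !== -1 ? '✓' : '✗');
});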

Whilst working on part two it became clear that the ‘sortable’ solution worked better than the draggable method I’d used for part one.  This is because the ‘sortable’ solution uses HTML lists and ‘snaps’ each item into place, whereas the previous method just leaves the draggable item wherever the user drops it (so long as it’s within the confines of the droppable box), which can look a bit messy.  I therefore revisited part one and replaced the method.  This took a bit of time to implement as I had to rework a lot of the logic, but I think it was worth it.

Also this week I spent a bit of time working for the Dictionaries of the Scots Language.  I had a conversation with Pauline Graham about the workflow for updates to the online data.  I also investigated a couple of issues with entries for Ann Fergusson.  One entry (sleesh) wasn’t loading because there were spaces in the entry’s ‘slug’, and spaces in URLs can cause issues.  I updated the URL information in the database so that ‘sleesh_n1 and v’ became ‘sleesh_n1_and_v’, which fixed the issue, and I also updated the XML in the online system so the first URL is now <url>sleesh_n1_and_v</url>.  I checked the online database and thankfully no other entries have a space in their ‘slug’, so this issue doesn’t affect anything else.  The second issue related to an entry that doesn’t appear in the online database.  It was not in the data I was sent and wasn’t present in several previous versions of the data, so in this case something must have happened before the data was sent to me.  I also had a conversation about the appearance of yogh characters on the site.

I also did a bit more work for the Books and Borrowing project this week.  I added two further library registers from the NLS to our system.  This means there should now only be one further register to come from the NLS, which is quite a relief as each register takes some time to process.  I also finally got round to processing the four registers for St Andrews, which had been on my ‘to do’ list since late July.  It was very tricky to rename the images into a format that we can use on the server because the lack of trailing zeros meant that the script to batch process the images loaded them in the wrong order.  This was made worse because, rather than just being numbered sequentially, the image filenames were further split into ‘parts’.  For example, the images beginning ‘UYLY 207 11 Receipt book part 11’ were being processed before images beginning ‘UYLY 207 11 Receipt book part 2’, as programming languages ordering strings consider 11, 12 etc. to come before 2.  This was also happening within each ‘part’, e.g. ‘UYLY207 15 part 43_11.jpg’ was coming before ‘UYLY207 15 part 43_2.jpg’.  It took most of the morning to sort this out, but I was then able to upload the images to the server and create new registers, generate pages and associate images for the two new registers (207-11 and 207-15).
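This is the classic string-versus-numeric ordering problem; for illustration, a comparator using numeric collation sorts the filenames correctly (shown in JavaScript here, although the actual batch renaming was done with a server-side script):

var files = [
  'UYLY207 15 part 43_11.jpg',
  'UYLY207 15 part 43_2.jpg',
  'UYLY207 15 part 43_1.jpg'
];

// A plain string sort puts '11' before '2'; comparing with numeric
// collation treats the digit runs as numbers instead.
files.sort(function(a, b) {
  return a.localeCompare(b, undefined, { numeric: true });
});
// Result: part 43_1.jpg, part 43_2.jpg, part 43_11.jpg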

However, the other two registers already exist in the CMS as page records with associated borrowing records.  Each image of the register is an open spread showing two borrowing pages and we had previously decided that I should run a script to merge pages in the CMS and then associate the merged record with one of the page images.  However, I’m afraid this is going to need some manual intervention.  Looking at the images for 206-1 and comparing them to the existing page records for this register, it’s clear that there are many blank pages in the two-page spreads that have not been replicated in the CMS.  For example, page 164 in the CMS is for ‘Profr Spens’.  The corresponding image (in my renamed images) is ‘UYLY206-1_00000084.jpg’.  The data is on the right-hand page and the left-hand page is blank.  But in the CMS the preceding page is for ‘Prof. Brown’, which is on the left-hand page of the preceding image.  If I attempted to automatically merge these two page records into one this would therefore result in an error.

I’m afraid what I need is for someone who is familiar with the data to look through the images and the pages and create a spreadsheet noting which pages correspond to which image.  Where multiple pages correspond to one page I can then merge the records.  So for example: Pages 159 (id 1087) and 160 (ID 1088) are found on image UYLY206-1_00000082.jpg.  Page 161 (1089) corresponds to UYLY206-1_00000083.jpg.  The next page in the CMS is 164 (1090) and this corresponds to UYLY206-1_00000084.jpg. So a spreadsheet could have two columns:

Page ID                 Image
1087                       UYLY206-1_00000082.jpg
1088                       UYLY206-1_00000082.jpg
1089                       UYLY206-1_00000083.jpg
1090                       UYLY206-1_00000084.jpg

Also, the page numbers in the CMS don’t tally with the handwritten page numbers in the images (e.g. the page record 1089 mentioned above has page 161 but the image has page number 162 written on it).  And actually, the page numbers would need to include two pages, e.g. 162-163.  Ideally whoever is going to manually create the spreadsheet could add new page numbers as a further column and I could then fix these when I process the spreadsheet too.  This task is still very much in progress.

Also for the project this week I created a ‘full screen’ version of the Chambers map that will be pulled into an iframe on the Edinburgh University Library website when they create an online exhibition based on our resource.

Finally this week I helped out Sofia from the Iona Place-names project who as luck would have it was also wanting help with embedding a map in an iframe.  As I’d already done some investigation about this very issue for the Chambers map I was able to easily set this up for Sofia.

 

Week Beginning 8th August 2022

I should have been back at work on Monday this week, after having a lovely holiday last week.  Unfortunately I began feeling unwell over the weekend and ended up off sick on Monday and Tuesday.  I had a fever and a sore throat and needed to sleep most of the time, but it wasn’t Covid as I tested negative.  Thankfully I began feeling more normal again on Tuesday and by Wednesday I was well enough to work again.

I spent the majority of the rest of the week working on the Speak For Yersel project.  On Wednesday I moved the ‘She sounds really clever’ activities to the ‘maps’ page, as we’d decided that these ‘activities’ really just involve looking at the survey outputs and so fit better on the ‘maps’ page.  I also updated some of the text on the ‘about’ and ‘home’ pages and updated the maps to change the gender labels, expanding ‘F’ and ‘M’ and replacing ‘NB’ with ‘other’ as this is a broader option that better aligns with the choices offered during sign-up.  I also added an option to show and hide the map filters that defaults to ‘hide’ but remembers the user’s selection when other map options are chosen.  I added titles to the maps on the ‘Maps’ page and made some other tweaks to the terminology used in the maps.

On Wednesday we had a meeting to discuss the outstanding tasks still left for me to tackle.  This was a very useful meeting and we managed to make some good decisions about how some of the larger outstanding areas will work.  We also managed to get confirmation from Rhona Alcorn of the DSL that we will be able to embed the old web-based version of the Schools Dictionary app for use with some of our questions, which is really great news.

One of the outstanding tasks was to investigate how the map-based quizzes could have their answer options and the correct answer dynamically generated.  This was never part of the original plan for the project, but it became clear that having static answers to questions (e.g. where do people use ‘ginger’ for ‘soft drink’) wasn’t going to work very well when the data users are looking at is dynamically generated and potentially changing all the time – we would be second guessing the outputs of the project rather than letting the data guide the answers.  As dynamically generating answers wasn’t part of the plan and would be pretty complicated to develop this has been left as a ‘would be nice if there’s time’ task, but at our call it was decided that this should now become a priority.  I therefore spent most of Thursday investigating this issue and came up with two potential methods.

The first method looks at each region individually to compare the number of responses for each answer option in the region.  It counts the number of responses for each answer option and then generates a percentage of the total number of responses in the area.  So for example:

North East (Aberdeen)
Mother: 12 (8%)
Maw: 4 (3%)
Mam: 73 (48%)
Mammy: 3 (2%)
Mum: 61 (40%)

So of the 153 current responses in Aberdeen, 73 (48%) were ‘Mam’.  The method then compares the percentages for the particular answer option across all regions to pick out the highest percentage.  The advantage of this approach is that by looking at percentages any differences caused by there being many more respondents in one region over another are alleviated.  If we look purely at counts then a region with a large number of respondents (as with Aberdeen at the moment) will end up with an unfair advantage, even for answer options that are not chosen the most.  E.g. ‘Mother’ has 12 responses, which is currently by far the most in any region, but taken as a percentage it’s roughly in line with other areas.

But there are downsides.  Any region where the option has been chosen but the total number of responses is low will end up with a large percentage.  For example, both Inverness and Dumfries & Galloway currently only have two respondents, but in each case one of these was for ‘Mam’, meaning they pass Aberdeen and would be considered the ‘correct’ answer with 50% each.  If we were to use this method then I would have to put something in place to disregard small samples.  Another downside is that as far as users are concerned they are simply evaluating dots on a map, so perhaps we shouldn’t be trying to address the bias of some areas having more respondents than others because users themselves won’t be addressing this.

This then led me to develop method 2, which only looks at the answer option in question (e.g. ‘Mam’) rather than the answer option within the context of the other answer options.  This method takes a count of the number of responses for the answer option in each region and, for each region, generates a percentage of the total number of answers for the option across Scotland.  So for ‘Mam’ the counts and percentages are as follows:

Ayrshire: 1 (1%)
Fife: 2 (2%)
Glasgow: 2 (2%)
North East (Aberdeen): 73 (84%)
Stirling and Falkirk: 2 (2%)
Lothian (Edinburgh): 1 (1%)
Tayside and Angus (Dundee): 4 (5%)
Dumfries and Galloway: 1 (1%)
Highlands (Inverness): 1 (1%)

Across Scotland there are currently a total of 87 responses where ‘Mam’ was chosen and 73 of these (84%) were in Aberdeen.  As I say, this simple solution probably mirrors how a user will analyse the map – they will see lots of dots in Aberdeen and select this option.  However, it completely ignores the context of the chosen answer.  For example, if we get a massive rush of users from Glasgow (say 2000) and 100 of these choose ‘Mam’ then Glasgow ends up being the correct answer (beating Aberdeen’s 73).  Yet as a proportion of all chosen answers in Glasgow, 100 is only 5% (the other 1900 people will have chosen other options), meaning it would be a pretty unpopular choice compared to the 48% who chose ‘Mam’ over other options in Aberdeen, as mentioned near the start.  But perhaps this is a nuance that users won’t consider anyway.

This latter issue became more apparent when I looked at the output for the use of ‘rocket’ to mean ‘stupid’.  The simple count method has Aberdeen with 45% of the total number of ‘rocket’ responses, but if you look at the ‘rocket’ choices in Aberdeen in context you see that only 3% of respondents in this region selected this option.
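To make the difference between the two methods concrete, here’s a rough sketch of how each might pick the ‘correct’ region for a given answer option (the data structure and function names are illustrative, and method 1 would also need the minimum-sample check discussed above):

// counts[region][option] = number of responses for that option in that region,
// e.g. { 'North East (Aberdeen)': { 'Mam': 73, 'Mum': 61 }, ... }

// Method 1: the option's share of responses *within* each region; highest wins.
function method1(counts, option) {
  var best = null, bestPct = -1;
  Object.keys(counts).forEach(function(region) {
    var total = Object.values(counts[region]).reduce(function(a, b) { return a + b; }, 0);
    var pct = total > 0 ? (counts[region][option] || 0) / total * 100 : 0;
    if (pct > bestPct) { bestPct = pct; best = region; }
  });
  return best;
}

// Method 2: each region's share of *all* responses for the option; highest wins.
function method2(counts, option) {
  var overall = 0;
  Object.keys(counts).forEach(function(region) {
    overall += counts[region][option] || 0;
  });
  var best = null, bestPct = -1;
  Object.keys(counts).forEach(function(region) {
    var pct = overall > 0 ? (counts[region][option] || 0) / overall * 100 : 0;
    if (pct > bestPct) { bestPct = pct; best = region; }
  });
  return best;
}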

There are other issues we will need to consider too.  Some questions currently have multiple regions linked in the answers (e.g. lexical quiz question 4 ‘stour’ has answers ‘Edinburgh and Glasgow’, ‘Shetland and Orkney’ etc.)  We need to decide whether we still want this structure.  This is going to be tricky to get working dynamically as the script would have to join two regions with the most responses together to form the ‘correct’ answer and there’s no guarantee that these areas would be geographically next to each other.  We should perhaps reframe the question; we could have multiple buttons that are ‘correct’ and ask something like ‘stour is used for dust in several parts of Scotland.  Can you pick one?’  Or I guess we could ask the user to pick two.

We also need to decide how to handle the ‘heard throughout Scotland’ questions (e.g. lexical question 6 ‘is greetin’ heard throughout most of Scoatland’).  We need to define what we mean by ‘most of Scotland’ in a way that can be understood programmatically, but thinking about it, we probably also need to better define what we mean by this for users too.  If you don’t know where most of the population of Scotland is situated and purely looked at the distribution of ‘greetin’ on the map you might conclude that it’s not used throughout Scotland at all, but only in the central belt and up the East coast.  But returning to how an algorithm could work out the correct answer for this question: we need to set thresholds for whether an option is used throughout most of Scotland or not.  Should the algorithm only look at certain regions?  Should it count the responses in each region and consider the option in use in the region if (for example) 50% or more respondents chose it?  The algorithm could then count the number of regions that meet this threshold compared to the total number of regions and if (for example) 8 out of our 14 regions surpass the threshold the answer could be deemed ‘true’.  The problem is humans can look at a map and quickly estimate an answer but an algorithm needs more rigid boundaries.
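As a sketch of how such a rule could be coded (using the example figures from above – 50% within a region and 8 of our 14 regions – rather than any agreed thresholds):

// Returns true if the option would be deemed 'heard throughout most of Scotland'.
function heardThroughoutScotland(counts, option) {
  var regions = Object.keys(counts);
  var regionsUsingOption = regions.filter(function(region) {
    var total = Object.values(counts[region]).reduce(function(a, b) { return a + b; }, 0);
    // 'In use' in a region if at least 50% of its respondents chose the option.
    return total > 0 && (counts[region][option] || 0) / total >= 0.5;
  });
  // 'Most of Scotland' if, say, 8 of the 14 regions pass the threshold.
  return regionsUsingOption.length >= 8;
}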

Also, question 15 of the ‘give your word’ quiz asks about the ‘central belt’ but we need to define what regions make this up.  Is it just Glasgow and Lothian (Edinburgh), for example?  We also might need to clarify this for users too.  The ‘I would never say that’ quiz has several questions where one possible answer is ‘All over Scotland’.  If we’re dynamically ascertaining the correct answer then we can’t guarantee that this answer will be one that comes up.  Also, ‘All over Scotland’ may in fact be the correct answer for questions that we haven’t considered this to be an answer for.  What should we do about this?  Two possibilities: firstly, the code for ascertaining the correct answer (for all of the map-based quizzes) could also have a threshold that, when reached, would mean the correct answer is ‘All over Scotland’ and this option would then be included in the question.  This could use the same logic as the ‘heard throughout Scotland’ yes/no questions that I mentioned above.  Secondly, we could reframe the questions that currently have an ‘All over Scotland’ answer option to be the same as the ‘heard throughout Scotland’ yes/no questions found in the lexical quiz and not bother trying to work out whether an ‘all over Scotland’ option needs to be added to any of the other questions.

I also realised that we may end up with a situation where more than one region has a similar number of markers, meaning the system will still easily be able to ascertain which is correct, but users might struggle.  Do we need to consider this eventuality?  I could for example add in a check to see whether any other regions have a similar score to the ‘correct’ one and ensure any that are too close never get picked as the randomly generated ‘wrong’ answer options.  Linked to this: we need to consider whether it is acceptable that the ‘wrong’ answer options will always be randomly generated. The options will be different each time a user loads the quiz question and if they are entirely random this means the question may sometimes be very easy and other times very hard.  Do I need to update the algorithm to add some sort of weighting to how the ‘wrong’ options are chosen?  This will need further discussion with the team next week.

I decided to move onto some of the other outstanding tasks and to leave the dynamically generated map answers issue until Jennifer and Mary are back next week.  I managed to complete the majority of minor updates to the site that were still outstanding during this time, such as updating introductory and explanatory text for the surveys, quizzes and activities, removing or rearranging questions, rewording answers, reinstating the dictionary based questions and tweaking the colour and justification of some of the site text.

This leaves several big issues to tackle before the end of the month, including dynamically generating answers for quiz questions, developing the output for the ‘click’ activity and developing the interactive activities for ‘I would never say that’.  It’s going to be a busy few weeks.

Also this week I continued to process the data for the Books and Borrowing project.  This included uploading images for one more Advocates library register from the NLS, including generating pages, associating images and fixing the page numbering to align with the handwritten numbers.  I also received images for a second register for Haddington library from the NLS, and I needed some help with this as we already have existing pages for this register in the CMS, but the number of images received didn’t match.  Thankfully the RA Kit Baston was able to look over the images and figure out what needed to be done, which included inserting new pages in the CMS and then me writing a script to associate images with records.  I also added two missing pages to the register for Dumfries Presbytery and added in a missing image for Westerkirk library.

Finally, I tweaked the XSLT for the Dictionaries of the Scots Language bibliographies to ensure the style guide reference linked to the most recent version.

Week Beginning 25th July 2022

I was on holiday for most of the previous two weeks, working two days during this period.  I’ll also be on holiday again next week, so I’ve had quite a busy time getting things done.  Whilst I was away I dealt with some queries from Joanna Kopaczyk about the Future of Scots website.  I also had to investigate a request to fill in timesheets for my work on the Speak For Yersel project, as apparently I’d been assigned to the project as ‘Directly incurred’ when I should have been ‘Directly allocated’.  Hopefully we’ll be able to get me reclassified but this is still in-progress.  I also fixed a couple of issues with the facility to export data for publication for the Berwickshire place-name project for Carole Hough, and fixed an issue with an entry in the DSL, which was appearing in the wrong place in the dictionary.  It turned out that the wrong ‘url’ tag had been added to the entry’s XML several years ago and since then the entry was wrongly positioned.  I fixed the XML and this sorted things.  I also responded to a query from Geert of the Anglo-Norman Dictionary about Aberystwyth’s new VPN and whether this would affect his access to the AND.  I also investigated an issue Simon Taylor was having when logging into a couple of our place-names systems.

On the Monday I returned to work I launched two new resources for different projects.  For the Books and Borrowing project I published the Chambers Library Map (https://borrowing.stir.ac.uk/chambers-library-map/) and reorganised the site menu to make space for the new page link.  The resource has been very well received and I’m pretty pleased with how it’s turned out.  For the Seeing Speech project I launched the new Gaelic Tongues resource (https://www.seeingspeech.ac.uk/gaelic-tongues/) which has received a lot of press coverage, which is great for all involved.

I spent the rest of the week dividing my time primarily between three projects:  Speak For Yersel, Books and Borrowing and Speech Star.  For Books and Borrowing I continued processing the backlog of library register image files that has built up.  There were about 15 registers that needed to be processed, and each needed to be handled in a different way.  This included nine registers from Advocates Library that had been digitised by the NLS, for which I needed to batch process the images to rename them, delete blank pages, create page records in the CMS and then tweak the automatically generated folio numbers to account for discrepancies in the handwritten page number in the images.  I also processed a register for the Royal High School, which involved renaming the images so they match up with image numbers already assigned to page records in the CMS, inserting new page records and updating the ‘next’ and ‘previous’ links for pages for which new images had been uncovered and generating new page records for many tens of new pages that follow on from the ones that have already been created in the CMS.  I also uploaded new images for the Craigston register and created a new register including all page records and associated image URLs for a further register for Aberdeen.  I still have some further RHS registers to do and a few from St Andrews, but these will need to wait until I’m back from my holiday.

For Speech Star I downloaded a ZIP containing 500 new ultrasound MP4 videos.  I then had to process them to generate ‘poster’ images for each video (these are images that get displayed before the user chooses to play the video).  I then had to replace the existing normalised speech database with data from a new spreadsheet that included these new videos plus updates to some of the existing data.  This included adding a few new fields and changing the way the age filter works, as much of the new data is for child speakers who have specific ages in months and years, and these all need to be added to a new ‘under 18’ age group.
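I won’t go into the batch processing here, but generating a poster frame from each MP4 is the sort of thing a short script wrapping ffmpeg can handle; here’s a sketch of one possible approach (this isn’t necessarily the tool that was used, and the folder name is made up):

// Node.js sketch: grab a frame one second into each video as its poster image.
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const videoDir = 'videos'; // hypothetical folder containing the MP4 files
fs.readdirSync(videoDir)
  .filter(function(f) { return f.endsWith('.mp4'); })
  .forEach(function(f) {
    const input = path.join(videoDir, f);
    const poster = path.join(videoDir, f.replace(/\.mp4$/, '.jpg'));
    // -ss 1 seeks one second in, -frames:v 1 writes a single frame.
    execFileSync('ffmpeg', ['-i', input, '-ss', '1', '-frames:v', '1', '-y', poster]);
  });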

For Speak For Yersel I had an awful lot to do.  I started with a further large-scale restructuring of the website following feedback from the rest of the team.  This included changing the site menu order, adding in new final pages to the end of surveys and quizzes and changing the text of buttons that appear when displaying the final question.

I then developed the map filter options for age and education for all of the main maps.  This was a major overhaul of the maps.  I removed the slide up / slide down of the map area when an option is selected as this was a bit long and distracting.  Now the map area just updates (although there is a bit of a flicker as the data gets replaced).  The filter options unfortunately make the options section rather big, which is going to be an issue on a small screen.  On my mobile phone the options section takes up 100% of the width and 80% of the height of the map area unless I press the ‘full screen’ button.  However, I figured out a way to ensure that the filter options section scrolls if the content extends beyond the bottom of the map.

I also realised that if you’re in full screen mode and you select a filter option the map exits full screen as the map section of the page reloads.  This is very annoying, but I may not be able to fix it as it would mean completely changing how the maps are loaded.  This is because such filters and options were never intended to be included in the maps and the system was never developed to allow for this.  I’ve had to somewhat shoehorn in the filter options and it’s not how I would have done things had I known from the beginning that these options were required.  However, the filters work and I’m sure they will be useful.  I’ve added in filters for age, education and gender, as you can see in the following screenshot:

I also updated the ‘Give your word’ activity that asks users to identify younger and older speakers so that it uses the new filters too.  The map defaults to showing ‘all’ and the user then needs to choose an age.  I’m still not sure how useful this activity will be as the total number of dots for each speaker group varies considerably, which can easily give the impression that more of one age group use a form compared to another age group purely because one age group has more dots overall.  The questions don’t actually ask anything about geographical distribution so having the map doesn’t really serve much purpose when it comes to answering the question.  I can’t help but think that just presenting people with percentages would work better, or some other sort of visualisation like a bar graph.

I then moved on to working on the quiz for ‘she sounds really clever’ and so far I have completed both the first part of the quiz (questions about ratings in general) and the second part (questions about listeners from a specific region and their ratings of speakers from regions).  It’s taken a lot of brain-power to get this working as I decided to make the system work out the correct answer and to present it as an option alongside randomly selected wrong answers.  This has been pretty tricky to implement (especially as depending on the question the ‘correct’ answer is either the highest or the lowest) but will make the quiz much more flexible – as the data changes so will the quiz.

Part one of the quiz page itself is pretty simple.  There is the usual section on the left with the question and the possible answers.  On the right is a section containing a box to select a speaker and the rating sliders (read-only).  When you select a speaker the sliders animate to their appropriate locations.  I decided not to include the map or the audio file as these didn’t really seem necessary for answering the questions, would clutter up the screen, and people can access them via the maps page anyway (well, once I move things from the ‘activities’ section).  Note that the user’s answers are stored in the database (the region selected and whether this was the correct answer at the time).  Part two of the quiz features speaker/listener true/false questions and this also automatically works out the correct answer (currently based on the 50% threshold).  Note that where there is no data for a listener rating a speaker from a region the rating defaults to 50.  We should ensure that we have at least one rating for a listener in each region before we let people answer these questions.  Here is a screenshot of part one of the quiz in action, with randomly selected ‘wrong’ answers and a dynamically outputted ‘right’ answer:

I also wrote a little script to identify duplicate lexemes in categories in the Historical Thesaurus as it turns out there are some occasions where a lexeme appears more than once (with different dates) and this shouldn’t happen.  These will need to be investigated and the correct dates will need to be established.

I will be on holiday again next week so there won’t be another post until the week after I’m back.

 

Week Beginning 4th July 2022

I had a lovely week’s holiday last week and returned to work for one week only before I head off for a further two weeks.  I spent most of my time this week working on the Speak For Yersel project implementing a huge array of changes that the team wanted to make following the periods of testing in schools a couple of weeks ago.  There were also some new sections of the resource to work on as well.

By Tuesday I had completed the restructuring of the site as detailed in the ‘Roadmap’ document, meaning the survey and quizzes have been separated, as have the ‘activities’ and ‘explore maps’.  This has required quite a lot of restructuring of the code, but I think all is working as it should.  I also updated the homepage text.  One thing I wasn’t sure about is what should happen when the user reaches the end of the survey.  Previously this led into the quiz, but for now I’ve created a page that provides links to the quiz, the ‘more activities’ and the ‘explore maps’ options for the survey in question.

The quizzes should work as they did before, but they now have their own progress bar.  Currently at the end of the quiz the only link offered is to explore the maps, but we should perhaps change this.  The ‘more activities’ work slightly differently to how these were laid out in the roadmap.  Previously a user selected an activity then it loaded an index page with links to the activities and the maps.  As the maps are now separated this index page was pretty pointless, so instead when you select an activity it launches straight into it.  The only one that still has an index page is the ‘Clever’ one as this has multiple options.  However, thinking about this activity:  it’s really just an ‘explore’ like the ‘explore maps’ rather than an actual interactive activity per se, so we should perhaps move this to the ‘explore’ page.

I also made all of the changes to the ‘sounds about right’ survey including replacing sound files and adding / removing questions.  I ended up adding a new ‘question order’ field to the database and questions are now ordered using this, as previously the order was just set by the auto-incrementing database ID which meant inserting a new question to appear midway through the survey was very tricky.  Hopefully this change of ordering hasn’t had any knock-on effects elsewhere.

I then made all of the changes to two other activities:  the ‘lexical’ one and the ‘grammatical’ one.  These included quite a lot of tweaks to questions, question options, question orders and the number of answers that could be selected for questions.  With all of this in place I moved onto the ‘Where do you think this speaker is from’ sections.  The ‘survey’ now only consists of the click map and when you press the ‘Check Answers’ button some text appears under the buttons with links through to where the user can go next.

For the ‘more activities’ section the main click activity is now located here.  It took quite a while to get this to work, as moving sections introduced some conflicts in the code that were a bit tricky to identify.  I replaced the explanatory text and I also added in the limit to the number of presses.  I’ve added a section to the right of the buttons that displays the number of presses the user has left.  Once there are no presses left the ‘Press’ button gets disabled.  I still think people are going to reach the 5 click limit too soon and will get annoyed when they realise they can’t add further clicks and they can’t reset the exercise to give it another go.  After you’ve listened to the four speakers a page is displayed saying you’ve completed the activity and giving links to other parts.  Below is a screenshot of the new ‘click’ activity with the limit in place (and also the new site menu):

 

The ’Quiz’ has taken quite some time to implement but is now fully operational.  I had to do a lot of work behind the scenes to get the percentages figured out and to get the quiz to automatically work out which answer should be the correct one, but it all works now.  The map displays the ‘Play’ icons as I figured people would want to be able to hear the clips as well as just see the percentages.  Beside each clip icon the percentage of respondents who correctly identified the location of the speaker is displayed.  The markers are placed at the ‘correct’ points on the map, as shown when you view the correct locations in the survey activities.  Question 1 asks you to identify the most recognised, question 2 the least recognised.  Quiz answers are logged in the database so we’ll be able to track answers.  Here’s a screenshot of the quiz:

I also added the percentage map to the ‘explore maps’ page too, and I gave people the option of focussing on the answers submitted from specific regions.  An ‘All regions’ map displays the same data as the quiz map, but the user can then choose (for example) Glasgow and view the percentages of correctly identified speakers as identified by respondents from the Glasgow area, thus allowing them to compare how well people in each area managed to identify the speakers.  I decided to add a count of the number of people that have responded too.

The ‘explore maps’ for ‘guess the region’ has a familiar layout – buttons on the left that when pressed on load a map on the right.  The buttons correspond to the region of people who completed the ‘guess the region’ survey.  The first option shows the answers of all respondents from all regions.  This is exactly the same as the map in the quiz, except I’ve also displayed the number of respondents above the map.  Two things to be aware of:

Firstly, a respondent can complete the quiz as many times as they want, so each respondent may have multiple datasets.  Secondly, the click map (both quiz and ‘explore maps’) currently includes people from outside of Scotland as well as people who selected an area when registering.  There are currently 18 respondents and 3 of these are outside of Scotland.

When you click on a specific region button in the left-hand column the results of respondents from that specific region only are displayed on the map.  The number of respondents is also listed above the map.  Most of the regions currently have no respondents, meaning an empty map is displayed and a note above the map explains why.  Ayrshire has one respondent.  Glasgow has two.  Note that the reason there are such varied percentages in Glasgow from just two respondents (rather than just 100%, 50% and 0%) is because one or more of the respondents has completed the quiz more than once.  Lothian has two respondents.  North East has 10.  Here’s how the maps look:

On Friday I began to work on the ‘click transcription’ visualisations, which will display how many times users have clicked in each of the sections of the transcripts they listen to in the ‘click’ activity.  I only managed to get as far as writing the queries and scripts to generate the data, rather than any actual visualisation of it.  When looking at the aggregated data for the four speakers I discovered that the distribution of clicks across sections was a bit more uniform than I thought it might be.  We might need to consider how we’re going to work out the thresholds for different sizes.  I was going to base it purely on the number of clicks, but I realised that this would not work as the more responses we get the more clicks there will be.  Instead I decided to use percentages of the total number of clicks for a speaker.  E.g. for speaker 4 there are currently a total of 65 clicks, so the percentages for each section would be:

 

11% Have you seen the TikTok vids with the illusions?
6% They’re brilliant!
9% I just watched the glass one.
17% The guy’s got this big glass full of water in his hands.
8% He then puts it down,
8% takes out one of those big knives
6% and slices right through it.
6% I sometimes get so fed up with Tiktok
8% – really does my head in –
8% but I’m not joking,
14% I want to see more and more of this guy.

 

(which adds up to 101% with rounding).  But what should the thresholds be?  E.g. 0-6% = regular, 7-10% = bigger, 11-15% = even bigger, 16%+ = biggest?  I’ll need input from the team about this.  I’m not a statistician, so there may well be better approaches, such as using standard deviation.

I still have quite a lot of work to do for the project, namely:  Completing the ‘where do you think the speaker is from’ as detailed above; implementing the ‘she sounds really clever’ updates; adding in filter options to the map (age ranges and education levels); investigating dynamically working out the correct answers to map-based quizzes.

In addition to my Speak For Yersel work I participated in an interview with the AHRC about the role of technicians in research projects.  I’d participated in a focus group a few weeks ago and this was a one-on-one follow-up video call to discuss in greater detail some of the points I’d raised in the focus group.  It was a good opportunity to discuss my role and some of the issues I’ve encountered over the years.

I also installed some new themes for the OHOS project website and fixed an issue with the Anglo-Norman Dictionary website, as the editor had noticed that cognate references were not always working.  After some investigation I realised that this was happening when the references for a cognate dictionary included empty tags as well as completed tags.  I had to significantly change how this section of the entry is generated in the XSLT from the XML, which took some time to implement and test.  All seems to be working, though.

I also did some work for the Books and Borrowing project.  Whilst I’d been on holiday I’d been sent page images for a further ten library registers and I needed to process these.  This can be something of a time-consuming process as each set of images needs to be processed in a different way, such as renaming images, removing unnecessary images at the start and end, uploading the images to the server, generating the page images for each register and then bringing the automatically generated page numbers into line with any handwritten page numbers on the images, which may not always be sequentially numbered.  I processed two registers for the Advocates library from the NLS and three registers from Aberdeen library.  I looked into processing the images for a register from the High School of Edinburgh, but I had some questions about the images and didn’t hear back from the researcher before the end of the week, so I needed to leave these.  The remaining registers were from St Andrews and I had further questions about these, as the images are double-page spreads but existing page records in the CMS treat each page separately.  As the researcher dealing with St Andrews was on holiday I’ll need to wait until I’m back to deal with these too.

Also this week I completed the two mandatory Moodle courses about computer security and GDPR, which took a bit longer than I thought they might.

Week Beginning 20th June 2022

I completed an initial version of the Chambers Library map for the Books and Borrowing project this week.  It took quite a lot of time and effort to implement the subscription period range slider.  Searching for a range when the data also has a range of dates rather than a single date means we needed to make a decision about what data gets returned and what doesn’t.  This is because the two ranges (the one chosen as a filter by the user and the one denoting the start and end periods of subscription for each borrower) can overlap in many different ways.  For example, the period chosen by the user is 05 1828 to 06 1829.  Which of the following borrowers should therefore be returned?

  1. Borrower’s range is 06 1828 to 02 1829: the range is fully within the selected period, so should definitely be included.
  2. Borrower’s range is 01 1828 to 07 1828: the range extends beyond the selected period at the start and ends within the selected period.  Presumably should be included.
  3. Borrower’s range is 01 1828 to 09 1829: the range extends beyond the selected period in both directions.  Presumably should be included.
  4. Borrower’s range is 05 1829 to 09 1829: the range begins during the selected period and ends beyond the selected period.  Presumably should be included.
  5. Borrower’s range is 01 1828 to 04 1828: the range is entirely before the selected period.  Should not be included.
  6. Borrower’s range is 07 1829 to 10 1829: the range is entirely after the selected period.  Should not be included.

Basically if there is any overlap between the selected period and the borrower’s subscription period the borrower will be returned.  But this means most borrowers will be returned a lot of the time.  It’s a very different sort of filter to one that purely focuses on a single date – e.g. filtering the data to only those borrowers whose subscription period *begins* between 05 1828 and 06 1829.

Based on the above assumptions I began to write the logic that would decide which borrowers to include when the range slider is altered.  It was further complicated by having to deal with months as well as years.  Here’s the logic in full if you fancy getting a headache:

if(((mapData[i].sYear>startYear || (mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth || mapData[i].sYear>startYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth <=endMonth) || mapData[i].eYear<endYear) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)) || (((mapData[i].sYear==startYear && mapData[i].sMonth>=startMonth) || mapData[i].sYear>startYear) && ((mapData[i].sYear==endYear && mapData[i].sMonth <=endMonth) || mapData[i].sYear<endYear) && ((mapData[i].eYear==endYear && mapData[i].eMonth >=endMonth) || mapData[i].eYear>endYear)) || ((mapData[i].sYear<startYear ||(mapData[i].sYear==startYear && mapData[i].sMonth<=startMonth)) && ((mapData[i].eYear==startYear && mapData[i].eMonth >=startMonth) || mapData[i].eYear>startYear)))
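For what it’s worth, the same ‘any overlap’ rule can be written much more compactly by converting each year/month pair into a single month count and checking that the two ranges intersect; an equivalent (if less headache-inducing) formulation would be something like:

// Convert a year/month pair to a single month index so ranges can be
// compared with plain integer arithmetic.
function toMonths(year, month) {
  return year * 12 + (month - 1);
}

// True if the borrower's subscription period overlaps the selected period at all.
function overlaps(d, startYear, startMonth, endYear, endMonth) {
  var selStart = toMonths(startYear, startMonth);
  var selEnd = toMonths(endYear, endMonth);
  var subStart = toMonths(d.sYear, d.sMonth);
  var subEnd = toMonths(d.eYear, d.eMonth);
  return subStart <= selEnd && subEnd >= selStart;
}

Two ranges overlap exactly when each one starts before the other ends, which is all the long condition above is checking case by case.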

I also added the subscription period to the popups.  The only downside to the range slider is that the occupation marker colours change depending on how many occupations are present during a period, so you can’t always tell an occupation by its colour. I might see if I can fix the colours in place, but it might not be possible.

I also noticed that the jQuery UI sliders weren’t working very well on touchscreens so installed the jQuery TouchPunch library to fix that (https://github.com/furf/jquery-ui-touch-punch).  I also made the library marker bigger and gave it a white border to more easily differentiate it from the borrower markers.

I then moved onto incorporating page images in the resource too.  Where a borrower has borrowing records, the relevant pages where these records are found now appear as thumbnails in the borrower popup.  These are generated by the IIIF server based on dimensions passed to it, which is much nicer than having to generate and store thumbnails directly.  I also updated the popup to make it wider when required to give more space for the thumbnails.  Here’s a screenshot of the new thumbnails in action:
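The thumbnails are simply requested from the IIIF server at the required size, following the standard IIIF Image API URL pattern of {identifier}/{region}/{size}/{rotation}/{quality}.{format}; for example, a 150-pixel-wide thumbnail of a full page might be requested with a URL along these lines (the server and identifier here are made up):

// e.g. a 150px-wide thumbnail of the whole page image
var thumbUrl = 'https://images.example.ac.uk/iiif/register-1-page-42/full/150,/0/default.jpg';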

Clicking on a thumbnail opens a further popup containing a zoomable / pannable image of the page.  This proved to be rather tricky to implement.  Initially I was going to open a popup in the page (outside of the map container) using a jQuery UI Dialog.  However, I realised that this wouldn’t work when the map was being viewed in full-screen mode, as nothing beyond the map container is visible in such circumstances.  I then considered opening the image in the borrower popup but this wasn’t really big enough.  I then wondered about extending the ‘Map options’ section and replacing the contents of this with the image, but this then caused issues for the contents of the ‘Map options’ section, which didn’t reinitialise properly when the contents were reinstated.  I then found a plugin for the Leaflet mapping library that provides a popup within the map interface (https://github.com/w8r/Leaflet.Modal) and decided to use this.  However, it’s all a little complex as the popup then has to include another mapping library called OpenLayers that enables the zooming and panning of the page image, all within the framework of the overall interactive map.  It is all working and I think it works pretty well, although I guess the map interface is a little cluttered, what with the ‘Map Options’ section, the map legend, the borrower popup and then the page image popup as well.  Here’s a screenshot with the page image open:

All that’s left to do now is add in the introductory text once Alex has prepared it and then make the map live.  We might need to rearrange the site’s menu to add in a link to the Chambers Map as it’s already a bit cluttered.

Also for the project I downloaded images for two further library registers for St Andrews that had previously been missed.  However, there are already records for the registers and pages in the CMS so we’re going to have to figure out a way to work out which image corresponds to which page in the CMS.  One register has a different number of pages in the CMS compared to the image files so we need to work out how to align the start and end and if there are any gaps or issues in the middle.  The other register is more complicated because the images are double pages whereas it looks like the page records in the CMS are for individual pages.  I’m not sure how best to handle this.  I could either try and batch process the images to chop them up or batch process the page records to join them together.  I’ll need to discuss this further with Gerry, who is dealing with the data for St Andrews.

Also this week I prepared for and gave a talk to a group of students from Michigan State University who were learning about digital humanities.  I talked to them for about an hour about a number of projects, such as the Burns Supper map (https://burnsc21.glasgow.ac.uk/supper-map/), the digital edition I’d created for New Modernist Editing (https://nme-digital-ode.glasgow.ac.uk/), the Historical Thesaurus (https://ht.ac.uk/), Books and Borrowing (https://borrowing.stir.ac.uk/) and TheGlasgowStory (https://theglasgowstory.com/).  It went pretty well and it was nice to be able to talk about some of the projects I’ve been involved with for a change.

I also made some further tweaks to the Gentle Shepherd Performances page, which is now ready to launch, and helped Geert out with a few changes to the WordPress pages of the Anglo-Norman Dictionary.  I also made a few tweaks to the WordPress pages of the DSL website and finally managed to get a hotel room booked for the DHC conference in Sheffield in September.  I also made a couple of changes to the new Gaelic Tongues section of the Seeing Speech website and had a discussion with Eleanor about the filters for Speech Star.  Fraser had been in touch about 500 Historical Thesaurus categories that had been newly matched to OED categories, so I created a little script to add these connections to the online database.

I also had a Zoom call with the Speak For Yersel team.  They had been testing out the resource at secondary schools in the North East and have come away with lots of suggested changes to the content and structure of the resource.  We discussed all of these and agreed that I would work on implementing the changes the week after next.

Next week I’m going to be on holiday, which I have to say I’m quite looking forward to.

Week Beginning 13th June 2022

I worked for several different projects this week.  For the Books and Borrowing project I processed and imported a further register for the Advocates library that had been digitised by the NLS.  I also continued with the interactive map of Chambers library borrowers, although I couldn’t spend as much time on this as I’d hoped as my access to Stirling University’s VPN had stopped working and without VPN access I can’t connect to the database and the project server.  It took a while to resolve the issue as access needs to be approved by some manager or other, but once it was sorted I got to work on some updates.

One thing I’d noticed last week was that when zooming and panning the historical map layer was throwing out hundreds of 403 Forbidden errors to the browser console.  This was not having any impact on the user experience, but was still a bit messy and I wanted to get to the bottom of the issue.  I had a very helpful (as always) chat with Chris Fleet at NLS Maps, who provided the historical map layer and he reckoned it was because the historical map only covers a certain area and moving beyond this was still sending requests for map tiles that didn’t exist.  Thankfully an option exists in Leaflet that allows you to set the boundaries for a map layer (https://leafletjs.com/reference.html#latlngbounds) and I updated the code to do just that, which seems to have stopped the errors.
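In Leaflet this just means passing a bounds object as an option when the tile layer is created; here’s a minimal sketch (with a made-up tile URL and extent rather than the actual NLS layer details):

// Restrict the historical tile layer to its own extent so Leaflet stops
// requesting tiles that don't exist outside it.
var histBounds = L.latLngBounds([55.8, -4.4], [56.0, -4.1]); // hypothetical extent
var histLayer = L.tileLayer('https://tiles.example.org/{z}/{x}/{y}.png', {
  bounds: histBounds,
  maxZoom: 18
});
histLayer.addTo(map);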

I then returned to the occupations categorisation, which included far too many options.  I therefore streamlined the occupations, displaying the top-level occupation only.  I think this works a lot better (although I need to change the icon colour for ‘unknown’).  Full occupation information is still available for each borrower via the popup.

I also had to change the range slider for opacity as standard HTML range sliders don’t allow for double-ended ranges.  We require a double-ended range for the subscription period and I didn’t want to have two range sliders that looked different on one page.  I therefore switched to a range slider offered by the jQuery UI interface library (https://jqueryui.com/slider/#range).  The opacity slider still works as before, it just looks a little different.  Actually, it works better than before, as the opacity now changes as you slide rather than only updating after you mouse-up.

I then began to implement the subscription period slider.  This does not yet update the data.  It’s been pretty tricky to implement this.  The range needs to be dynamically generated based on the earliest and latest dates in the data, and dates are both year and month, which need to be converted into plain integers for the slider and then reinterpreted as years and months when the user updates the end positions.  I think I’ve got this working as it should, though.  When you update the ends of the slider the text above that lists the months and years updates to reflect this.  The next step will be to actually filter the data based on the chosen period.  Here’s a screenshot of the map featuring data categorised by the new streamlined occupations and the new sliders displayed:
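On the technical side, the conversion between the slider’s integer positions and the year/month labels is just months-since-a-base-year arithmetic in both directions; here’s a sketch using the jQuery UI range slider (the element IDs and date range are illustrative):

// Suppose the data runs from May 1828 to June 1829.
var baseYear = 1828;
function toIndex(year, month) { return (year - baseYear) * 12 + (month - 1); }
function toLabel(index) {
  var year = baseYear + Math.floor(index / 12);
  var month = (index % 12) + 1;
  return ('0' + month).slice(-2) + ' ' + year; // e.g. '05 1828'
}

$('#period-slider').slider({
  range: true,
  min: toIndex(1828, 5),
  max: toIndex(1829, 6),
  values: [toIndex(1828, 5), toIndex(1829, 6)],
  slide: function(event, ui) {
    $('#period-label').text(toLabel(ui.values[0]) + ' to ' + toLabel(ui.values[1]));
    // filtering of the map data by the chosen period would then happen here
  }
});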

For the Speak For Yersel project I made a number of tweaks to the resource, which Jennifer and Mary are piloting with school children in the North East this week.  I added in a new grammatical question and seven grammatical quiz questions.  I tweaked the homepage text and updated the structure of questions 27-29 of the ‘sound about right’ activity.  I ensured that ‘Dumfries’ always appears as ‘Dumfries and Galloway’ in the ‘clever’ activity and follow-on and updated the ‘clever’ activity to remove the stereotype questions.  These were the ones where users had to rate the speakers from a region without first listening to any audio clips and Jennifer reckoned these were taking too long to complete.  I also updated the ‘clever’ follow-on to hide the stereotype options and switched the order of the listener and speaker options in the other follow-on activity for this type.

For the Speech Star project I replaced the data for the child speech error database with a new, expanded dataset and added ‘Speaker Code’ as a filter option.  I also replicated the child speech and normalised speech databases from the clinical website we’re creating on the more academic teaching site, and pulled the IPA chart from Seeing Speech into this resource too.  Here’s a screenshot of how the child speech error database looks with the new ‘speaker code’ filter with ‘vowel disorder’ selected:

I also responded to Craig Lamont in Scottish Literature with some further feedback on the structure of his Burns Manuscript Database spreadsheet, which is now shaping up nicely.  Craig had also sent me an updated spreadsheet with data for the Ramsay Gentle Shepherd performances project.  I’d set this up (interactive map, timeline and filterable tabular data) a few weeks ago, migrating it to the University’s T4 website management system.  All had worked then, but when I logged into T4 and previewed the page I’d previously created I discovered it no longer worked.  The page hadn’t been updated since the end of May and I had no idea what had gone wrong.  I can only assume that the linked content (i.e. the links to the JavaScript files) had somehow become unlinked.  I decided, therefore, that it would be easier to host the JavaScript files on another server I have direct access to rather than having to shoehorn it all into T4.  I made an updated version with the new dataset and this is working well.

I also made a couple of tweaks to the DSL this week, installing the TablePress plugin for the ancillary pages and creating a further alternative logo for the DSL’s Facebook posts.  I also returned to doing some work for the Anglo-Norman Dictionary, offering some advice to the editor Geert about incorporating publications and overhauling how cross references are displayed in the Dictionary Management System.

I updated the ‘View Entry’ page in the DMS.  Previously it only included cross references FROM the entry you’re looking at TO other entries, i.e. it only displayed content when the entry was of type ‘xref’ rather than ‘main’.  Now, in addition to this, there’s a further section listing all cross references TO the entry you’re looking at from any entry of type ‘xref’ that links to it.

In addition there is a button allowing you to view all entries that include a cross reference to the current entry anywhere in their XML – i.e. where an <xref> tag that features the current entry’s slug is found at any level in any other main entry’s XML.  This code is hugely memory intensive to run, as basically all 27,464 main entries need to be pulled into the script, with the full XML contents of each checked for matching xrefs.  For this reason the code doesn’t run each time the ‘view entry’ page is loaded but only when you actively press the button.  It takes a few seconds for the script to process, but once it has finished the cross references are listed in the same manner as the ‘pure’ xrefs in the preceding sections.
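
The actual check runs on the server, but the logic is roughly as follows (this JavaScript version is purely illustrative, and the ‘ref’ attribute name is an assumption rather than the dictionary’s actual markup):

function referencesSlug(entryXml, slug) {
    // parse an entry's XML and look for an <xref> pointing at the given slug, at any depth
    var doc = new DOMParser().parseFromString(entryXml, 'application/xml');
    var xrefs = doc.getElementsByTagName('xref');
    for (var i = 0; i < xrefs.length; i++) {
        if (xrefs[i].getAttribute('ref') === slug) { // 'ref' is a hypothetical attribute name
            return true;
        }
    }
    return false;
}
// in essence: run this against every main entry's XML and list the entries that return true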

Finally, I participated in a Zoom-based focus group for the AHRC this week about the role of technicians in research projects.  It was great to take part, sharing my views on my role and hearing from people with similar roles at other organisations.

Week Beginning 6th June 2022

I’d taken Monday off this week to have an extra-long weekend following the jubilee holidays on the Thursday and Friday of last week.  On Tuesday I returned to another Speak For Yersel meeting and a list of further tweaks to the site, including many changes to three of the five activities and a new set of colours for the map marker icons, which make the markers much easier to differentiate.

I spent most of the week working on the Books and Borrowing project.  We’d been sent a new library register from the NLS and I spent a bit of time downloading the 700 or so images, processing them and uploading them into our system.  As usual, the page numbering goes a bit weird: page 632 is written as 634, and then after page 669 comes not 670 but 700!  I ran my script to bring the page numbers in the system into line with the oddities of the written numbers.  On Friday I downloaded a further library register, which I’ll need to process next week.

My main focus for the project was the Chambers Library interactive map sub-site.  The map features the John Ainslie 1804 map from the NLS as its historical layer, and currently uses the same modern base map as I’ve used elsewhere in the front-end for consistency, although this may change.  The map defaults to having a ‘Map options’ pane open on the left, which you can open and close using the button above it.  I also added a ‘Full screen’ button beneath the zoom buttons in the bottom right, and added this to the other maps in the front-end too.  Borrower markers have a ‘person’ icon and the library itself has the ‘open book’ icon as found on the other maps.

By default the data is categorised by borrower gender, with somewhat stereotypical (but possibly helpful) blue and pink colours differentiating the two.  There is one borrower with an ‘unknown’ gender and this is set to green.  The map legend in the top right allows you to turn on and off specific data groups.  The screenshot below shows this categorisation:

The next categorisation option is occupation, and this has some problems.  The first is that there are almost 30 different occupations, meaning the legend is awfully long and so many different marker colours are needed that some of them are difficult to tell apart.  Secondly, most occupations only have a handful of people.  Thirdly, some people have multiple occupations, and where this happens the occupations are treated as one long combined occupation, so we have both ‘Independent Means > Gentleman’ and ‘Independent Means > Gentleman, Politics/Office Holders > MP (Britain)’.  It would be tricky to separate these out, as the marker would then need to belong to two sets with two colours, and what happens if you hide one of those sets?  I wonder if we should just use the top-level categorisation for the groupings instead.  This would result in 12 groupings plus ‘unknown’, meaning the legend would be both shorter and narrower.  Below is a screenshot of the occupation categorisation as it currently stands:

The next categorisation is subscription type, which I don’t think needs any explanation.  I then decided to add a further categorisation for number of borrowings, which wasn’t originally discussed, but as I used the page I found myself looking for an option to see who borrowed the most and who didn’t borrow anything at all.  I added the following groupings, though these may change: 0, 1-10, 11-20, 21-50, 51-70 and 70+, and I’ve used a sequential colour scale (darker = more borrowings).  We might want to tweak this, though, as some of the colours are a bit too similar.  I haven’t added in the filter to select subscription period yet, but will look into this next week.
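
The binning itself is straightforward; something along these lines, though the hex colours here are placeholders rather than the scale actually used:

function borrowingColour(count) {
    // sequential scale following the groupings above: the darker the colour, the more borrowings
    if (count === 0) return '#f7fbff';
    if (count <= 10) return '#c6dbef';
    if (count <= 20) return '#6baed6';
    if (count <= 50) return '#3182bd';
    if (count <= 70) return '#08519c';
    return '#08306b';
}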

At the bottom of the map options is a facility to change the opacity of the historical map so you can see the modern street layout underneath.  This is handy, for example, for figuring out why there is a cluster of markers in what appears on the historical map to be an empty field: ‘Ainslie Place’ was presumably built there after the historical map was produced.

I decided not to include the marker clustering option in this map for now, as clustering would make it more difficult to analyse the categorisation: markers from multiple groupings would end up clustered together and lose their individual colours until the cluster is split apart.  Marker hover-overs display the borrower name and the pop-ups contain information about the borrower.  I still need to add in the borrowing period data, and also figure out how best to link out to information about the borrowings or the page images.  The Chambers Library pin displays the same information as found on the ‘libraries’ page you’ve previously seen.
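
As a rough idea of how a marker with a hover-over name and a pop-up is put together in Leaflet (the coordinates, icon and field names below are placeholders, not the project’s actual data):

var personIcon = L.divIcon({ className: 'person-icon' }); // stand-in for the real 'person' icon
var borrower = { name: 'Example Borrower', occupation: 'Independent Means > Gentleman' };
L.marker([55.953, -3.188], { icon: personIcon })
    .bindTooltip(borrower.name) // shown on hover
    .bindPopup('<b>' + borrower.name + '</b><br>' + borrower.occupation) // shown on click
    .addTo(map); // 'map' being the existing Leaflet map object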

Also this week I responded to a couple of queries from the DSL people about Google Analytics and the icon that gets used for the site when posting on Facebook.  Facebook was picking out the University of Glasgow logo rather than the DSL one, which wasn’t ideal.  Apparently there’s a ‘meta’ tag (the Open Graph ‘og:image’ property) that you need to add to the site header in order for Facebook to pick up the correct logo, as discussed here: https://stackoverflow.com/questions/7836753/how-to-customize-the-icon-displayed-on-facebook-when-posting-a-url-onto-wall

I also created a new user for the Ayr place-names project and dealt with a couple of minor issues with the CMS that Simon Taylor had encountered.  I also investigated a certificate error with the ohos.ac.uk website and responded to a query about QR codes from fellow developer David Wilson.  Also, Craig Lamont in Scottish Literature got in touch about a spreadsheet listing Burns manuscripts that he’s been working on, with a view to turning it into a searchable online resource, and I gave him some feedback about the structure of the spreadsheet.

Finally, I did a bit of work for the Historical Thesaurus, working on a further script to match up HT and OED categories based on suggestions by researcher Beth Beattie.  I found a script I’d produced in 2018 that ran pattern matching on headings and I adapted it to only look at subcats within 02.02 and 02.03, picking out all unmatched OED subcats from these (there are 627) and then finding all unmatched HT categories where our ‘t’ numbers match the OED path.  Previously the script used the HT oedmaincat column to link up OED and HT, but this no longer matches (e.g. HT ‘smarten up’ has ‘t’ nums 02.02.16.02, which matches OED 02.02.16.02 ‘to smarten up’, whereas HT ‘oedmaincat’ is ’02.04.05.02’).
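
As a rough illustration of the approach – pairing each unmatched OED subcat with the unmatched HT categories whose ‘t’ numbers equal the OED path, and recording how close the headings are – here is a sketch in which the data structures and field names are mine, not the actual database columns:

function levenshtein(a, b) {
    // standard dynamic-programming edit distance
    var d = [];
    for (var i = 0; i <= a.length; i++) { d[i] = [i]; }
    for (var j = 1; j <= b.length; j++) { d[0][j] = j; }
    for (i = 1; i <= a.length; i++) {
        for (j = 1; j <= b.length; j++) {
            var cost = a[i - 1] === b[j - 1] ? 0 : 1;
            d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost);
        }
    }
    return d[a.length][b.length];
}

var oedSubcats = [{ path: '02.02.16.02', heading: 'to smarten up' }]; // unmatched OED subcats
var htCategories = [{ tnums: '02.02.16.02', heading: 'smarten up' }]; // unmatched HT categories
var matches = [];
oedSubcats.forEach(function (oed) {
    htCategories.forEach(function (ht) {
        if (ht.tnums === oed.path) {
            matches.push({
                oed: oed.heading,
                ht: ht.heading,
                distance: levenshtein(oed.heading.toLowerCase(), ht.heading.toLowerCase())
            });
        }
    });
});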

The script lists the various pattern matches at the top of the page and the output is displayed in a table that can be copied and pasted into Excel.  Of the 627 OED subcats there are 528 that match an HT category.  However, some of them potentially match multiple HT categories; these appear in red while one-to-one matches appear in green.  Some of these multiple matches are due to Levenshtein matches (e.g. ‘sadism’ and ‘sadist’) but most are due to there being multiple subcats at different levels with exactly the same heading.  These can be manually tweaked in Excel and then I can run the updated spreadsheet through a script to insert the connections.  We also had an HT team meeting this week that I attended.