Week Beginning 8th November 2021

I spent a bit of time this week working for the DSL.  I needed to act as the go-between for the DSL’s new IT people, who are updating their email system, and the University’s IT people, who manage the DNS record on behalf of the DSL.  It took a few attempts before the required changes were successfully in place.  I also read through a document that had been prepared about automatically ‘fixing’ the DSL’s dates to make them machine readable, and gave some feedback on the many different procedures that will need to be performed on the various date forms to produce the desired structure.

I also looked into an issue with cross-references within citations that work in the live site but are not functioning in the new site or in the DSL’s editing system.  After some investigation it appears to be another case of the original API ‘fixing’ the XML in some way each time an entry is processed, which is what makes these links work.  The XML for ‘put_v’ stored in the original API is as follows:

<cit><cref><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

There is a <ref> tag but no other information in this tag.  This is the same in the XML exported from DPS and used in the new DSL site (although that version does include an additional bibliographical reference):

<cit><cref refid="bib013153"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref>Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

The XSLT for both the live and new sites doesn’t include anything to process a <ref> that has no attributes, so neither site should be displaying a link through to ‘putting’.  But of course the live site does.  I had previously generated and stored the XML that the original API (which I did not develop) outputs whenever the live site requests an entry.  Looking at this I found the following:

<cit><cref ref="db674"><date>1591</date> <title>Edinb. B. Rec.</title> V 41 (see <ref action="link" href="dost/putting">Putting</ref> <i>vbl. n.</i> 1 (1)).</cref></cit>

You can see that the original API is injecting both a bibliographical cross-reference and the ‘putting’ reference.  The former we had previously identified and sorted out, but the latter unfortunately hadn’t been, although references that are not within citations do seem to have been fixed.  I updated the XSLT on the new DSL site to process the <ref> so the link now works, but this is not an approach that can be relied upon, as all the XSLT is currently doing is taking the contents of the tag (Putting) and making a link out of it.  If the ‘slug’ of the entry doesn’t match the display form then the link is not going to work.  The original API includes a table containing cross-references, but this doesn’t differentiate ones in citations from regular ones, and as the ‘putting_v’ entry contains 83 references it’s not going to be easy to pick out from it the ones that still need to be added.  This will need further discussion with the editors.

Continuing on a dictionary theme, I also did some further work for the Anglo-Norman Dictionary.  Last week I processed entries where a varlist date needed to be used as the citation date, but we noticed that the earliest date for entries hadn’t been updated in many cases where it should have been.  This week I figured out what went wrong.  My script only updated the entry’s date if the new date from the varlist was earlier than the existing earliest date for the entry.  This is not what we want, as in the majority of cases the varlist date will be later and should replace the erroneous earlier date.  Thankfully it was easy to pick out all of the entries that have a ‘usevardate’, and I then reran a corrected version of the script that checks and replaces an entry’s earliest date.
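
As a rough illustration, here is a minimal sketch of the corrected logic in PHP.  The element and attribute names (and the assumption that ‘usevardate’ is an attribute holding a four-digit year) are purely illustrative and don’t necessarily match the real AND data:

$xml = simplexml_load_file('entry.xml');
$earliest = null;

foreach ($xml->main_entry->sense as $s) {
    foreach ($s->attestation as $a) {
        // Prefer the varlist date where one has been flagged, otherwise
        // fall back to the attestation's own citation date.
        $date = isset($a['usevardate']) ? (int)$a['usevardate'] : (int)$a->date;
        if ($date > 0 && ($earliest === null || $date < $earliest)) {
            $earliest = $date;
        }
    }
}

// Replace the stored earliest date outright rather than only when the new
// value is lower - the latter comparison was the bug in the first version.
echo 'New earliest date: ' . $earliest;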

The editor spotted a couple of entries that still hadn’t been updated after this process, which I then investigated.  One of them had an error in the edited markup that was preventing the update from being applied.  For the other I realised that my code to update the XML wasn’t looking at all senses, just the first in each entry.  My script was attempting to loop through all senses as follows:

foreach($xml->main_entry->sense->attestation as $a){
    //process here
}

This unfortunately only loops through the attestations in the first sense.  What I needed to do instead was:

foreach($xml->main_entry->sense as $s){
    foreach($s->attestation as $a){
        //process here
    }
}

As the sense that needed updating for ‘aspreté’ was the last one in the entry, the XML wasn’t getting changed.  This meant ‘usevardate’ wasn’t present in the XML, so my update to regenerate the earliest dates didn’t catch this entry (even though all of the entry’s citation dates had been successfully updated in the database).  I fixed my script and regenerated all of the data again, including the entries with XML errors that had previously failed.  I then ran a further spreadsheet of entries that needed updating through the fixed script, resulting in a further 257 citations having their dates updated.

Finally, I updated the Dictionary Management System so that ‘usevardate’ dates are taken into consideration when processing and publishing uploaded XML files.  If a ‘usevardate’ is found then this date is used for the attestation, which automatically affects the earliest date that is generated for the entry and also the dates used for attestations for search purposes.  I tried this out by downloading the XML for ‘admirable’, which features a ‘usevardate’, then editing the XML to remove the ‘usevardate’ before uploading and publishing this version.  As expected, the dates for the attestation and the entry’s earliest date changed accordingly.  I then edited the XML to reinstate the ‘usevardate’ and uploaded and published this version, which took the ‘usevardate’ into consideration when generating the entry’s earliest date and attestation dates, returning the entry to the way it was before the test.
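
As a rough sketch of the precedence rule now applied during processing (again with illustrative names rather than the real DMS code), the date used for each attestation is chosen along these lines:

// An attestation's 'usevardate', if present, takes priority over its normal
// citation date; the chosen value then feeds both the search tables and the
// entry's earliest date.  Illustrative only - element and attribute names
// are assumptions.
function attestationDate(SimpleXMLElement $a): int
{
    return isset($a['usevardate']) ? (int)$a['usevardate'] : (int)$a->date;
}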

Also this week I set up a WordPress site that will be used for the archive of the International Journal of Scottish Theatre and Screen and migrated one of the issues to it, which required me to do the following for each article:

  1. Open the article’s PDF in a viewer for reference (e.g. Adobe Acrobat)
  2. Open the PDF in MS Word, which converts it into an editable format
  3. Create a WordPress page for the article, with the article’s title as the page title and the page ‘parent’ set to ‘Volume 1’
  4. Copy and paste the article contents from Word into WordPress
  5. Go through the article in WordPress, referencing the PDF in Acrobat, and manually fix any issues spotted (e.g. the display of headings, line breaks that were erroneously added).  Footnotes proved particularly tricky as their layout was not handled very well by Word, so it’s possible that some footnotes are not quite right, especially in the ‘Trainspotting’ article, which has more than 70(!) footnotes.
  6. Publish the WordPress page and update the ‘Volume 1’ page to add a link to it.

None of this was particularly difficult to do, but it was somewhat time-consuming.  There are a further 18 issues left to do (as far as I can tell), although some of these will take longer as they contain more articles, and some are more structurally complicated (e.g. they include images).  Gerry Carruthers is getting a couple of students to do the rest, and we have a meeting scheduled next week where I’ll talk through the process.

I also made some further tweaks to the WordPress site for ‘Our Heritage, Our Stories’ and dealt with renewing the domain for TheGlasgowStory.com, which is now safe for a further nine years.  I also generated an Excel spreadsheet of the full lexical dataset from Mapping Metaphor for Wendy Anderson, after she had a request for the data from some researchers in Germany.

I spent the rest of the week working for the Speak For Yersel project, continuing to generate mockups of the interactive exercises.  I completed an initial version of the overall structure for both the accessibility and word choice question types for the grammar exercise, so it will be possible to just ‘plug in’ any number of other questions that fit these templates.  What I haven’t done yet is incorporate the maps, the post-questionnaire ‘explore’ or the final quiz, as these need more content.  Here’s how things currently look:

I used another different font for the heading (Slackey), with the same font used for the ‘Question x of y’ text too.  I also used CSS gradients quite a bit in this version, as the team seemed quite keen on these.  There’s a subtle diagonal gradient in the header and footer backgrounds, and a more obvious top-to-bottom one in the answer buttons.  I used different combinations of colours too.  I created a progress bar, which works, but with only two questions in the system it’s not especially obvious what it does.  Rather than having people click an answer and then click a ‘next’ button to continue, I’ve made it so that clicking an answer automatically loads the next step: a panel containing a ‘map’ (just a static image for now) slides into view, along with a ‘next’ button if there is a next question.  Clicking the ‘next’ button slides up the map panel, loads the next question in and advances the progress bar.  Users will be accessing this on many different screen sizes, and I’ve tested it out on my Android phone and my iPad in both portrait and landscape orientations and everything seems to work well.  However, the map panel will be displayed below rather than beside the questions on narrower screens.

I then began experimenting with randomly positioned markers in polygonal areas.  Initially I wanted to see whether this would be possible in ArcGIS, and a bit of Googling suggested it would be – see for example this post: http://gis.mtu.edu/?p=127.  The post is 10 years old, so the instructions don’t in any way match up to how things work in the current version of ArcGIS, but it at least showed it should be possible.  I loaded up the desktop version of ArcGIS via Glasgow Anywhere and, after some experimentation and a fair bit of exasperation, managed to create a polygon shape and add 100 randomly placed marker points to it, which you can see here:

Something we will have to bear in mind is how such points will look when zoomed:

This is just 100 points over a pretty large geographical area.  We might end up with thousands of points, which might make this approach unusable.  Another issue is that it took ArcGIS more than a minute to generate and process these 100 random points.  I don’t know how much of this is down to running the software via Glasgow Anywhere, but if we’re dealing with tens of polygons and hundreds or thousands of data points this is just not going to be feasible.

An issue of greater concern is that, as far as I can tell (after more than an hour of investigation), the ‘create random points’ option is not available via ArcGIS Online, which is the tool we would need to use to generate maps to share online (if we choose to use ArcGIS).  The online version seems to be really pared back in terms of functionality compared to the desktop version and I just couldn’t see any way of incorporating the random points system.  However, I discovered a way of generating random points using Leaflet and another JavaScript-based geospatial library called turf.js (http://turfjs.org/).  Information about how to go about it is here: https://gis.stackexchange.com/questions/163044/mapbox-how-to-generate-a-random-coordinate-inside-a-polygon
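
The basic idea is rejection sampling: generate a random point within the polygon’s bounding box, keep it only if it falls inside the polygon, and repeat until enough points have been collected.  The actual test does this in the browser with Leaflet and turf.js; the sketch below shows the same idea in PHP (using a standard ray-casting point-in-polygon test) purely to illustrate the logic – it is not the code used for the test:

// $polygon is an array of [lng, lat] vertex pairs describing a simple polygon.
function pointInPolygon(float $lng, float $lat, array $polygon): bool
{
    $inside = false;
    $n = count($polygon);
    for ($i = 0, $j = $n - 1; $i < $n; $j = $i++) {
        [$xi, $yi] = $polygon[$i];
        [$xj, $yj] = $polygon[$j];
        // Toggle 'inside' each time a ray cast to the east crosses an edge.
        if ((($yi > $lat) !== ($yj > $lat))
            && ($lng < ($xj - $xi) * ($lat - $yi) / ($yj - $yi) + $xi)) {
            $inside = !$inside;
        }
    }
    return $inside;
}

function randomPointsInPolygon(array $polygon, int $count): array
{
    $lngs = array_column($polygon, 0);
    $lats = array_column($polygon, 1);
    $points = [];
    while (count($points) < $count) {
        // Pick a random point in the bounding box and keep it only if it
        // actually falls within the polygon.
        $lng = min($lngs) + mt_rand() / mt_getrandmax() * (max($lngs) - min($lngs));
        $lat = min($lats) + mt_rand() / mt_getrandmax() * (max($lats) - min($lats));
        if (pointInPolygon($lng, $lat, $polygon)) {
            $points[] = [$lng, $lat];
        }
    }
    return $points;
}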

I created a test map using the SCOSYA area for Campbeltown and the SCOSYA base map.  As a solution I’d say it’s working pretty well – it’s very fast and seems to do what we want it to.  You can view an example of the script output here:

The script generates 100 randomly placed markers each time you load the page.  At zoomed-out levels the markers are too big, but I can make them smaller – this is just an initial test.  There is unfortunately going to be some clustering of markers as well, due to the nature of the random number generator, which may give people the wrong impression.  I could maybe update the code to reject markers that fall too close to an existing one, but I’d need to look into that.  I’d say it’s looking promising, anyway!