I continued to develop the ‘Dictionary Management System’ for the Anglo-Norman Dictionary this week, following on from the work I began last week to allow the editors to drag and drop sets of entry XML files into the system. I updated the form to add another option beneath the phase statement selection, called ‘Phase Statements for existing records’, where the editor can choose whether to retain existing statements or replace them. If ‘retain’ is selected then any XML entries attached to the form that either have an existing entry ID in their filename or have a slug that matches an existing entry in the system will keep whatever phase statement the existing entry has, no matter which phase statement is selected in the form. The statement selected in the form will still be applied to any attached XML entries that don’t have an existing entry in the system. Selecting ‘replace existing statements’ ignores the phase statements of all existing entries and overwrites them with whatever statement is selected in the form. I also updated the system so that it extracts the earliest date for an entry at the point of upload, added two new columns to the holding area (for the earliest date and the date that is displayed for it), and ensured that the display date appears on the ‘review’ page too. In addition, I added an option to download the XML of an entry in the holding area, if it needs further work.
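The retain/replace rule can be sketched roughly as follows. This is an illustrative Python sketch, not the actual DMS code: the function names, the numeric-ID-in-filename convention and the lookup structure are all assumptions made for the example.

```python
import re

def extract_id_from_filename(filename):
    """Assume a filename may embed a numeric entry ID, e.g. 'entry-1234.xml'."""
    m = re.search(r'(\d+)', filename)
    return int(m.group(1)) if m else None

def resolve_phase_statement(filename, slug, selected_phase, existing_entries, mode):
    """Decide which phase statement an uploaded entry should receive.

    existing_entries: dict mapping entry ID or slug -> current phase statement.
    mode: 'retain' or 'replace', as chosen in the upload form.
    """
    # An upload matches an existing entry if its filename embeds an entry ID
    # or its slug matches an existing entry's slug.
    entry_id = extract_id_from_filename(filename)
    existing_phase = existing_entries.get(entry_id) or existing_entries.get(slug)

    if existing_phase is not None and mode == 'retain':
        # Keep whatever phase statement the existing entry already has.
        return existing_phase
    # New entries, or 'replace' mode: use the statement chosen in the form.
    return selected_phase
```

In other words, the form's selection only ever loses out when the upload matches an existing entry and ‘retain’ was chosen.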
I ran a large-scale upload test, comprising around 3,200 XML files from the ‘R’ data, to see how the system would cope, but unfortunately I ran into difficulties with the server rejecting too many requests in a short space of time and only about 600 of the files made it through. I have asked Arts IT Support whether the server limits can be removed for this script, but haven’t heard anything back yet. I ran into a similar issue when processing files for the Mull and Ulva place-names project in January last year, when Raymond was able to update the whitelist for the Apache module mod_evasive that was blocking the uploads, and I’m hoping he’ll be able to do something similar this time. Alternatively, I’ll need to try to throttle the speed of uploads in the browser.
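The throttling fallback amounts to sending the files in small batches with a pause between them, so the request rate stays under whatever threshold mod_evasive enforces. A minimal sketch of that idea (in Python for illustration; the real version would live in browser JavaScript, and the batch size and delay here are guesses, not measured values):

```python
import time

def throttled_upload(files, send, batch_size=20, delay_seconds=2.0):
    """Upload files in batches of batch_size, pausing between batches.

    send: a callable that uploads a single file (e.g. one AJAX POST per
    file in the real system). Hypothetical parameters for illustration.
    """
    for i in range(0, len(files), batch_size):
        for f in files[i:i + batch_size]:
            send(f)
        # Pause before the next batch so the server doesn't see a flood
        # of requests in a short window.
        if i + batch_size < len(files):
            time.sleep(delay_seconds)
```

With 3,200 files this obviously makes the upload slower overall, which is why lifting the server-side limit would be the preferable fix.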
In the meantime, I continued with the scripts for publishing entries that have been uploaded to the holding area, using a test version of the site that I set up on my local PC to avoid messing up the live database. I updated the ‘holding area’ page quite significantly. At the top of the page is a box for publishing selected items, and beneath this is the table containing the holding items. Each row now features a checkbox, and there is an option above the table to select / deselect all rows on the page (so currently up to 200 entries can be published in one batch, as 200 is the page limit). The ‘preview’ button has been replaced with an ‘eye’ icon, but the preview page works in the same way as before. I had intended to add the ‘publish’ options to the preview page, but I moved them to the holding area page instead to allow multiple entries to be selected for publication at any one time.
Once all of the selected items are published there is one final task the page performs, which is to completely regenerate the cross references data. Unfortunately this needs to be done after each batch (even if it only contains one record) because cross references rely on database IDs, and when a new version of an existing entry is published it receives a new ID, meaning any existing cross references to that item would no longer work. The publication log states that the regeneration is taking place and then, after about 30 seconds, another statement says it is complete. I tested this process on my local PC, publishing single items, a few items and entire pages (200 items) at a time, and as all seemed to be working fine I then copied the new scripts to the server.
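The regeneration step essentially re-resolves every cross reference's target through a stable identifier (such as the slug) to pick up the new database ID. A minimal sketch of that idea, with hypothetical field names rather than the actual AND schema:

```python
def regenerate_cross_references(cross_refs, live_entries):
    """Re-resolve every cross reference's target ID from its target slug.

    cross_refs: list of dicts with 'source_id' and 'target_slug'.
    live_entries: dict mapping slug -> the entry's *current* database ID
    (which changes whenever a new version of the entry is published).
    Returns a fresh list with up-to-date 'target_id' values; references
    whose target no longer exists are dropped.
    """
    rebuilt = []
    for ref in cross_refs:
        target_id = live_entries.get(ref['target_slug'])
        if target_id is not None:
            rebuilt.append({'source_id': ref['source_id'],
                            'target_id': target_id})
    return rebuilt
```

Because the rebuild walks every reference rather than just those touched by the batch, it costs the same (around 30 seconds here) whether one entry or two hundred were published.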
Also this week I continued with the processing of library registers for the Books and Borrowing project. These are coming in rather quickly now and I’m getting a bit of a backlog. This is because I have to download the image files, process them to generate tilesets, and then upload all of the images and their tilesets to the server. It’s the tilesets that are the real sticking point, as these consist of thousands of small files. I’m only getting an upload speed of about 70KB/s and I’m having to upload many gigabytes of data. I did a test where I zipped up some of the images and uploaded the zip file instead, getting a speed of around 900KB/s, and as it looks like I can get command-line access to the server I’m going to investigate whether zipping up the files, uploading the archive and then unzipping it on the server will be a quicker process. I also had to spend some time sorting out connection issues, as the Stirling VPN wasn’t letting me connect. It turned out that they had switched to multi-factor authentication and I needed to set this up before I could continue.
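The local half of the planned zip-then-upload workflow can be sketched with Python's standard zipfile module: bundle a tileset's thousands of small files into one archive, transfer that single file, then unzip it on the server over the command line. The paths and function name below are illustrative, not part of the actual pipeline:

```python
import zipfile
from pathlib import Path

def zip_tileset(tileset_dir, archive_path):
    """Pack every file under tileset_dir into one compressed zip archive."""
    tileset_dir = Path(tileset_dir)
    with zipfile.ZipFile(archive_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for f in tileset_dir.rglob('*'):
            if f.is_file():
                # Store paths relative to the tileset folder so the files
                # unzip into the same layout on the server.
                zf.write(f, f.relative_to(tileset_dir))
```

The speed difference comes from transfer overhead: each small file costs a round trip, so one large archive at ~900KB/s should comfortably beat thousands of tiles at ~70KB/s, even with the extra zip/unzip steps.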
Also this week I wrote a summary of the work I’ve done so far for the Place-names of Iona project for a newsletter they’re putting together, spoke to people about the new ‘Comparative Kingship’ place-names project I’m going to be involved with, spoke to the Scots Language Policy people about setting up a mailing list for the project (it turns out that the University has software to handle this, available here: https://www.gla.ac.uk/myglasgow/it/emaillists/) and fixed an issue relating to the display of citations that have multiple dates for the DSL.