Technology had it in for me this week. On Monday I noticed that the SCOSYA project website and content management system were completely down, displaying a 503 error message (or in Chrome an ‘ERR_EMPTY_RESPONSE’ message). Rather strangely, the parts of the site that didn’t use WordPress were still working. This was odd because I hadn’t updated WordPress on the site for a few weeks, hadn’t changed anything to do with the site for a while, and it had been working the previous week. I emailed Chris about this and he reckoned it was a WordPress issue, because renaming the ‘plugins’ folder brought the site back online. After a bit of testing I worked out that it was WordPress’s own Jetpack plugin that appeared to be causing the problem, as renaming just the plugin’s directory also brought the site back. This was very odd because the plugin hadn’t been updated recently either.
More alarmingly, I began to realise that the problem was not limited to the SCOSYA website but had in fact affected the majority of the 20 WordPress websites that I currently manage. Even more strangely, it was affecting different websites in different ways. Some were completely down, others broke only when certain links were clicked, and one was working perfectly, even though it had the same version of Jetpack installed as all the others. I spent the majority of Monday trying to figure out what was going wrong. Was it some change in the University firewall or server settings that was blocking access to the WordPress server, knocking my sites offline? Was it some change at the WordPress end that was to blame? I was absolutely stumped and resorted to disabling Jetpack (and in some cases other plugins) simply to get the sites back online until some unknown fix could be found.
Thankfully Chris managed to find the cause of the problem. It wasn’t an issue with WordPress, Jetpack or any other plugins at all. The problem was being caused by a corrupt library file on the server that, for some unknown reason, was knocking out some sites. So despite looking like an issue that was my responsibility to fix, it was in fact a server issue that I couldn’t possibly have fixed myself. Thank goodness Chris managed to identify the problem and replace the library file with a fixed version. It’s just a pity I spent the best part of a day looking for a solution that couldn’t possibly exist, but these things happen.
My technological woes continued thanks to the Windows 10 ‘Creators Update’, which chose to install itself on Monday. I opted to postpone the installation until I shut down my PC, which sounds like a really nice feature. Unfortunately, only a tiny part of the update is processed as you shut down your PC; the rest sits and waits until you next turn your PC on again. So when I came to switch on my PC the next day, a mandatory update completely locked me out of it for an hour and a half. Hugely frustrating!
And to round off my week of woes, I work from home on Thursdays and my broadband connection broke. Well, it still works, but it’s losing packets. At one point 86% of packets were being lost, resulting in connections dropping, things failing to upload and download, and everything going very slowly. I was on the phone to Virgin Media for an hour but the problem couldn’t be fixed, and they’re going to have to send an engineer out. Yay for technology.
Now to actually discuss the work I did this week. I spent about a day on AHRC review duties, and apart from that practically my entire week was spent working on the LDNA project, specifically on the Thematic Heading visualisations that I last worked on before Easter. It took me a little while to get back up to speed with the visualisations and how they fitted together, but once I’d got over that hurdle I managed to make some good progress, as follows:
- I have identified the cause of the search results displaying a blank page: the script was using more memory than the server allowed it to use, which happened when large numbers of results needed to be processed. Chris increased the memory limit, so the script should now always process the results, although it may take a while if there are thousands of sparklines.
- It is now possible to search for plateaus in any time period rather than just after 1500.
- I’ve created a database table that holds cached values of the minimums and maximums used in the search form for every possible combination of decade. When you change the period these new values are now pulled in pretty much instantaneously so no more splines need to be reticulated. It took rather a long time to run the script that generated the cached data (several hours in fact) but it’s great to have it all in the database and it makes the form a LOT faster to use now.
- Min/Max values for trauma rise and fall are in place, and I have also implemented the rise and fall searches. These currently just use the end of the full period as the end date, so searches only really work properly for the full period. I’ll need to update the searches properly so that everything is calculated for the user’s selected period. Something to look at next week.
- Plateau now uses ‘minimum frequency of mode’ rather than ‘minimum mode’, so you can say ‘show me only those categories where the mode appears 20 times’.
- I also investigated why the red dots for the peaks were sometimes not appearing. It turns out this occurs when you select a time period within which the category’s peak doesn’t fall. Not showing the red dot when the peak lies beyond the selected period is probably the best approach, so the missing red dots are actually a feature rather than a bug.
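The decade-pair caching described above can be sketched roughly like this (a minimal Python illustration, not the project’s actual code, which runs against the database; the function name and data structure are my own assumptions):

```python
def build_minmax_cache(freq_by_decade):
    """Precompute the min and max frequency for every possible
    (start decade, end decade) pair, so the search form can look
    the values up instantly instead of recalculating them.

    freq_by_decade: dict mapping decade -> frequency value.
    Returns: dict mapping (start, end) -> (min, max)."""
    decades = sorted(freq_by_decade)
    cache = {}
    for i, start in enumerate(decades):
        lo = hi = freq_by_decade[start]
        # Extend the period one decade at a time, carrying the
        # running min/max forward rather than rescanning.
        for end in decades[i:]:
            value = freq_by_decade[end]
            lo, hi = min(lo, value), max(hi, value)
            cache[(start, end)] = (lo, hi)
    return cache
```

Carrying the running min/max forward keeps the precomputation at one pass per start decade, which matters when the script has to cover every combination for every category.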
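The peak-dot behaviour in the last point can be expressed as a short sketch (again a hypothetical Python illustration with invented names, not the visualisation’s real code):

```python
def peak_marker(freq_by_decade, start, end):
    """Find the decade with the category's overall peak frequency,
    and return (decade, frequency) only if that decade falls inside
    the user's selected period. Returning None means no red dot is
    drawn, because the peak lies outside the period."""
    peak_decade = max(freq_by_decade, key=freq_by_decade.get)
    if start <= peak_decade <= end:
        return peak_decade, freq_by_decade[peak_decade]
    return None
```

So a selected period that excludes the peak decade simply produces no marker, which is the ‘feature rather than a bug’ behaviour described above.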
On Friday we had a big project meeting for LDNA, which involved Marc, Fraser and me Skyping from Marc’s office into a meeting being held at Sheffield. It was interesting to hear about the progress the project has been making and its future plans, and also to see the visualisation that Matt from Sheffield’s DHI (the renamed HRI) has been working on. I didn’t go into any details about the visualisations I’m working on as they’re not yet in a fit state to share, but hopefully I’ll be able to send the URL round to the team soon.
It’s a bank holiday next Monday so it will be a four-day week. I need to get back into the SCOSYA project next week and also to continue with these LDNA visualisations so it’s shaping up to be pretty busy.