Hosting changes, status pages, the parkrun API, and points

By Richard Leyton, parkrun systems engineer
— Thursday 19 December 2013

Some of you might already have picked this up from vague tweets I've made in recent weeks, but we're in the middle of a major migration of the parkrun servers to Amazon Web Services (AWS). As the recent problem report summarises, our hosting 'evolved' as things grew (mandatory link to my favourite graph), and a few months ago we decided to address these issues by consolidating onto a service provider that can offer all the services we need, and many more we soon will need, under one roof.

The move to AWS allows us to bring everything under one virtual roof. Things should work better and more reliably: fewer pieces and dependencies, and we can now scale more dynamically. That is, we can add resource when we need it (Saturday!) and switch it off when we don't (Tuesday at 3am). It also opens up lots of tasty new services which should let us offer a better experience. For example, we're already starting to use CloudFront, which means a lot of our resources are served from servers closer to you than they were before. A parkrunner in Australia now gets more content served locally, so things should quite soon start to feel a bit faster. A page that took 5s to load before should soon load in a couple of seconds, as parkrunners in Europe already (broadly) experience.
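To give a flavour of the schedule-driven scaling idea, here's a toy sketch in Python. The function name, instance counts, and time windows are all illustrative, not our real configuration — the point is simply that capacity can follow the weekly rhythm of parkrun traffic rather than sitting at a Saturday-sized peak all week.

```python
from datetime import datetime

def desired_capacity(when: datetime) -> int:
    """Return how many web instances we'd want running at a given time.

    Purely illustrative numbers: a big pool for the Saturday event and
    results rush, a bare tick-over in the small hours, and a modest
    baseline the rest of the time.
    """
    weekday = when.weekday()  # Monday == 0 ... Sunday == 6
    hour = when.hour
    if weekday == 5 and 8 <= hour < 14:  # Saturday morning rush
        return 8
    if 1 <= hour < 6:                    # overnight: nobody's about
        return 1
    return 2                             # normal daytime baseline

print(desired_capacity(datetime(2013, 12, 21, 10)))  # Saturday morning -> 8
print(desired_capacity(datetime(2013, 12, 17, 3)))   # Tuesday at 3am -> 1
```

In practice AWS lets you express this sort of thing declaratively (scheduled scaling actions against an auto scaling group) rather than in hand-rolled code, which is part of the attraction.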

There are settling challenges, so apologies if you've seen blank pages or had errors in the last week or so. I'm poking about trying to strike the right balance: some things work differently in the new environment compared to the old; we're moving a few systems 'up' to CentOS 6 and enforcing tighter restrictions (SELinux is now enabled everywhere, where previously it was just our webfarm); and we're working out the cost-effectiveness of m1.small vs m1.medium instances and of different load balancer configurations. I'm also pushing hard on our puppet configuration. As any sysadmin knows, it's very tempting to 'just sort it' by hand, rather than spending the time getting the puppet manifests right.

New status page

I'm working hard to be open about things, and part of that is a new Status page, which you can now visit to see a high-level service summary. I'm afraid it's not pretty right now (unless you like a mix of colours). I'd rather hoped to have a lovely page of green ticks to show off when announcing this, but I was woken at 5am today when our backups got their knickers in a twist (cf. settling issues!) and locked everything right up. But warts and all, it's a good page to bookmark for a status overview.

An API update

A quick note about the API. It's proceeding very well, but we've not been able to open it up yet quite as I'd hoped, for a couple of reasons. Firstly, as this post explains, our hosting platform has had to take priority: it needs to be completed before the API is ready for prime time, since the API will be hosted there too, of course. Secondly, we're waiting for the terms and conditions to be finalised: I'm sure you appreciate we can't open up access without a set of those. We hope they will be ready early in the new year.

A point on Points

I'm sure many of you will know that the points not adding up is a Feature-Not-A-Bug. Alas, for the last couple of weekends there has been a bit of a bug too: we hit problems with the revised way of working we introduced for The Big September Speedup, when we split the results processing (what time you ran) from the summary processing (how many points you had, PBs, etc.). That split is why you get your text message quickly, and why results pages are normally updated promptly.
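The split described above can be sketched as a simple two-stage pipeline. This is a toy illustration, not our actual engine, and the points formula here (100 down by finishing position) is a stand-in rather than the real scoring rules — the point is that the fast stage can publish times without waiting for the slower summary work.

```python
def process_results(finish_times):
    """Fast stage: record finish times so texts and results pages go out
    quickly, before any points or PB work has happened."""
    return [{"athlete": athlete, "seconds": secs} for athlete, secs in finish_times]

def process_summary(results):
    """Slow stage: run later, deriving points from the recorded results.
    Illustrative scoring only: 100 for first place, counting down."""
    ordered = sorted(results, key=lambda r: r["seconds"])
    return {r["athlete"]: max(100 - pos, 1) for pos, r in enumerate(ordered)}

# The fast stage completes on its own...
results = process_results([("alice", 1205), ("bob", 1130)])
# ...and the summary stage can run (or be re-run after a bug) afterwards.
points = process_summary(results)  # bob finished first, so bob gets 100
```

One nice property of the split is visible in the sketch: if the summary stage misbehaves, you can re-run it over the stored results — which is essentially the re-calculation described below — without touching the times already published.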

Anyway, for the last few weeks that hasn't been working properly, and many events have been in touch via Event Support about it. PSH has been working hard on addressing the issue and re-calculating the points. He's pretty much nailed it now, so normal service should be restored very soon. We're also optimistic the new results processing engine will be ready in the new year.

So, much more to come in January. We'll be moving more services to AWS this week, and early in the new year, and I'll hopefully find some time to post a longer piece about the setup and what we're using then.

Wishing you a very Merry Christmas, and a fantastic New Year,


twitter @rleyton
