How do you get website infrastructure in place that is able to handle a year's worth of traffic in a single week?
For about five years now we've been managing a website for a client who runs a popular annual event. We manage a few event websites, but this one is probably one of my favourites to work on because of the variety of tasks and the intensity of the work. But with the excitement also come some... issues.
Take a look at this Google Analytics chart:
Yes, you read that right. This is the week leading into the event, and we went from ~6,700 sessions to ~19,000 sessions - that's 32,000 pageviews to 150,000 pageviews... massive growth. During the week of the event itself, the site peaks at 52,000 sessions comprising 385,000 pageviews per day. That's growth of roughly 1200% in a week.
Now if you've never managed a website before, you may be wondering why this is a problem. Let me first explain in simple terms how a website server works.
When you put a website on the internet, there isn't any magic involved. To host a website, someone has to set up a computer, plug it into power and connect it to a decent internet connection. These computers are called 'servers'. It's possible to do this yourself if you want to pay for an awesome internet connection and live and breathe countless software updates and security patches... but really, why would you? Most of the time it's best to hand this off to external professionals like Net Logistics or Internode.
When you visit a website, your computer sends a message to the server along the lines of "Hi there, can you send me the content of the home page", and the server responds with "Here you go" and starts sending back the content. Then your computer sends another message: "I'm back and I noticed this page has an image up the top. Can you send that too? Oh, and there's also a stylesheet file - can you send that?", and the server again responds with "Here you go" and for each request it sends back the content.
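To make that conversation concrete, here's a minimal sketch in Python of the same back-and-forth. The page contents, file names and port are all made up for illustration - the point is that the page, the image and the stylesheet are each fetched with their own separate request.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny stand-in server: for each path it knows about,
# it answers "Here you go" with the matching content.
PAGES = {
    "/": b"<html><img src='/logo.png'><link href='/style.css'></html>",
    "/logo.png": b"(image bytes)",
    "/style.css": b"body { color: black; }",
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGES.get(self.path, b"not found")
        self.send_response(200 if self.path in PAGES else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The browser's side of the conversation: one request per resource.
responses = {}
for path in ["/", "/logo.png", "/style.css"]:
    conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
    conn.request("GET", path)              # "Hi there, can you send me ..."
    responses[path] = conn.getresponse().read()  # "Here you go"
    conn.close()

server.shutdown()
print(len(responses))  # three separate request/response round trips
```

Real browsers reuse connections and fetch assets in parallel, but the shape of the exchange is the same: every image, stylesheet and script on a page is another request the server has to answer.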
Herein lies the issue. The server does a certain amount of processing for each of these requests, of which there may be dozens (or hundreds in a poorly optimised site) for a single page which is sent to the user. The software for serving web pages is very good at this, but when lots of users hit the website all at once you need additional oomph available on your hosting platform, or it will all fall apart.
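A quick back-of-envelope calculation shows why this adds up. The peak pageview figure is from the analytics above; the requests-per-page number is an assumption for illustration only.

```python
# Rough load estimate for the peak day described above.
pageviews_per_day = 385_000   # peak day, from the analytics chart
requests_per_page = 30        # assumed: images, CSS, JS, fonts, etc.
seconds_per_day = 24 * 60 * 60

avg_requests_per_sec = pageviews_per_day * requests_per_page / seconds_per_day
print(round(avg_requests_per_sec, 1))
```

And that's only the daily average - traffic isn't spread evenly across 24 hours, so the busy-hour rate the server actually has to survive is several times higher.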
This series of articles addresses the following questions:
- How do you handle load like this?
- How do you scale it up for the event and then scale it down, and keep it cost effective for the client?
Our initial solution to this problem was to optimise the site and simply get a bigger box with a ton of processing power: migrate the site there a month before the event, and migrate back to a simpler platform a month after.
This was inefficient, and it seemed that regardless of the size of the server, we still had performance issues during peak times.
Over time, we ended up asking some more questions:
- Can we make it so that the site's performance is never compromised?
- Can we make it so that there is (virtually) no chance of the website going down... ever?
A website is obviously a crucial aspect of a big event like this, so if we could come up with a completely crash-proof solution, that would be desirable for both our client and our sanity.
Continue on to part two