
Data centers are complex beasts, and no amount of operator monitoring by itself can keep track of everything. That’s why automated monitoring is so important.

So what should you monitor? You can divide monitoring into a couple of strategic areas. Just as with metrics collection, there is business- and application-level monitoring, and then there is lower-level system monitoring, which is just as important.

Web-facing database servers receive a barrage of activity 24 hours a day. Sessions are managed for users logging in, ratings are clicked, and comments are added. Web-based ecommerce applications are more complex still. All of this activity is organized into small chunks called transactions: discrete sets of changes that succeed or fail as a unit. If you’re editing a word processing document, it might autosave every five minutes; Excel offers a similar feature, along with a built-in mechanism for undoing and redoing recent edits. These are all loose analogies for transactions in a database.
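To make the analogy concrete, here is a minimal sketch of a transaction from the application’s point of view. It uses Python’s built-in sqlite3 module only because it is self-contained; the accounts table and the transfer are made up for illustration, but the same commit-or-rollback pattern applies to MySQL or any other transactional database.

```python
import sqlite3

# Illustrative schema: a tiny accounts table (hypothetical, for demo only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

# A transaction: a discrete set of changes that succeeds or fails as a unit.
try:
    conn.execute("UPDATE accounts SET balance = balance - 25 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 25 WHERE id = 2")
    conn.commit()      # both changes become permanent together
except sqlite3.Error:
    conn.rollback()    # neither change is applied -- the database's "undo"
```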

These transactions matter because they are all written to log files. That is what makes replication possible: the changes can be replayed on another database server downstream.

If you have lost your database server to a hardware failure, or to an instance failure in EC2, you’ll be faced with the challenge of restoring it. How is this accomplished? The first step is to restore from the last full backup you have, perhaps a full database dump that you perform every day late at night. Great, now you’ve restored to 2am. How do you get the rest of your data back?

That is where point-in-time recovery comes in. Since those transactions were being written to your transaction logs, all the changes made to your database since the last full backup must be reapplied. In MySQL this transaction log is called the binlog, and the mysqlbinlog utility reads the transaction log files and replays those statements. You’ll tell it the start time – in this case 2am, when the backup happened – and the end time, which is the point in time you want to recover to. That will usually be the moment you lost the database server.
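Here is a minimal sketch of that replay step, assuming the 2am dump has already been restored and the binlog files from the failed server are available locally. The file names and timestamps are placeholders; mysqlbinlog’s --start-datetime and --stop-datetime options handle the time window.

```python
import subprocess

# Point-in-time recovery sketch: replay binlog statements from the last full
# backup (2am) up to the moment of the failure. File names and times below
# are illustrative placeholders.
BINLOGS = ["mysql-bin.000101", "mysql-bin.000102"]
START = "2013-06-15 02:00:00"   # when the full backup was taken
STOP = "2013-06-15 09:47:00"    # the point in time to recover to

# mysqlbinlog prints the logged SQL statements; piping them into the mysql
# client replays them against the freshly restored server.
reader = subprocess.Popen(
    ["mysqlbinlog", "--start-datetime=" + START, "--stop-datetime=" + STOP] + BINLOGS,
    stdout=subprocess.PIPE,
)
subprocess.check_call(["mysql", "-u", "root", "-p"], stdin=reader.stdout)
reader.stdout.close()
reader.wait()
```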

Point-in-time recovery is crucial to high availability, so be sure to back up your binlogs right alongside the full database backups you take every night. If you lose the server or disk that the database is hosted on, you’ll want an alternate copy of those binlogs available for recovery!
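A nightly job along those lines might look something like this sketch. The paths and the S3-style destination are hypothetical, and shipping with the AWS CLI is just one option (scp or rsync to another machine works as well); the point is that the dump and the binlogs end up somewhere the failed database server cannot take down with it.

```python
import glob
import subprocess
from datetime import date

# Hypothetical locations -- adjust for your environment.
BINLOG_DIR = "/var/lib/mysql"
DEST = "s3://my-backup-bucket/mysql"
DUMP_FILE = "/backups/full-%s.sql.gz" % date.today().isoformat()

# 1. Nightly full logical dump. --single-transaction takes a consistent
#    InnoDB snapshot without locking the whole server.
with open(DUMP_FILE, "wb") as out:
    dump = subprocess.Popen(
        ["mysqldump", "--single-transaction", "--all-databases"],
        stdout=subprocess.PIPE,
    )
    subprocess.check_call(["gzip"], stdin=dump.stdout, stdout=out)
    dump.wait()

# 2. Copy the dump and every binlog file off the server.
subprocess.check_call(["aws", "s3", "cp", DUMP_FILE, DEST + "/"])
for binlog in glob.glob(BINLOG_DIR + "/mysql-bin.*"):
    subprocess.check_call(["aws", "s3", "cp", binlog, DEST + "/binlogs/"])
```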

A lot of technical forums and discussions have highlighted the limitations of EC2, pointing out that it loses on performance compared to physical servers of equal cost. The argument goes that you can get much more hardware and bigger iron for the same money, so turning to the cloud seems foolhardy. Why the mad rush to the cloud, then? If performance is all you’re looking at, it might indeed seem odd. But look at it another way: if performance is not as good, performance is clearly not what is driving cloud adoption.

CIOs and CTOs are asking questions more along the lines of, “Can we deploy in the cloud and live with the performance limitations, and if so, how do we get there?”

Another question is, “Is it a good idea to deploy your database in the cloud?” It depends! Let’s take a look at some of the strengths and weaknesses, then you decide.

8 big strengths of the cloud

Flexibility in disaster recovery – it becomes a script, no need to buy additional hardware

Many of these challenges can be mitigated. The promise of infrastructure deployed in the cloud is huge, so easing in with gradual adoption is perhaps the best option for many firms. You can mitigate the weaknesses of the cloud in several ways:

Use encrypted filesystems and backups where necessary

Also keep offsite backups in-house or at an alternate cloud provider

Work around EBS performance limitations by caching at every layer of your application stack (a minimal cache-aside sketch follows this list)
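For the caching point above, here is a minimal cache-aside sketch. The in-process dict stands in for memcached or Redis, and query_database() is a hypothetical placeholder; the idea is simply that repeated reads never touch EBS-backed disks.

```python
import time

CACHE = {}
TTL = 60  # seconds before an entry must be refreshed from the database

def query_database(key):
    """Hypothetical placeholder: run the real (slow, EBS-bound) query."""
    raise NotImplementedError

def get(key):
    entry = CACHE.get(key)
    if entry is not None and time.time() - entry[1] < TTL:
        return entry[0]                  # cache hit: no disk I/O at all
    value = query_database(key)          # cache miss: pay the EBS cost once
    CACHE[key] = (value, time.time())
    return value
```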

Sharding is a way of partitioning your datastore to benefit from the computing power of more than one server. For instance, many web-facing databases get sharded on user_id, the unique serial number your application assigns to each user of the website.

Sharding brings the advantages of horizontal scalability by dividing your data across multiple backend databases, which can yield tremendous speedups and performance improvements.
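As a concrete illustration, here is a minimal routing sketch assuming four backend databases sharded on user_id. The host names and the modulo rule are placeholders; many real deployments use a directory or lookup service instead of a fixed formula.

```python
# Hypothetical backend databases, one per shard.
SHARDS = [
    "db1.example.com",
    "db2.example.com",
    "db3.example.com",
    "db4.example.com",
]

def shard_for_user(user_id):
    """Pick the backend database that holds this user's rows."""
    return SHARDS[user_id % len(SHARDS)]

# The application must now route every user query through this decision:
print(shard_for_user(1001))   # db2.example.com
print(shard_for_user(1004))   # db1.example.com
```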

Sharding, however, comes with a number of important costs:

reduced availability

higher administrative complexity

greater application complexity

High availability is a goal of most web applications, as they aim for always-on, 24×7×365 service. By introducing more servers, you have more components that all have to work flawlessly. If the expected downtime of any one backend database is half an hour per month and you shard across five servers, your expected downtime has now increased by a factor of five, to 2.5 hours per month (assuming the outages don’t overlap).

Administrative complexity is an important consideration as well. More databases means more servers to back up, more complex recovery, more complex testing, more complex replication and more complex data integrity checking.

Since sharding spreads chunks of your data across different servers, your application must accept the burden of deciding where each piece of data lives and fetching it from there. In some cases the application must fall back to an alternate decision if it cannot find the data where it expects it. All of this increases application complexity and is important to keep in mind.

Look at your website’s current traffic patterns – pageviews or visits per day – and compare that to your server infrastructure. In a nutshell, your current capacity is the ceiling your traffic could grow to while still being supported by your current servers. Think of it as the horsepower of your application stack – load balancer, caching server, webserver and database.

Capacity planning seeks to estimate when you will reach that ceiling on your current infrastructure, using load testing and stress testing. With traditional servers, you estimate how many months you will be comfortable with currently provisioned servers, and plan to bring new ones online and into rotation before you reach that traffic ceiling.
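One rough way to put a number on that estimate is sketched below: given today’s peak traffic, the ceiling you measured under load testing, and a monthly growth rate, it projects how many months of headroom remain. The figures are invented for illustration.

```python
import math

# Illustrative numbers -- substitute your own measurements.
current_peak = 120000       # pageviews per day at today's peak
ceiling = 400000            # pageviews per day the stack survived in load tests
monthly_growth = 0.15       # 15% traffic growth per month

# Months until current_peak * (1 + monthly_growth) ** n reaches the ceiling.
months_of_headroom = math.log(ceiling / float(current_peak)) / math.log(1.0 + monthly_growth)
print("Roughly %.1f months before new capacity must be in rotation" % months_of_headroom)
```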

Your reaction to capacity and seasonal traffic variations becomes much more nimble with cloud computing solutions, as you can script server spinups to match capacity and growth needs. In fact you can implement auto-scaling as well, setting rules and thresholds to bring additional capacity online – or offline – automatically as traffic dictates.
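As a sketch of what those rules and thresholds might look like on EC2 (using boto3 here; the group name and the 70% CPU threshold are placeholders, and a mirror-image policy would handle scaling back down):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Simple scaling policy: add one instance to the (pre-existing) group.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-up-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm: fire the policy when average CPU stays above 70% for ten minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```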

In order to do proper capacity planning, you need good data. Pageviews and visits per day can come from your analytics package, but you’ll also need more detailed metrics on what your servers are doing over time. Packages like Cacti, Munin, Ganglia, OpenNMS or Zenoss can collect very useful data with very little overhead on the server. With these in place, you can view load average, memory and disk usage, and database or webserver threads, and correlate all that data back to your application. What’s more, with time-based data and graphs, you can line those changes up against your change management and deployment history, to determine how new code rollouts affect capacity requirements.

Devops is one of those fancy contractions that tech folks just love: one part development, or developer, and another part operations. It imagines a blissful marriage where the team that develops software and builds features to fit the business works closely and in concert with an operations and datacenter team that thinks more like developers themselves.

In the long tradition of technology companies, two separate cultures have grown up around these two roles. Developers, focused on development languages, libraries, and functionality that matches the business requirements, keep their gaze firmly in that direction. The servers, network and resources that their software consumes are left for the ops teams to think about.

Ops teams, for their part, are squarely focused on uptime, resource consumption, performance, availability, and always-on operation. They will be the ones woken up at 4am if something goes down, and are thus sensitive to version changes, unplanned or unmanaged deployments, and resource-heavy or resource-wasteful code and technologies.

Lastly there are the QA teams, tasked with quality assurance and testing, and with making sure the ongoing stream of new features doesn’t break anything that previously worked or introduce new show stoppers.

Devops is a new and, I think, growing area where the three teams work more closely together. But devops also speaks to the emerging practice of cloud deployments, where servers can be provisioned with command-line API calls and completely scripted. In this new world, infrastructure itself becomes a component in software. Long the domain of manual processes and labor-intensive tasks, it becomes repeatable and amenable to the techniques of good software development. Suddenly version control, configuration management, and agile development methodologies can be applied to operations, bringing a whole new level of professionalism to deployments.