Posts Tagged ‘cloud computing’

Some of you may know I love food, and in my effort to learn more about social networking, WordPress architecture, cloud computing, and search engine optimization, I decided to create a blog devoted to my adventures in seeking out the best places to eat.

Creating a blog is easy, though customizing it for your use can be challenging unless you take advantage of the work of others: thousands of free themes, plugins, widgets, and other tools are available to streamline the customization you wish to add. Here we will discuss how to tie your Facebook fan page into your WordPress blog so you can share information between the two seamlessly and promote traffic to your site.

There are many WordPress-to-Facebook plugins, but the one I chose was Add Link To Facebook. This allows me to publish articles composed on the site to the Facebook Page I have set up for the blog and share the comments and likes between the two. It was rather simple to set up and configure, as I already had the Facebook developer account required to create the Facebook application that bridges the two together. The instructions were super simple and I was up and running in no time.
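Under the hood, a bridge like this pushes each new article to your Page through the Facebook Graph API. Here is a minimal sketch of the kind of request involved; the page ID, token, and URLs are placeholders, and this only builds the request rather than sending it:

```python
import json

# Sketch of what a WordPress-to-Facebook bridge does behind the scenes:
# it posts each new article to your Page via the Graph API /feed endpoint.
# PAGE_ID and the access token below are hypothetical placeholders.

GRAPH_URL = "https://graph.facebook.com/{page_id}/feed"

def build_share_request(page_id, access_token, post_title, post_url):
    """Build the URL and payload for sharing a blog post to a Page."""
    url = GRAPH_URL.format(page_id=page_id)
    payload = {
        "message": post_title,       # text shown above the shared link
        "link": post_url,            # the blog article being promoted
        "access_token": access_token,
    }
    return url, payload

url, payload = build_share_request(
    "123456789", "PLACEHOLDER_TOKEN",
    "Best Tacos in Town", "https://example.com/tacos",
)
print(url)
print(json.dumps(payload, indent=2))
```

The plugin handles all of this for you, which is exactly why I recommend it over wiring up the API calls yourself.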

If you have a blog and a Facebook Page, I highly recommend you integrate the two to streamline content management between them.

Imagine that you could click a few buttons and migrate your environment from a dedicated server to a cloud computing platform. You won’t have to imagine for very long because that’s something that we’re working on right now.

It seems that one of the biggest obstacles to entry into cloud computing is the migration from your old server. We are working hard to find the best way to migrate dedicated or managed servers with very little input (or risk) from the end user. That includes migration of your operating system, all installed applications and, of course, your precious data.

Testing has gone very well and we are confident we can roll this out in the very near future. At a minimum this provides a mechanism to get you quickly and easily into cloud computing and enjoying all of the benefits while sidestepping the hassles of a typical server migration.

Another area of great interest for us is helping clients migrate from server co-location to cloud computing. It's essentially the same thing as migrating a dedicated server, but the benefits and cost savings would be off the charts. We are still in the testing phase but making excellent progress, and we expect to push something out to the public very soon. We have already successfully migrated from various platforms, and the response from our techs is always the same: jaw-dropping ease and mind-numbing efficiency.

Stand by for more news on this exciting development as we continue working hard to bring you the best cloud computing platform available.

One of the things that I find most interesting about cloud computing storage is “local storage” versus “centralized storage”. For a quick primer, local storage means the physical hard drives that reside in the servers that are used to run your instances in our cloud. Centralized storage would mean separate storage arrays that store your instances which are separate from the cloud servers. Since the option exists to select one or the other, let’s go ahead and break down the pros and cons for each one.

Local Storage:
If you select the local storage option, your instances run and are physically stored on the same servers. The upside is that your disk I/O speeds will typically be a bit quicker because everything is connected to the same bus on that server. This is really good if you're running large database applications or have requirements for very fast disk reads and writes. The downside is that you give up the high-availability options that are typically native to cloud computing. In other words, if that particular server goes down, your websites go down with it and won't be automatically migrated to a different machine; because you're not utilizing centralized storage, that process has to be done manually.

With local storage you are still able to take snapshots and restore from them to another available server, but it's a manual process and does require human intervention. So even if you select local storage, you still have the peace of mind of automated snapshots that are stored off the server. For a recovery, those snapshots have to be converted to a template and a new instance spun up from that template, but you will be back up and running again quickly. While this takes less time than recovering a typical dedicated server, it still takes a little time.
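The manual recovery path just described boils down to three steps: snapshot, template, new instance. Here is a toy sketch of that workflow; the classes and method names are hypothetical stand-ins, not any real provider's API:

```python
# Sketch of the manual local-storage recovery path:
# snapshot -> template -> new instance on another available server.
# All names here are illustrative, not a real cloud API.

class Snapshot:
    def __init__(self, data):
        self.data = data

class Template:
    """A bootable template converted from a snapshot."""
    def __init__(self, snapshot):
        self.data = snapshot.data

class Server:
    def __init__(self, name):
        self.name = name
        self.instances = []

    def spin_up(self, template):
        instance = {"server": self.name, "data": template.data}
        self.instances.append(instance)
        return instance

def recover_from_local_failure(snapshot, available_servers):
    """The steps a technician performs when a local-storage server dies."""
    template = Template(snapshot)      # 1. convert the snapshot to a template
    target = available_servers[0]      # 2. pick an available server
    return target.spin_up(template)    # 3. spin up a new instance from it

snap = Snapshot(data="my-sites-and-database")
instance = recover_from_local_failure(snap, [Server("node-7")])
print(instance["server"])
```

Each of those steps is where the human intervention comes in, which is exactly the trade-off you accept in exchange for the faster disk I/O.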

Centralized Storage:
If you select the centralized storage option, your instances run on a local server but are stored on a separate storage device. So essentially the server your instances are running on is utilized solely for CPU and memory, while all of the storage requirements are handled by a separate device attached to the network. The upside of this is the high-availability options, which just automatically work if the server running your instances goes down. If that happened, one of our management consoles would detect the failed server, immediately scan the network for other available servers, and instruct the server with the greatest amount of free resources to mosey over to the storage device and spin up those instances right away. This is much different from having to do a restore, because there really is nothing to restore; your data is all still intact on the storage device. Free servers spring into action, snatch up your instances and provide CPU and memory to them so they can spin up again and resume as normal. This entire high-availability recovery typically completes in under a minute. So to recap: if the server your instances are running on fails, other servers will take over operations within a minute without any human intervention at all.
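The failover logic above can be sketched in a few lines: find the dead server, pick the live one with the most free resources, and hand it the stranded instances. The field names are illustrative, not a real orchestrator's data model:

```python
# Sketch of the automatic high-availability failover described above.
# Instances stay on the centralized storage array; only the CPU/RAM
# "home" moves to another live server. Field names are illustrative.

def failover(servers, failed_name):
    failed = next(s for s in servers if s["name"] == failed_name)
    live = [s for s in servers if s["name"] != failed_name and s["alive"]]
    # pick the live server with the greatest amount of free resources
    target = max(live, key=lambda s: s["free_cpu"] + s["free_ram"])
    target["instances"].extend(failed["instances"])
    failed["instances"] = []
    return target["name"]

servers = [
    {"name": "node-1", "alive": False, "free_cpu": 0, "free_ram": 0,
     "instances": ["blog", "shop"]},
    {"name": "node-2", "alive": True, "free_cpu": 4, "free_ram": 8,
     "instances": []},
    {"name": "node-3", "alive": True, "free_cpu": 16, "free_ram": 64,
     "instances": []},
]
print(failover(servers, "node-1"))  # node-3 has the most free resources
```

Notice there is no restore step anywhere in that sketch: the instances' disks never moved, which is why the whole operation can finish in under a minute.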

Hybrid Hosting:
Another option that is very viable and widely used is a hybrid approach that combines cloud computing with a dedicated or managed server. If you have, for example, four websites and one database that requires faster disk access, you can run your websites on the cloud using centralized storage for the high availability (HA) and run only your database on an instance that utilizes local storage for the speed. That way you get the best of both worlds: over-the-top high availability for your websites and ultra-fast storage for your database.
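That placement rule is simple enough to write down directly: anything that needs fast disk I/O goes to local storage, everything else goes to centralized storage for the HA. A toy sketch, with illustrative workload labels:

```python
# Sketch of the hybrid placement rule: fast-I/O workloads get local
# storage, everything else gets centralized storage for high availability.
# The workload names and categories are illustrative.

def place(workloads):
    placement = {}
    for name, needs in workloads.items():
        placement[name] = "local" if needs == "fast-io" else "centralized"
    return placement

workloads = {
    "site-1": "ha", "site-2": "ha", "site-3": "ha", "site-4": "ha",
    "database": "fast-io",
}
print(place(workloads))
```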

So as you can see there are plenty of options and it’s relatively simple to mix and match to find a solution that best suits your needs. We are always happy and eager to help come up with solutions for our clients, so let us know what we can do to help you.

When I look at cloud computing, the primary differentiator that keeps jumping out at me is the ability to quickly recover from failure. Since I have a group of servers that host various sites, I can fully understand what the benefits of cloud computing would mean for me.

Going back to the ability to recover quickly from a failure, let's look at the tried and trusted method of recovering from the failure of a dedicated server. Let me preface this by saying that dedicated servers have proven to be an excellent platform for hosting sites both large and small. They give you complete control, you have 100% of the server's resources available to you, and you are completely isolated from other websites. However, in the event of a failure, the restoration process can be tedious at best. In a perfect world your dedicated server would have a RAID configuration, and if you lost a hard drive, the system would automatically fail over to the second drive and notify you that the failed drive needs replacement. This provides the opportunity to swap the drive in a very controlled manner during a maintenance window. The restore process is fairly straightforward and has been done thousands upon thousands of times by various providers, with varying degrees of success depending upon conditions. Backup and restore can be a tricky process, and oftentimes we are at the mercy of the companies who develop the software and hardware for backup systems.

Initially the problem must be identified; in this case let's assume it is a failed primary hard drive. The server has to be powered down and the failed hard drive swapped. This can go quickly or slowly depending on various circumstances and conditions. Then the server has to be brought online and the restore process from the backup systems initiated. This step is relatively quick, and provided there are no errors along the way, the restore process should begin without incident. This is where it gets tricky, though, because depending on how much data you have, the restore can either finish quickly or take a very long time. If you have a simple Linux server with a few gigs of data, that should restore very quickly. However, if you have for example a Windows server running SQL Server with several terabytes of data to be restored, that might take a while. The real problem is that your server is down during the restore process and will be unavailable for your clients until it's completed and the server has gone through a final reboot and system check. This is where cloud computing kills the dedicated server, in my opinion.
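The back-of-the-envelope math here is simple: restore time is roughly data volume divided by effective restore throughput. The throughput figure below is an illustrative assumption, not a benchmark, but it shows why a few gigs restores in minutes while terabytes of SQL Server data means hours of downtime:

```python
# Rough arithmetic: restore time ~ data volume / restore throughput.
# 100 MB/s is an assumed effective throughput for illustration only.

def restore_hours(data_gb, throughput_mb_per_s=100):
    seconds = (data_gb * 1024) / throughput_mb_per_s
    return seconds / 3600

# A small Linux server with a few GB of data restores in about a minute...
print(round(restore_hours(5) * 60, 1), "minutes")
# ...while a few terabytes keeps the server offline most of a workday.
print(round(restore_hours(3000), 1), "hours")
```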

Now let me outline the restore process for cloud computing. We refer to the backups in cloud computing as snapshots. The reason for this is that a normal backup typically does either a file-by-file or block-by-block backup of the entire hard drive or drives. Not only does this take a while, but those files, which are more than likely highly compressed, are specific to your backup system and in the format your system requires to perform a successful restore. A snapshot, on the other hand, is literally just that: it's as if a photograph were taken of your hard drive in its current state and moved to a storage device. That snapshot is not a highly compressed and highly modified version of your data and operating system; it is a fully functioning duplicate that, in the event of a primary failure, can simply be booted up. So the restore process is reduced from a series of steps requiring lots of manual intervention, and maybe even a technician to pull your server and do physical work on it, to you simply clicking a button that says "restore this snapshot".

Let me make sure you understand this, because even though it is an incredibly simple concept, people oftentimes still don't get it. The system takes a snapshot of your cloud computing environment and instantly stores that snapshot on a storage device. When the system fails for whatever reason, whether it was hacked beyond recognition or an angry ex-employee went in and deleted all of your content, you instruct the system to restore whichever snapshot you want, and all it does is boot up that snapshot and your environment is restored. How cool is that?!
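To drive the contrast home, here is a toy sketch of the two restore paths side by side. The step lists are illustrative simplifications of the processes described above:

```python
# Toy contrast of the two restore paths described above. A traditional
# backup must be decompressed and written back before the server can boot;
# a snapshot is already a bootable duplicate, so restoring is one step.

def traditional_restore(backup_archive):
    return [
        "swap the failed hardware",
        "decompress the backup archive",
        "write files back to disk",
        "final reboot and system check",
    ]

def snapshot_restore(snapshot):
    return ["boot the snapshot"]

print(len(traditional_restore("backup.tar.gz")), "steps vs",
      len(snapshot_restore("snap-latest")), "step")
```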

The other benefits of cloud computing are very obvious, but the ability to recover quickly and completely from any type of failure is what really jumps out at me. Cloud computing is still in its infancy, but the writing is on the wall: the upside is crystal clear, and I predict that eventually everyone will hop on the cloud.