
As was made public today, the Xen Project released a security advisory for its hypervisor code. Upon receiving the embargoed security notice late last week, our teams quickly evaluated our use of Xen and determined that a reboot of virtual instances was not necessary and that there would be no customer impact.

At GoGrid, we are fully supportive of the Xen community and are eager to give back in any way we can.

A major new security vulnerability, commonly known as Shellshock, affecting Linux customers who use Bash as their shell was announced yesterday. GoGrid strongly recommends that customers exposed to this vulnerability apply the appropriate security patch as soon as possible. Below are instructions for patching your systems:
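A typical check-and-patch sequence is sketched below. The package-manager commands vary by distribution, so treat this as a starting point and follow your vendor’s advisory for the definitive fix:

```shell
# Check whether your Bash is vulnerable to Shellshock (CVE-2014-6271).
# A vulnerable shell prints "vulnerable" before the test string.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'

# If "vulnerable" appears, upgrade bash with your package manager, e.g.:
#   sudo yum update bash                                              (RHEL / CentOS)
#   sudo apt-get update && sudo apt-get install --only-upgrade bash   (Debian / Ubuntu)
```

After patching, re-run the check: a fixed Bash prints only the test string.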

In a hyper-connected world, both consumers and businesses are under constant security threat, which is all the more reason to manage data effectively. Let’s not kid ourselves: not all data should be treated the same, and not every person or business needs high-level security. When a business does need it, though, there are many options available that are far more secure than home-grown solutions, which often result in single points of failure.

Listen to GoGrid CEO John Keagy and Cloud Technology Partners author David Linthicum discuss why the “cloud” isn’t really the culprit here; in fact, the cloud can be more secure than any other enterprise method available.

When folks refer to “Big Data” these days, what is everyone really talking about? For several years now, Big Data has been THE buzzword used in conjunction with just about every technology issue imaginable. The reality, however, is that Big Data isn’t an abstract concept. Whether you like it or not, you’re already inundated with Big Data. How you source it, what insights you derive from it, and how quickly you act on it will play a major role in determining the course—and success—of your company. To help you get started understanding the key Big Data trends, take a look at this infographic: “60-Second Guide to Big Data and the Cloud.”

Handling the increased volume, variety, and velocity of data (the “3Vs,” shown in the center of the infographic) requires a fundamental shift in the makeup of the platform used to capture, store, and analyze that data. A platform capable of handling and capitalizing on Big Data successfully requires a mix of relational databases for structured data, NoSQL databases for unstructured data, caching solutions, and MapReduce-style tools such as Hadoop.
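To make the MapReduce part concrete, here is a toy, single-process sketch of the pattern that Hadoop-style tools distribute across a cluster (the example documents are invented):

```python
# Toy MapReduce word count: the map phase emits (word, 1) pairs,
# and the reduce phase sums the counts per word. Hadoop applies this
# same pattern in parallel across a cluster rather than one process.
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Emit one (word, 1) pair per word in the document.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Group pairs by word and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big insight", "data arriving at velocity"]
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
# e.g. counts["big"] == 2 and counts["data"] == 2
```

In a real Hadoop deployment the map tasks run where the data lives and the framework shuffles pairs to reducers; the logic per phase is this simple.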

As the need for new technologies to handle the “3Vs” of Big Data has grown, open source solutions have become the catalysts for innovation, generating a steady launch of new, relevant products to tackle Big Data challenges. Thanks to the skyrocketing pace of innovation in specialized databases and applications, businesses can now choose from a variety of proprietary and open source solutions, depending on the database type and their specific database requirements.

Given the wide variety of new and complex solutions, however, it’s no surprise that a recent survey of IT professionals showed that more than 55% of Big Data projects fail to achieve their goals. The most significant challenge cited was a lack of understanding of, and ability to pilot, the range of technologies on the market. This challenge systematically pushes companies toward a limited set of proprietary platforms that often reduce the choice to a single technology. Seeking one cure-all technology, however, is no longer a realistic strategy: no single technology, database or otherwise, can solve every problem, especially when it comes to Big Data. And even if one solution could serve multiple needs, successful companies are always trialing new solutions in the quest to innovate perpetually and thereby gain (or maintain) a competitive edge.

Let’s say you’ve already done your due diligence and decided you want to run a NoSQL database. The only problem is that you now have to figure out how to deploy the cluster in an environment that lets you scale both within a single data center and across multiple data centers. To save money, many people at this point trial Cassandra on cheap hardware with limited RAM, across clusters that are simply inadequate for the job.
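To see why data-center awareness matters to the cluster layout, here is a simplified, standard-library-only sketch of how Cassandra-style replica placement walks the token ring while honoring a per-data-center replication factor. The node names, token spacing, and hash function are illustrative; real Cassandra uses Murmur3 partitioning and NetworkTopologyStrategy:

```python
# Simplified sketch of per-data-center replica placement on a token ring.
# MD5 stands in for Cassandra's Murmur3 so the example is stdlib-only.
import hashlib

def token_for(key: str) -> int:
    # Map a key onto a 32-bit token space (illustrative, not Cassandra's).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 2**32

def place_replicas(ring, key, rf_per_dc):
    """ring: list of (token, node, dc) tuples sorted by token.
    Walk clockwise from the key's token, taking nodes until each
    data center holds its requested number of replicas."""
    start = token_for(key)
    idx = next((i for i, (t, _, _) in enumerate(ring) if t >= start), 0)
    replicas, taken = [], {dc: 0 for dc in rf_per_dc}
    for _, node, dc in ring[idx:] + ring[:idx]:
        if taken.get(dc, 0) < rf_per_dc.get(dc, 0):
            replicas.append(node)
            taken[dc] += 1
        if all(taken[d] >= rf_per_dc[d] for d in rf_per_dc):
            break
    return replicas

# Six hypothetical nodes, evenly spaced tokens, alternating data centers.
ring = sorted(
    (i * 2**32 // 6, f"node{i}", "dc1" if i % 2 == 0 else "dc2")
    for i in range(6)
)
# Two replicas in dc1, one in dc2 -- survives a node or even a DC outage.
replicas = place_replicas(ring, "user:42", {"dc1": 2, "dc2": 1})
```

The point of the sketch: replica placement depends on having enough healthy, well-provisioned nodes in each data center, which is exactly what underpowered trial clusters lack.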

That’s a mistake, but luckily, there’s a better way. At GoGrid, we’ve made it possible to deploy a production-ready 5-node Cassandra cluster on robust, high-performance machines with the click of a button. Check out the specs of the orchestrated deployment we’re providing using our 1-Button Deploy™ technology:

Once you’ve deployed the first cluster, you can add more nodes as you need them via simple point-and-click. Consider for a moment what you can do with this technology: you can run a user/session store for your application, run a distributed priority job queue, manage sensor data, or do any number of other things with just a few clicks of the mouse. And you can do it all in 3 easy steps:
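As one concrete illustration, the session-store use case boils down to a small CQL schema plus an upsert statement. The keyspace, table layout, and TTL values below are illustrative assumptions, not part of the GoGrid deployment itself:

```python
# Hypothetical CQL for a Cassandra-backed session store.
# Keyspace/table names and the 24-hour TTL are illustrative assumptions.

SESSION_TABLE = """
CREATE TABLE IF NOT EXISTS app.sessions (
    session_id uuid PRIMARY KEY,
    user_id    text,
    data       map<text, text>,
    created_at timestamp
) WITH default_time_to_live = 86400
"""  # rows expire automatically after 24 hours

def upsert_session_cql(ttl_seconds: int = 86400) -> str:
    """In Cassandra an INSERT is also an upsert (last write wins);
    USING TTL overrides the table-level expiry for this row."""
    return (
        "INSERT INTO app.sessions (session_id, user_id, data, created_at) "
        f"VALUES (?, ?, ?, ?) USING TTL {ttl_seconds}"
    )
```

Once the cluster is up, these statements would be handed to a driver, e.g. `session.execute(...)` in the DataStax Python driver, and Cassandra’s built-in TTL handles session expiry without a cleanup job.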