
If you guys have read any of my past blogs, you know how much I LOVE jQuery, but every good developer knows that if there’s an easier or more efficient way of doing something: DO IT. With all the new developments with CSS3, HTML5, etc. etc., sometimes we have to get back to basics to relearn how to do things more efficiently, so here it goes!

Nearly every website has some form of 2.0/dynamic/generated content nowadays, and if your site doesn’t… well, it probably should catch up! I’ll show you how with some new CSS tricks that can eliminate the overhead of including the entire jQuery library (which would save you approximately 84KB per page load, assuming you have no other asynchronous/client-side functionality that needs it).

I’ll start off with an easy example, since I know most of you take these examples and let your creativity run wild for your own projects. (Note to self: start a “Code Gone Wild” series.)

Usually this is the part where I say “First, let’s include the jQuery library as always.” Not this time, let’s break the rules!

FIRST, start off your document like any other (with the basic structure, set your DOCTYPE appropriately, i.e. strict vs transitional):

<!DOCTYPE html>
<html>
<head>
</head>
<body>
</body>
</html>

Wow, you can already tell this generated content’s going to be a TON easier than using jQuery (for those of you who aren’t already jQuery fans).

Now let’s add in a div there; every time we hover over that div, we’re going to display our generated content with CSS. Inside of our div, we’re going to place a simple span, like so:
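If you’re following along, the markup might look something like this (the question and answer text are just placeholders; the class name is the one our CSS will target):

<div class="slisawesome">
<span data-title="This answer is stored in the data-title attribute!">THE QUESTION</span>
</div>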

As you can see, the span content contains a simple question and the data-title attribute contains the answer to that question.

Now let’s just make this div a little bit prettier before we get into the fancy stuff.

Add some style to the <head> section of our document:

<style>
.slisawesome {
/* Will TOTALLY be making another blog about the cool CSS gradients soon */
background: linear-gradient(to bottom, #8dd2d9, #58c0c7);
padding: 20px; /* give the box some room to breathe */
width: 125px; /* give it a fixed width since we know how wide it should be */
margin: 100px auto; /* move it away from the top of the screen AND center it */
border: 1px solid black; /* this is just a little border */
position: relative; /* this is to help with our generated content positioning */
}
</style>

Now you should have something that looks like this:

This is good; this is what you should have. Now let’s make the magic happen and add the rest of our CSS3:

<style>
.slisawesome {
/* Will TOTALLY be making another blog about the cool CSS gradients soon */
background: linear-gradient(to bottom, #8dd2d9, #58c0c7);
padding: 20px; /* give the box some room to breathe */
width: 125px; /* give it a fixed width since we know how wide it should be */
margin: 100px auto; /* move it away from the top of the screen AND center it */
border: 1px solid black; /* this is just a little border */
position: relative; /* this is to help with our generated content positioning */
}
.slisawesome span::before {
content: attr(data-title); /* assigning the data-title attribute value to the content */
opacity: 0; /* hiding data-title until we hover over it */
position: absolute; /* positioning our data-title content */
margin-top: 50px; /* putting more space between our question and answer */
/* Fancy transitions for our data-title when we hover over our question */
/* which I’m TOTALLY going to write another blog for ;) If you guys want, of course */
-webkit-transition: opacity 0.4s; /* determines the speed of the transition */
transition: opacity 0.4s; /* determines the speed of the transition */
}
.slisawesome:hover span::before {
opacity: 1; /* show the answer at full opacity on hover */
}
</style>

Despite my anticlimactic adding of “the magic,” we just added a :hover rule that brings the answer to full opacity, so refresh your page and try it out! You should see something like this when you hover over THE QUESTION:

Of course you could REALLY start getting fancy with this by adding some PHP variables for the logged-in user, or perhaps making it dynamic based on location, time, etc. The possibilities are endless, so go… go and expand on this awesome generated content technique!

Why it's OK to be a server hugger—a cloud server hugger.

(This is the second post in a three-part series. Read the first post here.)

By now, you probably understand the cloud enough to know what it is and does. Maybe it's something you've even considered for your own business. But you're still not sold. You still have nagging concerns. You still have questions that you wish you could ask, but you're pretty sure no cloud company would dignify those questions with an honest, legitimate response.

Well, we’re a cloud company, and we’ll answer those questions.

Inspired by a highly illuminating (!) thread on Slashdot about the video embedded below, we've noticed that some of you aren't ready to get your head caught up in the cloud just yet. And that's cool. But let's see if maybe we can put a few of those fears to rest right now.

"[With the cloud], someone you don't know manages [your cloud servers], and they can get really unaccountable at times."

Hmm. Sounds like somebody's had a bad experience. (We're sorry to hear that.) But in truth, cloud computing companies are nothing without reputation, integrity, and, well, security upon security upon security measures. Accountability is the name of the game when it comes to you trusting us with your critical information. Research, research, research the company you choose before you hand anything over. If the measures that a potential cloud provider takes don't cut the mustard with you, jump ship immediately—your business is way too important! But you're bound to find one that has all the necessary safeguards in place to provide you with plenty of peace of mind.

Oh, and by the way, have we mentioned that some cloud infrastructure providers put the deployment, management, and control in the hands of their customers? Yup. They just hand the reins right over and give you complete access to easy-to-use management tools, so you can automate your cloud solution to fit your unique needs. So there's that.

"The nickel-and-dime billing that adds up awfully damned quickly. Overall, if you're not careful you can rack upwards of $4k/mo just to host a handful of servers with hot backups and a fair amount of data and traffic on them."

You're right. That's why it's important to plan your cloud architecture before you go jumping in. Moving to the cloud isn't something you do with your eyes closed and with a lack of information. Know your company's business needs and find the best solution that fits those needs—every single one of those needs. Be realistic. Assess intelligently. Know your potential provider's add-on costs (if any) ahead of time so that you can anticipate them. Sure, add-ons can pile up if you're caught off-guard. But we know you're too smart for that to be a problem.

Play around with your possibilities before you sign on that dotted line. If you can't, search for a provider who'll let you play before you pay.

"Many cloud services break many privacy laws. The service provider can see/use the data too. Some of us are even bound by law to maintain the integrity of certain classes of information (personal, medical, financial). Yielding physical control to another organization, no matter what their reputation, removes your ability to perform due diligence. How do I know that what I legally have to keep private really is private?"

Sigh. Okay, we hear this fear; we really do, but it's just not true. Not for any reputable cloud solutions provider that wants to stay in business, anyway. We, grown-ups of cloud computing, take the security of your data very, very seriously. There are hackers. There are malicious attacks. There are legal compliance issues. And for those, we have Intrusion Protection Software, firewalls, SSL certificates, and compliance standards, just to name a few. We can handle what you throw at us, and we respect and honor the boundaries of your data.

So let's talk nitty gritty details. You're probably most familiar with the public cloud, or virtual servers. Yes, infrastructure platforms are shared, but that doesn't mean they're pooled—and it certainly doesn't mean universal accessibility. Your virtual server is effectively siloed from the virtual servers of every other client on that public server, and your data is accessible by you and only you. If you think about it like an apartment complex, it makes a lot of sense. The building itself is multi-tenant, but only you have the key to the contents of your individual unit.

On the other hand, bare metal servers are mansions. You're the only one taking up residence on that dedicated server. That big bad house is yours, and the shiny key belongs to you, and you only. (Check you out, Mr. Big Stuff.) You have complete and utter control of this server, and you can log, monitor, and sic the dogs on any and all activity occurring on it. Bare metal servers do share racks and other network gear with other bare metal servers, but you actually need that equipment to ensure complete isolation for your traffic and access. If we use the real estate analogy again and bare metal servers are mansions, then anything shared between bare metal servers are access roads in gated communities and exist only to make sure the mailman, newspaper delivery boy, and milkman can deliver the essential items you need to function. But no one's coming through that front door without your say so.

We cloud folk love our clients, and we love housing and protecting their data—not sneaking peeks at it and farming it out. Your security means as much to us as it means to you. And those who don't need access don't have it. Plain and simple.

"I don't want [my data] examined, copied, or accidentally Googled."

You don't say? Neither do we.

"What happens to my systems when all of your CxOs decide that they need more yachts so they jack up the pricing?"

They stay put, silly. No one takes systems on the boat while yachting. Besides, we don't do yachts here at SoftLayer—we prefer helicopters.

Stay tuned for the last post in this series, where we discuss your inner control freak, invisible software, and real, live people.

When you begin a household project, you must first understand what you will need to complete the task. Before you begin, you check your basement or garage to make sure you have the tools to do the work. Building a secure cloud-based solution requires similar planning. You’re in luck—SoftLayer has all the tools needed, including a rapidly maturing set of security products and services to help you build, deploy, and manage your cloud solution. Over the next couple of months, we will take a look at how businesses leverage cloud technologies to deliver new value to their employees and customers, and we’ll discuss how SoftLayer provides the tools necessary to deliver your solutions securely.

Hurricane plan of action: Water: Check. Food: Check. Cloud: Check?

Let’s set the scene here: A hurricane is set to make landfall on the United States’ Gulf Coast, and the IT team at an insurance company must elastically scale its new claim application to accommodate the customers and field agents who will need it in the storm’s aftermath. The team needs to fulfill short-term computing needs and long-term hosting of additional images from the claims application, thereby creating a hybrid cloud environment. The insurance company’s IT staff meet to discuss their security requirements, and together, they identify several high-level needs:

Data cannot be shifted across borders, and data at rest or in use must be encrypted. SoftLayer leaves data where customers place it, and will never transfer customers’ data. IBM Cloud Marketplace partners like Vormetric offer encryption solutions to ensure sensitive data-at-rest is not stored in clear text, and that customers maintain complete control of the encryption keys. Additionally, the IT team in our example would have the ability to encrypt all sensitive PHI data in the database using data-in-use solutions from Eperi.

Ensure multi-layered security for network zone segmentation.

Users and administrators in the confidential area of insurance need confidence that their network is securely partitioned. SoftLayer native and vendor solutions such as SoftLayer VLANs, Vyatta Gateway, Fortigate firewall, and Citrix Netscaler allow administrators to securely partition a network, creating segmentation according to organizational needs, and providing the routing and filtering needed to isolate users, workloads, and domains.

The IT team can apply best-of-breed third-party solutions, such as Nessus Vulnerability Scanner, McAfee Antivirus, and McAfee Host Intrusion Protection. These capabilities give administrators the means to ensure that infrastructure is protected from malware and other host attacks, enhancing both system availability and performance.

Define and enforce security policies for the hybrid cloud environment, and audit any policy changes.

Administrators can manage overall policies for the combined public-private environment using IBM solutions like QRadar, Hosted Security Event and Log Management Service, and X-Force Threat Analysis Service. Admins can use solutions from vendors like CloudPassage, Sumo Logic, and ObserveIT to automatically define policies around firewall rules, file integrity, security configuration, and access control, and to audit adherence to such policies.

The insurance company’s IT department already knew from SoftLayer’s reputation that it is one of the highest-performing cloud infrastructures available, with a wide range of integrated and automated cloud computing options, all delivered through a private network and an advanced management system. But now it knows from experience that SoftLayer also offers the security solutions needed to get the job done.

When business needs spike and companies need additional capacity, SoftLayer delivers quickly and securely. Stay tuned for Part 2 where we will talk secure development and test activities.

Even with the knowledge that images can live on forever to haunt you, people continue to snap self-portraits in compromising positions (it’s your prerogative). Heck, before smart phones came along, people were using Polaroids to capture the moment. And, if history teaches us anything, people will continue the trend—instead of a smart phone, it’ll be a holodeck (a la Star Trek). Ugh, can you imagine that?

The recent high-profile hack of nude celebrity photos came from private phones. They weren’t posted to Facebook or Instagram. These celebrities didn’t hashtag.

After speculation the hack stemmed from an iCloud® security vulnerability, Apple released a statement saying, “We have discovered that certain celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions, a practice that has become all too common on the Internet.” The cloud platform was secure. The users’ security credentials weren’t.

These were private photos intended for private use, so how did they get out there? How can you protect your data, your images, your privacy?

You’ve heard it once, twice, probably every time you create a new account online (and in this day and age, we all have dozens of user accounts online):

Use a strong password. This SoftLayer Blog is an oldie but a goodie where the author gives the top three ways to make a password: 1) use a random generator like random.org; 2) use numbers in place of letters—for example, “minivan” becomes “m1n1v4n”; 3) write your passwords down in plain sight using “Hippocampy Encryption” (named in honor of the part of the brain that does memory type activities). Or take the XKCD approach to password security.
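If you’re curious what the XKCD approach looks like in code, here’s a minimal Python sketch (the word list is a stand-in; a real generator would draw from a dictionary of several thousand words):

```python
import secrets  # cryptographically secure randomness from the standard library

# A stand-in word list; a real passphrase generator would use a dictionary
# of several thousand words for adequate entropy.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "mango",
         "velvet", "canyon", "pixel", "walrus", "ember", "tundra"]

def passphrase(num_words=4, separator="-"):
    """Pick num_words random words and join them into one easy-to-remember passphrase."""
    return separator.join(secrets.choice(WORDS) for _ in range(num_words))

print(passphrase())  # e.g. "walrus-ember-orbit-pixel"
```

Four random common words are far easier to remember than "m1n1v4n"-style substitutions, and with a large enough word list, much harder to guess.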

And for heaven’s sake, don’t use the same password for every account. If you duplicate usernames and passwords across sites, a hacker just needs to access one account, and he or she will be able to get into all of your accounts!

Craft little-known answers to security questions. Don’t post a childhood photo of you and your dog on Facebook with the description, “Max, the best pup ever” and then use Max as a security validation answer for “What’s the name of your favorite pet?” It’s like you’re giving the hackers the biggest hint ever.

If available, use a two-factor authentication security enhancement. The government (FISMA), banks (PCI) and the healthcare industry are huge proponents of two-factor authentication—a security measure that requires two different kinds of evidence to prove that you are who you say you are and that you should have access to what you're trying to access. Read our blog or KnowledgeLayer Article for more details.
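For the curious, the one-time codes most two-factor authenticator apps generate follow the TOTP standard (RFC 6238), which is simple enough to sketch with just the Python standard library:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp=None, digits=6, step=30):
    """Generate a TOTP code (RFC 6238): HMAC-SHA1 over the time-step counter."""
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step                      # current 30-second window
    msg = struct.pack(">Q", counter)                      # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at time 59, the SHA-1 secret "12345678901234567890"
# yields the 8-digit code 94287082 (so the 6-digit code is 287082).
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

The server and your phone share the secret, so both can compute the same short-lived code independently; a stolen password alone isn’t enough.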

Remember passwords are like underwear—don’t share them with friends and change them often. When it comes to passwords, at least once a year should suffice. For underwear, we recommend changing more regularly.

We won’t tell you what to do with your sensitive selfies. But do yourself a favor, and be smart about protecting them.

Why it's OK to be a server hugger—a cloud server hugger.

By now, you probably understand the cloud enough to know what it is and does. Maybe it's something you've even considered for your own business. But you're still not sold. You still have nagging concerns. You still have questions that you wish you could ask, but you're pretty sure no cloud company would dignify those questions with an honest, legitimate response.

Well, we’re a cloud company, and we’ll answer those questions.

Inspired by a highly illuminating (!) thread on Slashdot about the video embedded below, we've noticed that some of you aren't ready to get your head caught up in the cloud just yet. And that's cool. But let's see if maybe we can put a few of those fears to rest right now.

"I'm worried about cloud services going down or disappearing, and there’s nothing anyone can do about it."

Let's just get one thing straight here: we're human, and the devices and infrastructures and networks we create are fallible. They're intelligent and groundbreaking and mind-boggling, but they are—like us—susceptible to bad things and prone to error at any given time.

But it's not the end of the world if or when it happens. Your cloud service provider has solutions. And so do you.

First, be smart about who you choose to work with. The larger and more reputable the company you select, the less likely you are to experience outages or outright disappearances. It's the nature of the beast—the big guys aren't going out of business any time soon. And if the worst should happen, they're not going down without a fight for your precious data.

Most outages end up being mere temporary blips. It'd take a major disaster (think hurricane or zombie apocalypse) to take any cloud-based platform out for more than a few hours. Which, of course, sounds like a long time, but we're talking worst-case scenario here. And in the event of a zombie apocalypse, you probably have bigger fish to fry anyway.

But the buck doesn't stop there. Moving data to the cloud doesn't mean you get to kick up your heels and set cruise control. (You don't really want that anyway, and you know it.) Be proactive. Know your service-level agreements, and make sure your system structures are built in a way that you're not losing out when it comes to outages and downtime. Know your provider's plan for redundancy. Know what monitoring systems are in place. Identify which applications and data are critical and should be treated differently in the event of a worst-case scenario. Have a plan in the event of doomsday. You wouldn't go headfirst into sharknado season without a strategy for what to do if disaster hits, right? Why would the (unlikely) downfall of your data be any different?

Remember when we backed things up to external hard drives, before we'd ever heard of that network in the sky (a quaint concept, we know)? Well, we think it would behoove you to have a backup of what's essential to you and your business.

In any event, don't panic. You think you're freaking out about the cloud going down? Chances are, your provider is one step ahead of you already.

"Most of the time you don't find out about the cloud host's deficiencies until far too late." "One cloud company I had a personal Linux server with got hit with a DOS attack, and their response was to ignore their customer service email and phone for almost a week while trying to clean it up."

Uh. Call us crazy, but we're guessing that company's no longer around—just a hunch.

We cloud infrastructure providers don't exactly pride ourselves on hoarding your data and then being completely inaccessible to you. Do your research on potential providers. Find out how easy it is (or difficult as the case may be) to get a hold of your customer service team. Make sure your potential provider's customer support meets your business needs. Make sure there's extra expertise available to you if you need personal attention or a little TLC. Make sure those response times are to your liking. Make sure those methods of contact are diverse enough and align with the way you do work.

We know you don't want to need us, but when you do need us, we are here for you.

"Of course, you have to either provide backup yourself, or routinely hard-verify the cloud provider's backup scheme. And you'd better have a backup-backup offsite recovery contract for when the cloud provider announces it can't really recover (e.g. Hurricane Sandy). And a super-backup-backup plan in case the cloud provider disappears with no forwarding address or has all its servers confiscated by DHS."

Hey, you don't have to have any of these things if your data's not that important to you. But if you'd have backups of your local servers, why wouldn't you have backups of anything you put in the cloud?

We thought so.

Nota bene: Sounds like you might want to take up some of this beef with Hurricane Sandy.

Stay tuned for part two where we tackle accountability, security, and buying ourselves new yachts.

I know you may think that’s just a catchy title to get you to read my blog, but it’s not. I’ve actually had someone ask me that at a party. In fact, that’s the first thing anyone asks me when they find out I work for SoftLayer. The funny thing is, everyone is already in the cloud—they just don’t realize it! To make my point, I pick up their smart phone and tell them they already are in the cloud, and walk away. That, of course, sparks more conversation and the opportunity to educate my friends and family on the magic and mystery that is the cloud. But truthfully, it really is a very simple concept:

On demand

Compute

Consumption-based billing

That’s it. At its core. But if you want more detail, check out this NIST document defining cloud computing.

And, just to shed light on the backend of what the cloud is, well, it’s nothing but servers. I know, you were expecting something more exciting—maybe unicorns and fairy dust. But it’s not. We house the servers. We care for them daily. We store them and protect them. All from our data center.

What makes SoftLayer stand out from others in the cloud space is that we offer more than one-size-fits-all servers. We offer both public and private virtual servers like other cloud providers, but we also offer highly customizable, high-performance bare metal servers. And as with any good infrastructure, we offer all the ancillary services such as load balancing, firewalls, attached storage, DNS, etc.

So when you hear “The Cloud,” don’t be mystified, and don’t feel inadequate. Now you too can be the cloud genius at your next party. When they talk cloud, just say things like, “Oh yeah, it’s totally on demand computing that bills based on consumption.” Chicks dig that, trust me.

Think quickly. You hear that your new app will be featured on the front page of TechCrunch in less than two hours. Because it’s a resource-intensive application, you know that a flood of new users will bog down its current cloud infrastructure and you’ll need to scale out.

What do you do? Choose virtual servers to guarantee quick deployment and more flexibility? Opt for bare metal servers to deliver the best user experience (while crossing your fingers that the servers are online in time for the flood of traffic)? In times like these, you shouldn’t have to choose between flexibility and power.

You need hourly bare metal servers.

We’ve streamlined the deployment of four of our most popular bare metal configurations, and with that speed, we’re able to offer them with hourly billing! With the hardware pre-configured, you tell us where you want the server to be provisioned—Dallas, San Jose, Washington D.C., London, Toronto, Amsterdam, Singapore, and Hong Kong—and which operating system you’d like us to install—CentOS, Red Hat, FreeBSD, or Ubuntu. And in less than 30 minutes, your server will be online, fully integrated with your other SoftLayer servers and services, and ready for you.

Use the server for as long as you need it. Spin it down when you’re done. Pay for the hours you had it on your account. It’s that easy. No virtualization. No noisy neighbors. Just your compute-intensive workload, the hardware configuration you need, and a commitment even the commitment-phobic can live with.

Why do you need hourly bare metal servers in your cloud life?

Processing Power: You have short-term workloads that require significant amounts of processing power. To get the same performance from virtual servers, you might have to provision twice as many nodes or run them for twice as long.

Schedule-based Workloads: You have a number of applications that require compute and storage resources on a set schedule (i.e., once every month), and you don’t want to deploy (and pay for) high-end machines that will sit idle at all other times.

Example: payroll processing or claims payment processing.

Performance Testing: Certify or validate how an application performs on a specific hardware configuration.
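To see why hourly billing matters for these short-term workloads, here’s a quick back-of-the-envelope sketch in Python (the rates are hypothetical placeholders, not actual pricing):

```python
# Back-of-the-envelope break-even math for hourly vs. monthly billing.
# The rates below are hypothetical placeholders, not actual SoftLayer pricing.
HOURLY_RATE = 0.50        # $/hour for an hourly bare metal server (hypothetical)
MONTHLY_RATE = 250.00     # $/month for the same configuration (hypothetical)

def break_even_hours(hourly, monthly):
    """Hours of use per month beyond which a monthly server becomes cheaper."""
    return monthly / hourly

hours_needed = 40  # e.g., one weekend of payroll or claims processing
cost_hourly = hours_needed * HOURLY_RATE
print(f"Hourly: ${cost_hourly:.2f} vs. monthly: ${MONTHLY_RATE:.2f}")
print(f"Break-even at {break_even_hours(HOURLY_RATE, MONTHLY_RATE):.0f} hours/month")
```

If the workload only runs a few dozen hours a month, the hourly server wins by a wide margin; for always-on workloads, monthly billing makes more sense.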

With bare metal performance available on demand and on hourly terms, you don’t have to compromise performance for flexibility. When TechCrunch comes calling, you have peace of mind that your app’s success and popularity won’t bring it down.

Last week, we celebrated the official launch of our Toronto (TOR01) data center—the fourth new SoftLayer data center to go live in 2014, and our first in Canada! To catch you up on our progress this year, we unveiled a data center in Hong Kong in June to provide regional redundancy in Asia. In July, we added similar redundancy in Europe with the grand opening of our London data center, and we cut the ribbon on a SoftLayer data center designed specifically for federal workloads in Richardson, TX. The new Toronto location joins our data center pods in Washington, D.C., as our second location in the northeast region of North America.

As you can imagine, our development and operations teams have been working around the clock to get these new facilities built, so they were fortunate to have Tim Hortons in Toronto to keep them going. Fueled by countless double-doubles and Timbits, they officially brought TOR01 online August 11! This data center launch is part of IBM’s massive $1.2 billion commitment to expanding our global cloud footprint. Countless customers have asked us when we were going to open a facility in Canada, so we prioritized Toronto to meet that demand. And because the queue had been building for so long, as soon as the doors were opened, we had a flood of new orders to fulfill. Many of these customers expressed a need for data residency in Canada to handle location-sensitive workloads, and expanding our private network into Canada means customers in the region will see even better network performance to SoftLayer facilities around the world.

Here’s what a few of our customers had to say about the Toronto launch:

Brenda Crainic, CTO and co-founder of Maegan, said, “We are very excited to see SoftLayer open a data center in Toronto, as we are now expanding our customer base in Canada. We are looking forward to hosting all our data in Canada, in addition to enjoying their easy-to-use services and great customer service.”

Frederic Bastien, CEO at mnubo, said, “We are very pleased to have a data center in Canada. Our customers value analytics performance, data residency and privacy, and deployment flexibility—and with SoftLayer we get all that and a lot more! SoftLayer is a great technology partner for our infrastructure needs.”

With our new data center, we’re able to handle Canadian infrastructure needs from A to Zed.

While we’d like to stick around and celebrate with a Molson Canadian or two, our teams are off to the next location to get it online and ready. Where will it be? You won’t have to wait very long to find out.

I’d like to welcome the new Canucks (both employees and customers) to SoftLayer. If you’re interested in getting started with a bare metal or virtual server in Canada, we’re running a limited-time launch promotion that’ll save up to $500 on your first order in Toronto: Order Now!

-John

P.S. I included a few Canadianisms in this post. If you need help deciphering them, check out this link.

As a "techy turned marketing turned social media turned compliance turned security turned management" guy, I have had the pleasure of talking to many different customers over the years and have heard horror stories about data loss, data destruction, and data availability. I have also heard great stories about how to protect data and the differing ways to approach data protection.

On a daily basis, I deal with NIST 800-53 rev.4, PCI, HIPAA, CSA, FFIEC, and SOC controls among many others. I also deal with specific customer security worksheets that ask for information about how we (SoftLayer) protect their data in the cloud.

My first response is always, WE DON’T!

The looks I’ve seen on faces in reaction to that response over the years have been priceless—not just from customers, but from auditors as well.

They ask how we back up customer data. We don’t.

They ask how we make it redundant. We don’t.

They ask how we make it available 99.99 percent of the time. We don’t.

I have to explain to them that SoftLayer is simply infrastructure as a service (IaaS), and we stop there. All other data planning should be done by the customer. OK, you busted me—we do offer managed services as an additional option, and we help customers using that service configure and protect their data.

We hear from people about Personal Health Information (PHI), credit card data, government data, banking data, insurance data, proprietary information related to code and data structure, APIs that should be protected with their lives, etc. What is the one running theme? It’s data. And data is data, folks, plain and simple!

Photographers want to protect their pictures, chefs want to protect their recipes, grandparents want to protect the pictures of their grandkids, and the Dallas Cowboys want to protect their playbook (not that it is exciting or anything). Data is data, and it should be protected.

So how do you go about doing that? That's where PLEB, the weird acronym in the title of this post, comes in!

PLEB stands for Physical, Logical, Encryption, Backups.

If you take those four topics into consideration when dealing with any type of data, you can limit the risk associated with data loss, destruction, and availability. Let’s look at the details of the four topics:

Physical Security—In a cloud model, it is on the shoulders of the cloud service provider (CSP) to meet the strict requirements of a regulated workload. Your CSP should have robust physical controls in place. They should be SOC 2 audited, and you should request the SOC 2 report showing little or no exceptions. Think cameras, guards, key card access, bio access, glass alarms, motion detectors, etc. Some, if not all, of these should make your list of must-haves.

Logical Access—This is likely a shared control family when dealing with cloud. If the CSP has a portal that can make changes to your systems, and the portal has a permissions engine allowing you to add users, then that portion of logical access is a shared control. First, the CSP should protect its portal permission system, while the customer should protect admin access to the portal by creating new privileged users who can make changes to systems. Second, and just as important, once provisioning completes you must remove the initial credentials, add new, private credentials, and restrict access accordingly. Note that this second step is strictly a customer control.

Encryption—There are many ways to achieve encryption, both at rest and in transit. For data at rest you can use full disk encryption, virtual disk encryption, file or folder encryption, and/or volume encryption. This is required for many regulated workloads and is a great idea for any type of data with personal value. For public data in transit, you should consider SSL or TLS, depending on your needs. For backend connectivity from your place of business, office, or home into your cloud infrastructure, you should consider a secure VPN tunnel for encryption.
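As a minimal sketch of the in-transit side, here is how a client might enforce TLS with certificate verification using Python’s standard library. This is illustrative only; the version floor and settings shown are assumptions about a sensible baseline, not a SoftLayer requirement:

```python
import ssl

# Build a client-side TLS context with secure defaults:
# certificate-chain verification and hostname checking are on.
context = ssl.create_default_context()

# Refuse anything older than TLS 1.2 (an assumed policy floor).
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The defaults already require a valid certificate from the server.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

A context like this would then be passed to whatever socket or HTTP client connects to your cloud endpoint, so unencrypted or unverified connections fail fast instead of silently succeeding.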

Backups—I can’t stress enough that backups are not just the right thing to do, they are essential, especially when using IaaS. You want a copy at the CSP you can use if you need to restore quickly. But you also want another copy in a different location, in case of a disaster that WILL be out of your control.
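The two-copy rule above can be sketched in a few lines of Python. This is a toy illustration, not a production backup tool; the function name and directory layout are invented for the example, and temporary directories stand in for real primary and offsite storage:

```python
import shutil
import tarfile
import tempfile
import time
from pathlib import Path

def back_up(source: str, primary: str, offsite: str) -> Path:
    """Create a timestamped tar.gz of `source` in `primary`,
    then mirror it to `offsite`, a second, independent location."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(primary) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    # The second copy lives somewhere a local disaster can't reach.
    return Path(shutil.copy2(archive, offsite))

# Demo with throwaway directories standing in for real storage.
root = Path(tempfile.mkdtemp())
for name in ("data", "primary", "offsite"):
    (root / name).mkdir()
(root / "data" / "recipes.txt").write_text("secret sauce")

offsite_copy = back_up(str(root / "data"),
                       str(root / "primary"),
                       str(root / "offsite"))
```

In real use, the “offsite” destination would be object storage or a server in another data center rather than a sibling directory, but the shape of the workflow is the same: archive, keep a fast local copy, push a second copy elsewhere.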

So take the PLEB and mitigate risk related to data loss, data destruction, and data availability. Trust me—you will be glad you did.

“Forget about being a futurist, become a now-ist.” With those words, Joi Ito, the director of the MIT Media Lab, ends his most recent TED talk. What thrills me most is his encouragement to apply agile principles throughout any innovation process: creating in the moment, building quickly, and improving constantly is the story we’ve been advocating at SoftLayer for a long while.

Joi says that this new approach is possible thanks to the Internet. I actually want to take it further. Because the Internet has been around a lot longer than these agile principles, I argue that the real catalyst for the startups and technology disruptors we see nowadays was the widespread, affordable availability of cloud resources. The ability to deploy infrastructure on demand without long-term commitments, anywhere in the world, and with the option to scale it up and down on the fly has decreased the cost of innovation dramatically. And fueling that innovation has always been the raison d'être of SoftLayer.

Joi compares two innovation models: the one before the Internet (I will go ahead and replace “Internet” with “cloud,” which I believe makes the case even stronger) and the new model. The world seemed much more structured before the cloud, governed by a certain set of rules and laws. When the cloud happened, everything became complex, low cost, and fast, with Newtonian rules often defied.

Before, creating something new would cost millions of dollars. The process started with commercial minds, aka MBAs, who’d write a business plan, look for money to support it, and then hire designers and engineers to build the thing. Recently, this MBA-driven model has flipped: first designers and engineers build a thing, then they look for money from VCs or larger organizations, then they write a business plan, and then they move on to hiring MBAs.

A couple of months ago, I started to share this same observation more loudly. In the past, if an organization wanted to bring something new to the market, or just iterate on an existing offering, it involved a lot of resources: time, people, and supporting infrastructure. Only a handful of ideas, after cumbersome fights with processes, budget restrictions, and people (and their egos), saw the light of day. Change was a luxury.

Nowadays the creators are people who used to be in the shadows, mainly taking instructions from “management” and spinning the hamster wheel they were put on. Now, the “IT crowd” no longer sits in the basements of their offices. They are creating new revenue streams and becoming driving forces within their organizations, or they are rolling out their own businesses as startup founders. There is a whole new breed of technology entrepreneurs thriving on what the cloud offers.

Coming back to the TED talk, Joi brings great examples proving that this new designers/engineers-driven model has pushed innovation to the edges and beyond not only in software development, but also in manufacturing, medicine, and other disciplines. He describes bottom-up innovation as democratic, chaotic, and hard to control, where traditional rules don’t apply anymore. He replaces the demo-or-die motto with a new one: deploy or die, stating that you have to bring something to the real world for it to really count.

He walks us through the principles behind the new way of doing things, and for each of those, without any hesitation, I can add, “and that’s exactly what the cloud enables” as an ending to each statement:

Principle 1: Pull Over Push is about pulling the resources from the network as you need them, rather than stocking them in the center and controlling everything. And that’s exactly what the cloud enables.

Principle 2: Learning Over Education means drawing conclusions and learning on the go—not from static information, but by experimenting, testing things in real life, playing around with your idea, seeing what comes out of it, and applying the lessons moving forward. And that’s exactly what the cloud enables.

Principle 3: Compass Over Maps calls out the high cost of writing a plan or mapping the whole project, as it usually turns out to be neither very accurate nor useful in the unpredictable world we live in. It’s better not to plan the whole thing in detail ahead of time, but to know the direction you’re headed and leave yourself the freedom to adjust as you go, taking into account the changes resulting from each step. And that’s exactly what the cloud enables.

I dare say that all the above is the true power of cloud, without the fluff, leaving you with an easy choice when facing the deploy-or-die dilemma.