Given the many proven benefits of cloud infrastructure for your business, including improved collaboration, better access to analytics, increased productivity, reduced costs, and faster development cycles, it's hard to think of reasons why a company wouldn't want to invest in it. And thanks to huge advances in our familiarity with the cloud, the security capabilities now available, and the emergence of new cloud infrastructure providers, doing business in the cloud today is no longer as risky an endeavor as it might have been just a few years ago.

But that doesn't mean that creating and maintaining a safe and efficient cloud infrastructure is always easy, or that doing business with the various infrastructure providers is without risk. There are many factors to consider, and when evaluating cloud computing infrastructure options, many companies are increasingly looking for providers that offer painless cloud infrastructure.

Since ProfitBricks provides comprehensive cloud services to support all kinds of businesses with a wide variety of applications and workloads, we wanted to share some top cloud infrastructure tips. More specifically, we wanted to get advice from cloud infrastructure professionals on some of the most common (and avoidable) mistakes businesses and individuals make when building, managing, or maintaining their cloud infrastructure. To do this, we asked 25 cloud experts to answer this question:

“What is the biggest mistake people make with cloud infrastructure?”

Most of these experts are not ProfitBricks customers, but they responded to a request we published to provide some expert advice. We’ve collected and compiled their advice into this comprehensive guide on how to avoid cloud infrastructure mistakes. See what our experts said below:

George McMann

George McMann is the President and CEO of BizNet Software Inc., which he founded in October 1996, and has been its chairman since April 2000. McMann personally designed BizNet Software’s core technology and has over 15 years’ experience in software development, implementation and consulting. He began his career as a certified public accountant at Ernst & Young and has advised on and managed the implementation of software integration and financial auditing projects with Nortel Networks and Clarus Corporation within the financial, health, insurance and telecommunications sectors.

While storing data on the cloud can be very efficient, one of the biggest mistakes companies make with their cloud infrastructure is…

Being unable to access and retrieve their data from the cloud.

Beyond looking at the data on the cloud server, being able to pull the data into third-party tools, such as Microsoft Excel, is crucial. While the data is available, it may not be in a format that is conducive to the reporting process. After all, data is a primary driver for business decisions and must be easily accessible regardless of where it’s stored.

Additionally, it is essential to have an appropriate amount of telecom bandwidth with reliable up-time to ensure information will be available when it is needed. Transaction and retrieval bandwidth must both be measured when considering cloud server bandwidth. Transaction activity refers to the number of people that can see the data at one time; however, it does not account for the retrieval bandwidth, which depends on the customer's infrastructure. Cloud servers typically restrict how much data can be requested, or pulled down, at a time. However, you should be able to request all of your data at once if you so desire, and there will need to be sufficient bandwidth to make that possible.
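To make the retrieval-bandwidth point concrete, here is a rough back-of-the-envelope sketch; the link speed and efficiency factor are illustrative assumptions, not figures from any particular provider:

```python
def full_retrieval_hours(data_gb, link_mbps, efficiency=0.7):
    """Rough time to pull *all* stored data down over a given link.

    `efficiency` discounts protocol overhead and any provider-side
    rate limiting (both values here are illustrative assumptions)."""
    bits = data_gb * 8 * 1000**3                  # decimal GB -> bits
    seconds = bits / (link_mbps * 1000**2 * efficiency)
    return seconds / 3600

# Pulling 1 TB over a 100 Mbps link takes more than a day, not minutes:
print(f"{full_retrieval_hours(1000, 100):.1f} hours")  # ~31.7 hours
```

Running this kind of estimate before signing a contract shows quickly whether "request all of your data at once" is realistic on your current link.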

At the end of the day, the data belongs to the customer, so accessing and retrieving it without any headaches is key.

Ajay Patel

Ajay Patel is a Co-founder and Chief Executive of enterprise collaboration platform HighQ. Founded in 2001, the London-headquartered company provides software to some of the world’s largest law firms, investment banks, and corporations.

The biggest mistake businesses can make when moving to the cloud is…

To not check where their data is hosted.

Data sovereignty is a massive issue right now, particularly with the recent US Senate vote against NSA reform, and the Patriot Act, which allows the US government to access digital data no matter where in the world it is stored. This should be a concern for all businesses venturing into the cloud, but especially for highly regulated industries such as law and finance.

Businesses should be cautious about choosing a cloud provider that doesn't let them choose which jurisdiction their data is stored in, or cannot guarantee that their data will stay there. Cloud providers based in the United States will always be subject to American laws, no matter where in the world they host their data, but these laws also apply to data hosted in US data centres by non-US providers.

This means that if, for instance, a French law firm uses a French cloud provider that hosts their data in the US, this data is under the jurisdiction of the United States government. For US companies looking for a cloud provider, it makes no difference whether they host in the US or outside of it – it will be subject to the same laws regardless. But for non-US companies, it is vital to ensure that their data is secure in their chosen jurisdiction and guaranteed to remain there.

Always check with a prospective cloud provider where they host their data. Good providers should offer hosting in a range of jurisdictions and should be able to talk you through the laws surrounding each jurisdiction so you can make an informed choice as to where is the best place to store your company’s and clients’ confidential data.

Kacee Johnson

Kacee Johnson, Founder of Blue Ocean Principles, is a regular speaker and commentator at technology, business, accounting, and legal conferences nationwide, focusing on business development, marketing, sales and Cloud technologies. Awarded CPA Practice Advisor Magazine's "Top 40 Under 40" Award in 2012 and 2013, she is recognized as one of the young professionals leading businesses into the future. The former Executive Vice President of Cloud9 Real Time, she draws on a diverse management career marked by a demonstrated ability to create solid business plans, determine product needs, achieve revenue goals, build teams and meet cross-functional business objectives. Kacee founded BOP with a simple goal in mind: work with people you enjoy and on projects you believe in.

The biggest mistake people make with Cloud Infrastructure is…

Not planning for mobile device security in user access.

Most companies lack a BYOD (Bring Your Own Device) policy and/or management of company devices for securing data accessed in the Cloud. If a phone, tablet or laptop is lost or stolen, the risk of unauthorized access to company and client data is real. Employees often do not have their devices password protected, leaving access open to any person in control of the device. Without a formal procedure for reporting incidents, the time that goes by without action only increases the risk of a security breach. Businesses should create a BYOD policy that requires all devices that access the corporate cloud to be registered with the company, be password protected and have the ability to be "wiped" should a device be compromised.

Eugene Dong

Eugene Dong is CTO and Co-founder of Procurify, a fully featured cloud-based procurement solution. In his leadership role, Eugene oversees the creation and maintenance of Procurify by leading the technical team's development of the code and infrastructure, which he is constantly working to improve. He also coordinates between the business and technical sides of the team. His insatiable craving for technology and innovation helps give the company the edge it needs to stay ahead of the competition. Currently, Procurify has users in over 53 countries and is making its mark on the cloud technology landscape.

The biggest mistakes made with cloud computing:

Not planning for the future in terms of programming language and hosting provider.

You want to choose software languages that are well documented and support the features you want to build. Anything that makes your life easier through plug-ins is key to quick and easy growth.

Also, pick a hosting provider that has solutions for how you want to grow. Pick IaaS if you have staff to set up and maintain servers. If not, PaaS may be better for your company.

We wanted to focus all our attention on delivering features, so we picked PaaS to allow us to achieve that.

Matt Powers

Matt Powers is the CTO of Applico, the world’s first Platform Innovation Company. At Applico, Matt has been responsible for a variety of roles and responsibilities, but at the core has always been focused on technology. During his tenure, Matt has helped define and operationalize Applico’s Product, Design, Engineering and QA teams. Today Matt is responsible for the Product Engineering team, its day to day operations, and Applico’s overall technology/product strategy.

Here are my top cloud infrastructure suggestions, so that you don’t run into common cloud infrastructure mistakes…

Choosing the right provider: It's essential today that IT professionals understand the choices on the market. Companies like Amazon and Google have competing product offerings in AWS and Google Cloud, but it's extremely important to understand the differences between the two services so that you are making an investment in a cloud infrastructure that meets your needs. For example, multimedia streaming applications have VERY different requirements than applications doing real-time analysis on large amounts of data. Where AWS offers tight integration with something like Wowza, Google does not. Be careful about the cloud provider you choose and understand their 12-18 month technology roadmaps.

Cost: Understanding the costs associated with the different cloud providers can be overwhelming. Looking at EC2’s pricing options will make your head spin. In a nutshell, providers generally have two pricing models, with Amazon having the most traditional one. Amazon treats every product in its AWS suite as an instance and charges based on “uptime.” Google does this too, but it also has alternative pricing mechanisms based on read/write operations or “queries.” For an application that requires a tremendous amount of IO, the delta between the cloud providers you choose could grow quite large.
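To illustrate how far the two pricing models can diverge for IO-heavy workloads, here is a minimal sketch; the rates are invented for the example and bear no relation to actual AWS or Google pricing:

```python
# Hypothetical rates for illustration only -- real provider pricing
# varies by instance type, region, and volume tiers.
HOURLY_RATE = 0.10          # $ per hour of instance uptime
PER_MILLION_OPS = 0.40      # $ per million read/write operations

def uptime_cost(hours):
    """Cost under a pure uptime-based model (pay per instance-hour)."""
    return hours * HOURLY_RATE

def operations_cost(hours, ops_per_second):
    """Cost under a per-operation model (pay per read/write)."""
    total_ops = hours * 3600 * ops_per_second
    return total_ops / 1_000_000 * PER_MILLION_OPS

# One month (~730 hours) for a light vs. an IO-heavy workload:
for ops in (10, 5000):
    print(f"{ops:>5} ops/s: uptime=${uptime_cost(730):.2f}, "
          f"per-op=${operations_cost(730, ops):.2f}")
```

At 10 operations per second the per-operation model is the bargain; at 5,000 it costs orders of magnitude more than flat uptime billing, which is exactly the "delta" warned about above.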

Operations = Development: Your development team needs to be a part of operations. Cloud doesn’t mean no maintenance. You need to make sure you are prepared for failures, so things like redundancy, automation, and continuous integration are all essential. Traditional IT teams are a thing of the past. Welcome to DevOps.

David Howard

David Howard is a Certified Ethical Hacker, MCSE 2012, and holds 13 other IT industry certifications addressing wireless, security, and infrastructure. He writes a security and tech blog at http://www.dtig.net and is featured on the national powerhouse radio station 700WLW as “Dave The IT Guy”.

Some of the usual mistakes made by both industry and consumers regarding cloud infrastructure revolve around…

Security.

Over the past two years we've heard about hack after hack; it has become background noise, and people have stopped making the effort to secure their data. They often feel there is nothing they can do to stop it, so they have stopped trying. Instituting something as simple as 2FA (Two-Factor Authentication) can go a long way toward securing their cloud infrastructure.

Simply put, 2FA requires something you know (a username and password) and something you have (a device generating a randomly changing security code). Many might remember the old RSA tokens you'd carry on your keychain. Same idea, but these days codes can come from a secure app on your smartphone, tablet, PC or Mac. Once you register your device as being yours (a relatively simple process even for non-techs), if someone tries to access your 2FA-protected account, an alert pops up giving you the current code (codes change about every 30 seconds or so). If you didn't try to log in, the login attempt will time out because the person on the other end doesn't know your 2FA code.
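The rotating-code scheme described above is standardized as TOTP (RFC 6238). Here is a minimal sketch of how both the server and your device can derive the same code independently, using only Python's standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, for_time=None):
    """Time-based one-time password (RFC 6238), the scheme behind most
    authenticator apps. A new code is derived every `interval` seconds
    from a secret shared once, during device registration."""
    key = base64.b32decode(secret_b32)
    t = int(time.time() if for_time is None else for_time)
    msg = struct.pack(">Q", t // interval)        # current time step
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device compute the same 6-digit code independently,
# so the code never has to be sent ahead of time.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on the current time step, an attacker who captures one code cannot reuse it a minute later.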

However, having 2FA doesn't absolve you from having strong passwords. The days of P@ssw0rd1 are gone, and phrases like "I l1ke 2 W@tch Bu773rfli3$" are even better. Once you're used to it, remembering phrases is easier than silly passwords.

JJ Rosen

JJ Rosen is the Founder of Atiba, a Nashville based technology consulting firm providing custom programming, web design, mobile and web development and other IT networking services.

The biggest mistake people make with cloud infrastructure is…

Not keeping offsite backups.

Assuming that a cloud provider could never lose data carries some risk. So backing up data that is hosted in the cloud, either to an on-premises backup or to another cloud vendor, is essential for anyone storing important data in the cloud.
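A minimal sketch of that advice, with local paths standing in for the cloud copy and the secondary location; a real setup would push to a second provider's API instead, but the verify-after-copy step is the part that matters:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    """Checksum a file so the copy can be verified end to end."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source, offsite):
    """Copy `source` into the `offsite` directory and verify the
    checksum matches. A backup you never verify is a backup you
    only hope you have."""
    source, offsite = Path(source), Path(offsite)
    offsite.mkdir(parents=True, exist_ok=True)
    dest = offsite / source.name
    shutil.copy2(source, dest)
    if sha256(source) != sha256(dest):
        raise IOError(f"verification failed for {dest}")
```

The same pattern (copy, then compare checksums) applies whatever the second target is: a tape library, a NAS, or another cloud vendor.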

Douglas Landoll

Douglas Landoll is the CEO of Lantego, a security consultancy specializing in risk assessment and policy development. He is the author of the best-selling "Security Risk Assessment Handbook" and a cybersecurity expert specializing in security risk assessment, compliance/governance, and building corporate security programs. He has been a leader in information security for over 25 years, training over 2,000 CISSPs and CISAs, founding four information security organizations and running security consulting divisions for public and private companies. Mr. Landoll has led projects to assess and improve security at many corporations and Federal, state, and local government agencies including the NSA, CIA, NATO, FBI, state agencies in Texas and Arizona, and Fortune 50 companies.

The biggest mistake people make with cloud infrastructure is…

Making assumptions of information security responsibility.

The responsibility for providing information security controls around cloud infrastructure services is often misunderstood, undocumented, and leaves gaps in the organization’s information security posture. Cloud service providers may provide adequate physical controls, patch management, account management, security event monitoring, and incident response. Then again, they may not. All organizations seeking to outsource any element of their information systems, including infrastructure, need to understand their information security control requirements and appropriately allocate, convey and contract specific controls to the cloud provider. Often the best way to accomplish this is with a cloud risk assessment.

Failure to methodically document security requirements, assign specific requirements through the contract, and monitor the quality of the execution on these requirements is likely to leave the organization with major vulnerabilities in their security posture. A costly security breach will soon follow.

To avoid unwarranted risks, organizations should assess the security risk they accept in their current environment and ensure a cloud environment provides adequate security protections as well. One more caution lies in the security process interfaces created by dividing responsibilities between the organization and the cloud service provider. Specifically, the account management, security event monitoring, and incident response processes will need to account for divided responsibilities and communication between the two organizations.

Arthur Zards

Arthur Zards is an original Internet entrepreneur, having Co-founded one of Chicagoland's first Internet Service Providers in 1992, XNet Information Systems, Inc. He also founded TEDxNaperville in 2008, an independently run event under a license from the TED organization. Arthur has over 22 years' experience in digital marketing, online networks, social communities, and other elements of a digital business. He has created numerous innovations such as Twitter strategies for connections, a Digital Engagement Plan framework, and an EBook on event partnership strategies, and is best known among his peers for his mathematical formula on digital media utilization. He has been featured in numerous print and media outlets such as the Chicago Tribune, Chicago Sun Times, Daily Herald, Digital Chicago Magazine, and ABC News Chicago.

By far, the biggest mistake people make with cloud infrastructure is…

Assuming that the cloud has 100% uptime and is fully redundant.

Not so! Just because you are typically getting "better" uptime and redundancy, do not assume that the cloud will give you 100%. Still maintain your own offsite backup for storage and access. Do not have unreasonable expectations of cloud services. They do have issues.

Jennifer Riggins

Jennifer Riggins is the Founder of eBranding Ninja and an online branding expert. She spent the last few years running digital and B2B marketing for sales and human resources business software, cloud-based infrastructure and an online SaaS directory.

The biggest mistake in cloud infrastructure is easy:

Not having a plan for implementation.

So many company heads decide they want SaaS and mobile tools and mandate that employees use them, but without a plan for *when* to use them, *why* they're being used and, especially, *how* to use them to get the most out of them, implementation inevitably fails. Likewise, many business software companies may invest in customer support but not in customer success.

The first week or so of adoption is the most critical for properly implementing an app, and if the SaaS vendor doesn't invest in proper training and on-boarding that week, it's very unlikely that at the end of the month, or the end of the year, that company will renew. After all, like a lazy person with a gym membership, they were just paying for a membership they didn't use.

Jeb Molony

Jeb Molony is President and Founder of e-vos, a firm which utilizes cloud computing to assist clients, increase productivity and stay on trend, while decreasing the costs associated with traditional IT. Mr. Molony founded e-vos in 2011, bringing his background in finance, law and website design to his role in the company. As President, he oversees the direction and strategy for e-vos operations, including website design, practice management solutions, marketing, sales, consulting and support.

The greatest advantage cloud computing provides a business is scalability. Scalability can only be achieved if there is a clear plan for scaling services up and down. Developing a scalable model starts with mapping each individual position based on the organizational chart. A list of tools or programs required for each position should be created to determine what cloud services are needed for each individual. This map gives the business a clear path for adding and removing individual employees as well as a per head cost of employees based on position.

The next step is to determine what type of collaborative tools the business should deploy. In order to determine collaborative tools, the organizational chart should be used to map out who needs to share data. Once the data sharing map is completed, a business can determine which cloud services will be used for each point of collaboration. After the collaboration tools are assigned, the organization can determine the level of access each user type, based on the organizational chart, needs within the collaborative service. This data sharing map will provide an organization with a clear understanding of how it plans to grow and work securely on a team level.

The maps created for the individual and the team will give an organization a clear path for growth. It will also provide the IT department with a clear understanding for on-boarding and team expansion. The final step is to treat these maps as a constant work in progress that should evolve as your business and needs change. Having the ability to scale the services quickly means minor changes will not require a system rebuild so the IT budget can focus on projects that improve productivity rather than system maintenance.

Shannon Snowden

Shannon Snowden is the Senior Technical Marketing Architect at Zerto, a company that provides enterprise-class business continuity and disaster recovery (BCDR) solutions for virtualized infrastructure and cloud.

Many organizations put themselves in a difficult position when using a cloud infrastructure by…

Not having a good disaster recovery (DR) solution for the applications running in the cloud. What happens if the cloud service is unavailable?

When facing the scenario of the cloud being unavailable, many organizations don’t have a good answer and find themselves at the mercy of a completely external organization for their sustainability.

Additionally, depending on how they initiated the cloud service, such as a shadow IT operation, the organization may not have the basic information available that is necessary to contact the cloud provider, escalate tickets, or conduct failover test operations. In the shadow IT case, the person who initiated the service may not even be employed at the business anymore. Further, the cloud service provider may not even offer that level of interaction with customers and might not have the capacity to recover the data to a specific point in time.

There are ways to get the benefits of the cloud while staying in control of the data.

Take the time to understand how the application works and if it is possible to fail it over to another geographic location or how long it will take to recover the data if the application is unavailable.

Establish strict Service Level Agreements (SLAs) with the provider.

Determine if the applications can be failed over to the organization's own datacenter or to another cloud provider temporarily in case of a cloud outage.

Test failing over and recovery regularly to minimize the outage window and to verify SLAs can be met. Nothing proves the validity of an SLA like conducting regular tests.
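The regular-testing step above can be automated. Here is a hedged sketch where the actual failover and health-check mechanics are left to caller-supplied callables, since those differ per provider (a DNS switch and an HTTP probe are typical examples):

```python
import time

def run_recovery_drill(failover, health_check, rto_sla_seconds,
                       poll_interval=1.0):
    """Time a failover drill against the recovery time objective (RTO)
    agreed in the SLA. `failover` and `health_check` are callables
    supplied by the caller; this harness only measures them."""
    start = time.monotonic()
    failover()                          # switch traffic to the standby site
    while not health_check():           # poll until the app answers again
        time.sleep(poll_interval)
    measured = time.monotonic() - start
    return measured, measured <= rto_sla_seconds
```

Run on a schedule, the measured RTO over time is hard evidence of whether the provider's SLA holds up, rather than a number taken on faith.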

By diving deeper into a more complete solution, the business can have all of the benefits of running in the cloud, yet maintain a recovery solution that they control themselves.

Michael Starostin

Michael Starostin, Chief Technology Officer and a founder of PlexHosted, has held engineering and strategy positions across the internet infrastructure solutions industry since 2008. PlexHosted, founded in 2010, is a cloud based hosting company that specializes in managed SharePoint site deployments and associated infrastructure applications. Michael is the driving force behind the company's cloud based architecture strategy that serves as the building blocks for hosting customer applications. In addition, he oversees the company's engineering and sales support teams, the cornerstone of PlexHosted's superior customer satisfaction. Prior to founding PlexHosted, Mr. Starostin was an engineering manager at PortaOne, a leading provider of Voice over IP solutions. At PortaOne Michael was a mentor and trainer of junior team members as well as a quality assurance engineer.

One of the biggest mistakes cloud users make is…

Just not having a fundamental understanding of what the cloud is and how it should be maintained in order to provide optimal performance for their business.

Not all clouds are created equal, so you must understand the architecture and security of the cloud you are running: public, private, or hybrid. Since a cloud runs on a shared platform using a set of shared resources, you will need to understand how these resources are shared and if this sharing arrangement works for you. All cloud deployments should be automated so that deployment, maintenance, and patching minimize human effort. Think of cloud resources as you would software that can be configured, deployed, and tested and then re-adjusted at any time for any use case scenario. Make sure you always save each use case scenario so that it can be re-used at another time.

Always monitor your cloud deployment performance so that you can predict any extra deployment load, which can be easily handled by expanding your cloud to prevent failures. Similarly, if the performance of the cloud exceeds the requirements of the application, valuable resources can be saved. There are numerous products available that monitor and report on cloud performance in real time, e.g., Nagios, Nimsoft.
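At its core, the monitor-and-report loop such tools perform boils down to threshold evaluation. A simplified sketch of the OK/WARNING/CRITICAL logic a Nagios-style check applies (the metric name and thresholds are illustrative):

```python
def evaluate(metric, value, warn, crit):
    """Classic monitoring-check logic: classify a sampled metric as
    OK / WARNING / CRITICAL so a rising trend is flagged (scale up)
    before it becomes an outage, and idle capacity (scale down)
    stops burning budget."""
    if value >= crit:
        return f"CRITICAL - {metric}={value}"
    if value >= warn:
        return f"WARNING - {metric}={value}"
    return f"OK - {metric}={value}"

print(evaluate("cpu_load", 0.83, warn=0.75, crit=0.90))
# prints "WARNING - cpu_load=0.83"
```

The WARNING band is the useful one: it is the window in which you can still expand the deployment calmly instead of firefighting a CRITICAL.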

Cloud does not mean reliable. Always realize that your cloud is nothing more than software that is built upon a physical hardware platform that is managed by a group of people. No matter what type of quality control standards are in place, at some point you will experience component failure and end-of-life issues, software bugs (application or OS), and human mistakes. Make sure your cloud architecture takes these issues into account, and make sure you have a complete quality of service agreement in place with your cloud provider.

Aaron Deutsch

Not spending enough on it. When the cost of a software engineer is $6k+ a month and cloud infrastructure costs a small fraction of that, it generally makes sense to solve performance issues with cheap cloud resources, not expensive engineers.

Chris Ciborowski

Chris Ciborowski is Co-Founder and Managing Partner of Nebulaworks, where he draws on over 20 years' experience in the world of information technology. He has designed and implemented technology solutions for large telecommunications providers, hyper-scale web properties, and large enterprise environments. His past work includes developing high performance solutions that could handle millions of users demanding information at the speed of now. Today, he is leading Nebulaworks, which is developing architectures that allow enterprises to bridge the gap between private and public clouds, helping them cut their lead times between feature request and delivery.

When considering or deploying cloud infrastructure, the single biggest mistake we see made is…

Attempting to apply the demands of yesterday's applications to the concept of cloud computing.

In doing so, many organizations try, unsuccessfully, to deploy new private and public clouds based on standards set forth a decade or more ago. The advantages of the cloud are realized when native applications, developed specifically for cloud deployment and able to take advantage of the cloud's scalable nature, are implemented.

This is not to say that legacy applications cannot reside in the cloud – they certainly can – but they should be treated as services, and the requirements for these separated from cloud native applications. By clearly separating the two sets of requirements, organizations can begin to define their cloud infrastructure. In most cases these new deployments are best suited to twelve-factor applications, supported by technologies such as container application delivery and platform as a service, and underlying infrastructure which is horizontally scaling and managed via APIs.

Penny Collen

Penny Collen is the Financial Solutions Architect for Cloud Cruiser, Inc. Her work in operations and development provides a foundation for business process analysis across a broad spectrum of disciplines, including asset management, service pricing, software capitalization and activity-based cost models for hybrid IT.

The cloud infrastructure trap that many organizations fall into is that…

The project team begins and ends with IT, with little interaction or input from the business users that consume and pay for the IT services being delivered. Thus, success criteria get focused solely on technical results ('speeds' and 'feeds') versus specific business objectives. As a result, a thorough requirements definition is not performed, and the goals and objectives of the cloud strategy don't align with the business objectives of the intended audience.

One consequence can be a failure to identify which mission-critical assets should remain on-premise, inside the corporate data center firewall, and which applications and workloads, such as those that require on-demand elasticity or surge up and down during peak usage periods, are ideal for a public cloud.

Combining on-premise assets with a public cloud for other applications and workloads results in a hybrid IT computing infrastructure that is difficult for many IT departments to manage. If they fail to develop a single-pane dashboard that can view every asset in the enterprise in real time, regardless of where it resides, they will not be able to identify the specific IT services being delivered, let alone provide showback/chargeback services. In many cases this results in billing disputes between departments and projects.

In conclusion, without very specific, measurable criteria, there is no way to determine whether the objectives of your cloud infrastructure strategy have been achieved. Each workload, application and IT asset (VMs, compute, storage, etc.) must be analyzed to determine where it should reside: on-premise data center, public cloud or private cloud. Otherwise, you are left wondering if the cloud model you have deployed is the most efficient choice for your IT department as well as the overall business.

Dr. Marco A.V. Bitetto

Dr. Marco A.V. Bitetto is a Scientist in Residence at the Institute of Cybernetics Research, Inc. and a Scientific Advisor to NERVOTRONIC Corporation. He has more than 25 years' experience in both analog and digital hardware design and more than 41 years' experience in software design. He holds a Bachelor of Science degree from SUNY in Computer Systems Engineering and a PhD in Robotics.

One of the biggest problems of the cloud that leads to many mistakes is that…

It is far from absolutely secure from intrusion by malicious hacking attacks.

Can it be made substantially more secure? Oh yes, it can. The problem is that the basic gateways to the cloud are the very hubs that interconnect all the computers in the cloud. If many of these cloud service companies were to demand better hardware-based security for their cloud computing services, the manufacturers of network hubs would invest in developing smarter hubs that can detect and prevent intruders from gaining access to cloud-based computing networks and the data those networks hold.

Aaron Ross

Aaron Ross is an Internet Security Expert & Owner of the cloud site RossBackup.com. You can often see Ross talking about Internet security and cloud computing on Fox News, CW & ABC, among other places.

The biggest mistake that people make in dealing with cloud infrastructure is…

Worrying about cloud security instead of developing a strong and unique password.

Not a day goes by in which I don't have a business client, private customer, or even a friend ask me how secure our cloud at RossBackup is. I don't bother explaining how AES-256 encryption works, and I don't detail the complex algorithms that are implemented when storing data. The fact remains that whenever private information is compromised, it's much easier for people to shift the blame onto cloud providers like us.

In reality, it's the end user who is almost always at fault. Our tech team got an email a few weeks ago from a client who wrote, "My password is 12345678, and I have a question about my account". Who chooses such an insane password, and, even worse, emails it around? While the tech team was discussing this, I bet them that this fellow probably uses that same password for everything.

Ivo Vachkov

Ivo Vachkov is the Founder of Xi Group Ltd, a small company devoted to high quality Operations / DevOps services with Cloud emphasis. Xi Group Ltd builds and runs applications in the Cloud.

I currently operate several large-scale cloud (AWS) deployments, one of which is 700+ EC2 nodes. In the past I’ve operated similar sized deployments in AWS and smaller ones in Heroku. With 5+ years of daily cloud usage, I believe there are two mistakes that many people make with cloud infrastructure:

1. People expect static infrastructure properties and reliability.

A common expectation is that migrating from on-premise equipment to a cloud setup will be similar to switching data centers. Many operations engineers expect to ‘set it and forget it’. Others expect environment properties that exist in classical, on-premise infrastructure to exist in the cloud as well: network control, multicast, controlled low-latency point-to-point connectivity, and so on.

It requires a shift in the general mentality of how computers and networks work in the cloud. Instead of a common bus used by all computing nodes, the network becomes a ‘series of tubes’ created on demand to connect computing pods. Sometimes they fail, and almost always you’re limited to IP/TCP/UDP only. Virtual instances that are seemingly close may be running hundreds of miles apart and experience different environmental factors. Underlying physical hardware failures affect your systems in unpredictable ways.

2. People fail to effectively use the elastic nature of the cloud service.

Building distributed systems is hard. Building elastic distributed systems is even harder. The cloud provides quick provisioning mechanisms, and many people stop there. To fully benefit from the properties of cloud-enabled infrastructure, you would ideally allocate only the minimum resources needed to provide the required processing power *at that point in time*. What came before and what comes after should not matter, because the user / transactional load is not static and we no longer need to base calculations on the projected maximum. *Start small, design for elasticity, scale out your infrastructure*. This should be the new capacity-planning mantra.
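The “size for right now, not for projected peak” idea can be sketched as a simple fleet-sizing calculation. The numbers (requests-per-second capacity per instance, headroom factor, minimum fleet size) are hypothetical assumptions for illustration, not a recommendation for any particular workload.

```python
import math

def desired_instances(current_rps: float, rps_per_instance: float,
                      headroom: float = 1.2, minimum: int = 2) -> int:
    """Size the fleet for the load *right now*, with a small headroom
    buffer, rather than for the projected maximum."""
    needed = math.ceil(current_rps * headroom / rps_per_instance)
    return max(minimum, needed)

# Quiet period: scale in close to the floor.
print(desired_instances(current_rps=150, rps_per_instance=100))    # → 2
# Traffic spike: scale out, instead of having paid for peak all along.
print(desired_instances(current_rps=4_000, rps_per_instance=100))  # → 48
```

In practice a calculation like this would feed an auto-scaling policy that re-evaluates on a short interval; the point is that capacity tracks actual load instead of a static projection.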

Richard Wiles

Richard Wiles is the Founder of LSA Systems, an IT service company, supplying computerized Accounting and Payroll solutions, Networking services, Disaster Recovery and Software Development to small and medium size businesses throughout the UK.

The biggest mistake people make with cloud infrastructure is…

Failing to address data breaches.

A virtual machine could use side-channel timing information to extract private cryptographic keys in use by other VMs on the same server. A malicious hacker wouldn’t necessarily need to go to such lengths to pull off that sort of feat, though. If a multitenant cloud service database isn’t designed properly, a single flaw in one client’s application could allow an attacker to get at not just that client’s data, but every other client’s data as well.

The challenge in addressing these threats of data loss and data leakage is that the measures you put in place to mitigate one can exacerbate the other. You could encrypt your data to reduce the impact of a breach, but if you lose your encryption key, you’ll lose your data. However, if you opt to keep offline backups of your data to reduce data loss, you increase your exposure to data breaches.

Amir Naftali

Amir Naftali is the Co-Founder and CTO of FortyCloud, a security technology platform for cloud infrastructure. Before ­founding FortyCloud, Amir worked for Cisco Systems for 13 years. In his last role there he led the development of Cisco’s Network Access and Security (ACS) product line.

There is one very common (and simple) mistake when it comes to cloud infrastructure, and it is…

Assuming that no one is paying attention to the infrastructure they use, and therefore opening ports to everyone (0.0.0.0/0), which leaves their entire infrastructure open to hackers. We see it quite a lot with companies that are new to the cloud.
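A check for this mistake can be automated. The sketch below scans a rule list shaped like a typical cloud security-group listing for administrative ports exposed to the whole internet; the field names, the sample rules, and the set of “admin” ports are all illustrative assumptions.

```python
import ipaddress

# Hypothetical security-group rules (port + source CIDR).
rules = [
    {"port": 443,  "source": "0.0.0.0/0"},    # public HTTPS: usually intended
    {"port": 22,   "source": "0.0.0.0/0"},    # SSH open to the world: a red flag
    {"port": 5432, "source": "10.0.0.0/8"},   # database restricted to private range
]

# Ports that should rarely, if ever, be world-reachable.
ADMIN_PORTS = {22, 3389, 3306, 5432}

def world_open_admin_rules(rules):
    """Flag rules exposing administrative ports to every IPv4 address."""
    world = ipaddress.ip_network("0.0.0.0/0")
    return [r for r in rules
            if r["port"] in ADMIN_PORTS
            and ipaddress.ip_network(r["source"]) == world]

for r in world_open_admin_rules(rules):
    print(f"port {r['port']} is open to the entire internet")
```

Running a check like this against every new environment, before it goes live, catches the exact pattern Naftali describes in companies new to the cloud.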

Eugene Smith

Eugene Smith is the Creator of Wire You Networks and has over twenty years of experience in telecommunications and technology. Wire You Networks is a cloud, disaster recovery, and connectivity solutions provider. Wire You works with SMBs and enterprises to enable pathways for growth, productivity, and innovation through cloud and connectivity based solutions. These solutions enable companies to spend less time maintaining their computing and communications environments and more time creating new business.

One of the biggest cloud infrastructure mistakes companies make is…

Conducting a poor audit prior to migrating their existing on-premise infrastructure.

This leads to server, networking, and application resources being over- or underestimated. When overestimated, companies end up paying for unused resources. When underestimated, they don’t have the services and resources needed to operate effectively. To limit the potential cost and headaches of moving on-premises infrastructure to the cloud, companies should do thorough server, network, and application audits prior to migrating.

Brian D. Kelley

Brian D. Kelley is in his 24th year as CIO for Portage County, Ohio. Under his leadership, Portage County has received international, national, state, and regional recognition for highly successful enterprise-wide IT projects. In 2012 he was recognized by Government Technology as one of Top 25 Doers, Dreamers & Drivers in Public Sector Innovation. Brian is currently serving as 1st Vice President & Conference Director on the Executive Board of Directors of GMIS International. With over 400 member organizations across the U.S. and international sister organizations in six countries, GMIS International is the largest professional organization for public sector IT leaders in the U.S.

One of the biggest mistakes people make with cloud infrastructure is…

Failing to have a business contingency plan ready in case you find it necessary to leave the cloud altogether or move to a different cloud, for example if you’re surprised by your cloud service provider exiting stage left and leaving you without a cloud, as some have experienced.

It’s important to have a plan A as well as a plan B in place, so you can move to a new cloud with minimal disruption, or exit the cloud altogether, when storm clouds appear on the horizon.

John Patrick Hernandez

John Patrick Hernandez is a System Engineer at Azeus Convene, an IT solutions provider with 20 years’ experience in successfully delivering IT solutions. Since 2003, Azeus has been consistently appraised as a CMMI Level 5 company, the highest level in software development capability, by the Software Engineering Institute. This distinction is given only to a select few software development organizations such as NASA, Boeing and contractors for the U.S. Department of Defense.

One of the biggest mistakes people make with cloud infrastructure is…

Relying too much on the cloud service provider’s built-in monitoring system.

Though most cloud service providers bundle monitoring systems with the cloud service, they cover only the most basic statistics such as CPU, RAM, Disk I/O, etc.

However, there is usually much more that needs to be monitored, which these built-in systems do not address, such as ensuring critical services are up or checking that applications are not consuming too much memory. In these cases, companies should also consider using more specialized control and monitoring software for their cloud infrastructure to make sure all necessary checks are in place. Otherwise, total reliance on built-in monitoring systems can actually limit their ability to manage the infrastructure in the long run.
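The gap Hernandez describes can be illustrated with a minimal custom check: per-service liveness plus a per-process memory limit, two things basic CPU/RAM/Disk dashboards don’t cover. The sample data, service names, and threshold below are hypothetical.

```python
# Hypothetical snapshot of service state and per-process memory usage,
# as a real collector (e.g. an agent polling the hosts) might report it.
services = {"nginx": True, "postgres": True, "worker": False}
process_memory_mb = {"nginx": 120, "postgres": 900, "worker": 0}
MEMORY_LIMIT_MB = 800  # illustrative per-process ceiling

def run_checks(services, memory, limit_mb):
    """Return alert messages for downed services and memory hogs."""
    alerts = []
    for name, up in services.items():
        if not up:
            alerts.append(f"{name}: service is down")
        elif memory[name] > limit_mb:
            alerts.append(f"{name}: using {memory[name]} MB (limit {limit_mb} MB)")
    return alerts

for alert in run_checks(services, process_memory_mb, MEMORY_LIMIT_MB):
    print(alert)
```

A provider’s built-in dashboard would show total host memory as healthy here, while this check surfaces both the dead worker and the runaway database process.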

Az Jassal

Az Jassal is the CEO of BookAbacus, a book recommendations website containing 4 million recommendations from 50 million books, powered by Hadoop and Hypertable. Az’s previous roles include being a data engineer at the following organizations: JPMorgan (Corporate and Investment Bank Technology), Metropolitan Police (Digital Policing), Nature (science journal), Book Depository (now an Amazon company).

In my experience, the biggest mistake people make with cloud infrastructure is…

Over provisioning.

I have been part of projects in large organizations where budgets were exhausted because large, expensive virtual machines were provisioned and left underutilized for long periods (months, in fact). Enterprise people sometimes forget the key feature the cloud provides: elasticity, the ability to orchestrate the growth of your infrastructure as your needs increase.

I have seen hybrid approaches be quite effective in controlling costs. This is where core infrastructure (which handles the predicted workload) lives on long-leased infrastructure, e.g. dedicated machines leased monthly, and cloud services are then used to handle any increases. In dollar amounts, a 32GB-memory, 4-core Microsoft Azure instance (A6) works out to ~$400 a month, whereas in the dedicated server market that would run around ~$80 a month. The cloud is great, but provision when you need infrastructure, use it for the number of hours it takes to get the work done, and then de-provision! If you orchestrate that symphony, you will win.
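Jassal’s price comparison implies a simple break-even point for when on-demand stops making sense. This back-of-the-envelope sketch uses the ~$400/month and ~$80/month figures from above and assumes roughly 720 hours in a month; it ignores real-world factors like reserved-instance discounts and setup costs.

```python
# Figures from the comparison above: cloud instance ~$400/month if run
# continuously, dedicated box ~$80/month flat.
HOURS_PER_MONTH = 720
cloud_per_hour = 400 / HOURS_PER_MONTH   # effective hourly rate, ≈ $0.56
dedicated_monthly = 80

# Hours of use per month at which the on-demand instance starts costing
# more than simply leasing the dedicated machine.
break_even_hours = dedicated_monthly / cloud_per_hour
print(round(break_even_hours))  # → 144
```

Under these assumed prices, a workload that runs under ~144 hours a month (about five hours a day) is cheaper on-demand, and anything running around the clock belongs on the long-leased core, which is exactly the hybrid split described above.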

We appreciate the time this varied group of experts has spent answering this simple question. What tips would you share? To read some accounts of how ProfitBricks offers a cloud computing infrastructure solution that is painless, read some of our case studies.

William Toll

William Toll, VP, Marketing at ProfitBricks, has held Marketing and Product Management positions in the Web hosting and Internet infrastructure industry since the late 1990s. Currently William is driving the marketing strategy and communications for ProfitBricks. ProfitBricks is a global cloud infrastructure provider with a platform engineered from the ground up to provide class leading Cloud Computing – IaaS services. Most recently William was leading the marketing efforts at Yottaa, a Boston-based startup in the CDN space. Prior to that he managed the small business offerings of hosting and cloud services provider, NaviSite. At NaviSite, William was responsible for developing and marketing the company’s hosted product lines, including managed cloud services, managed hosting, managed business applications and shared and dedicated hosting. In past positions at companies such as Affinity Internet, Inc., Intermedia.NET, and NTT/VERIO, William was the driving force in launching and enhancing successful SMB focused services including: Shared Hosting services, Microsoft Exchange and hosting add-on services like online marketing and Web design. William received a BA in Marketing from New England College in Arundel, UK.