Would it be any different if it were a small hardware chain? What if it were a bank? What if it were your bank, and your money was now inaccessible because of it? The problem just became very real when you thought about that, didn’t it?

Know Your (Agile) Enemy

Organizations are struggling with the concept of more rapid delivery of services. We often hear that the greatest enemy of many products is the status quo. It becomes even more challenging when bad actors are successfully adopting practices to deliver faster and to iterate continuously. We aren’t talking Lorenzo Lamas and Jean Claude Van Damme kind of bad actors, but the kind who will lock down hospital IT infrastructure, putting lives at risk, in search of ransom.

While I’m writing this, the WannaCry ransomware has already evolved and morphed into something more resilient to the protections that we had thought could prevent it from spreading or taking hold in the first place. We don’t know who originally wrote the ransomware, but we do know that in the time we have been watching it, it has been getting stronger. As quickly as we thought we were fighting it off by reducing the attack surface, it adapted and found new ways in.

The Risks of Moving Slowly

Larger organizations often wrestle with the risks of moving quickly on things like patching and version updates across their infrastructure. There are plenty of stories about an operating system patch or some server firmware that was implemented on the heels of its release, only to take down systems or impact them negatively in one way or another. We don’t count or remember the hundreds or thousands of patches that went well, but we sure do remember the ones that went wrong. Especially when they make the news.

This is where we face a conundrum. Many believe that a conservative approach to deploying patches and updates is the safer way to go. Those folks view the risk of deploying an errant patch as the greater worry compared to the risk of having a vulnerability exposed to a bad actor. We sometimes hear that because it’s in the confines of a private data center with a firewall at the ingress, the attack surface is reduced. That’s like saying there are armor-piercing bullets, but we just hope that nobody who comes after us has them.

Hope is not a strategy. That’s more than just a witty statement. That’s a fact.

Becoming an Agile IT Operations Team

Being agile on the IT operations side of things isn’t about daily standups. It’s about real agile practices, including test-driven infrastructure and embracing platforms and practices that let us confidently adopt patches and software at a faster rate. A few key factors to think about include:

Version Control for your infrastructure environment

Snapshots, backups, and overall Business Continuity protections

Automation and orchestration for continuous configuration management

Automation and orchestration at all layers of the stack
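Continuous configuration management from the list above can be sketched in a few lines: declare a desired state, observe the actual state, and compute the drift. A minimal illustration, with hypothetical setting names rather than any real tool’s schema:

```python
# Minimal drift-detection sketch: compare a declared desired state
# against the observed state of a host and report what must change.
# Setting names and values here are hypothetical examples.

def detect_drift(desired: dict, observed: dict) -> dict:
    """Return the settings that must change to converge on desired state."""
    return {
        key: value
        for key, value in desired.items()
        if observed.get(key) != value
    }

desired = {"os_patch_level": "2017-05", "smb_v1_enabled": False, "firewall": "on"}
observed = {"os_patch_level": "2017-03", "smb_v1_enabled": True, "firewall": "on"}

drift = detect_drift(desired, observed)
# drift holds the two out-of-compliance settings; "firewall" already matches
```

Run continuously, this is the loop that keeps a fleet converged instead of waiting for a human to notice a stale patch level.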

There will be an onslaught of vendors using WannaCry as part of their pitch to drive up the value of their protection products. They are not wrong to leverage this opportunity. The reality is that we have been riding the wave of using hope as a strategy. When it works, we feel comfortable. When it fails, there is nobody to blame except those of us who have accepted moving slowly as an acceptable risk.

Having a snapshot, restore point, or some quickly accessible clone of a system will be a saving grace in the event of infection or data loss. Practices need to be wrapped around it, though. The tool is not the solution, but it enables us to create the methods that turn the tool into a full solution.

Automation and orchestration are needed at every layer. Not just for putting infrastructure and applications out to begin with, but for continuous configuration management. There is no way that we can fight off vulnerabilities using practices that require human intervention throughout the remediation process. The more we automate, the more we can build recovery procedures and practices to enable clean rollbacks in the event of a bad patch as well as a bad actor.
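The snapshot-then-rollback practice described above can be automated so that a bad patch reverts itself. A minimal sketch, with a hypothetical ConfigStore standing in for whatever snapshot mechanism your platform actually provides:

```python
# Sketch of automated snapshot-before-patch with rollback on failure.
# ConfigStore is a hypothetical stand-in for a real snapshot/restore API.
import copy

class ConfigStore:
    def __init__(self, state):
        self.state = state
        self._snapshots = []

    def snapshot(self):
        self._snapshots.append(copy.deepcopy(self.state))

    def rollback(self):
        self.state = self._snapshots.pop()

def apply_patch(store, patch, validate):
    """Snapshot, apply, validate; roll back automatically on failure."""
    store.snapshot()
    store.state.update(patch)
    if not validate(store.state):
        store.rollback()
        return False
    return True

store = ConfigStore({"version": "1.0", "healthy": True})
ok = apply_patch(store, {"version": "1.1", "healthy": False},
                 validate=lambda s: s["healthy"])
# the patch failed validation, so the state was rolled back to version 1.0
```

The point is that the human never has to be in the remediation loop: the same automation that applies the patch carries its own escape hatch.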

Adapting IT Infrastructure to be Disposable

It’s my firm belief that we should have disposable infrastructure wherever possible. That also means we have to enable operations practices that let us lose portions of the infrastructure, whether by accident, incident, or on purpose, with minimal effect on the continuation of production services. These disposable IT assets (software and hardware) enable us to create a full-stack, automated infrastructure, and to protect and provide resilience with a high level of safety.
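As a sketch of what “disposable” means in practice, here is a hypothetical node pool that treats a lost node as a routine event: discard it and provision a fresh replacement from a known-good image. The naming scheme and provisioning step are illustrative only:

```python
# "Cattle, not pets" sketch: losing a node triggers a replacement
# rather than a firefight. Node naming here is a hypothetical example.
import itertools

class NodePool:
    def __init__(self, size):
        self._ids = itertools.count(1)
        self.nodes = {self._new_node() for _ in range(size)}

    def _new_node(self):
        # stands in for provisioning from a known-good image
        return f"node-{next(self._ids)}"

    def lose(self, node):
        """A node dies by accident, incident, or on purpose."""
        self.nodes.discard(node)
        replacement = self._new_node()
        self.nodes.add(replacement)
        return replacement

pool = NodePool(3)
victim = next(iter(pool.nodes))
pool.lose(victim)
# the pool is back to full strength and the lost node is simply gone
```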

We all hope that we won’t be on the wrong side of a vulnerability. Having experienced it myself, I changed the way I approach every aspect of IT infrastructure. From the hardware to the application layers, we have the ability to protect against such vulnerabilities. Small changes can have big effects. Now is always the time to adapt and prepare. Don’t be caught out when we know what the risks are.

Thinking Like the Bad Actors and Prioritizing Security

Assume you’ve been breached. Period.

The reason I start there is that I’ve learned from practice that we have to work on the assumption that our systems have been violated in one way or another. This is important because we have to start with a mindset to both discover the violation and prevent it in the future.

Who is it that has breached our systems? Well, we have a fun name for them…

Bad Actors

Hey, I like Kirk too, but you have to admit…he’s not really a good actor

No, not the kind that you see in SyFy remakes of popular movies, but the ones that have been infiltrating your infrastructure for nefarious purposes. Bad actors are those who have the single-minded purpose of breaching your security, and doing something either inside the environment, or taking something back out.

All too often we hear about breaches long after they have happened. I’m a big fan of Troy Hunt’s website Have I Been Pwned? It’s a helpful resource, and a reminder of just how important it is that we understand that bad actors exist and are pervasive in the world of internet-connected resources.

Bad actors love the internet of things. Just imagine how much simpler it is to access resources when they are interconnected and internet accessible. Physical security is the first place to look, and the concern runs all the way up the stack to the application layers. Using your mobile to access your bank site when you’re in Starbucks? Not a good idea. Does it seem paranoid to say that? That’s what every bad actor hopes you say.

Assume security has failed. Assume you’ve been breached. The next step comes with how you plan and prepare to discover and recover.

White Hat (aka Ethical) Hacking

Just under a year ago, I attended the BSides Delaware event. This was a very interesting opportunity to go outside of the normal conference circuit that I am used to attending. If DefCon is the VMworld of security, then BSides is its VMUG equivalent. These are great events, and they touch on every aspect of security, from application, to network, to physical, and even personal security, including self-defense tactics.

One thing that you learn about hacking is that it takes a hacker to find and prevent a hacker. White hat hacking has been a practice for many years, and it is an important part of the security and networking ecosystem. If you aren’t already engaging an organization to help with penetration testing or some form of security analysis, you absolutely should.

The same skills that drive the bad actors have been embraced by white hat hackers to provide a positive result from that experience. We use real users to provide UX guidance, so it only makes sense that we should use the same methodology for our security strategy.

Make Security Part of Infrastructure Lifecycle

Whether it’s your application lifecycle, or your infrastructure deployment, security and automated testing should very definitely be a part of the workflow. I was lucky to have a great conversation on my Green Circle Live! podcast recently with Edward Haletky.

We chatted about how there is a fundamental flaw in both the home and the data center. The whole podcast is a must-listen if you ask me, and I encourage folks to rethink security as something that should be top of mind, not an afterthought.

There are lots of bad actors out there. I prefer to keep them in the movies and out of my data, how about you?

SDN challenges – “You can keep your networking gear. Period.”

You may recall a statement regarding some big U.S. legislation that led to the forever-quoted phrase: “You can keep your insurance. Period.” It caused quite a ruckus in the insurance industry, for both providers and customers, because it was found to be untrue.

Now imagine a similar situation about to come up in the enterprise networking environment. With Software Defined Networking (SDN) being the hottest buzzword and the most aggressively marketed paradigm shift in recent months, we are about to hit a crossroads where adoption may leave many customers taking on unexpected costs, despite being pitched a similar line: SDN will simply run as an overlay, and you can keep your existing networking hardware.

Let’s take a look at three particular challenges that companies face as they evaluate SDN and figure out the cost/benefit and how it relates to existing infrastructure.

Challenge 1 – No reduction of ports

This is one of the most common misconceptions around SDN. The idea that ports will be reduced is unfounded because the number of uplinks that exist into host systems, virtualized or not, will continue to be the same. If anything, we will have more uplinks as scale-out commodity nodes are utilized in the data center to spread the workloads around more.

The reduction in ports will happen as a result of the migration to higher speed ports like 40GbE and up, but the consolidation level will be limited for physical endpoints. SDN is a great enabler for creating and leveraging overlay networks and making physical configuration less of a factor in the logical design of the application workloads.

In order to get the savings on per-port utilization, the move to 40GbE and higher ports will trigger the rollover of existing hardware and expansion to new physical networking platforms. In other words, you need to change your existing hardware. Hmmm…that wasn’t in the original plan.

Another interesting shift in networking is the new physical topology, which includes ToR (Top of Rack) switches connected to a centralized core infrastructure. The leaf-spine design is becoming more widely used and continues to prove itself as an ideal way to separate workloads and provide effective physical isolation, which has other benefits as well.
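The port math behind these points is easy to sketch. In a leaf-spine fabric every leaf uplinks to every spine, so link counts and per-leaf oversubscription fall straight out of the topology. The numbers below are illustrative, not from any particular vendor design:

```python
# Leaf-spine arithmetic sketch: every leaf connects to every spine.

def fabric_links(leaves: int, spines: int) -> int:
    """Total leaf-to-spine links in a full-mesh leaf-spine fabric."""
    return leaves * spines

def oversubscription(host_ports: int, host_speed_gbe: int,
                     uplinks: int, uplink_speed_gbe: int) -> float:
    """Ratio of southbound host bandwidth to northbound fabric bandwidth."""
    return (host_ports * host_speed_gbe) / (uplinks * uplink_speed_gbe)

# Illustrative fabric: 8 leaves, 4 spines; each leaf has 48x10GbE host
# ports and 4x40GbE uplinks.
links = fabric_links(8, 4)
ratio = oversubscription(48, 10, 4, 40)
# 32 fabric links; each leaf runs at a 3:1 oversubscription ratio
```

Notice that the host-facing port count never shrinks: the SDN overlay changes nothing about how many cables land on the leaves.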

Challenge 2 – Policy-based delivery requires policies

This is the business process part that can add a real challenge for some organizations. Putting a policy-based framework into place is only truly going to add value when you have business policies that can leverage it. Many CRM and Service Desk implementations fail because of the lack of adoption which stems from a lack of understanding of existing processes.

Many organizations are having difficulty adapting to cloud implementations because it is a very process-oriented technology. As more and more companies make the move to embrace cloud practices, the move towards SDN will be more natural. There is much more awareness now about where the efforts are needed to make SDN deployments successful.
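To illustrate why a policy framework is only as useful as the business policies behind it, here is a minimal sketch of matching a workload against hypothetical policies to choose a network profile. The policy format and profile names are invented for illustration; note that an unmatched workload should fail closed:

```python
# Policy-resolution sketch: without real business policies to feed it,
# a policy engine has nothing to do. Policies below are hypothetical.
POLICIES = [
    {"match": {"tier": "web"}, "profile": "dmz"},
    {"match": {"tier": "db"},  "profile": "restricted"},
]

def resolve_profile(workload: dict, policies=POLICIES, default="quarantine"):
    """Return the network profile of the first matching policy."""
    for policy in policies:
        if all(workload.get(k) == v for k, v in policy["match"].items()):
            return policy["profile"]
    return default  # no policy matched: fail closed, not open

profile = resolve_profile({"name": "app01", "tier": "db"})
# a workload nobody has classified lands in "quarantine" by default
```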

Challenge 3 – Your physical gear doesn’t support your SDN platform

Other than the previous limitations where we mentioned the port speed issues for higher consolidation levels, there is also the issue of firmware and software capability on existing ASIC hardware. As an example, you can use Cisco ACI as your SDN product of choice, but if you are running all Cisco Catalyst equipment I have some bad news for you. (*UPDATE 11/21*: Thanks to @jonisick for the tip that there are smaller physical investments to allow the use of ACI. It is not a full rip and replace, but more some additional hardware to augment the current deployment in most cases).

There will be a barrier to entry for many SDN products because there are requirements for baseline levels of hardware and firmware to support the enhancements that SDN brings. This will be less of an issue in a few years I am sure, but for right now the move to embrace an SDN architecture may be held back by the need to upgrade physical hardware to prepare.

Have No Fear! SDN will work…No seriously, it will

While these scenarios may be current, realistic barriers to the adoption of a SDN platform, we are also dealing with hardware and software lifecycles that are becoming shorter and more adaptive.

The hardware platforms you are running today are inevitably going to be upgraded, extended, or replaced within a reasonable time frame. During that time we will also see a shift in the way that we manage and deploy networking inside organizations. This fundamental shift in process will align with the wider acceptance of SDN platforms, which are sometimes regarded as accessible only to agile organizations.

What SDN brings to us is really the commoditization of the underlying physical hardware platforms. Not necessarily the reduction of quality or cost of the hardware, but the commoditization of its role in the networking architecture.

What is important for us all as technologists is that we are prepared for the arrival of these new products and methodologies. We have a responsibility to stay ahead of the curve as much as possible to get to the real benefit of SDN which is to enable agility for your business.

DevSecOps – Why Security is Coming to DevOps

With so many organizations making the move to embrace DevOps practices, we are quickly highlighting what many see as a missing piece to the puzzle: Security. As NV (Network Virtualization) and NFV (Network Function Virtualization) are rapidly growing in adoption, the ability to create programmable, repeatable security management into the development and deployment workflow has become a reality.

Dynamic, abstracted networking features such as those provided by OpenDaylight participants, Cisco ACI, VMware NSX, Nuage Networks and many others, are opening the doors to a new way to enable security to be a part of the application lifecycle management (ALM) pipeline. When we see the phrase Infrastructure-as-Code, this is precisely what is needed. Infrastructure configuration needs to extend beyond the application environment and out to the edge.
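As a sketch of what security inside the deployment workflow can look like, here is a hypothetical pipeline gate that lints proposed firewall rules before anything ships. The rule format and the checks are simplified illustrations, not any vendor’s schema:

```python
# Sketch of a security gate in an infrastructure-as-code pipeline:
# proposed firewall rules are linted before deployment. The rule
# format is a hypothetical simplification.

def lint_rules(rules):
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    for rule in rules:
        if rule.get("source") == "0.0.0.0/0" and rule.get("port") == 22:
            violations.append(f"SSH open to the world in rule {rule['name']}")
        if rule.get("action") not in {"allow", "deny"}:
            violations.append(f"Unknown action in rule {rule['name']}")
    return violations

proposed = [
    {"name": "web-in", "source": "0.0.0.0/0", "port": 443, "action": "allow"},
    {"name": "ssh-in", "source": "0.0.0.0/0", "port": 22,  "action": "allow"},
]
problems = lint_rules(proposed)
# the world-open SSH rule is flagged, so the pipeline stage fails
```

Because the configuration is code, the security check runs on every change instead of at an annual audit.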

NFV: The Gateway to DevSecOps

Network virtualization isn’t the end-goal for DevSecOps. It’s actually only a minor portion. Enabling traffic for L2/L3 networks has been a major step in more agile practices across the data center. Both on-premises and cloud environments are already benefitting from the new ways of managing networks programmatically. Again, we have to remember that data flow is really only a small part of what NV has enabled for us.

Moving further up the stack to layers 4-7 is where NFV comes into play. From a purely operational perspective, NFV has given us the same programmatic, predictable deployment and management that we crave. Using common configuration management tools like Chef, Puppet, and Ansible for our regular data center management is now extensible to the network. This may seem like the raison d’être for NFV, but there is much more to the story.
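The same idempotent converge-to-desired-state pattern those tools rely on applies cleanly to a network function. A sketch, using a hypothetical load balancer pool:

```python
# Declarative-convergence sketch for a network function: compute the
# actions needed to bring a (hypothetical) load balancer pool to its
# declared membership. Running it again after convergence is a no-op.

def converge_pool(current: set, desired: set) -> dict:
    """Return the add/remove actions needed to reach the desired state."""
    return {"add": desired - current, "remove": current - desired}

current_members = {"10.0.0.5", "10.0.0.6"}
desired_members = {"10.0.0.6", "10.0.0.7"}

actions = converge_pool(current_members, desired_members)
# first run: add 10.0.0.7 and remove 10.0.0.5; once converged, both empty
```

Idempotency is the property that makes this safe to run on a schedule, which is exactly what continuous configuration management asks for.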

NFV can be a confusing subject because it gets clouded as being L2/L3 management when it is really about managing application gateways, L4-7 firewalls, load balancers, and other such features. NFV enables the virtualization of these features and moves them closer to the workload.

NV and NFV are Security Tools, not Networking Tools

When we take a look at NV and NFV, we have to broaden our view to the whole picture. All of the wins that are gained by creating the programmatic deployment and management seem to be mostly targeting the DevOps style of delivery. DevOps is often talked about as a way to speed application development, but when we move to the network and what we often call the DevSecOps methodology, speed and agility are only a part of the picture.

The reality is that NV and NFV are really security tools, not networking tools. Yes, that sounds odd, but let’s think about what it is that NV and NFV are really creating for us.

When we enable the programmatic management of network layers, we also enable some other powerful features which include auditing for both setup and operation of our L2-L7 configurations. Knowing when and how our entire L2-L7 environments have changed is bringing great smiles to the faces of InfoSec folks all over, and with good reason.
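That auditing win can be sketched in a few lines: once every change flows through code, recording who changed what, and when, is nearly free. A hypothetical illustration:

```python
# Audit-trail sketch: every programmatic change to a network config is
# recorded with who/when/old/new, so "when did this change?" becomes a
# query instead of a forensic exercise. Names here are hypothetical.
from datetime import datetime, timezone

audit_log = []

def apply_change(config: dict, key: str, value, actor: str):
    """Record the change before applying it."""
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": actor,
        "what": key,
        "old": config.get(key),
        "new": value,
    })
    config[key] = value

fw = {"default_action": "deny"}
apply_change(fw, "default_action", "allow", actor="pipeline-bot")
# the log entry captures the transition from "deny" to "allow"
```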

East-West is the new Information Superhighway

Well, East-West traffic in the data center or cloud may not be a superhighway, but it will become the most traffic-heavy pathway over the next few years and beyond. As scale-out applications become the more common design pattern, more and more data will be traveling between virtualized components behind the firewalls on nested, virtual networks.

There are stats and quotes on the amount of actual traffic that will pass in this way, but needless to say it is significant regardless of what prediction you choose to read. This is also an ability that has been accelerated by the use of NV/NFV.

Whatever the reasons we attach to how DevSecOps will become a part of the new data center and cloud practice, it is absolutely coming. The only question is how quickly we can make it part of the standard operating procedures.

Just when you thought you were behind the 8-ball with DevOps, we added a new one for you. Don’t worry, this is all good stuff and it will make sense very soon. Believe me, because I’ll be helping you out along the journey. 🙂

How about some exCLUSive Cisco news?

With technology event season rapidly approaching, it is time to get your planning sorted for what exciting conventions, events, and community gatherings to join. As a Cisco Champion I have a particular wish this year that I would love to fulfill, which is to attend Cisco Live in San Francisco, happening May 18-22.

Unfortunately, it’s not in the cards for me this year, but that doesn’t mean that I can’t excite you all about what’s happening in the Cisco world around the event! Let’s put the US in CLUS 🙂

And I kind of teased with the headline, but in case you didn’t catch the pun, it is not exclusive, but exCLUSive 😉

Do UC what I see?

It’s no secret that Unified Communications is a feature platform for Cisco, so it should also be no surprise about the new goodies coming out of the Cisco camp as we head into the second quarter of the year.

Luckily, the video conference experience has come a long way.

Collaboration is the key to success in so many ways for modern businesses and for people in their personal lives. If you aren’t already connected to your colleagues through collaborative tools and technology, there are inevitably things coming that will enable better collaboration, a stronger remote workforce, and ultimately more closeness for people.

All work and no play? Not at a Cisco event!

I’ve done my time on some stages in the past and even played a few corporate gigs, but let me tell you that when Cisco puts on a party, they tend to go a bit bigger 🙂

Perhaps you’ve heard of a fellow by the name of Lenny Kravitz?

Yes…that Lenny Kravitz

Or perhaps his friends who will also be there, a little group called Imagine Dragons?

Convinced yet?

More than just a show

One of the tenets of the Cisco Champion program, and of Cisco as an organization, is to support collaboration and sharing of information. As a consumer of the services, a blogger who has intimate access to products and engineers, and as a lover of technology in general, I can’t say enough how positive the big event experience can be.

I’ve attended a number of events running from one to five days, and the content that people come away with, along with the great social collaboration at Cisco Live, makes for an unparalleled experience for a customer, partner, or just a die-hard technologist like myself.

March 14th Early Bird deadline!!

If you get a chance to go, make sure you tell them that @DiscoPosse sent you 🙂

Why it is always, and never, the year of VDI, but network virtualization is here to stay

You’ve all heard it: The Year of VDI. It has consistently been the mantra at the launch of each calendar year since Citrix and VMware gained significant adoption. But why is it both true and false at the same time?

Desktop versus Server Virtualization

Server virtualization has taken hold in an incredible fashion. Hypervisors have become a part of everyday datacenter deployments. Whatever the flavor, it is no longer necessary to justify the purchase of products like VMware vSphere or Microsoft Hyper-V. And for those who have already embraced open source alternatives, KVM, Xen and the now burgeoning OpenStack ecosystem are joining the ranks as standard step-1 products when building and scaling a datacenter.

Server virtualization just made sense. We have 24 hour workload potential because of a 24/7/365 usage scenario plus backups, failover technologies and BCP needs.

Desktop Virtualization is a good thing

The most commonly quoted reason for desktop virtualization is the cost of managing the environment. In other words, the push to move towards VDI is about policy based management of the environment. Removing or limiting the variables in desktop and application management makes the overall management and usage experience better. No arguments there.

So why hasn’t it hit? One powerful reason is the commoditization of desktop hardware. It used to cost thousands of dollars in the 70s to purchase basic desktop hardware. Throughout the 80s, 90s and 2000s, the price of desktop hardware plummeted to the point where corporate desktops are now available for $300-$500 and are amortized over two- or three-year cycles.

And now the CFO has their say

The impetus to use VDI to save money on desktop hardware went away. We now have thin desktops that are nearly the same price as full physical desktops. There is no doubt that this has slowed the uptake of VDI in a strong way. When it comes to putting together our annual expenses, the driver has to be strong to make the shift.
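A back-of-envelope sketch shows why the CFO shrugs. All prices below are illustrative, not quotes:

```python
# Amortized per-seat cost sketch: physical desktop vs thin client plus
# a VDI backend. The hardware and backend figures are hypothetical.

def annual_cost(hardware: float, years: int,
                backend_per_seat: float = 0.0) -> float:
    """Amortized yearly cost per seat."""
    return hardware / years + backend_per_seat

physical = annual_cost(400, 3)                   # physical desktop
vdi = annual_cost(300, 3, backend_per_seat=100)  # thin client + backend
# the thin client saves little up front, and the backend erases the gap
```

With numbers like these, the hardware savings alone can never carry the VDI business case; the management benefits have to do the heavy lifting.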

Next up is the classic “Microsoft Tax”. While we may reduce the cost somewhat at the hardware layer, we are still bound to the needs of the consumer of the desktop to provide Microsoft OS and software. There is a reason why we don’t even talk about Linux on the desktop anymore. If people are ready for Linux, they will just use it. There are however millions of software consumers that require Microsoft tools. That’s just a fact.

So now that we enter 2014 and all of the analysts and pundits tout the new DaaS (Desktop-as-a-Service) revolution, we have to still be realistic about the amount of impact it will have on the overall market place. I don’t doubt that it will continue to gain footing, but nowhere near the level of adoption that server virtualization was able to produce.

A Patchwork Quilt

In my opinion, we have already gone down a parallel timeline on policy based desktop management. With Microsoft SCCM, LanDesk and a number of other imaging and application packaging tools already in many organizations, there is less of a need to make the shift towards VDI. There are great use cases for it for sure, but it will be a difficult battle to siphon away the physical desktop processes that have done us well up to now.

Patch management and application delivery can do a lot towards providing the policy based management that we are being told is the prime objective of many VDI products. I’m a big proponent for VDI myself, but I am also realistic about how much of the overall market it has already and will cut into.

So, is this the fate of network virtualization?

Network Virtualization is costly, but that’s OK

So now we have an interesting shift in the market again. Network virtualization has gone from a project in the labs of Stanford to becoming a real, market ready product with many vendors putting their chips on the table.

Not only are ASIC producers like Cisco and Juniper Networks coming forward with solutions, but VMware with their purchase and integration of Nicira to produce VMware NSX has created a significant buzz in the industry. Sprinkle in the massive commitment from open source producers with OpenFlow and Open vSwitch and there is undoubtedly a real shift coming.

2015 will be the year of Network Virtualization

In 2014 we will see a significant increase in the understanding and adoption of network virtualization tools and technologies. With the upcoming GA release of Cisco ACI and more adoption of open source solutions in the public and private cloud, we will definitely see a growth in the NV adoption.

Remember, NV isn’t about reducing physical network hardware. It is about reducing the logical constraints and increasing the policy and security integration at the network layers. Server virtualization has laid the groundwork to create a perfect pairing really.

When does NV become the standard in networking deployment?

This is the real question we need to ask. As all of the analysts pore over the statistics and lay out what the landscape looks like, we as architects and systems administrators have an important task to deal with: Making NV work for us.

In my mind, network virtualization is a powerful, enabling technology. We have already come a long way in a short time in the evolution of networking. Going from vampire taps to the upcoming 100GbE hardware in a couple of decades is pretty impressive. Now we can fully realize the value of the hardware that we have sitting on the datacenter floor by extending it with virtualization tools and techniques that gave us exponential gains in productivity and efficiency at the server level.

It’s coming to us one way or another, so I say that we dive in and do something wondrous together.