“…First have a definite, clear, practical ideal–a goal, an objective. Second have the necessary means to achieve your ends–wisdom, money, material, and methods. Third, adjust all your means to that end.” –Aristotle VMworld is next week, and as expected we are already starting to see announcements from VMware and its massive partner ecosystem. This has me thinking a lot about how large organizations choose to adopt new technology, and how it is eventually implemented. IT culture is strange. It is notoriously risk-averse (especially in large organizations). Since change always comes with at least some element of risk, the tendency is to recoil from it (at least initially). However, the technology that drives and supports this industry is in a nearly constant state of flux. As a result, these two elements seem fundamentally opposed to one another. To complicate matters further, it is often difficult to distinguish a true shift in the industry from a new product that a vendor is simply pushing. Change does...

I went “full cowboy” last night and executed an in-place upgrade (to 5.5) of a substantially sized production vCenter. It was running 4.1, contained about 50 UCS hosts, and around 1000 VMs. I ran into essentially every bug/issue along the way, and wanted to document for posterity what I learned. Actually, I should say what “we” learned, as I went through the ordeal with a few friends of mine: Scott from Capgemini, and Danby from Honeywell. The vCenter upgrade process is actually fairly simple (all things considered). You basically just back up your existing database, snapshot (or clone) your existing vCenter (if virtual), mount the ISO and let ‘er rip. I won’t go through that process as it is already very well documented elsewhere. Looking back, the main issue really was just the size of the vCenter database, specifically the vpx_event and vpx_task tables. This was causing DBUHELPER to essentially run out of memory buffer space and crash during the database upgrade. Had I been more careful, we would have either...
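One way to shrink those tables before attempting the upgrade is to delete old rows in small batches so the transaction log doesn’t balloon. Here is a minimal sketch that generates a batched T-SQL delete for each table; the `create_time` column name and batch size are assumptions for illustration, so verify them against your actual VCDB schema before running anything.

```python
from datetime import datetime

def purge_statement(table, cutoff, batch_size=10000):
    """Build a batched T-SQL DELETE that trims rows older than the
    cutoff date a few thousand at a time, keeping each transaction small."""
    return (
        f"WHILE 1 = 1 BEGIN "
        f"DELETE TOP ({batch_size}) FROM {table} "
        f"WHERE create_time < '{cutoff:%Y-%m-%d}'; "
        f"IF @@ROWCOUNT = 0 BREAK; END"
    )

# Hypothetical cutoff: keep only events/tasks from the last year or so.
cutoff = datetime(2013, 1, 1)
for table in ("vpx_event", "vpx_task"):
    print(purge_statement(table, cutoff))
```

Running the generated statements (after a backup, of course) would have kept DBUHELPER from chewing through years of accumulated event and task history mid-upgrade.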

I have been championing the merits of software defined networking (SDN) lately (until I am blue in the face) to pretty much anyone who will listen. I figure that I will either win them over with logic, or they will just get tired of hearing me talk and say, “fine, just shut up and do it already.” I really don’t expect the latter scenario. In my experience, the biggest obstacle to new technology and new ideas is the IT veterans themselves. I am just as guilty of this. Steve Jobs has often been quoted as saying that the key to success in this industry is the ability to “think different.” Thinking differently is what drives innovation and moves us forward. It should be clear to almost everyone in this industry that the fundamentals you understand and cling to today will not be so fundamental a few years from now. The critical problem that SDN has at the moment is that the network administrator’s thinking (as a whole) lags...

Designing a cloud computing solution is a tricky endeavor. Regardless of the size or scope of your project, you will have to account for many different variables in your design, not the least of which is how you will handle the virtual networking piece of the puzzle. In my experience, if there is one part of the design that can be considered fundamental to success or failure, it is the underlying virtual networking solution(s) that you choose to leverage. This will be a key element in determining how fast and how far your environment can scale, as well as what types of use cases you can support. Why has “Software Defined Networking” (SDN) become such a buzzword in this industry over the last few years? Actually, let’s take this a step further and talk about the “Software Defined Datacenter” (SDDC). In an SDDC, every part of your stack, from compute to storage to networking, is automated and controlled by policy and scripting. Like many of you, I have spent my...
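To make the “controlled by policy and scripting” idea concrete, here is a minimal sketch of what policy-driven provisioning looks like in principle. All of the names and fields here are illustrative, not any particular product’s API: the point is that a workload is described declaratively once, and automation fans that description out to the compute, storage, and network layers.

```python
from dataclasses import dataclass

@dataclass
class WorkloadPolicy:
    # Hypothetical policy object: every layer of the stack is
    # described declaratively, then applied by automation.
    name: str
    vcpus: int
    memory_gb: int
    storage_tier: str     # e.g. "gold" might mean a flash-backed datastore
    network_segment: str  # e.g. a VXLAN-backed logical switch

def provision(policy: WorkloadPolicy) -> dict:
    """Turn one policy into the per-layer requests an orchestrator
    would hand off to the compute, storage, and network APIs."""
    return {
        "compute": {"vcpus": policy.vcpus, "memory_gb": policy.memory_gb},
        "storage": {"tier": policy.storage_tier},
        "network": {"segment": policy.network_segment},
    }

web = WorkloadPolicy("web-01", vcpus=2, memory_gb=8,
                     storage_tier="gold", network_segment="vxlan-web")
print(provision(web))
```

The design point is that the person (or portal) requesting the workload never touches a switch, a LUN, or a host directly; the policy is the single source of truth, which is exactly what makes the environment scalable.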

Today I’d like to walk through the process of configuring dynamic routing between an NSX distributed logical router and an NSX edge. We will be using OSPF to advertise routes owned by the distributed logical router (DLR) to the edge device. In a previous post I discussed the advantages of leveraging the DLR to optimize East/West traffic. We will now be attaching an NSX edge device to provide North/South connectivity into the environment. In this design, all of your East/West traffic is handled by the DLR, and only ingress/egress traffic will be traversing the edge virtual appliance. This example should make quite clear exactly how well NSX can scale, and how it can be customized to support virtually any network design. First, let’s start with a logical diagram of what this will look like when complete: (Credit: VMware) As you can see, we have a typical three-tier app design (web, app, and DB) attached to logical switches (VXLAN virtual wires) that then connect to the DLR. We...
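While the walkthrough uses the vSphere Web Client, the same OSPF configuration can be scripted against the NSX Manager REST API. Below is a rough sketch that only builds and prints the XML payload; the endpoint path, edge ID, element names, and credentials in the comment are assumptions modeled on the NSX-v API style, so check the API guide for your NSX version before using any of it.

```python
import textwrap

def ospf_payload(area_id: int, vnic: int) -> str:
    """Build an XML body that enables OSPF and maps a vNIC (the uplink
    facing the DLR transit network) into the given OSPF area."""
    return textwrap.dedent(f"""\
        <ospf>
          <enabled>true</enabled>
          <ospfAreas>
            <ospfArea><areaId>{area_id}</areaId></ospfArea>
          </ospfAreas>
          <ospfInterfaces>
            <ospfInterface>
              <vnic>{vnic}</vnic>
              <areaId>{area_id}</areaId>
            </ospfInterface>
          </ospfInterfaces>
        </ospf>""")

payload = ospf_payload(area_id=10, vnic=0)
print(payload)
# To actually apply it (hypothetical manager hostname and edge ID):
# requests.put("https://nsxmgr/api/4.0/edges/edge-1/routing/config/ospf",
#              data=payload, auth=("admin", "…"),
#              headers={"Content-Type": "application/xml"}, verify=False)
```

Once both the DLR and the edge are speaking OSPF in the same area over the transit segment, the three-tier networks owned by the DLR show up in the edge’s routing table automatically.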

When the “Software Defined Networking” buzzword first emerged from the halls of UC Berkeley back in 2008, the definition was simply the separation or abstraction of the data plane from the control plane for all network elements. That is, the data plane (read: switches, routers, firewalls, load balancers, etc.) is API-driven and controlled from a centralized control layer (the control plane). The control plane is in turn driven by the management interface (the management plane). As is typical of the general IT market, this fairly simple concept has become obscured almost to the point where it can mean different things to different people. I want to try to shed some light on how I see the SDN landscape evolving. Please understand that this is my take on this subject, and your mileage may vary. This is a dynamically evolving space, and as vendors work to come up with ways to sell you things that fit within the SDN arena, it seems like each approach is slightly different...
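The plane separation described above can be boiled down to a toy sketch: devices in the data plane hold only forwarding tables they did not compute, and a central controller programs all of them through a narrow API. Every name here is illustrative, not any real controller’s interface.

```python
class Switch:
    """Data plane: forwards traffic using a table it does not compute."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}

    def install_flow(self, match, out_port):
        # The narrow "southbound" API the controller drives.
        self.flow_table[match] = out_port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: owns the topology and programs every element."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_route(self, dst, out_port):
        for sw in self.switches:  # one decision, pushed to many devices
            sw.install_flow(dst, out_port)

ctrl = Controller()
sw1, sw2 = Switch("sw1"), Switch("sw2")
ctrl.register(sw1)
ctrl.register(sw2)
ctrl.push_route("10.0.0.0/24", out_port=2)
print(sw1.forward("10.0.0.0/24"))  # → 2
```

The key property is that no switch makes its own routing decision; swap the controller’s logic and every device’s behavior changes, which is exactly the abstraction the 2008 definition was after.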