The debate emerging about how best to build next-generation software-defined networks boils down to this: Do we really need a separation between physical and virtual elements?

Not so long ago, the answer seemed to be "No." SDN was synonymous with the OpenFlow protocol, which disaggregates the control and data planes of physical networks and manages low-level packet flows from a centralized controller; it seemed virtual switches and interfaces would slot right into OpenFlow designs. But SDN has gained a more expansive meaning: It's now about creating platforms for applications and gaining configuration control that enhances network automation and agility, ideally lowering operational costs. In this broader view, SDN enables what Juniper VP Bob Muglia calls "network service chains" that run through the network stack, using software to virtually insert services into the traffic flow. That lets IT build standard bundles of network services and policies to quickly support new applications. About three-fourths of business technology pros base their SDN business cases on providing a more flexible and responsive network, our InformationWeek 2013 Software-Defined Networking Survey finds.

But that broad definition masks an important split between physical and virtual networks that affects everything from topologies and system architectures to provisioning, management, services and interactions. As more IT workloads become virtualized, an increasing percentage of data center traffic starts from and flows among virtual ports and switches. Yes, these packets ultimately traverse physical cables and switch ports. But the network endpoints are purely virtual abstractions that, like all virtual resources, can be spun up, reconfigured and moved on a dime, typically without operator intervention, by automation and orchestration software. More than half (53%) of tech pros with SDN plans consider automated provisioning and management among the top benefits.

IT's success in delivering those benefits hinges on how well SDN and network virtualization architectures decouple the physical and logical realms. It makes sense to separate physical resources such as switch ports, network links and Layer 2 traffic flows from logical abstractions such as virtual interfaces and networks. That's true even if you're not ready to swap expensive proprietary switches for dumb OpenFlow forwarding robots -- and most aren't, according to our survey. Just 35% of companies are very or completely willing to make significant architectural changes to production networks to achieve SDN benefits, a five-point drop from last year, even though virtualized networks are required for broader private cloud architectures.

There are two broad SDN technology approaches for IT pros to consider. The first is a software-centric overlay, in which physical routers and switches operate independently of an entirely virtual SDN layer, functioning as a conventional data center switching fabric. To get the benefits, you make your network the equivalent of wiring in a house -- just a conduit for moving packets from one place to another -- and train your teams on VXLAN or another tunneling protocol. Overlays are favored by VMware, which just released NSX to some fanfare; Midokura; Juniper Contrail; and Alcatel-Lucent/Nuage.
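To make the overlay model concrete, here's a minimal sketch using standard Linux iproute2 tooling. The interface names, VXLAN network identifier (VNI) and IP addresses are hypothetical; the point is that the physical underlay only ever forwards ordinary UDP packets between the two hosts.

```shell
# Hypothetical sketch: a VXLAN overlay segment (VNI 5001) between two hosts
# whose physical NICs (eth0) sit on an ordinary routed IP underlay.

# On host A (underlay address 10.0.0.1):
ip link add vxlan5001 type vxlan id 5001 \
    remote 10.0.0.2 dstport 4789 dev eth0   # encapsulate toward host B
ip addr add 192.168.50.1/24 dev vxlan5001   # purely virtual endpoint
ip link set vxlan5001 up

# On host B (underlay address 10.0.0.2), the mirror image:
ip link add vxlan5001 type vxlan id 5001 \
    remote 10.0.0.1 dstport 4789 dev eth0
ip addr add 192.168.50.2/24 dev vxlan5001
ip link set vxlan5001 up

# Traffic between 192.168.50.1 and 192.168.50.2 now travels inside
# VXLAN/UDP; the underlay just moves packets, like wiring in a house.
```

In a real deployment the tunnel endpoints would be created and torn down by the overlay controller, not by hand -- but the encapsulation is the same.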

Detractors say the overlay approach limits scalability and increases complexity, and they champion a more hardware-centric alternative that merges physical and virtual SDN control under a single controller, typically one based on OpenFlow. The controller manages network configuration and packet flows for both physical and virtual devices. Cisco, Extreme, Hewlett-Packard, IBM and NEC lean in this direction. Critics say this approach drags physical-network baggage into a virtualized world.
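The controller-centric model boils down to match/action flow rules that look identical whether the switch is physical or virtual. A hedged sketch, shown here with Open vSwitch's CLI purely for illustration (the bridge name and port numbers are assumptions):

```shell
# Hypothetical sketch: the kind of flow rules an OpenFlow controller
# installs on any switch it manages, physical or virtual alike.
# Bridge br0 and output port 2 are assumptions for illustration.

# Steer web traffic destined for 10.1.1.0/24 out port 2 at high priority:
ovs-ofctl add-flow br0 "priority=100,tcp,nw_dst=10.1.1.0/24,tp_dst=80,actions=output:2"

# Default catch-all: punt unmatched packets to the controller
# for a forwarding decision:
ovs-ofctl add-flow br0 "priority=0,actions=CONTROLLER:65535"

# Inspect the installed flow table:
ovs-ofctl dump-flows br0
```

In production, the controller pushes these rules over the OpenFlow protocol itself rather than via a CLI, but the flow-table abstraction is the same.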

Vendors, no matter their approach, do themselves no favors by conflating physical and virtual. The percentage of tech pros citing immature products as a barrier to SDN rose over the past year, up six points to 47%. IT may associate "SDN" with "network pain and expense," which explains why the share of companies with SDN in production or with plans for testing in the next year budged little; it's now at 37%, up just seven points from last year. Once companies do start moving to SDN, IT's ingrained reluctance to dramatically change physical network designs bodes well for virtual network overlays that use existing network configurations and standard L2 and L3 protocols.

There are a couple of other considerations for overlays that probably don't rise to the level of deal-breakers but rather speak to the need for these overlays to communicate with the physical network underneath.

Imagine two overlays (or even two flows in a single overlay). Ultimately, everything breaks down to packets on a wire. How does the physical network know which of the two packets gets priority?

This is obviously a simplistic example. But the point is that overlays will eventually need to be pinned to physical networks. This speaks less to one approach vs. the other and more to the need for active collaboration between the two.
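One common way to "pin" overlay priorities to the underlay, sketched here with hypothetical names and addresses, is to copy the inner packet's ToS/DSCP marking into the outer VXLAN/UDP header so physical switches can queue the encapsulated traffic accordingly:

```shell
# Hypothetical sketch: make overlay priority visible to the physical
# network by inheriting the inner header's ToS/DSCP bits into the
# outer VXLAN header at encapsulation time.
ip link add vxlan5001 type vxlan id 5001 \
    remote 10.0.0.2 dstport 4789 dev eth0 \
    tos inherit                           # copy inner ToS to outer header

# Physical switches can now classify the tunneled traffic by DSCP --
# e.g., giving voice (EF, DSCP 46) priority over bulk transfers --
# even though they never see the virtual endpoints themselves.
```

This is exactly the kind of overlay-to-underlay handshake the comment above is asking for: the virtual network declares intent, and the physical network enforces it.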

I think some of the more genuine dialogue on overlays and physical networks is starting to embrace this a bit more. Once we get past some of the marketing FUD, cooler minds usually prevail :)

Respondents are on a roll: 53% brought their private clouds from concept to production in less than one year, and 60% extend their clouds across multiple data centers. But expertise is scarce, with 51% saying acquiring skilled employees is a roadblock.