Software is top headache for network engineers

SAN JOSE, Calif. – Software tops the list of concerns for network systems developers these days. High on their wish lists are better tools for debugging multicore processors and supporting virtual I/O, according to a panel of engineering managers at the Linley Tech Processor Conference here.

"Mostly the software is still lagging to take advantage of latest optimizations," said Raghavendra Mallya, a distinguished engineer from Juniper. "We would like to see [better] virtualization in I/O to create partitioned systems," he added.

"We look a lot at software tool chains" when picking a processor," said Mark Ennamorato a director of engineering for enterprise routers at Cisco. With multicore processors "debugging is very hard--trying to figure out where a packet has gone is quite a challenge," he said.

That's especially true when engineers need to make changes to multithreaded networking applications, said David Sonnier, a processor architect at LSI.

"Partitioning an app into a lot of threads especially for packet processing is a real challenge," said Sonnier, speaking in a separate panel of processor designers.

"If new requirements drive a small change in the app, performance can be cut in half because partitioning of the threads is suddenly incorrect," he said.

David Malicoat, a distinguished architect in HP's networking group, said engineers are making "stair step" progress in supporting virtual I/O in networking systems, another important but thorny software issue. Ironically, the growth in the number of processor cores available has helped ease some early problems in load balancing.

"Early dual-core, single threaded processors running two jobs simultaneously often had load balancing problems where one core was unused and another was running flat out," Malicoat said.

The multicore hardware itself is getting powerful enough to make even ASIC giants like Cisco think twice about developing their own chips.

"Unless you are really trying to get to very high performance, there doesn't seem to be much reason to do your own ASIC anymore," said Ennamorato. "I think the scalability is there [in off-the-shelf chips] except for some extreme cases," he said.

However, all sides agreed, OEMs need to evaluate the details of increasingly complex multicore processors closely to find architectural bottlenecks, often in memory controllers or in how resources are shared. "There are a lot of subtleties," said Ennamorato.

Shifts in memory chips also create major headaches for network systems makers, said HP's Malicoat. "We have had to redesign systems because a specific flash chip we used was no longer available due to consolidation among flash suppliers," he said.

In addition, engineers have faced conflicts synching the life cycle of their networking systems with DRAM generations, Malicoat said. "After shipping a system for a few years, you may have to figure out if you will do a redesign to support a next-generation memory or buy enough of the old memory chips to last the system's lifetime," he added.

He noted that most of the processors presented at the conference supported DDR3 DRAMs. "But DDR4 will be here in 2014, so in three more years we may start to see a crossover," he said.


About 100 years ago a journo in 'BYTE' magazine built a multiprocessor project using dozens of 8052 microcontrollers. Sounds silly, but as a concept it illustrated exactly the same problems as are found today, including Amdahl's arguments. Fascinating proof of concept; would all be done in software these days, but it was fun to work through the build and interprocessor comms issues with him as the project unfolded month by month.
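The Amdahl's-law limit the commenter alludes to is easy to make concrete (the numbers below are illustrative, not from the BYTE project): even with dozens of processors, speedup is capped by the serial fraction of the work.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
    fraction of the work that can run in parallel across n cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# 90% parallelizable work on 48 processors (dozens, as in the project):
print(round(amdahl_speedup(0.9, 48), 2))  # 8.42 -- nowhere near 48x

# Even with unlimited cores, the cap is 1 / (1 - 0.9) = 10x.
print(round(amdahl_speedup(0.9, 10**9), 2))
```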

Hi Larry,
Having worked in the game industry since '88, I can tell you that multicore processors haven't really fared all that well, even though the hardware makers insist on using them. It's very time consuming to engineer the solutions that split the problem up over multiple cores. Case in point: Xbox 360 games look just as good as PlayStation 3 games, and the 360 does not have a multicore beast like the Cell processor.
Multicore looks great on paper for the hardware people because it solves their problem of more processing power, but the savings are lost when products are delayed due to extended s/w engineering schedules and debugging (really nasty, hard-to-track-down issues).

At one point I held the opinion that embedded systems programmers were the best equipped to program multicore CPUs, but in the time since then I have seen increasing sophistication out of game programmers. They have learned to deal with massively parallel compute power in the modern GPUs and gaming platforms. Ultimately, this has to be resolved by the creation of an appropriate abstraction layer. There will always be a need for a few hard-core programmers who work deep in the guts, but to really succeed there has to be a way of effectively expressing software that can be used by more "normal" programmers. I'm not sure that we're there yet.
Larry M.

I agree that tools for debugging multicore processors are a real need.
I would like to see debugging tools that use graphics to show program flow through modules and let you zoom into the graphics to get to the actual lines of code. Also useful: being able to slow the hardware down while having it behave exactly like the real-time version.

Two major challenges are stated in the article: multicore optimization and virtual I/O. In addition, chip obsolescence and rapid memory-generation turnover have impacted the development life cycle. What would you do to keep up with technology and to shorten development time?