Taking Aim At Big Data

By Ed Sperling
As the Internet of Things bridges the gap between the mobile and big data worlds, EDA and IP vendors increasingly are looking well beyond their usual boundaries.

How successful they are at moving upward into a market that is far less price-sensitive remains to be seen. But from a technology standpoint, at least, the issues encountered by data centers and cloud providers are remarkably familiar. They’re the same ones that engineering teams already wrestle with in the mobile SoC world—performance, power and area.

When it comes to big data, each of these factors can be measured in real dollars rather than hours on a battery. Improving performance means fewer servers are needed within the data center, and it frequently translates into less energy per computation. Fewer servers also mean less energy is needed to power and cool the data center, which is one of the biggest costs faced by CIOs.

While server consolidation in the data center has been under way for the past decade, there are two areas that are only now beginning to be addressed. One involves software, where applications, operating systems and middleware can be made significantly more efficient. Those kinds of improvements have rarely even been considered in the data center, where the key attributes of successful software are performance and accuracy.

A second area involves storage. The amount of data being stored is exploding, and being able to access it efficiently and within a reasonable amount of time is the next big challenge. It’s also where chipmakers, IP vendors and EDA companies see a big opening—and one that will only grow in importance as the Internet of Things begins to take root.

But all of this requires thinking about system-level design from the standpoint of an entire data center, where the kinds of tradeoffs made in one area can have big impacts on another.

Software
At least part of the challenge, as well as one of the largest opportunities, is the ability to bridge hardware and software—and to make each more efficient. The two worlds have existed as silos at the enterprise level since the invention of the mainframe, when product cycles were as long as a decade.

“We have server-based customers using virtual prototypes to model the power and launch software, but it’s surprising just how little interest there is within the software community to deal with power,” said Glenn Perry, general manager of the embedded software division at Mentor Graphics. “It’s frustrating because one or two lines of code can add 10% in terms of power.”

Perry said that Mentor has been trying to push more efficient software programming since the 1990s, but the only ones who have shown real interest in improving the efficiency of software are hardware engineers. The solution, and one the company has just begun to promote, is using the same software development tools with some rather innocuous additions that check on power efficiency.

“Until now, when a software engineer hit a breakpoint, they’d blame the hardware and the hardware guy would blame the software,” Perry said. “What we’ve been able to do is collect data in a common database so the hardware engineer sees the same data as the software engineer. It’s almost like they’re providing a second view of the same process, and it allows the software developers to connect to the emulator.”
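Perry’s point that one or two lines of code can add 10% in power can be illustrated with a familiar pattern: a polling loop keeps a core in its active state, while a blocking wait lets the operating system park the thread so the core can drop into an idle state. The Python sketch below is purely a hypothetical illustration of that contrast, not code from any vendor’s tool.

```python
import threading

# A flag representing some hardware event software is waiting on.
done = threading.Event()

def busy_wait():
    """Power-hungry: spins the core until the flag is set."""
    spins = 0
    while not done.is_set():  # core stays active the whole time
        spins += 1
    return spins

def efficient_wait(timeout=1.0):
    """Power-friendly: the OS parks the thread until the event fires."""
    return done.wait(timeout)  # core is free to enter an idle state

# Simulate the event arriving 10 ms from now.
threading.Timer(0.01, done.set).start()
print(efficient_wait())  # True once the event fires
```

Both functions observe the same flag, but only the second one lets power management do its job while waiting.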

Commercial IP
Another opportunity involves IP. While the amount of commercially developed IP content is growing on the SoC level, it is crossing over into the data center world through PCI Express and the protocols that ride on top of that—Nonvolatile Memory Express (NVMe) and Small Computer System Interface Express (SCSIe).

“We’re seeing three initiatives in the data center,” said Ron DiGiuseppe, strategic marketing manager for IP at Synopsys. “The first is on the compute side, where lower-power SoCs and compute operations are becoming critical. The second is in networking, with software-defined networking on how to manage data on the data plane, which is where PCIe is standard. The third is on the storage side where there is a lot of activity around things like buffering and packaging.”

With storage, the key is protocol conversion above PCIe until a single de facto standard emerges, similar to what happened with Blu-ray and HD-DVD in the high-definition disc space. “We’re seeing customers implementing one or both (NVMe and SCSIe),” he said. “They’re also using a range of storage solutions, from RAM to external drives to cache acceleration. We’re also seeing demand for a whole range of IP, whether that’s PCIe generation 1, 2 or 3. It’s critical for the data center that the software provided to ISAs is synchronized.”
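The protocol-conversion layer DiGiuseppe describes can be pictured as a thin shim that takes one generic storage request and emits either an NVMe-style or a SCSI-style command. The Python sketch below is illustrative only; the field names and opcodes are simplified placeholders, not the actual NVMe or SCSI command formats.

```python
def to_nvme_read(lba, blocks):
    # NVMe-style: a read addressed by starting LBA and number of blocks.
    return {"opcode": "READ", "slba": lba, "nlb": blocks}

def to_scsi_read(lba, blocks):
    # SCSI-style: the same operation expressed as a READ(10)-like command.
    return {"cdb": "READ_10", "lba": lba, "transfer_len": blocks}

def submit_read(protocol, lba, blocks):
    """Convert one generic read request into the chosen protocol's command."""
    converters = {"nvme": to_nvme_read, "scsi": to_scsi_read}
    return converters[protocol](lba, blocks)

print(submit_read("nvme", 0, 8))  # {'opcode': 'READ', 'slba': 0, 'nlb': 8}
```

Until one standard wins out, a layer like this is what lets the rest of the stack stay protocol-agnostic.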

PCIe is particularly important in this world because it’s almost ubiquitous, from networking to PCs and add-in cards. It’s also in the storage IP, which is why Cadence and Synopsys are both battling for a piece of this market—both with IP and verification IP. Cadence rolled out a full subsystem in this space last year, as well, including a controller, PHY, firmware and VIP.

“Any time there is a convergence of standard protocols there are opportunities to look at subsystems,” said Susan Peterson, group director of VIP marketing in Cadence’s SoC Realization Group. “We’re anticipating more subsystems, coupled with EDA tools such as emulation or acceleration to verify an SoC or system, and other tools for layout.”

She noted that the big opportunity—if it materializes—is for full data center analysis, particularly involving power. “We’re also going to see more and more standards like SATA, NVMe and IDE, which will make connections to standards easier.”

Hurdles ahead
While these opportunities sound straightforward enough, there also are some giant hurdles.

“One of the biggest issues we see is differentiation,” said Naveed Sherwani, president and CEO of Open-Silicon. “Software is the biggest play to differentiation, but everyone has their own proprietary hardware, I/Os, and technology for transferring data between switches and routers. You can’t buy generic solutions here. On top of that, the tools will remain ASIC-centric for at least a few years.”

He noted the big-picture problem to solve isn’t latency on a chip; it’s latency across a data center. That has to be accomplished using less energy to achieve better performance everywhere, and it’s where software will become particularly important.

This is not simple stuff, and it compounds the problems that many companies have encountered in the ASIC world. While the problems are similar, it doesn’t necessarily follow that the approaches for solving the problems are exactly the same.

“With the Internet of Things we’re going to see arrays of sensors loosely connected to help people make intelligent decisions,” said Drew Wingard, CTO at Sonics. “So we’re going to need SoC design techniques to attach to the server, but they will have to be fully cache coherent, highly reliable, with performance analysis. This is not where most of the SoC design work has been. The big iron guys also will have to learn about higher levels of integration and the ASIC guys will have to develop different techniques. The current ASIC flow only works because they’ve made the assumption that they can fix problems with guard-banding. They don’t error-check everything.”

This is a big shift for SoC developers, and it remains to be seen just how well these two worlds meet, Wingard said. While the SoC developers have a better sense of use models and power management, they clearly don’t have all the pieces. But neither does anyone else.

And that has left everyone thinking seriously about where the new opportunities might be. “Every EDA supplier is thinking about this, because for the first time in the past 20 years it means a new customer base,” said Mike Gianfagna, vice president of corporate marketing at Atrenta. “The number of design starts is shrinking even though the designs themselves are getting more complicated. That leaves two new opportunities. One is in 3D-IC, which we believe will be real whether it’s in one year, two years or four years. The second opportunity is in the data center.”

With legacy hardware interfaces and well-established operating systems and applications, the real opportunities appear to point to IP and software—particularly embedded software. “If you can add a layer that deals with clock frequencies and that turns things on and off, that can be a big win,” Gianfagna said.
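A layer like the one Gianfagna describes can be sketched as a simple governor: sample utilization, pick a clock step, and gate off blocks that report idle. The thresholds and frequency points in the Python sketch below are invented for illustration; a real policy would come from the platform’s actual operating points.

```python
# Hypothetical DVFS operating points, in MHz (illustrative values only).
FREQ_STEPS_MHZ = [800, 1600, 2400]

def pick_frequency(utilization):
    """Map a 0.0-1.0 utilization sample onto a frequency step."""
    if utilization < 0.3:
        return FREQ_STEPS_MHZ[0]  # lightly loaded: run slow, save power
    if utilization < 0.7:
        return FREQ_STEPS_MHZ[1]
    return FREQ_STEPS_MHZ[2]      # heavily loaded: run at full speed

def gate_idle_blocks(block_busy):
    """Return the set of blocks to power-gate (those reporting idle)."""
    return {name for name, busy in block_busy.items() if not busy}

print(pick_frequency(0.1))                            # 800
print(gate_idle_blocks({"dsp": False, "cpu": True}))  # {'dsp'}
```

The win Gianfagna points to comes from exactly these two knobs: slowing clocks when load allows it, and turning off what isn’t being used.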