Archive for June, 2011

Few pundits have addressed the systems engineering implications of the recent move by EDA and semiconductor companies toward platforms that include both chip hardware and associated firmware.

Niche industries are notoriously myopic. They have to be, since excelling in a highly specific market segment usually requires a sharp focus on low-level details. A good example of a niche market is the semiconductor Electronic Design Automation (EDA) tools industry, that fine group of highly educated professionals who create the tools that allow today’s atom-sized transistors to be designed and manufactured.

The EDA industry has long talked about the importance of software (mostly firmware) as a critical complement to the design of processor-intensive hardware ASICs. While the acknowledgement of the importance of software is nothing new, it is only in the last few years that actual hardware-software platforms have been forthcoming from the industry.

What does this trend really mean, i.e., the move to include firmware (device drivers) with System-on-Chip (SoC) integrated circuits (ICs)? To date, the result is that companies offer a platform that contains both the SoC hardware and accompanying firmware. In some cases, like Mentor, the platform also includes a Real-Time Operating System (RTOS) and embedded code optimization and analysis capabilities.

One could argue that this move to include software with the chips is an inevitable step in upward abstraction, driven by the commoditization of processor chips. Others argue that end users are demanding it, as time-to-market windows shrink in the consumer market.

But rather than follow the EDA viewpoint, let’s approach this trend from the standpoint of the end-user. I define the end-user as the Systems Engineer who is responsible for the integration of all the hardware and software into a workable end-system or final product (see figure). Note the big “S” in SE, meaning the system beyond the hardware or software subsystems.

The integration phase of the typical Systems Engineering V diagram is just as critical as the design phase for hardware-software systems.

What is the end-system or final product? It might be a digital camera or tablet; or perhaps a heads-up display for commercial or military aircraft; or even a radiation detector for a homeland security device. Regardless of the end system or product, the role of the Systems Engineer is changing as he/she receives software-supported ICs from the chip supplier, courtesy of the EDA industry. In essence, the “black box” that traditionally consisted of a black package chip just got a bit blacker.

Some might say that the systems engineer now has less to worry about. No longer will the SE have to manage the hardware and software co-design and co-verification of the chip. Traditionally, that would mean long meetings and significant time spent in “discussions” with the chip designers and the firmware developers over interface issues. Today, that job has effectively been done by the EDA company and the chip supplier, as the latest generation of chips comes with the needed firmware, e.g., the first offering from Cadence’s multi-staged EDA360 strategy.

On the embedded side, the chip-firmware package might also include an RTOS and tools for software developers to optimize and analyze their code. Mentor is leading this area among EDA tool suppliers.

But how does this happy union of chip hardware and firmware affect the work of a module or product level SE? Does it make his/her job easier? That is certainly the goal, e.g., to greatly reduce co-design and co-verification issues between the silicon development and associated software while including hooks into upper level application development. Now, companies claim that many of these issues have been taken care of for a variety of processor systems.

One should note that these chip hardware-software platforms don’t yet really extend to the analog side of the business. This is hardly surprising, since the software requirement is far smaller than on the digital processor side. Still, software is needed for such things as communication protocol stacks (think PHY and MAC layers).

Yet, even on the digital side of the platform space, important considerations remain. How does hardware and software intellectual property (IP) fit into all of this? Has the new, higher abstracted blacker box that SEs receive been fully verified? The answer to this question might be partially addressed by the emergence of IP subsystems (“IP Subsystem – Milestone or Flashback“).

Other questions remain. How will open system code and tools benefit or hinder the hardware chip and firmware platforms? From an open systems angle, the black box may be less black but is still opaque to the System Engineer.

What will be the new roles and responsibilities for systems engineers during design and – perhaps more importantly – during the hardware and software integration phase? Will he/she have to re-verify the work of the chip hardware-software vendors, just to be sure that everything is working as required? Will the lower-level SE, formerly tasked with integrating chip hardware and firmware, now be out of a job?

If history is any indication, then we might look back to the early days of RTL synthesis for clues. With the move to include chip hardware and firmware, the industry might expect a shifting of job responsibilities. Also, look for a slew of new interface and process standards to deal with issues of integration and verification. New tool suites will probably emerge.

How the new chip hardware and firmware platforms will affect the integration Systems Engineer is not yet certain. But SEs are very adaptable. A black box – even a blacker one – still has inputs and outputs that must be managed among various teams throughout the product life cycle. At least that activity won’t change.

Analyst reports from IHS iSuppli suggest a strong market for NAND and DRAM memories, which is good news for related non-volatile memory devices such as one-time programmable technologies.

If the latest reports are accurate, the mobile memory market will be worth $16.4 billion in 2011. IHS iSuppli researchers report that NAND memory will be the largest product segment this year, followed closely by mobile DRAM. NAND and mobile DRAM are used increasingly in high-end smart phones and tablets. In third place will be NOR memory devices, which are used mainly in lower-end mobile handsets in steadily decreasing amounts.

Before I go further, let’s have a short refresher on the alphabet soup that is the world of memory acronyms. The two main types of memory used today are Random-Access Memory (RAM) and Read-Only Memory (ROM). RAM has very fast access times but burns a lot of power. ROM has much slower access rates but burns less power. The main difference between the two is that RAM needs a constant supply of power to retain its data. ROM retains its data even when power is removed. ROM is an example of non-volatile memory (NVM).

NAND and NOR devices are the two main types of nonvolatile flash memory. NAND Flash is used in just about every consumer product you can imagine.

An interesting variation to standard types of embedded non-volatile memory is the use of One-Time Programmable (OTP) memory cores. Several fabless semiconductor intellectual property (IP) companies provide OTP memory, including Sidense, Kilopass and NSCore. Most of these companies use an anti-fuse memory approach that is implemented in standard-logic CMOS and requires no additional or post-processing mask steps. One advantage of Sidense’s macrocell IP is that it uses very low power for memory applications in consumer markets.

One more memory refresher for those of us using RAM in our brains: macrocells refer to hard IP blocks that must be placed manually in the SoC floorplan, whereas standard cells are typically provided by the semiconductor foundry.

One relatively easy way to improve memory performance and lower power consumption is to follow Moore’s law: migrating to smaller process nodes shrinks die size, boosts performance and increases memory density. A related benefit of Moore’s law is a decrease in the voltage needed to power the transistors. Sidense has taken these benefits to heart by being the first to offer OTP antifuse-based memory cores at the leading-edge 28nm process node (due in 2012) that also support 1.8V input/output interfaces.

Both low power and smaller die size are prerequisites for mobile applications, such as smart phones and tablets. Technology companies that meet these prerequisites should do well as the market for memory-related products continues to grow.

Even though there was no specific mention of IP at this year’s Integrated Electrical Solutions Forum (IESF), the future growth of both infotainment systems and autonomous vehicle operations – self-braking, self-parking and self-driving – will only be possible through heavy reliance on chip and FPGA IP.

Not once did I hear or see the phrase “IP” while attending the 2011 Integrated Electrical Solutions Forum (IESF) in Dearborn, MI. The absence of IP nomenclature was hardly surprising, as the forum focused on Electronic/Electrical (E/E) systems design and wire harness engineering issues. Still, the value and growth of electronic hardware and software reuse was apparent throughout the event.

From the beginning of the one-day show, the growing importance of electronics in automotive systems was stressed. The first keynote speaker, John McElroy, the host of the Autoline Daily show, ended his presentation by talking about cars that can brake, park and even drive by themselves. One can imagine the extensive array of sensors, networks, and analog and digital subsystems that are needed to accomplish these autonomous tasks.

McElroy even went so far as to say that these vehicle-to-vehicle communication-based systems would be game changers for the automotive industry and might be available by 2014. Perhaps that is why Google is a major developer in several of these initiatives. http://www.youtube.com/watch?v=X0I5DHOETFE

In today’s automobiles, electronics are the chief differentiator between competing auto makers. In terms of numbers, automotive electronics per car typically include hundreds of sensors, tens of Electronic Control Unit (ECU) processing systems, miles of cable harnesses and extensive network systems – not to mention up to 10 million lines of software code.

The complexity hinted at by such numbers, coupled with the safety concerns of the industry, makes hardware and software reuse a must. Trusted hardware IP and software libraries will be the most obvious way to achieve the necessary economies of scale and shorten the development cycles demanded by a consumer market.

The automotive space is still an industry of silos, spanning the electrical, mechanical, industrial and computer science disciplines. In practical terms, this means that hardware and software engineers don’t often talk with one another. Such communication challenges are why Wally Rhines, CEO of Mentor Graphics, noted in his keynote that the “biggest problems still occur in the system integration phase of the V-diagram life cycle.” The integration phase is traditionally the stage of the system or product life cycle where hardware and software subsystems first come together. Not surprisingly, this is often where interface problems first appear.

Interface definition means connecting the right bits to the right bus, activities that are best modeled in the early architectural design phases of the life cycle. Such models include virtual prototypes, which simulate the as-yet-undeveloped hardware – usually via IP and libraries. But virtual constructs are still relatively new to the automotive industry, where prototypes are still predominantly physical. However, the complexities of electronic systems are making physical prototypes a thing of the past.

Paul Hansen, in his annual report at the end of the IESF, noted that automotive giants like Ford are relying on newer players like Bsquare, an embedded software vendor, to help create infotainment players. Apparently, Tier 1 software providers are struggling with the hardware-software challenges of ever more complex and integrated infotainment systems. Here is yet another segment where hardware and software IP reuse can bring significant benefit.

One doesn’t need to look very far to find a growing market for automotive infotainment IP. Common elements in this segment include processors from ARM, ARC and Tensilica (among others) for audio systems, audio amplifiers, communication controllers for automotive-specific networks like CAN and more general standards like Bluetooth, DACs, microcontrollers, memory, and even embedded security.

Related automotive FPGA IP is also growing, such as Ethernet Audio Video Bridging (AVB) IP for network connectivity, MOST and FlexRay network controllers, and even stepper motor controller IP for the simultaneous operation of two-phase stepper motors.

IP continues to play an important role in automotive electronics, even if the phrase is seldom used at market events.

This company provides a cloud deployment platform that allows users to test out tools from leading EDA vendors. Xuropa uses the Amazon cloud platform. Harry Gries, a well-known EDA blogger and Xuropa’s Director of Customer Support, provided me with a great demo.

Check out their booth! It’s huge and well laid out. I met with Jim Ballingall, Ph.D. and VP of Marketing, as well as Jason Gorss, Technology PR Manager. Lots is going on at Globalfoundries, including the new 28nm fab in Albany, NY, and the synchronization of all of their worldwide fabs and partner facilities at the full planar 28nm node. Their ecosystem is amazing.

This company is the “sleeper” at DAC. Their main message at DAC is one of PLM tools, which are essential for chip design and IP reuse as it evolves into the packaging, board and software markets. But this company is also into augmented reality, mechatronic systems and full-up systems engineering development tools. Rick Stanton, Director of Industry Market Development for ENOVIA (global semiconductor market), provided me with a first-class tour of the company – which I have mentioned from different angles in the past.

The analysts at Gary Smith’s EDA group spoke to a large crowd following the traditional kick-off gathering on Sunday night for the Design Automation Conference (DAC).

Focusing on the 3D/TSV packaging space, Mary Olsson opened by saying that the market was “not for the weak of heart.” Cautious optimism would best describe her outlook. She included several key factors to watch in this evolving market. Chief among that list was that chip designers and suppliers should follow the money to discern the direction and market strength of 3D/TSV trends.