Another note from DAC 2012: In Gary Smith’s Sunday night pre-DAC talk, he mentioned that in 2011, ESL tools took off – the famous Hockey Stick. See his slide (courtesy of my friend Frank Schirrmeister):

Could the long-awaited, long postponed arrival of the ESL Hockey Stick at the same time as the Stanley Cup Finals be a coincidence? Enquiring minds think not! It must be a mystical confluence after all.

So do ESL Designers favour the Kings or the Devils? All my favourite teams are not in the finals, so I will stay strictly neutral.

I was at Gary Smith EDA’s pre-DAC talk in San Francisco on Sunday evening, June 3, 2012, and was interested to see him lead off with a discussion of what he calls “Multi-Platform Based Design”.

This was especially interesting to me. It was 13 (Thirteen!) years ago, in 1999, that I co-wrote with several colleagues “Surviving the SOC Revolution: A guide to platform-based design” (available from Springer here, Amazon here).

And then I followed that up four years later with “Winning the SOC Revolution: Experiences in real design”, which picked up the platform-based design theme with a number of real instances (available from Springer here, Amazon here).

Daniel Payne at semiwiki.com has an excellent summary of all the slides.

You can also get Gary Smith EDA’s Multi Platform Based Design as a pdf here, and Gary Smith EDA will sell you a research report on it here.

But we’ve been here before. In fact, at DAC 2010, I wrote my first “Return of the Platform” post! Is this a case of the phenomenon being observed, then people forgetting about it (all too quickly) and then it being rediscovered? I don’t think so. Maybe the analysts forget, but the industry has been working in this space for a long time.

Of course, in 13 years, much has changed: from simple platforms to complex ones, and from single platforms to aggregations of “Multi-Platforms”. Some people are trying to skip the “P”-word (Platform) entirely and call them IP subsystems, as with this note by Neil Hand of Cadence. But no matter what you call it, if you look beneath the surface, you will find the platform concept.

Last week I was able to attend DATE 2012 in Dresden, Germany. I was in conversation with a colleague who asked me what I would like to see in DATE 2013 in Grenoble. One thing that occurred to me is that I use the opportunity when visiting a conference like DATE to find out as much as I can about the state of design in Europe. It is a topic of a lot of interest to me (since I used to be a designer/EDA type in Europe, albeit on the relatively far north-western fringes of it).

So what I would like to see at DATE 2013, among other things, is a talk by a knowledgeable, unbiased and dispassionate analyst on “Whither electronics design in Europe?”. Over the past several years there have been incredible changes in the companies and design teams, with partitionings, spin-outs, mergers, re-partitionings, acquisitions, divestments, and on and on.

The same is true of European systems companies such as Ericsson, Nokia, Thales, NSN, Bosch, to name just a few in various sectors. Many of them are changing at a rapid pace and some prognostication of what the future might bring would be of great interest. There are also branches of other worldwide semiconductor and systems companies with substantial design teams in Europe.

There are also significant worldwide IP companies in Europe, of which, of course, ARM is the most notable example.

Finally, relevant European companies in the embedded software space, such as Dspace in automotive and other embedded domains, and in the tools area, such as Esterel Technologies, have potentially strong design futures, but their prospects are very fuzzy to figure out from far away.

Finding a knowledgeable and bluntly honest analyst who understands these sectors and their history, and who is willing to guess what the future may bring, sounds like the biggest challenge for a conference like DATE. But it could certainly make a very interesting keynote if it went beyond a dry recital of facts and predictive figures to more colourful anecdotes about the industry – where it has been and where it is going in Europe.

For years I have observed a simple confusion between power and energy leading designers to jump to odd conceptions about design alternatives.

Power is the rate at which energy is consumed, expressed in units such as milliWatts (mW); energy is power integrated over time, expressed in units such as Joules. To quote our modern oracle, Wikipedia,

In its technical sense, power is not at all the same as energy, but is the rate at which energy is converted (or, equivalently, at which work is performed)

In evaluating designs, a unit such as milliWatts per MegaHertz (mW/MHz) allows one to normalise different designs – assuming one can run two designs at the same rate (MHz), then the one that consumes more instantaneous power per MHz can be thought of, in some sense, as more “power-consuming”.

But this is a rather crude metric. Not all milliWatts are the same. Design A may actually accomplish more computational “work” than Design B even if it consumes more power. If Design A does double the amount of computation of Design B but consumes only 10% more power, then for a given set of computations (e.g. to process a frame or token of data according to some algorithm), running the algorithm on A will take fewer cycles than on B: in fact, half the number of cycles. Since each cycle on A consumes 10% more power than on B, then in terms of energy, A will consume 55% of the energy of B (0.5 * 1.10). Thus A is more efficient than B. We assume, of course, that when not doing useful work, these devices are powered down in some way and consumption drops to near zero.
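The arithmetic above can be sketched in a few lines of Python. All the figures here are hypothetical, chosen only to match the example in the text (Design A does twice the work per cycle of Design B but draws 10% more power, and both run at the same clock rate):

```python
def energy_joules(power_watts: float, cycles: float, freq_hz: float) -> float:
    """Energy = power * time, where time = cycles / frequency."""
    return power_watts * (cycles / freq_hz)

FREQ_HZ = 500e6            # both designs assumed to run at the same clock rate
POWER_B = 0.100            # 100 mW for Design B (hypothetical figure)
POWER_A = POWER_B * 1.10   # Design A draws 10% more instantaneous power

CYCLES_B = 1_000_000       # cycles B needs for the workload (hypothetical)
CYCLES_A = CYCLES_B / 2    # A does double the computation per cycle

e_a = energy_joules(POWER_A, CYCLES_A, FREQ_HZ)
e_b = energy_joules(POWER_B, CYCLES_B, FREQ_HZ)

print(f"A uses {e_a / e_b:.0%} of B's energy")  # prints: A uses 55% of B's energy
```

The point of the sketch is simply that the energy ratio (0.55) is independent of the particular frequency, power, or cycle count chosen: halving the cycles dominates the 10% power penalty.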

If devices are plugged into the mains, then consuming more or less energy is a cost, but possibly not a huge cost (e.g. running the computer I am using right now is a cost, but not very large). Of course, running Google’s data centres requires a large amount of energy and efficiency is all important there.

If devices consume a high level of instantaneous power (peak power), then running them without melting may require expensive packaging, expensive (and energy consuming) cooling, or a combination of both. So for the devices running the large servers in the data centres, and the processors in desktop machines, peak power is also quite important.

But when we turn to mobile device design, such as for smartphones, tablets and all kinds of battery powered devices, energy trumps power every time. Making the assumption that peak power for the designs is kept controlled (a big challenge in itself) and does not require exotic cooling or expensive packaging, what is key in these devices is energy: power consumption multiplied by the time required to play a piece of music or video or read several pages of a book or to look up something on the web. When a device is battery powered, and peak power is kept under control, it is energy consumption that determines how long the battery lasts in typical use scenarios and how often you have to tether the device to the wall for a new charge of soma.

So in mobile, while mW/MHz is one useful metric, it is not the key one. Efficiency of how you use the mW is key to lowering joules per song, or joules per video, or whatever else is draining your batteries.

This week I was invited to speak at an Intel-sponsored symposium at the Technion, Haifa, Israel. The theme of the symposium was “Challenges and Opportunities in System-Level Design and Verification” (see here for an outline of the symposium). I spoke on the theme “Software-Defined Everything: The impact on high-level design and validation”.

Technion, Haifa

There were several interesting speakers, most from universities talking about their advanced research, including David Harel, whom I had the chance to see give a talk for the first time.

The idea for the talk was taken from the work I have been doing on baseband processors and systems the last couple of years, and the term “Software-Defined Radio” which has become a little out of fashion in the last little while. I think this is due to taking the words a little too literally. To quote the Wikipedia definition:

A software-defined radio system, or SDR, is a radiocommunication system where components that have been typically implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded computing devices

While an interesting concept, this definition seems to take an “all or nothing” position in which everything that was done hard (with hardware) is now done soft (as software running on processors). This is a little too absolutist. I think in modern embedded systems, the key phrase is “software-defined“, but not necessarily “100% software implemented“. That is, software running on embedded processors defines the functionality of the system, and will definitely be the implementation vehicle for much of that functionality, but not necessarily all of it. However, where functions continue to be delivered via hardware blocks, software, and the processors it runs on, will control it, shape it, and thus define what it is.

So moving on from Radio (radios, of course, are vital parts of cellphones, whether smart or not, of tablets, and indeed of untethered computing devices of all kinds) to Everything: we see that there is a major shift in product architectures, and many more embedded processors are used to deliver major parts of the product function, and will define the rest. It goes way beyond cell phones to include almost every consumer product. It was a long time ago that I first heard of simple microcontrollers used in appliances such as microwave ovens, washing machines, etc. Now, in theory and in practice, most appliances have software-defined functions, and it is hard to conceive of any electronics-based device without a software-defining component, and a growing one at that.

Of course, from my perspective, many of these products are increasingly using application-specific instruction set processors (ASIPs) as well as fixed-ISA processors to do this.

Despite John Bruggeman leaving Cadence recently, the focus he brought in 2010 to the idea of EDA360 and “App-driven design” was a manifestation of this trend. While there were not a lot of new ideas in the white paper Cadence produced, it did represent a synthesis of a lot of ideas that had been out in the industry for a while and it was interesting to watch Cadence’s product line evolve – some of which clearly tied to the software-driven approach they espoused. It is not completely clear how this will go further now that John Bruggeman is no longer there although Cadence says that it is still a key part of its vision.

So what is the impact of “Software-defined everything” on high level design and verification? It is pretty profound.

It re-emphasises the concept of platform-based design

It opens up room for new types of processing engines, usually derived from ASIP design flows

It emphasises the need for sophisticated up-front design space exploration and architectural analysis: that is, part of ESL (electronic system level design)

It requires highly software and processor centric verification methodologies and tools, thus (with the above) leading to virtual prototyping and virtual platforms

It allows high level synthesis for hardware blocks to be squeezed into the design methodology for those blocks that still need to be mapped to digital hardware and to be designed rapidly.

It changes the nature of hardware prototyping, as processors map to FPGA devices somewhat differently than digital hardware blocks designed at the direct RTL level.

It also exposes some key areas for future tool and technology development:

Debug with multiple heterogeneous processors and engines becomes more complex, and current single-focus debug methods must evolve to more of a system-level debug concept

Verification technology must move to support the use of “multi-core” to design “multi-processor”. Techniques for this are still evolving

As systems continue to grow in complexity there is an opportunity to reconsider an old idea – true system-level synthesis.

Lots of opportunities exist for future innovation and research. As always, I would welcome your comments.

It was surprising because it seemed to split off a key tool area in ESL – high level synthesis – from the companion ESL tools in a large EDA company and move it to a small one. Also, Catapult C had been a relative success for Mentor Graphics in ESL, and had held a major market share for several years. Losing the sales and support channels of a major worldwide EDA company seemed to be a strange way to foster a product, although given its relatively loose structure, the links between Catapult C and other tools in Mentor Graphics seemed at times quite tenuous. Clearly, an independent small ESL company like Calypto may be able to give the product more attention, and strengthen its links to the verification and power optimisation products that Calypto has been built on. We’ll have to see what happens to this product and the new Calypto over the next couple of years.

It did make me wonder if the entrance of Cadence directly into the High Level Synthesis market with its C to Silicon product, and Synopsys’s indirect entrance with its acquisition of Synfora technology (just a year ago… it seems much longer), has meant more competition for Mentor’s Catapult C despite its early entrance and large market share. The sale of AutoESL to Xilinx earlier this year implies a bit of a shakeup has continued in this area, which had too many players for a relatively small pie, albeit what looks like a growing one.

I know several people who have been part of this team and whatever the future holds for them, I wish them Bonne Chance!

I returned from DAC 2011 Thursday evening, after being in San Diego since Saturday evening. Here are some observations:

Attendance – I see that DAC has announced double digit attendance rises in all categories. This is something that I am terrible at estimating so I will take their word for it. Anecdotally, I did attend technical sessions with good attendance – even one of the last sessions on Thursday. And there were times on the exhibit floor, especially when wandering near Synopsys, Mentor and Cadence, when the crowd around their booths was reminiscent of DACs of old. The keynotes also seemed well attended although they shrank the space between Monday and Thursday. So it’s good to see more attendance. Next year, in early June in San Francisco, may be a better marker for DAC since there is a larger pool of local people to draw on to attend. It was also better that DAC did not provide numbers in a somewhat misleading way as they did in 2010 (where they combined exhibit-only and full conference exhibit attendees into one number, as was pointed out by several commentators, including Olivier Coudert). Still some way to go to get back to the 2009 level, or the approximately 5000 in San Diego in 2007.

Mentor ESL Symposium – I attended the Mentor ESL Symposium and was a bit disappointed when Wally Rhines said that instead of the normal focus in previous years on ESL practices and case studies, there would be several managers from different companies talking about ESL and how it fit into their company’s design flows. However, on some reflection, I did think that this may be a reflection of the long-awaited maturity of ESL in design flows – no longer a set of “missionary practices” to be taught, but becoming a more ordinary part of design flows and a more regular part of the design process.

Focus on Embedded Systems and Software – DAC 48 tried to beef up the conference content on Embedded Systems and Software. This included a special “Embedded System and Software Executive Day: Embedded Systems and Software Meet Hardware” on the Wednesday, in which I participated. Attendance for this day was rather thin, which was a shame, because there were some interesting talks and quite a few questions and discussion. I think there were three problems: calling it an “executive day” may have been off-putting for other attendees, many of whom would have found the contents interesting; it was billed in a parallel way to DAC’s “Management” Day, but unlike “Management” Day, there was no special fee to attend nor special registration, and many attendees would not have realised this; finally, it was not advertised very well. Nevertheless, I enjoyed attending and participating. And there was other embedded content in the show and conference. This is an area DAC should continue to stress and grow over the next few years.

A Lot of Knowledge can be an Exhausting Thing – DAC had a lot of material presented. Sometimes, too much! Looking at the conference programme, one could often find 13-15 things going on simultaneously: for example, on Tuesday June 7 at 11 AM, there was one special session, one panel, one pavilion panel, four research paper sessions, a user track session, a special Management day, presentations in the Embedded Theatre on the show floor, an Exhibitor Forum presentation, a colocated event, and a workshop! Very often the themes of many of these simultaneous sessions or events would overlap, presenting the attendee with the “too much choice” phenomenon. Sometimes there was a bit of content in some of the colocated events or workshops of interest, but not enough to divert an attendee from the main DAC and making it pretty expensive to attend both. I think DAC may have reached the limits of choice (and expense) with all the interesting colocated and parallel workshops, and maybe should plan these a little more deeply next year to avoid some of the inevitable overlap and give attendees more opportunity to see some of the interesting parallel content.

Venues – 2012 DAC is in San Francisco in early June. 2013 is in Austin, Texas. Getting good attendance in Austin may be challenging. That will be the 50th DAC (tracing DAC back to 1964 in Atlantic City). Although generally pulling attendees from Silicon Valley is not too hard when DAC is in San Francisco, when elsewhere it needs to pull from the local design community. In San Diego this year I saw a number of attendees from Qualcomm (San Diego) but not too many from the companies up in Orange County, not too far away – such as Broadcom. In Austin there are a number of companies and design centres, but it is not so easy to get to from other places, so it will be interesting to see how it turns out. DAC is going to have to make a concerted effort to get attendance in 2013.

One of my favourite episodes of Seinfeld is the one with the tag line “It’s a Ziggy”. (The episode is called “The Cartoon“). In it, Elaine draws a cartoon for the New Yorker that gets accepted, but it turns out she has subconsciously copied the joke in the cartoon from the comic strip Ziggy. Her boss, J. Peterman, played by John O’Hurley, recognises it almost immediately:

Peterman: (returns) Flash of lightning! Elaine, I just realized why I like this cartoon so much.

Elaine: Oh! Do tell sir?

Peterman: It’s a Ziggy!

Elaine: A Ziggy?

Peterman: That irreverence, that wit… I’d recognize it anywhere. Some charlatan has stolen a Ziggy and passed it off as his own. I can prove it. Quick, Elaine, to my archives.

I was reminded of it here at DAC 2011 in San Diego during Mentor Graphics’ ESL Lunch and panel discussion. One of the panelists put up a slide showing the raising of design abstraction levels over the years and I immediately thought “It’s a Schirrmeister!”. That is, created by my friend and colleague Frank Schirrmeister, then of Cadence (the Alta group of past fame and glory), now with Synopsys.

I actually wrote about this 3 years ago in this blog. Unfortunately, people have not improved in giving credit to Frank. Instead, this picture has become a meme of ESL, perhaps always to go uncredited. But at least Frank invented a seminal tool for explaining the ESL transition!

I am moderating a Pavilion Panel on “Multicore: Madness or Just Today’s Chaos?” with panelists Drew Wingard of Sonics, Paul Tobin of AMD and Debashis Bhattacharya of Huawei on Tuesday June 7 from 1600 to 1645. This should be of interest. Multicore issues continue to be hot and the right way to architect systems, configure (and extend) the cores, map applications to multiple processors, and develop and debug the software, continues to have many more questions than answers. This panel will try to determine the state of the art and answer some of the key open questions. I have a number of questions of my own to ask the panelists, but hope all the readers show up to ask their own (deference to the questions from the floor, always!)

This is followed on Tuesday June 7 by a Pavilion Panel on the topic “Android, MeeGo and Linux: Where is it all heading?” from 1700 to 1745. The answer at least for MeeGo seems to be “Nowhere!!!”, showing once again that events in the real electronic design world change much faster than titles for a conference session can! This is moderated by Jim Zemlin of the Linux Foundation. Among the panelists will be my Tensilica colleague Eric Dewannain, and the other panelists are Gerard Andrews of TI and Simon Davidmann of Imperas. I know Eric speaks very passionately about the topics of mobile platforms and operating systems, and am sure the others will be equally vocal.

On Wednesday June 8, late in the afternoon, I am speaking as part of Session 3 in the Embedded System and Software Executive Day. I will be discussing themes around the role of configurable and extensible processors in embedded systems (naturally) and will zero in on specific application domains such as baseband for wireless and audio. This day is new for DAC this year, I believe, and is part of its expansion of scope to include more designer content and increase focus on embedded systems. Scanning the programme, there is a lot of interesting content on display – way more than can be attended by any one human being. The role of trade show/conferences such as DAC has been up in the air in recent years. Yet there is no substitute for face to face technical and business interactions with a whole host of colleagues all in one place at one time. As DATE 2011 showed, there may be a bit of an upturn in the EDA trade show/conference area. Let’s see if DAC 2011 continues the trend. I hope to see you there.

The big advantage of wifi and the internet is that when you are away from home base, you can be in constant touch.

The big disadvantage of wifi and the internet is that when you are away from home base, you NEED to be in constant touch!

DATE 2011 Coffee Mug

Unfortunately, the number of work-related things I needed to do after each day of DATE 2011 ended ate up all the time I wanted to use to write up brief blogs about it. So I was only able to post a note Tuesday evening about the EDA industry and embedded software tool development – although that did trigger a couple of comments. This post, then, is a bit of a summary of a variety of things I saw and heard at DATE 2011.

Attendance: I did not attend DATE 2010 in Dresden, and do not know any numbers for this year. However, people I talked to who had been in Dresden last year felt there were definitely more attendees, and some of the sessions this year were downright crowded with people standing and spilling out of the back of the room. This included the session I co-organised and spoke in on Virtual Manycore Platforms. In addition, people said there were more exhibitors than last year and at times the exhibit floor seemed quite crowded (having the coffee break in the exhibit area helps, even though there was coffee in the other part of the conference centre where many of the technical sessions were).

Keynotes: I saw several interesting keynotes. Professor Steve Furber of U. Manchester talked about his SpiNNaker project, which is using processors and advanced interconnect to build a neural net of processors to model brain function and to provide an advanced computing platform. I first heard about this in 2007 and was familiar with most of it, but he showed pictures of the first 2-core prototype from end 2009 and a plot of the next 18-core chip due back from fab in April 2011. Of course, when he has built a multi-million processor system, if this were a science fiction movie it would become self-aware and try to take over the world, while exterminating humanity! Other keynotes by Philippe Magarshack of STMicroelectronics, and later in the week, by Carmelo Papa of STMicroelectronics, talked about many of the advanced and specialised technologies ST has created for low energy consumption ICs, 3D, advanced RF and mixed-signal technologies, etc.

Drawing on the local design community: With Grenoble being a key city in Europe for microelectronics, with the enormous presence of STMicroelectronics and other institutions such as Verimag and CEA-Leti, the conference did a pretty good job of drawing participants in the programme from these and other local places, especially in special sessions. It would have been good to see even more designers from ST and other companies attending the technical sessions – seeing hundreds of local designers with one-day passes would have bolstered attendance significantly.

Technical Programme: I heard some people say that they thought some sessions they attended had poor papers (they were experts on the subject). With usually eight sessions on at once, it was both impossible to attend more than a thin slice of the programme and possible to jump about a bit. There were some unfortunate overlaps – while talking in one session, there were some interesting-sounding talks and panels going on in others. I found a number of interesting talks including a session on Baseband (3.7) and one on smart devices for the cloud (5.1). Someone else I talked to made the comment that in a maturing field, there might not be enough significant advances every year for a major conference. I think this stresses the need for DATE and other conferences like DAC to keep increasing the focus on design as well as design technology, because across the spectrum of electronics, there are always major advances in design every year in some domains at least.

The PhD Forum: Monday evening, after registration and an initial reception, there was a crowded and well-organised PhD forum run by Peter Marwedel. This was well-done, with posters from 40 different PhD projects on display (and some very good food on offer as well). I was able to chat to some students about their research – lots of creativity on display.

The Coffee Mug: Every year for as long as I can recall, DATE has offered a coffee mug with the conference name and year on it in exchange for filling out a survey/suggestion form. Every year that I have been at DATE I have earned my coffee mug. As I write this I am enjoying coffee in my 2002 mug. Seems like a fair exchange! See above for picture