Posts Tagged ‘DAC’

While we all know Karen as the standards guru at Synopsys, her story on how she got here – and next year, to the Presidency of the IEEE Standards Association – speaks to her resolve, smarts and diplomacy.

Interestingly, her engineering education experience echoes San Jose State engineering dean Dr. Belle Wei’s thoughts on women in engineering: that “back in the day” (which is another way of saying no exact years allowed here) the number of women engineering majors peaked, and has fallen ever since.

Seems to me that it’d be a fascinating interview: Bartleson and Wei on this topic…why the numbers have dropped and what to do about it.

One suggestion on the article: EEWeb could have included a picture of Karen’s yellow Camaro. Maybe she’ll drive it to DAC this year! You’ll know that Karen is a-coming when you see the car in the pictures below.

This year, we’ll see an old standards battle get resolved. Now that all the players are participating in the IEEE Standard 1801 project (IEEE Standard for Design and Verification of Low Power Integrated Circuits), we can finally put the UPF-CPF debate to rest. Let’s hope that peace will reign and the temptation to fight one more time about a single low power standard will be overcome.

Social media will become less of a curiosity or a perceived waste of time for engineers. We’ll see more EDA customers helping answer each other’s questions and sharing more information (nothing proprietary, of course). LinkedIn discussions will have more depth, not simply people posting “read my blog”.

Facebook will remain more of a social vehicle, and for many engineers of our generation, a misunderstood channel. YouTube videos that provide good content – “how-to” and learning opportunities – will become popular. Twitter will remain a mystery for most, while a minority will find it of much value (include me in the minority). Marketers who spam social media channels with marketing-speak will be shunned. And, we’ll have some great guests on Conversation Central radio.

What: Define the state and future of, and path to prosperity for, the EDA & IP industry.

How: A short presentation, then an audience-oriented discussion on what direction startups and established companies in the EDA & IP space ought to steer if they want to show their investors the money.

Why: This session will help attendees think about how their companies’ technology will have to analyze and verify design concepts much earlier in the design process…at much higher levels of abstraction than before. That’s where the money in EDA & IP will be in the coming years.

Hogan and McLellan will propose that EDA & IP companies will have to help users guide their SoC designs from concept to implementation, ensuring that the design is synchronized for both the hardware and software aspects of the system’s functionality. What’s the upshot? These changes in the SoC realization supply chain will alter: 1) the relative values of the chain’s components; 2) the ability to leverage that value into profit; and 3) the valuing of every entity in EDA & IP – the company you want to start up, or the one you’re working for.

Last year, EDA360 was the at- & post-DAC talk of the town, so to speak. So is there any continuity this year?

Atrenta seems to have made the first move in making the EDA360 vision real. It’ll come out of the chute as “The SoC Realization Company,” and probably show technology that implements SoC Realization. By doing that, Atrenta may end up helping define the industry direction on how the EDA360 rubber meets the road.

Here is a white paper that talks about HOW to realize SoC Realization:

Stay tuned… Next week: details on how to find out what Jim Hogan and Paul McLellan have to say about the direction EDA ought to follow so EDA & IP companies can answer their investors who demand that they “Show me the Money!”

Mike Gianfagna, a well-known, long-time EDA executive, has quite a bit to say about the EDA360 manifesto that’s electrified the EDA world. As vice president of marketing at Atrenta, Inc., Mike has been an astute, articulate participant in the EDA value discussion. I was able to grab a few minutes with Mike to ask how EDA360 helped define EDA’s value for 2010 and beyond, and how it might alter the industry’s direction.

ED: EDA360 has caused quite a buzz. Why?

MIKE: Simply put, it’s one of the first times a major EDA vendor has focused on growing the industry and not just winning the next deal.

ED: It’s curious that EDA people have embraced it so vigorously. After all, it’s not a “how to” but more of a “here’s the vision, the dream.” What’s the impact of EDA360 on the EDA industry? The EDA user community? The EDA media?

MIKE: Let’s face it, the EDA industry has been stuck at roughly the same size for a long time. This lack of growth, in my opinion, has a lot to do with the predatory practices most suppliers pursue. That is, “I win the current budget and you lose.” Growing the business takes a broader view, and a good dose of vision to see beyond today’s budget and determine how EDA can serve new customers tomorrow. EDA360 articulates such a vision.

I’d like to think all this will have a positive impact on our industry overall. As for the EDA media, I am honestly not sure who that is anymore, so it’s hard to comment.

ED: This is a Cadence-generated document. How effective can it be if there’s a significant “other” camp?

MIKE: This point is what I find most interesting (and refreshing) about the concepts of EDA360. It’s not a Cadence document per se. It’s a blueprint of where EDA can go to find new customers and add new value. The piece articulates this in terms of current industry trends. It aims to exploit adjacencies in order to grow the market. And it clearly states that everybody needs to start thinking differently if it’s going to work.

ED: Rightfully, some people could view EDA360 as a Cadence effort to regain some of its industry momentum and influence that it has NOT had for years. Why should the rest of EDA buy into a company initiative?

MIKE: As I mentioned, I don’t see this as a company initiative. I see it as a call to action for our industry. We can all keep chasing the same budget, or find new customers and new budgets. A “dog food dish” image is spinning around in my head right now, but I’ll leave that discussion to the class historians among us.

MIKE: Wow, thanks for the flattering reference. It’s not every day that Atrenta gets mentioned in the same sentence with Cadence, Synopsys and Mentor. The reference is correct, however. Atrenta is now at a size, and a popularity level that gives us the opportunity to make a real difference, if you believe the DeepChip readership.

How can we make a difference? First of all, a consistent focus on serving the new and emerging user base referenced in the EDA360 vision will help. That is, the software development community that requires advanced silicon to get its job done. The changes implied by EDA360 will take time – all design paradigm shifts do and they usually take longer than you like.

If a group of forward-looking companies can work together toward the vision, the time required to get there can be reduced. And that spells opportunity for everyone.

ED: How will EDA360 affect the medium sized EDA companies?

MIKE: I think the effect here will be similar, except many mid-size EDA companies may, of necessity, be slower to respond. Pursuing new markets and new customers takes discretionary resources, and many mid-size companies don’t have a lot of that.

ED: How will EDA360 affect the slew of small and startup EDA companies?

MIKE: For the current crop of startups, I don’t believe the effects will be that noticeable. Some will figure out how to re-invent themselves in new, emerging markets but most will continue on the path they are currently on.

The interesting part for venture-funded startups is what happens next. Will the venture community start writing checks for new business models that address the application software developer’s needs? If this happens, we’ll have another proof point that EDA360 is more than a nicely done White Paper.

Jim McCanny, co-founder and CEO of Altos Design Automation, Inc., is one of the most prominent voices on the use of characterization technology and on the trends coming down the chip-design pike.

I was able to catch Jim to talk about where EDA is heading and how characterization technology plays into those trends and chip design challenges.
……………
Ed: I was at an event, recently, where the premier investor in EDA startups cited Altos as one of his startups that did it right. Altos also got mentioned in Paul McLellan’s book, EDAgraffiti, as a company that did it right. What did Altos do that was “right?”

Jim: The things we did right? Well, I’d say that we focused on a real need – characterization run-time was too long to support the electrical analysis needs of 90nm and below. We used an experienced team and got a product to market quickly. And finally, we took only a relatively small amount of funding and relied mostly on organic growth and kept control of the company.

This last item, I think, is the one that has resonated with private investors. It made us somewhat immune to the big economic downturn in early 2009, as we had always been operating in a very fiscally responsible way.

Ed: Good point.

Jim: Finally while it was nice to be mentioned as a company who did it right, I don’t think we can be the “model” for every EDA startup. We did it right for the particular market we were going after and the current economy. Other target markets at another time might require a different approach.

Ed: I’m still fuzzy on what characterization is. Can you give me the 30 second elevator explanation?

Jim: I’d be glad to lessen some of the mystery, Ed. Characterization elevates the behavior of a group of related analog transistors to the higher level of abstraction that is fundamental to digital design. For example, a simple NAND gate typically has four unique analog transistors. Characterization enables each NAND gate to be modeled as a single cell with equivalent timing, power and noise characteristics. That is equivalent to a 4X reduction in the circuit size to be analyzed.

Ed: So how big are we talking about?

Jim: For complex cells and blocks, there can be hundreds or even thousands of transistors and for memory instances there are often millions of transistors so the abstraction dramatically reduces the number of distinct elements that the digital design tools have to work with. Without characterization, there would be no synthesis, place and route or static timing analysis. There would be no IP reuse, basically no SoC design flow.
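Jim’s abstraction argument can be sketched in code. Library characterization typically produces lookup tables (Liberty’s non-linear delay model, for instance) indexed by input slew and output load, and downstream digital tools interpolate into them instead of re-simulating transistors. A minimal sketch, with all axis values and delay numbers invented purely for illustration:

```python
import bisect

# Hypothetical NLDM-style lookup table for one timing arc of a characterized
# cell: rows indexed by input slew (ns), columns by output load (pF),
# entries are cell delay (ns). All numbers are made up for illustration.
SLEW_AXIS = [0.1, 0.5, 1.0]
LOAD_AXIS = [0.01, 0.05, 0.10]
DELAY_TABLE = [
    [0.08, 0.15, 0.25],
    [0.12, 0.20, 0.32],
    [0.18, 0.28, 0.42],
]

def _bracket(axis, x):
    """Return the lower index of the bracketing segment and the
    interpolation fraction of x within that segment."""
    i = max(0, min(bisect.bisect_right(axis, x) - 1, len(axis) - 2))
    t = (x - axis[i]) / (axis[i + 1] - axis[i])
    return i, t

def cell_delay(slew, load):
    """Bilinear interpolation into the characterized delay table --
    the cheap operation STA tools perform millions of times, instead
    of running SPICE on the cell's transistors."""
    i, ts = _bracket(SLEW_AXIS, slew)
    j, tl = _bracket(LOAD_AXIS, load)
    d00, d01 = DELAY_TABLE[i][j], DELAY_TABLE[i][j + 1]
    d10, d11 = DELAY_TABLE[i + 1][j], DELAY_TABLE[i + 1][j + 1]
    top = d00 + tl * (d01 - d00)
    bot = d10 + tl * (d11 - d10)
    return top + ts * (bot - top)
```

Each table is built once by SPICE-simulating the cell; after that, synthesis, place and route, and static timing analysis never touch the transistors again, which is the point Jim is making.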

Ed: So characterization is obviously extremely significant to chip design. I recall that Altos started off five or six years ago touting the onset of statistical static timing analysis (SSTA) and how characterization would be a required element in SSTA-based design flows. Adoption hasn’t really been overwhelming, yet it appears that characterization helps with conventional static-timing-driven chip design as well as SSTA-driven chip design. What’s the difference in the productivity and value that characterization brings to static timing analysis versus SSTA-based chip design?

Jim: SSTA is one of the areas that we saw as driving the need for faster characterization.

Ed: Now, can you remind me what SSTA is again?

Jim: Sure. SSTA is a methodology for predicting the impact of process variation on the performance of your design. It requires an accurate library that captures the effect of variation on timing (delay, slew, constraints, etc.). Creating accurate models in a reasonable time frame is a big challenge. For example, the most accurate method is to use Monte Carlo simulation, but that would take thousands of times longer than “nominal” characterization (which itself can take days or even weeks). Clearly this “brute-force” approach wasn’t going to work if SSTA was to be feasible. We are able to create an SSTA library hundreds of times faster than using Monte Carlo, but still with great accuracy. Without this capability, SSTA would not get anywhere.
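The brute-force cost Jim describes is easy to see in a toy model. Here each “SPICE run” is replaced by a trivial delay function of one varying process parameter; the point is only that capturing a delay distribution by Monte Carlo costs thousands of simulations per arc where nominal characterization costs one. All numbers are invented and come from no real PDK:

```python
import random
import statistics

def spice_delay(vth_shift, nominal_ns=0.20, sensitivity=0.5):
    """Toy stand-in for a SPICE delay measurement: delay responds
    linearly to a shift in a process parameter (e.g. threshold voltage)."""
    return nominal_ns * (1.0 + sensitivity * vth_shift)

def monte_carlo_delay(n_runs, sigma_vth=0.05, seed=1):
    """Brute-force characterization of one timing arc: n_runs 'simulations'
    to estimate the mean and sigma of the delay distribution."""
    rng = random.Random(seed)
    samples = [spice_delay(rng.gauss(0.0, sigma_vth)) for _ in range(n_runs)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_d, sigma_d = monte_carlo_delay(5000)
# One nominal characterization point costs a single simulation; capturing
# the distribution this way costs thousands per arc, across every arc of
# every cell -- which is why faster (e.g. sensitivity-based) methods were
# needed to make SSTA library generation practical.
```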

Ed: So is the push to smaller manufacturing process nodes a factor in the increasing use of SSTA?

Jim: Yes! We are now starting to see serious usage at 28nm. You actually bring up a good point. There are several methods for predicting process variation such as “corner” analysis or “advanced on-chip-variation” (AOCV). Both of these solutions require either more characterization or longer characterization run-time; so our “ultra-fast” characterization technology is still very relevant whether SSTA is used or not.

Ed: As we get down to finer processes, what problems will chip/SoC designers encounter?

Jim: For most of today’s designs, the key challenge is optimizing both power and timing. Variation can play havoc with this process which is why SSTA is starting to get some traction. If you add too much margin then you can kill your power budget. However if you don’t account for variation you can have a dead part on your hands or suffer from low yield.

Ed: What else will crop up?

Jim: Another key challenge is what to do with all the available silicon real estate. The most obvious thing is to integrate more and more components on-chip. To get to market quickly, this means using off-the-shelf IP. Making sure all the IP works together in a consistent way is tough. If you rely on pre-built models from the IP vendor, you may suffer from over-guard-banding, or the models may simply not be up to date with the version of the process you are using. The best way around this is either to re-characterize everything to a single, well-defined set of characterization criteria or to run an independent validation of your IP before using it.

Ed: IP quality is definitely a challenge. Harking back to that EDA investor, he seems to be saying that the valued technology will be in the front end, going forward. What’s your take and how does characterization play into that supposed trend?

Jim: There has always been value at both ends in EDA. Layout verification, layout editing, place & route, post-layout simulation, static timing analysis are all back-end solutions and major EDA markets. Sure integrating systems and software has huge potential but so does any solution that can make sure your chip will work in silicon or can improve its yield.

Ed: So what’s ahead for EDA? Is it a stagnant, mature industry, as so many people were saying a year and two years ago? Or maturing but vital in the semiconductor supply chain?

Jim: I don’t think it’s mature. There is simply too much churn in customer needs. Current tools are continuously getting enhanced and new tools are always coming on the market. Just look at the Spice simulation market. Three years ago, I think everyone would have said it’s stagnant. But look at all the new players and new capabilities that have come out in the last few years.

Ed: What do you see here?

Jim: There have been big improvements in performance, capacity, new models, new integrations into other solutions and innovative use of distributed processing.

Ed: So what is the technology development/adoption cycle for EDA?

Jim: I think EDA has cycles of about 8-10 years from the leading-edge adopters to trailing-edge users. There were a lot of new solutions around 2000 that have served the industry well for the past decade, but are now aging. Obviously, sometimes the EDA industry gets ahead of itself and has to go through a few lean years, like we have just done. The danger is that when the industry needs new tools and solutions, they won’t be there: the past year and a half has been pretty brutal, and instead of investing in the future, many of the big EDA companies had to make cuts. Key areas such as analog automation, IP integration and verification, and system and software design still need a lot of work.

Ed: Keeping in mind that there could be a reduction in new tools, what technologies do you see rising above the others in terms of user need, value added and just plain necessary for, say, 28nm designs that are full of complex IP blocks, many of which don’t integrate easily with one another?

Jim: Tools that truly enable IP integration and verification. By verification I don’t mean “will the IP work stand-alone” but “will it work as desired in the integrated system,” e.g. at the voltage levels being used, at the process corners being used, with the expected amount of process variation etc.

Ed: And what issues will we see rise to crisis level in power? Timing? How will they get fixed?

Jim: Power is really dynamic, but timing is usually analyzed statically. How do you model dynamic, temporal effects such as IR drop, crosstalk and substrate noise using static methods without gross “worst-casing”? In addition, noise effects can cause very analog-like waveforms that break the assumptions of today’s delay models, which assume a linear or piecewise-linear ramp. There is room for better timing models, smarter ways to statistically model the impact of dynamic effects like noise and IR drop, and possibly hybrid static-dynamic analysis tools.
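The ramp assumption Jim mentions can be illustrated in a few lines: delay models expect a transition to cross the measurement threshold exactly once, but a noise-distorted waveform can cross it several times, leaving the “delay” ill-defined. A sketch with invented waveform points:

```python
def threshold_crossings(times, volts, vth):
    """Return every time a piecewise-linear waveform crosses vth.
    Delay models typically assume exactly one crossing per transition;
    noise-distorted waveforms can produce several."""
    crossings = []
    points = list(zip(times, volts))
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if (v0 - vth) * (v1 - vth) < 0:  # sign change => crossing in segment
            crossings.append(t0 + (vth - v0) * (t1 - t0) / (v1 - v0))
    return crossings

# A clean linear ramp crosses the 50% threshold once...
clean = threshold_crossings([0.0, 1.0], [0.0, 1.0], 0.5)
# ...while a glitchy, noise-distorted transition crosses it three times,
# so "the" 50% delay is no longer well-defined.
noisy = threshold_crossings([0.0, 0.4, 0.5, 0.6, 1.0],
                            [0.0, 0.55, 0.45, 0.7, 1.0], 0.5)
```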

Ed: So what’s ahead for characterization technology? For Altos?

Jim: Our focus is in “enabling a world of IP.” By that, we mean that we want to make reuse of any form of IP highly productive, be it cells, complex I/Os, embedded memory or custom blocks. To do this we are working on bringing the same kind of automation and performance we have brought to complex cell characterization to IP block characterization. We also see characterization as more than model creation but also as a means to validate IP. A characterization tool tells you how the block will perform under a range of different conditions but doesn’t tell you if it performs as expected or how much margin you have to deal with the “unexpected”. We are on a path to change that.

Ed: Seems promising! I look forward to hearing more on this front down the road. Thanks, Jim, for taking time out of your busy day to share your viewpoints on these topics.

Harry: I wanted to make a distinction there because a lot of people view bloggers like the next generation of journalists. That’s not me. I don’t feel I have a responsibility to cover any one issue or to be non-biased.

Liz: Well, no, you aren’t a reporter, but I would call you a commentator or columnist. Would you agree?

Harry: I suppose, if a journalist is supposed to be totally objective and a commentator is allowed to have an opinion, then I’m more of a commentator. There’s been a lot written lately about bloggers vs. journalists and I’d rather stay out of all that argument. We’re different, period.

Liz: So what is your responsibility or purpose or duty?

Harry: I write the blog because I want to write, and it gives me a unique connection to my audience. I can have this conversation, debate, commiserate, etc., in a way that I could not otherwise.

Liz: I get that.

Harry: Ron Wilson had an interesting insight at DAC.

Liz: What was Ron’s insight?

Harry: He said that in the past, conferences were the way that engineers socialized and networked. Also, when EE Times or EDN came out, they’d stand around the coffee machine and talk about it. Now, this kind of interaction is happening online. As a blogger, I’m kind of like the instigator for those conversations. In fact, some of my best blogs were where I put out some idea, and the most interesting insights came from the comments.

Liz: Kudos to you that you were able to elicit so many comments. That isn’t too far off from what we have tried to do with our blogfests and what we were trying to do with Jim Hogan’s presentation at ICCAD. We were trying to initiate a discussion. And this brings me to… what do you want or not want from PR folks?

Harry: I’ll tell you about someone in PR who I think does a good job working with bloggers.

Liz: I’m all ears…

Harry: First off, she follows my blog and follows me on Twitter, so she has an idea of what I write about and what I am interested in.

Second, if there is some news or item she thinks I’d be interested in, she will email me or tweet me, rather than spamming me with a press release. She’ll say something like, “I know you are interested in XYZ. We have an upcoming announcement regarding that; would you be interested in learning more?”

Last, if I am interested, she’ll help me to know more about it, either through material or talking to someone at the company.

Liz: That makes sense. I think it’s pretty clear that bloggers do not want press releases.

Harry: It’s not that I don’t want press releases; it’s that they are 90% outside my area of interest. Hold on, lemme just take a quick look at my Inbox… OK, so the last 10 items I got, either from PR people or through EDA company mailing lists, are not in my area of interest. That’s why spamming press releases doesn’t work.

Liz: So what are your favorite topics in EDA?

Harry: I look for something that will be disruptive because that interests me the most and generates the most interest. When the OVM/VMM battle was going on, that was a hot topic. When Oasys Design Systems claimed to have a Synopsys-killer synthesis tool, then that was interesting.

Liz: Jim Hogan insinuated during his ICCAD presentation that EDA is complacent. In your opinion, how complacent is EDA?

Harry: I probably would not choose that word to describe EDA. I’d probably pick the word “angst.” In the last year we had a DVCon panel called “EDA: Dead or Alive,” we’ve had several companies go under or get bought, and we’ve had a lot of talk about new business models and where EDA provides value. I think EDA is struggling, like it always has, to find out where it fits in the design chain and the supply chain. So there is a lot of angst in that way.

Liz: What do you think the trend will be for the next 10 years?

Harry: 10 years is a long time, especially the way that technology is accelerating. I think that over such a long time, you need to look at the bigger trends going on overall, not just in EDA, and then see how EDA will need to respond. On the economic side, I think the entire IT and software world will change significantly. Cloud computing is a big buzz now but it is for real and companies are going to continually want to rent IT infrastructure rather than own it.

Liz: EDA is driven by the Intels and AMDs of the world.

Harry: Yes, and even Intel and AMD are embracing cloud computing, even though they may stand to lose out in the short run. The economics are such that it is more advantageous to build a large data center somewhere that power and cooling are cheap than for everyone to have their own data center. Companies like Amazon rent computing time for 10 cents per CPU-hour, which allows companies to turn their IT costs into an operating expense rather than a capital expenditure. I think that EDA will need to embrace cloud computing and, eventually, a Software-as-a-Service model.
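The rent-versus-own arithmetic behind Harry’s point is simple to sketch. The $0.10/CPU-hour rate comes from the conversation; the per-CPU server cost is an assumed figure, purely for illustration:

```python
# Back-of-the-envelope rent-vs-own comparison. The $0.10/CPU-hour rate is
# from the conversation above; the $400-per-CPU capital figure is an
# assumption for illustration only.
def rent_cost(cpus, hours, rate_per_cpu_hour=0.10):
    """Operating expense: pay only for the hours actually used."""
    return cpus * hours * rate_per_cpu_hour

def own_capex(cpus, cost_per_cpu=400.0):
    """Capital expenditure: buy the peak capacity up front."""
    return cpus * cost_per_cpu

# A bursty workload -- say 1,000 CPUs for one week of regression runs --
# costs about $16,800 rented, versus roughly $400,000 of capital to own
# capacity that would sit idle the rest of the year.
burst_opex = rent_cost(1000, 24 * 7)
peak_capex = own_capex(1000)
```

For bursty workloads like verification regressions, that asymmetry is the whole argument for renting.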

I think the technology trend will be that custom ICs will be too expensive to design. In 10 years you’ll have standard off-the-shelf ICs with hundreds of processors and tens of millions of gates of reprogrammable logic – like an FPGA on steroids. Most products will be designed with these, so today’s chip design will become tomorrow’s software development.