Whither interoperability? The myth of the grand, unifying EDA database

All of the attempts at creating EDA tools that are interoperable through the mechanism of a grand-unifying open source database have failed and are likely to continue to fail. Why?

OA does not work well for high-transaction applications such as IC routers,
DRC, layout-versus-schematic (LVS), and extraction tools. At the moment, no
commercially available IC routers are fully “native” on OA. Many read
their initial problem space from OA and then write their results to OA,
but the internal transacting takes place on an optimized database. The
read/write to OA is a conversion process.
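The read/optimize/write-back pattern described above can be sketched as follows. This is a minimal illustration of the architecture, not real router or OpenAccess code; every class and method name here is a hypothetical stand-in.

```python
"""Sketch of the router integration pattern described above: read the
problem space from the shared (OA-style) store, transact on an
optimized in-memory structure, then convert the results back out.
All names are hypothetical stand-ins, not the real OpenAccess API."""

class SharedStore:
    """Stand-in for an on-disk interoperability database."""
    def __init__(self, nets):
        self.nets = dict(nets)          # net name -> list of pin coords
        self.results = {}

class FastRoutingGraph:
    """Optimized internal structure the router actually transacts on."""
    def __init__(self):
        self.routes = {}

    def route(self, net, pins):
        # Trivial "routing": connect the pins in order.  A real router
        # would run its maze/channel algorithms on this compact
        # internal representation, not on the shared store.
        self.routes[net] = list(zip(pins, pins[1:]))

def run_router(store):
    # 1. Conversion in: read the initial problem from the shared store.
    graph = FastRoutingGraph()
    for net, pins in store.nets.items():
        graph.route(net, pins)
    # 2. Conversion out: write the finished routes back to the store.
    store.results = dict(graph.routes)
    return store

store = run_router(SharedStore({"clk": [(0, 0), (1, 0), (1, 2)]}))
print(store.results["clk"])   # two segments connecting three pins
```

The point the sketch makes concrete: the shared store is touched only at the boundaries, so the read and write are a conversion step, not the data structure the algorithm runs on.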

In addition, the
commitment by Cadence and Si2 to OA’s support of scripting languages,
specifically Tcl and Python, is unclear. Tcl support in OA has always
been problematic, and EDA providers who use OA have had to modify the
code themselves before delivering their products. In addition, Python
support in OA currently must be supplied by the EDA vendors themselves.
It looks as though the potential advantage of using OA as a universal
scripting environment to control multiple tools will not become a
reality for those not based firmly on Cadence’s SKILL language.

The
bottom line is that OA is a mixed bag. For the Cadence tool user, OA
ensures that Cadence tools are interoperable with other Cadence tools.
EDA tool vendors who want to adopt OA as a native foundation turn over
control of the underlying framework of their product to another company.

For
the larger companies with internal CAD groups building custom EDA
tools, OA saves their engineers the drudgery of having to devise and
maintain their own database. There is little downside in using OA for
these internal CAD groups unless the run-time parameters of the OA
framework do not provide the performance and functionality they require.
For those who use commercial IC design and implementation tools, OA is
at best another interchange format.

Unfortunately, a decade
after its inception, the idea of a grand unifying open-source database
providing interoperability without translation between all IC
implementation tools has yet to be realized.

I believe
we will eventually come to the realization, as the RCA team did in the
early 1980s, that the idea of the great unifying database must naturally
succumb to a more compelling reality: the technical database
needs of EDA tools as they evolve to meet the ever more difficult
requirements of the next generation of IC designs.

Linda
Fosler is a seasoned professional providing marketing and operations
management expertise for a diverse group of technologies and industries
including computer systems, semiconductors, embedded systems and
software and hardware design and verification tools. Linda is currently
the director of marketing for the deep submicron (DSM) division of
Mentor Graphics.

Prior to this Linda was Vice President of
Marketing and Business Development for VaST Systems, Esterel
Technologies and IKOS Systems, as well as Vice President of Software
Quality at Cadence Design Systems. Past accomplishments also include
working as an independent consultant for a host of electronics related
clients.

While most agree that OA has failed to win universal acceptance, it still has the potential to be leveraged as an open database standard. In part, that depends on Cadence relinquishing some of its IP rights and on others taking up OA and adding to it to make it truly unified. Left to itself, the industry will gravitate toward two or three camps of database technologies and face insurmountable problems in the long run.
For a start, the top EDA companies should come together, work out some kind of compromise, and hand it over to Accellera or another such independent entity to take it forward.

Accellera has, what, 14 member companies; Si2 has over 100. Which better represents the industry? Just look at the Si2 Board of Directors; and yes, Cadence and Synopsys are both on there.
http://www.si2.org/?page=65

The OpenAccess Coalition Scripting Languages Working Group has Perl, Python, Ruby and Tcl ready for action:
http://www.pr-inside.com/new-contributions-ease-adoption-of-r2298621.htm
Synopsys did the Tcl binding. I might do C#. Which scripting language did Mentor want?
I have made two code contributions to OpenAccess. It's possible but difficult, because only Cadence is allowed to change the core database code. Getting even a production-tested bug fix into OpenAccess took well over a year, plus pressure from a powerful OpenAccess Coalition member.
Magma seems to have done okay with a central database strategy.

Linda makes some great observations about the practical realities of Silicon Realization. I think we all agree on the problem, although I think there are some misconceptions about the OpenAccess program and its progress to date. Cadence believes strongly that a common database is important to the industry and has remained steadfastly committed for the past eight years. Since 2002, Cadence has contributed and maintained more than 90 engineer-years of code at our own expense. We actively participate in the community and provide input to architectural and priority decisions. The community owns the OA content; Cadence has no IP rights other than those granted to us by the community. The community leadership comprises other major EDA players as well as heavyweight product companies. Cadence works in this community for the good of the industry, which in turn benefits Cadence.
The community is releasing 22.41, due by the end of 2010, which supports multi-threading. The community has recently released new scripting-language bindings for Tcl, Python, and Ruby, which are available in beta now. The binding code is designed to be easy for the community to download and support. There are many companies, both on the EDA side and the design side, that depend on OA every day. These companies are all very well aware of the continued improvements in database capability. The community sees this through the evidence of continued and accelerating adoption.
OA was never envisioned to optimally support every algorithm known to EDA. OA’s primary goal is interoperability. Some applications will work well with an in-memory model and some won’t. The point is to have a common place to store and access design data so it can be shared across applications and design teams. OA delivers a common repository to store data and access it, either through C++ APIs or through various scripting languages built on top of OA, which is a huge improvement compared to what the industry has had in the past.
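The "common repository" idea in the paragraph above can be sketched in a few lines: two independent tools share design data through one agreed-on schema and access layer rather than exchanging files through translators. The schema, file format, and function names below are illustrative assumptions, not the actual OpenAccess interface.

```python
"""Minimal sketch of a shared design repository: two 'tools' read and
write one common store with no translation step between them.  The
JSON schema and all names here are illustrative, not real OA code."""

import json
import os
import tempfile

class DesignRepo:
    """Common store with one agreed-on schema (here: JSON on disk)."""
    def __init__(self, path):
        self.path = path
        if not os.path.exists(path):
            with open(path, "w") as f:
                json.dump({"cells": {}}, f)

    def read(self):
        with open(self.path) as f:
            return json.load(f)

    def write(self, data):
        with open(self.path, "w") as f:
            json.dump(data, f)

def layout_tool(repo):
    """First tool: creates a cell with one rectangle."""
    d = repo.read()
    d["cells"]["inv"] = {"shapes": [[0, 0, 4, 2]]}
    repo.write(d)

def drc_tool(repo):
    """Second tool: consumes the same data directly, no translator."""
    d = repo.read()
    return sum(len(c["shapes"]) for c in d["cells"].values())

path = os.path.join(tempfile.mkdtemp(), "design.json")
repo = DesignRepo(path)
layout_tool(repo)
print(drc_tool(repo))   # 1 shape visible to both tools
```

Each tool is still free to build whatever in-memory structure suits its algorithm; the interoperability win is that both agree on where the data lives and what it means.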

Gee, here's a nice article from Mentor Graphics' web site which starts out like this:
"Now that almost all of the major custom design tools run on OpenAccess, we often get asked about how well Calibre supports OpenAccess (OA). The truth is that Calibre has supported reading polygonal data from OA since February 2007 and we have kept up with the new releases of OA as they come along." Here's the full link; you'll probably have to cut and paste it, but if there's a problem, just go to the Mentor web site and search for OpenAccess.
http://www.mentor.com/products/ic_nanometer_design/blog/post/running-calibre-from-an-openaccess-database-12ade244-ee25-43a4-aa63-c8250f6f26eb

To build a best-in-class interoperable product, you don’t need the OA in-memory database, but you absolutely need to use the OA API.
If you have a product that uses the in-memory OA database but does not add any more value than the incumbent, you are not going to overcome user inertia to adopt your product. Your product needs to provide value (productivity, quality of design, etc.) while providing interoperability through the OA API. That's how we are making our Titan customers successful with OA.

Mentor is horribly schizophrenic on OA. Calibre supports it; the analog tools do not (maybe some weak translation). The analog tools continue to limp along on AMPL, a language developed back in the Falcon Framework days.

Here's an article from the latest issue of Electronic Design which gives details on the different ways of using OpenAccess and the success companies have had
http://electronicdesign.com/article/eda/Chip-Layout-Implementation-Add-Significant-Muscle.aspx

The OpenAccess effort by Si2 is forging ahead in 3D/stacked chips as well, standardizing chip power, thermal, and stress models. But if a decade of so-called cooperation hasn't yielded the results one had hoped for, what is the motivation for established as well as startup EDA tool providers to contribute? Are we just throwing more into the mix, fully knowing and expecting an outcome we don't like? I wonder...
Dr. MP Divakar

This comment concerns a few technically inaccurate statements in the article above.
OpenAccess is a specification of a schema for representing electronics design data.
OpenAccess as such is not a database.
The distribution we all receive through Si2 is a reference implementation of the schema expressed in the OpenAccess specification. It happens to be a very good implementation, probably one of the best-designed and best-implemented pieces of software in EDA.
Many tools in production rely on OpenAccess for advanced design.
A design database comes about when design tools populate a design library with the many representations possible in OpenAccess.
OpenAccess also represents and controls the architecture of design libraries.
Many tools can manipulate the design database - sometimes simultaneously, aided by design management tools. Viewed this way, an OpenAccess design database is a “centralized database” and this model has been in play for a long time.
Almost taken for granted.
If by “centralized database” Linda is referring to an in-memory data model accessed by multiple tools, then I’m aware of at least one case where OpenAccess-based tools from different companies work with one in-memory image of OpenAccess design data. That said, this level of tight integration is not always necessary.
The reference implementation is not a requirement to be OpenAccess compliant. It is possible for a company/tool to undertake their own development of an implementation of a certain aspect of the OpenAccess specification. I'm aware of at least one case where this was done successfully.
Continued ...

On multi-threading: Ciranova’s device placer, Helix, is fully multi-threaded and works fine with a non-thread-safe OpenAccess. Helix has been in production making ICs for at least two years. A sound architecture and a careful implementation are the prerequisites.
As of this year’s latest release of the reference implementation from Si2 (oa22.41p004), support for multi-threading is explicit (announced at the October 2010 OAC). So Linda’s comment on multi-threading is not well informed.
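One common way a multi-threaded tool can sit on top of a non-thread-safe database, which is the kind of "sound architecture" the comment alludes to, is to do the parallel work on thread-private data and serialize every database access behind a single lock. The sketch below illustrates that pattern only; the names are invented for illustration and are not Ciranova's or OpenAccess's actual code.

```python
"""Sketch: multi-threaded computation over a non-thread-safe store.
Parallel work happens on thread-private data; all store access is
serialized behind one lock.  All names here are hypothetical."""

import threading

class NonThreadSafeDB:
    """Stand-in for a database that must only be touched serially."""
    def __init__(self):
        self.rows = []

    def insert(self, row):
        self.rows.append(row)

db = NonThreadSafeDB()
db_lock = threading.Lock()          # every DB access goes through this

def place_devices(devices):
    # Parallel-safe: pure computation on thread-private data.
    placed = [(name, i * 10) for i, name in enumerate(devices)]
    # Serialized: only one thread writes to the shared DB at a time.
    with db_lock:
        for row in placed:
            db.insert(row)

threads = [threading.Thread(target=place_devices, args=(chunk,))
           for chunk in (["m1", "m2"], ["m3"], ["m4", "m5", "m6"])]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(db.rows))   # 6: every device placed exactly once
```

The trade-off is that the database is never a parallel bottleneck by accident: throughput scales with the compute phase, while the serial write phase stays short.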
A typical situation we find at our customers involves layouts generated by Helix that are subsequently analyzed by Mentor’s DFM tools. Helix populates design databases with layout views, and Mentor’s world-class DFM tools analyze the layouts for DRC and LVS correctness and subsequently extract a post-layout netlist for simulation. Such flows are OpenAccess based, and no data translation is required between Ciranova and Mentor’s DFM tools. Typically, such designs are also PyCell/iPDK based, which implies the design data is open to all other OpenAccess tools.

Sorry to see that the custom implementation group at Mentor is so negative about OpenAccess; that opinion is not shared by the Calibre group, as a couple of other people have pointed out. I am not sure what "universal acceptance" is, but OA is at least partially adopted by many EDA companies (Synopsys, Magma, SpringSoft, Jedat, Cadence, etc.) and major IC companies (Intel, IBM, ST Micro, TSMC, Samsung, etc.).
As Ed Petrus pointed out, the code available from Si2 is a reference implementation of the OA standard. Any company is free to create its own implementation (as is being done now with the scripting-language bindings) or to modify the reference implementation source code. Another EDA company I worked for made several major changes to OA and has those in its production code. The only requirement is that any code modifications be contributed back to Si2. There is no requirement to wait for contributed code to show up in the reference implementation before shipping it in a product.
Last, the article makes some curious comments about router integrations in OA. Many EDA companies (Magma, Pulsic, Cadence) have router integrations that translate from OA to another data structure and back. Since both the source and target databases are controlled by the particular EDA company, it is much easier to maintain complete data integrity, and performance is excellent. SpringSoft and Pyxis (now owned by Mentor) have a very mature shared-runtime-memory integration in OA for Pyxis NexusRoute-HP. As for OA not working well for DRC, here is a quote from DeepChip: "Springsoft and Mentor working together to enable full signoff DRC check in a DRD style environment. Calibre run[s] in seconds in the background every time you unselect a polygon in Laker based on layer. When layout is complete it's 100% DRC clean." Full sign-off DRC checks in seconds sounds like pretty good performance to me. Unfortunate that the Mentor Deep Submicron Division (analog) isn't on the same page as the rest of Mentor.