Exploring new design flows -- integration and automation

Editor's Note: In Part 4 of this series, consultant and ASIC designer Tom Moxon covered several physical optimization and layout design flows, with some discussion of the signal integrity issues facing ASIC designers today. In this last installment of the series he'll demonstrate several techniques for the integration and automation of RTL to GDSII design flows. He'll discuss several ways to reduce design cycle turnaround times using resource management tools like the Sun Grid Engine, as well as some of the advanced dependency graphing tools, such as Flowtracer/EDA.

EDA market studies from several organizations like
Gartner Dataquest and
Collett International
have shown that for every dollar spent on EDA software, a company usually spends three to five dollars on the integration and support of that software. For most practicing IC designers, the reality has been a design flow that amalgamates software from several competing vendors and is unified largely with duct tape (scripts) and baling wire (translators).

Fortunately, there has been increased standards activity that promises to yield better end-to-end integration throughout the industry. The current trends toward integrated RTL-to-GDSII implementation systems and common databases will also tend to reduce integration and support requirements.

In this final installment of the series I'll outline several newer techniques being developed for EDA software integration and automation, as well as techniques in production use.

Design Domains
Typically, an evolving electronic design is organized by design domain and level of detail. Electronic design descriptions often fall into one of four design domains: behavioral, structural, test, or physical.

Several articles
have been written about the recent work on the System Level Design Language
called Rosetta, which is sponsored by Accellera International. The Rosetta language allows the specification of functional requirements and constraints in multiple interacting domains at varying levels of abstraction.
Rosetta provides modeling support for different design domains, enabling the user to employ semantics and syntax appropriate for each. The Rosetta language allows a designer to span abstraction levels and semantic domains, and to see how changes in one design domain affect other design domains.

While the development of Rosetta is ongoing, the base language has enough definition to begin using it, and several companies already have projects underway. There seems to be a convergence of exciting new work on ontology, semantics, knowledge, and modeling systems that is starting to be incorporated into the next generation of applications.

Graph Based Methods
One production approach to representing EDA processes has been through
graph-based methods that capture a design flow. Here, flow graphs model a sequence of transfers between tools and data. Different types of graphs have been employed, such as bipartite flowcharts, hierarchical networks, and Petri nets. These systems can graphically model workflow, and provide optimum paths through complex EDA processes.
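As an illustrative sketch (the tool and file names here are invented, not any vendor's format), a bipartite flow graph can be modeled by recording which files each tool reads and writes; a topological sort of the derived tool-to-tool dependencies then yields a valid execution order through the flow:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Bipartite flow graph: tool nodes consume and produce data (file) nodes.
reads = {
    "synthesis": ["rtl.v"],
    "placement": ["netlist.v"],
    "routing":   ["placed.def"],
}
writes = {
    "synthesis": ["netlist.v"],
    "placement": ["placed.def"],
    "routing":   ["routed.def"],
}

# Derive tool-to-tool dependencies: a tool depends on whichever tool
# produces one of the files it reads.
producer = {f: tool for tool, files in writes.items() for f in files}
deps = {tool: {producer[f] for f in files if f in producer}
        for tool, files in reads.items()}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['synthesis', 'placement', 'routing']
```

The same derived graph also reveals which steps share no dependencies and can therefore run concurrently.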

RunTime Design Automation has implemented a unique technology called "Runtime Tracing" that builds a complete and correct dependency graph for a design. The FlowTracer tool can automatically capture this dependency graph and perform optimum execution/re-execution of the graph as your design files change. Each tool in a design flow reads one or more files and writes one or more files, and thus implicitly defines a dependency between the input and output files. Using "Runtime Tracing," Flowtracer can dynamically determine the dependency graph using information generated by the tools themselves at runtime. Flowtracer is an innovative tool for managing design builds that greatly reduces the amount of time required to propagate and react to changes in design files.
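The idea can be sketched in a few lines of Python; this is a simplified illustration of run-time dependency capture and out-of-date checking, not RunTime DA's actual implementation:

```python
import os
import shutil
import tempfile

# Each tool invocation records the files it read and wrote as it runs,
# and a later pass re-runs only steps whose inputs are newer than outputs.
trace = {}  # step name -> (input files, output files)

def run_step(name, tool, inputs, outputs):
    trace[name] = (inputs, outputs)   # dependency captured at run time
    tool(inputs, outputs)

def out_of_date(name):
    inputs, outputs = trace[name]
    newest_in = max(os.path.getmtime(f) for f in inputs)
    oldest_out = min((os.path.getmtime(f) for f in outputs
                      if os.path.exists(f)), default=0.0)
    return newest_in > oldest_out

# Demonstration with a trivial "tool" that copies its input to its output.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "design.v")
dst = os.path.join(workdir, "netlist.v")
with open(src, "w") as f:
    f.write("module top; endmodule\n")

run_step("synth", lambda i, o: shutil.copy(i[0], o[0]), [src], [dst])
print(out_of_date("synth"))       # False: the output is current

t = os.path.getmtime(src)
os.utime(src, (t + 10, t + 10))   # simulate a later edit to the source
print(out_of_date("synth"))       # True: only this step needs a re-run
```

Note that nothing here was declared up front: the dependency between `design.v` and `netlist.v` appeared only because the step was actually executed.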

Flowtracer solves the problem of managing low-level design operational flows,
and allows the designer to interact with the system by means of a simple abstraction of the flow, called the High-Level Flow ("HLF"). Each block in the HLF is associated with a set of nodes (files) in the low-level flow.
The user can interact with the HLF while the correctness of flow execution is guaranteed automatically in the low-level flow. Thus a user can define high-level flow steps, such as synthesis, placement, static timing analysis, DRC, and LVS, and then "map" these high-level flow steps on top of the design hierarchy. With a single command you can rebuild an entire design hierarchy, using all the machines and licenses available on your network.

Figure 47: High-Level Flow Diagram

For example, in the figure above, Flowtracer would be able to execute
the static timing analysis (STA), ERC, and DRC flow steps in parallel, once the ROUTE step was successfully completed. This can really cut down on the time required to react to design and specification changes.
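A minimal sketch of that scheduling idea, with placeholder functions standing in for the real tools (threads here stand in for the networked machines and licenses a real flow would use):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical step functions standing in for real tool invocations.
def route():  return "routed.def"
def sta(db):  return f"sta({db})"
def erc(db):  return f"erc({db})"
def drc(db):  return f"drc({db})"

routed = route()  # the common prerequisite must finish first

# STA, ERC, and DRC each depend only on the routed database, so they
# can be dispatched concurrently once ROUTE completes.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda step: step(routed), [sta, erc, drc]))

print(results)  # ['sta(routed.def)', 'erc(routed.def)', 'drc(routed.def)']
```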

Flowtracer provides multiple user interfaces, including a graphical user interface, command line interface, and web/browser interface, as well as a TCL extension/API. The
Flow Description Language (FDL) is an extension to the TCL language that allows users to easily define high-level and low-level flows. A major difference between the Flow Description Language and writing Makefiles is that the jobs in the FDL need not be declared in the correct order or with complete dependencies, because runtime tracing will determine these as soon as the jobs are actually executed. Those of you who maintain EDA project Makefiles will appreciate the time savings of this method.

Flowtracer also incorporates Run-time Change Propagation Control (RCPC), a patented technique to avoid rebuilding a design hierarchy
in the case of insignificant changes, such as a comment change in a source or include file. An "unintelligent" system such as Make would simply notice a date/timestamp change on the source file and force a complete hierarchical rebuild. Flowtracer can block that change propagation using its "Clever Copy" mechanism and build only those modules that have significant changes.
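The underlying idea can be sketched as hashing the "significant" content of a source file with comments stripped, so a comment-only edit leaves the hash unchanged even though the timestamp moved. This is only an illustration of the concept, not the patented Clever Copy mechanism itself:

```python
import hashlib
import re

def significant_hash(verilog_text):
    """Hash a Verilog source with comments and trailing whitespace
    removed, so comment-only edits produce the same digest."""
    no_block = re.sub(r"/\*.*?\*/", "", verilog_text, flags=re.S)
    no_line = re.sub(r"//[^\n]*", "", no_block)
    canonical = "\n".join(line.rstrip() for line in no_line.splitlines())
    return hashlib.sha256(canonical.encode()).hexdigest()

v1 = "module top; // rev A\n wire w;\nendmodule\n"
v2 = "module top; // rev B, comment only\n wire w;\nendmodule\n"
v3 = "module top;\n wire w, w2;\nendmodule\n"

print(significant_hash(v1) == significant_hash(v2))  # True: no rebuild needed
print(significant_hash(v1) == significant_hash(v3))  # False: a real change
```

Comparing digests instead of timestamps is what lets change propagation stop at the first unchanged module.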

Flowtracer's color-coded user interface gives you a simple visual representation of the state of your design tree. If a flow step/tool fails, it is highlighted in red in the Flowtracer GUI so you can quickly examine that step or module.
By clicking on that flow step, you can look at any input or output of the failed job, including stderr and stdout for that step. Once you determine the cause of the problem, you can immediately resubmit the job with another click in the GUI. Purple nodes are out of date, green nodes are up to date, and yellow nodes are currently running.

Flowtracer also has a web browser interface, and can generate HTML output to reflect the state of your design tree. In the screenshot link below we can see that John is working on 5 design units, and that the SYNTH of module ADDER is running (yellow), and in parallel with that he is also running STA of module ALU, and the ROUTE of module MMU.

While Flowtracer has its own network job distribution and queuing system, it can also interface to other resource management systems such as LSF and SGE. This allows larger organizations to perform computing resource allocation on a per-project or per-group basis more readily.

Resource Management Systems
Medium to large companies can have computer networks connecting thousands
of computers and servers. Effectively managing and distributing workloads
across a corporate computing cluster or a campus grid of computing clusters
requires a robust resource management system. Organizations must manage their computing resources to best effect, and allocate those resources based on deadlines and other constraints perhaps organized by department, project, and task. Resource management systems are used to monitor a distributed computing environment and dynamically reconfigure systems and workloads to match available resources. The primary RMS packages used in the EDA segment are LSF, SGE, and PBS.

LSF from Platform Computing has the largest installed base among resource management systems, particularly in the EDA segment of the market. Their Global License Broker (GLB) helps optimize the use of software license resources.

Sun Grid Engine
The
Sun Grid Engine (SGE) is a flexible RMS, available as open-source at sun.com.
Several of my customers have begun using it to manage their corporate and departmental computing resources.

Figure 48: SGE Basic Architecture

As you can see from the above diagram, the SGE is built around a Master Host (qmaster) that accepts requests from the Submit and Administration Hosts and distributes workload across the pool of Execution Hosts. The SGE system uses a client-server architecture, and uses NFS file systems and TCP/IP sockets to communicate between the various hosts.

The new Sun Grid Engine Enterprise Edition (SGEEE) software manages the delivery of computational resources based on enterprise resource policies set by the organization's technical and management staff. SGEEE software uses the enterprise resource policies to examine the locally available computational resources, and then allocates and delivers those resources.

Almost anything can be defined as a resource, such as available real memory, available virtual memory, available disk space, number of available CPU slots,
software licenses, or special hardware resources like an emulator or chip tester. Sun Grid Engine keeps track of these resources, and allocates them to jobs based on the enterprise resource policies.

SGE allows resources to be grouped into "complexes," which provide all pertinent information concerning the resource attributes a user may request for a job.
For example, perhaps only two host computers in your network have hardware accelerators attached directly to them. By creating a complex containing the resource attributes for the hardware accelerators and attaching that complex to only those two host computers, you ensure that jobs requiring hardware accelerator resources will be dispatched only to one of those two hosts, whichever is available.
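As a sketch, the complex attribute behind such a setup might be defined along these lines; the exact column layout varies between Grid Engine releases, so treat this entry as illustrative rather than exact:

```
#name            shortcut   type   relop   requestable  consumable  default
hw_accelerator   hwacc      BOOL   ==      YES          NO          0
```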

An example job submit (qsub) might look like this:

qsub -l hw_accelerator some_hw_accelerator_job.sh

The same mechanism is used for consumable resources, such as CPU slots, memory, disk space, and software licenses. When working with floating or network licenses, consumable resources are defined for license features, and then requested for jobs that require those licenses. I typically use the license feature name literally for the resource attribute, so I can just type things like:

qsub -l Design-Compiler=1 some_dc_shell_job.sh

And know that my job will run as soon as one Design Compiler license is available.

SGE defines CPU slots as a resource, so if my route job requires 8 CPUs of SPARC architecture, 4 gigabytes of memory, 5 gigabytes of local disk space, and a Monterey Dolphin license, the SGE system can be configured to allocate a suitable machine for me (based on my department and project priority) and run my job on it. For example, to submit that job you might use something like this (the attribute names depend on how your complexes are configured):

qsub -l arch=solaris64,num_proc=8,mem_free=4G,Dolphin=1 some_route_job.sh

SGE also supports parallel processing through parallel environments (PE), which can be configured for distributed parallel processing or SMP operation. In the following example I use a PE named "smp_8" that expects 8 processors on the same host.

qsub -pe smp_8 8 some_dolphin_script.sh

While many design shops are familiar with resource management systems, what is new about the Sun Grid Engine is the set of capability enhancements in the Enterprise Edition, and the fact that SGE is now open source. There are now more open-source developers working on porting and improving SGE worldwide than ever before, as you can see on the project mailing lists. I've been using it primarily on Sun and Linux platforms, but have seen it ported to the IBM, HP, and SGI platforms as well.

Many of my customers have a real mix of desktops, workstations, and servers
scattered around, and I've installed SGE on several heterogeneous networks. Often I've been able to double the amount of effectively usable computing cycles available by placing idle user desktops and idle workstations into the computing cluster on a weeknight and weekend calendar configuration. An idle sensor running on the desktop updates a resource
to indicate when users are no longer active on a host, and then that host queue is enabled to receive jobs from the resource management system.

If a user returns to a host, the idle sensor updates the resource to indicate that jobs should be suspended or checkpointed. At one company, we were able to harness their desktop computers in the "off-hours" to perform a complete simulation regression suite each night. At any given large company, often hundreds of computers will sit idle on the evenings and weekends (or power down),
unless you implement a resource management system.
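A minimal sketch of such an idle sensor follows; the queue name and threshold are invented for illustration, and `qmod -e` / `qmod -d` are the SGE administrative commands that enable and disable a queue (with `qmod -s` available to suspend running jobs when a user returns):

```python
import subprocess

IDLE_THRESHOLD = 15 * 60   # seconds of inactivity before donating the host
HOST_QUEUE = "night.q"     # hypothetical queue name for this sketch

def decide(idle_seconds):
    """Return 'enable' when the desktop has been idle long enough to
    accept batch jobs, 'disable' as soon as the user is back."""
    return "enable" if idle_seconds >= IDLE_THRESHOLD else "disable"

def apply_decision(decision, run=subprocess.run):
    # qmod -e enables a queue; qmod -d disables it, so the host stops
    # accepting new jobs. A real sensor would call this from a loop that
    # samples keyboard/mouse or screensaver activity once a minute.
    flag = "-e" if decision == "enable" else "-d"
    run(["qmod", flag, HOST_QUEUE], check=True)

print(decide(20 * 60))  # 'enable'  -> host joins the cluster
print(decide(30))       # 'disable' -> user is active again
```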

Implementing a comprehensive resource management system allows you to harness resources that were previously unavailable and distribute them throughout the company. Effective resource management (and resource discovery) is key to meeting your business objectives.

Resource Description Framework - RDF
A framework for describing resources, RDF is now being used in a number of interesting ways, such as content description, indexing, syndication, and automatic resource discovery.

RDF is already built into most current-generation, XML-capable web browsers, such as Mozilla. The library community was involved in the early development of RDF, which is why RDF includes support for concepts like the copyright or digital rights management of the resources it describes.

Technically speaking, the Resource Description Framework (RDF) is a framework for describing and interchanging metadata: it provides a model for metadata, and a syntax so that people can exchange and use it.

What is metadata? Metadata is "the data about the data", like the "author" of a "book", the "power dissipation" of a "module", or the "user" of a "license". Metadata can impart knowledge about the relationships between data, as well as facilitate search and query operations. Design metadata can include design metrics information such as the runtime of a simulation, or the number of placed instances in a layout.

RDF "crosswalks" are used to map between different metadata models, such as the U.S. Library of Congress system, and the British Library system. They can similarly be used to "crosswalk" other metadata models.

RGML is an RDF vocabulary for describing graph structures, including the semantic information associated with a graph. As we saw earlier, CAD processes can be represented through graph-based methods that capture a design flow, making RGML a natural fit for describing such flows.

RDF packages
are available for most of the common scripting languages, like
TCL, Perl, and Python. Application programs that include an embedded TCL/Perl interface can quickly become RDF/XML enabled applications.

Current EDA circuit databases are designed to be efficient for large circuits, and attempts to use XML for a circuit database (occurrence model), or for fully instantiated XML netlists, have so far produced files up to twenty times larger than the corresponding EDA circuit database. However, RDF was designed to represent higher-level metadata, and used as such it can be quite efficient, providing a lightweight ontology system.

Using RDF and XML for EDA
will allow you to present an information-centric view of resources on your Intranet and on the web. RDF based resources are represented as structured information, and thus gain many of the benefits of databases. These techniques can be employed to provide global resource discovery and resource management.

" Fini "
In this series I've outlined some of the commercial design flows in use today,
starting with RTL exploration in Part 1 and
Part 2, RTL synthesis in Part 3,
and physical optimization, layout and signal integrity analysis in
Part 4. I hope that you've enjoyed reading this series as much as I have enjoyed
putting it all together. Please contact me if you have questions, comments, or feedback on the series.

Tom Moxon
is the founder of Moxon Design,
an electronics design consulting organization. He has designed integrated circuits for client companies including Cray Research, Adobe Systems, Hewlett Packard, Silicon Graphics, Rohm, and Hyundai Electronics. Tom has been working for several years on web enabling EDA and CAD infrastructures, and is researching EDA applications using XML/XSLT, RDF, DAML, and XML-RPC. When he's not "slinging gates" for a living, Tom is often called upon to torture test new EDA tools before they are inflicted on the remainder of the engineering community.