Synopsys states that Lynx is “flexible and inherently configurable to readily incorporate 3rd-party technology”. And it is true that they have done nothing to prevent you from incorporating 3rd-party tools. They also have done little to help you incorporate them. In most cases, incorporating 3rd-party tools means digging in to understand the guts of how Lynx works. That means getting down and dirty with makefiles and Tcl scripts and mimicking all the special behavior of the standard Lynx scripts for Synopsys tools. For instance, here are a few of the things you might need to do:

Break up your Tcl scripts into several smaller scripts in separate directories for the project and block

Access special Lynx environment variables by name in your makefiles and Tcl scripts

Have your tool output special SNPS_INFO messages, formatted so that the metrics GUI can parse them out of the log file

Update your scripts for new versions of Lynx whenever any of these formats change.

If you are motivated, I’m sure you can hack through the scripts and figure it out. However, to my knowledge (I looked in SolvNet, correct me if I am wrong) there is no application note that clearly documents the steps needed and the relevant files, variables, and message formats to use.

It’s not surprising that Synopsys does not want to make this too easy. One goal of offering the Lynx flow is to encourage the use of an all-Synopsys tool flow. If it were truly a wide-open flow with easy addition of 3rd-party tools, then there would be less of a hook to use Synopsys tools. (Personally, I disagree with this approach and think Synopsys would be better off offering a truly open flow, but that’s a topic for the next post).

As a result, Lynx will be used only by customers running a predominantly Synopsys tool flow. I think that is OK by Synopsys. They’d rather sell a few fewer Lynx licenses than take on supporting 3rd-party tools. Unfortunately for designers using other tools, Lynx does not currently have much to offer.

One of the complaints about Lynx’s predecessors was that they were not easy to upgrade from version to version. That is because Lynx is a set of template scripts, not an installed flow. What do I mean?

When you create a project using Lynx, a set of template scripts is copied from a central area and configured for you based on your input. Let’s call this v1.0. As you go through the design process, you customize the flow by changing settings in the GUI, which in turn changes the local scripts that you copied from the central area. Now, let’s say that you want to upgrade to v1.1 because there are some bug fixes or new capabilities you need. You can’t do that easily. You have two alternatives:

Create a new project using v1.1 and try to replicate any customizations from the v1.0 project in the v1.1 project. I hope you kept good notes.

Diff the new scripts against the old scripts and then manually update your v1.0 scripts to bring them up to v1.1.

Admittedly, Synopsys provides release notes that identify what has changed, which helps with approach #2. And they try to avoid making gratuitous variable name changes. Even so, the upgrade process is manual and error-prone. In most cases, for any one project, customers will just stick with the version of Lynx they started with in order to avoid this mess, then upgrade between projects. That negates the benefit of having a flow that is always “up-to-date”.

In my humble opinion, a better way to architect the flow would have been to have a set of global scripts that are untouchable and a set of local scripts that can be customized to override or supplement the global scripts. In that case, a new version of Lynx would replace the global scripts, but the local scripts, where all the customization is done, could remain unchanged.
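To make that layering concrete, here is a minimal sketch in plain shell. The directory names and the lookup rule are invented for illustration; they are not Lynx’s actual structure.

```shell
#!/bin/sh
# Hypothetical two-layer lookup: a new flow version replaces everything
# under GLOBAL_DIR, while LOCAL_DIR (the user's customizations) is never
# touched. A local script, if present, overrides the global default.
GLOBAL_DIR=./global_scripts   # replaced wholesale on upgrade
LOCAL_DIR=./local_scripts     # survives upgrades untouched

resolve_step() {
    step=$1
    if [ -f "$LOCAL_DIR/$step.tcl" ]; then
        echo "$LOCAL_DIR/$step.tcl"    # customized: local wins
    else
        echo "$GLOBAL_DIR/$step.tcl"   # default: fall back to global
    fi
}

echo "compile step resolves to: $(resolve_step compile)"
```

With this scheme, upgrading the flow is just swapping out GLOBAL_DIR; every customization lives in LOCAL_DIR and is untouched by the upgrade.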

3. Debugging Is Difficult

Have you ever tried to debug a set of scripts that someone else wrote? Even worse, scripts that are broken up into multiple scripts in multiple directories. Even worse, by looking at log files that do not echo directly what commands were executed. Even worse, you were told you never had to worry about the scripts in the first place. And worst of all, when you called for support, nobody knew what you were talking about.

That is what debugging was like in Pilot, Lynx’s most recent predecessor.

I’ve been told that Synopsys has tried to address these issues in Lynx. They now have a switch that will echo the commands to the log files. The Runtime Manager can supposedly locate the errors in the log files and correlate them to the scripts. And now that Lynx is a supported product, the Support Center and ACs should know how to help. Still, I’ll believe it when I see it. From what I understand, many of these features are still a bit flaky, and almost all of the Synopsys consultants, the largest user base for the flow, do not use the new GUIs yet.

__________

In summary, Lynx’s main weakness is that it was not originally architected as the forward-compatible, open flow for novice tool users that it is now being positioned as. In fact, it started out as a set of scripts written by Avanti tool experts, for Avanti tool experts, to use with Avanti tools. Synopsys has done a lot to try to morph the flow into something that allows 3rd-party tools, upgrades more easily, and eases debug, but the inherent architecture limits what can be done.

So, what should have been added to make Lynx better? You’ll want to read the next in the series: The Missing Lynx.

This entry was posted on Thursday, March 26th, 2009 at 12:59 am and is filed under EDA.

4 Responses to “The Weakest Lynx”

Harry, first of all: thank you for sharing these insights with us. They are very helpful for understanding Lynx from a user’s point of view.

I totally agree with you that it would be better to have a truly open flow; as you point out, the current approach limits the potential user base.

If it were open and it were easy to integrate 3rd-party tools, then Lynx would have good value in itself. As it stands, it appears to be more a copy of Cadence’s Reference Flows/Design Kits and a promotional base for SNPS tools only.
Obviously and understandably this is in SNPS’s interest. But it really limits the potential leverage such an integrated design system could have.

Thanks for the comment. It’s actually quite a bit more than the reference flows that either Cadence or Synopsys offer. Synopsys makes Lynx consistent with the reference flows (they say that it is incorporated) but adds standard directory structures, flow control, links to revision control (CVS, SVN, etc.), and job distribution (LSF, GRD). And then there is all of the metrics reporting and the GUI for managing the flow.

Regarding being open, I don’t think there is anything inherent that prevents 3rd-party tools. If Synopsys were to document the interface and methods for integrating third-party tools, and provide access for 3rd parties to implement and test integrations, then I think that would work out well.

Harry, I think you are being a bit too strong in your criticism of Lynx. I saw the demo at SNUG, and talked with some Lynx marketing and technical people. Here are my impressions.

Pilot may have been limited when it came to 3rd-party tool support, but the Lynx folks are working hard to make it easy to use non-Synopsys tools with Lynx. Integrating a 3rd-party tool or existing script appears to be very straightforward. I think a few lines of code will suffice. It’s no harder than integrating a new Synopsys tool. The only real requirement is to follow the Lynx environment variables for where your script writes its outputs.

As for upgrades, while it might have been awkward to upgrade Pilot, the Lynx folks appear to have learned their lesson. The variable and file format definitions should be robust enough to remain unchanged across versions. I expect updates to the Lynx GUI(s) will be independent of changes to the task/tool definition files. In any case, few projects will make wholesale flow upgrades mid-project, and in this respect Lynx is no more difficult than any other flow environment. As you point out, it might even be easier, because Lynx is now a supported product.

Finally regarding debug, yes it is always difficult debugging something that someone else wrote! The difference here is that all the scripts are available to be modified by the user if necessary. Modularity is a good thing; having many shorter scripts that each do one task makes it easier to concentrate on the flow and pinpoint any problems.

Pete Churchill and I gave a presentation on flow methodology at DAC 2008. Many of our suggestions and recommendations were independently developed by the Pilot (now Lynx) team, thus Lynx seems very familiar. Pete and I hoped that users would come together and agree on an open standard for how to describe tasks and flows: individual modular tasks and the complex flows that result when you chain them together.

In our DAC presentation we described a tool called flowMaker which in Lynx-speak is sort of a poor man’s Runtime Manager. We could modify flowMaker to use the Lynx file syntax. Perhaps Synopsys will open up the variable and file format definitions like they did with SDC? This would enable even more tool/flow development compatible with the Lynx environment. That’s of great benefit to all users.

I thought I might help clear up a couple of things that seem to be common threads here:

First let me clarify the comment “… it would be better to have a truly open flow …”.
Lynx is truly an open flow - there is nothing in the structure or content of Lynx flows that prevents or hinders you from adding in any tools or utilities at all. The flow is defined in ASCII files, and the scripts are delivered as source code written in Tcl, but you can use your favorite scripting language too if you so desire. Script execution, tool version selection, error checking, and metrics capture, amongst other things, are all completely user-definable.

To shed a bit more light on Steve’s post regarding 3rd-party tools and debugging in Lynx …

Third Party Tools
———————
In Lynx a task definition has two parts: the command that invokes a program with a script, and an optional check of the resulting log file.

For dc this looks like this:
dc_shell -f <some_tcl_script> SRC=elaborate DST=compile
check dc_shell.log

The <script> is a filename that you specify in the flow editor as a property of the task box. The SRC and DST variables are defined by how you connect tasks in the flow editor - you can also completely ignore them if you want and have your script decide where to read and write data (but this does kind of bypass a lot of the goodness in being able to define the flow graphically).

The checking is again optional – you can choose to have no checking at all, in which case the subsequent task(s) will always execute, even if your script ran into problems.
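As a sketch of what such a user-defined check might look like, here is a minimal shell version. The error patterns and file names are illustrative, not Lynx’s actual checker.

```shell
#!/bin/sh
# Illustrative log check: decide whether downstream tasks may run by
# scanning the tool's log for error markers. Patterns are made up.
check_log() {
    log=$1
    if grep -qi "error" "$log"; then
        echo "FAIL: errors found in $log"
        return 1
    fi
    echo "PASS: $log is clean"
}

# demo: a clean log passes the check
echo "compile completed, 0 violations" > /tmp/demo_task.log
check_log /tmp/demo_task.log
```

A flow manager would run such a check after each task and use its exit status to decide whether the next task fires, which is exactly the behavior you give up if you disable checking.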

So how does this map to executing a third-party tool?
Simply put, the <program> can be anything and the <script> can be anything. For example, I can execute a shell script that invokes any other tool or utility. Here is a real example used in Lynx:

tclsh /some_path/run_hercules.tcl SRC=FINISH   (no DST, since this task just generates a report)
check SRC/hercules.log

As you can see in this example, we are treating Hercules as if it were a third-party tool by invoking it through a wrapper script.
The script does not even need to invoke an EDA tool. How about a script that munges your data in preparation for some other task to use it (converting netlist formats, etc.)? If you really want to stretch this, how about a script that invokes your old flow wholesale!
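As a sketch, a wrapper for an arbitrary step might look like the plain shell script below. The argument convention mimics the SRC=/DST= style above, but the defaults, log name, and the commented-out tool call are invented for the example.

```shell
#!/bin/sh
# Hypothetical wrapper that makes any command-line tool look like a flow
# task: parse SRC=/DST=-style arguments, do the work, and leave a log
# behind for an (optional) check step to inspect.
SRC=work
DST=work
for arg in "$@"; do
    case $arg in
        SRC=*) SRC=${arg#SRC=} ;;   # where the previous task left its data
        DST=*) DST=${arg#DST=} ;;   # where this task should write
    esac
done

mkdir -p "$DST"
# Stand-in for the real tool call, e.g. a netlist-format converter:
#   convert_netlist -in "$SRC/design.v" -out "$DST/design.edif"
echo "task complete: SRC=$SRC DST=$DST" > "$DST/wrapper.log"
echo "wrote $DST/wrapper.log"
```

Anything you can launch from a shell, EDA tool or not, can be dropped into the flow this way.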

You might also note that this latter example shows there is really no requirement to “break up” scripts. The granularity of the scripts is really driven by user preference. As a default we prefer shorter tasks to allow easier backtracking in the flow, but you can certainly, for example, combine your placement, CTS and route steps into a single script that does all three operations in one task.

Lynx does use environment variables to define some directories in the system and also provides variables that define technology and library references. It’s not a hard requirement to use them, but a recommendation that allows people to avoid bad practices like hard-coding information into scripts and helps maintain consistency across the flows. Not that we would recommend it, but you can write your script entirely without variables if that is your preferred coding style.
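For illustration, honoring flow-provided variables while still running standalone might look like this. The LYNX_* variable names below are invented for the sketch; the real names are in the Lynx documentation.

```shell
#!/bin/sh
# Hypothetical pattern: prefer directories and library references handed
# down by the flow environment, but fall back to local defaults so the
# same script also runs outside the flow. Variable names are invented.
REPORT_DIR=${LYNX_REPORT_DIR:-./reports}        # flow may set this
TECH_FILE=${LYNX_TECH_FILE:-./tech/default.tf}  # avoids hard-coding paths

mkdir -p "$REPORT_DIR"
echo "tech file: $TECH_FILE" > "$REPORT_DIR/setup.log"
echo "report dir: $REPORT_DIR"
```

The fallback defaults are what keep such a script portable: inside the flow the environment wins, and outside it the script still does something sensible.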

Debugging
————-
Debugging can be a difficult task in any design environment; however, there are features in Lynx that help you here.
• Interaction – start the tool inside of Lynx and interact with it: query the database, try alternate command sequences, etc. Step through the script by cutting and pasting, just like you would when debugging outside of Lynx.
• Tracing – Lynx can output a 100% variable-free script, with loops unrolled and if/then branches resolved, that shows every command that was executed. This will reproduce outside of Lynx the exact command sequence you ran inside of Lynx. The traced script can then be executed simply by running it with, for example, dc_shell -f.
• Exporting – While tracing is very useful in some circumstances, such as submitting testcases, it is not the complete solution. You may not want something where every variable is removed; maybe you want to be able to take a subset of the flow and use it outside of Lynx. This is where flow export comes into play. You can export subsets of the flow into a portable environment where some variables are still used for library setup, etc., but it is not Lynx anymore. This is what you use for exporting IP flows outside of Lynx if your end consumer doesn’t have Lynx and you need to deliver either a flow or a completed block to them.

Upgrading
————–

Upgrading any flow mid-design is a challenge (and, as Steve correctly noted, avoided by most design teams if at all possible), but Lynx helps here with the notion of global and local scripts, and also the extension of this to company- or project-specific scripts. By following certain (documented) guidelines, it is certainly feasible to upgrade the flow mid-design; however, what we tend to see in our customer base is not wholesale flow upgrading but rather point upgrading of specific scripts to use the latest tool features. Lynx helps here by allowing you to specify, on a task-by-task basis, the tool version to be used. It is a simple option to change the tool version and see what happens with a given script. If there are new command-line options or features you want to make use of, then you can create a local copy of the script (or take the default one delivered with the new Lynx version and make it local).

One last point – all of this is documented in the Lynx documentation. Not much of it is in SolvNet at present, but you will begin to see notes appearing there over the coming months.

Hope this clears things up a little

Regards
Chris Smith
Synopsys Lynx CAE


harry the ASIC guy TM and the ASIC guy TM are trademarks owned by Harry Gries.