What! How big did you say that FPGA is? (Team-design for FPGAs)

Field-programmable gate arrays (FPGAs) have become incredibly capable, handling large amounts of logic, memory, digital-signal-processing (DSP) blocks, fast I/O, and a plethora of other intellectual property (IP).

At 28 nm, FPGAs deliver the equivalent of a 20- to 30-million-gate application-specific integrated circuit (ASIC). At this size, FPGA design tools, which have traditionally been used by just one or two engineers on a project, begin to break down. It is no longer practical for a single engineer, or even a very small design team, to design and verify these devices in a reasonable amount of time.

Of course, project schedules are always too long from a manager's perspective and always too short from a design and verification engineer's perspective. As a result, larger design teams, often geographically dispersed, are becoming much more common in the FPGA world. This trend has a significant impact on the tools used to design, verify and manage these increasingly complex electronic devices. This article describes a few of the key issues that should be considered when tackling complex FPGA design among several different engineers or teams of engineers.

There are many things to consider with regard to team-design, so it's helpful to break it down into three key areas and discuss each separately as follows:

Distributed and parallel development

Design flows

Tracking and reporting

Distributed and parallel development
Companies are increasingly becoming global entities with distributed work forces. A distributed team may be on another continent, in another state, in a different city, or just upstairs; in every case, most of the requirements for sharing and collaborating on a project remain essentially the same. Sub-projects assigned to each engineer or team must be self-contained yet easy to integrate frequently into the top-level project.

These sub-projects are often at different stages of progress (i.e., one may have stable RTL code, another new/unstable RTL code, another might only have the basic I/O and functionality specified), so a system that can handle and even take advantage of this is important. Using and re-using IP, whether from a previous design or provided by a third party, is also on the rise for FPGA designers. It is no longer practical to develop new RTL code for all the functionality required in today's large FPGAs. A few of the important considerations for distributed and parallel development are as follows:

Management and integration of sub-projects into the top level, including source code version control

Design / IP re-use

Time budgeting

In-context synthesis

In most cases, team-design consists of multiple sub-block owners and a top-level design integrator. The ability to develop each sub-block independently and then have them automatically integrated into a regular (i.e., nightly) top-level build is extremely useful. Integrating source code control systems, such as CVS and Perforce, is becoming standard practice for managing large FPGA designs that have constantly changing RTL code from a variety of sources.
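One piece of such a nightly integration is simply collecting each sub-block's RTL sources into a single top-level compile order without duplicating shared files. The sketch below is a hypothetical illustration of that step; the manifest format and file names are invented, not taken from any particular vendor flow or source-control system.

```python
# Hypothetical sketch: merging per-sub-block RTL manifests into one
# top-level build list for a nightly integration. File names and the
# manifest format are illustrative only.

def merge_manifests(sub_block_manifests):
    """Merge per-sub-block RTL file lists into one top-level compile order.

    Each manifest lists files bottom-up (leaf modules first). Files shared
    between sub-blocks (e.g. common packages) are kept only at their first
    occurrence, so the top-level build compiles each source exactly once.
    """
    seen = set()
    top_level = []
    for manifest in sub_block_manifests:
        for rtl_file in manifest:
            if rtl_file not in seen:
                seen.add(rtl_file)
                top_level.append(rtl_file)
    return top_level

if __name__ == "__main__":
    # Two sub-blocks that both depend on a common FIFO.
    dsp_block = ["common/fifo.v", "dsp/filter.v", "dsp/dsp_top.v"]
    io_block = ["common/fifo.v", "io/serdes.v", "io/io_top.v"]
    print(merge_manifests([dsp_block, io_block]))
```

In a real flow the manifests would be checked out of the version-control system (CVS, Perforce, etc.) before merging, and the merged list handed to the synthesis tool.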

Figure 1. Team-design allows for the stabilization of each block separately so that top-level integration is virtually guaranteed to work. Block-level designers can move on as soon as their block is stable.

A useful side effect of developing and managing sub-blocks independently is that it makes the sub-blocks much easier to re-use in future projects. The RTL source and design constraints for a target FPGA can be archived together as a verified functional block for quick integration into next generation projects.

One of the key challenges of team-design is creating and managing time and resource budgets for each sub-block. Tools are needed to allow the team leader to allocate resource budgets for each sub-block so that FPGA resources, such as random-access memory (RAM), DSP and look-up tables (LUTs), are not over-utilized. These tools help avoid the situation in which two independent teams, working on their own portion of the design, aren't aware of the resources being utilized by other sub-blocks and end up using more than their fair share of FPGA resources. Top-level budgets that can be pushed down to the sub-block teams and enforced throughout the design cycle are needed.
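The core of such a budget check is straightforward: sum each sub-block's allocation per resource type and flag anything that exceeds the device's capacity. The sketch below is a minimal illustration; the device capacities and sub-block numbers are invented for the example, not figures for any real part.

```python
# Minimal sketch of a top-level resource-budget check. Capacities and
# allocations below are hypothetical, not from a real device datasheet.

DEVICE_CAPACITY = {"LUT": 300_000, "RAM": 1_000, "DSP": 1_500}

def check_budgets(budgets, capacity=DEVICE_CAPACITY):
    """Return the resources whose summed sub-block allocations exceed capacity.

    `budgets` maps each sub-block name to its allocated resources, e.g.
    {"video_pipe": {"LUT": 120_000, "RAM": 400, "DSP": 600}, ...}.
    The result maps each over-allocated resource to the requested total,
    so the team leader can see which budgets must be renegotiated.
    """
    totals = {res: 0 for res in capacity}
    for allocation in budgets.values():
        for res, amount in allocation.items():
            totals[res] += amount
    return {res: totals[res] for res in capacity if totals[res] > capacity[res]}
```

Run nightly alongside the top-level build, a check like this catches the over-subscription early, before two teams discover at integration time that their blocks cannot both fit.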

In-context synthesis refers to cases in which information about a design's sub-blocks is specified up front. In such cases, additional optimizations can be performed across the blocks during synthesis, yielding an overall improvement in timing performance. For example, consider a constant that is propagated across sub-blocks within a design: there is an opportunity to further optimize the circuit by removing the logic that the constant makes redundant. Without that context being known to the synthesis tool, such optimization would not be possible.
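The constant-propagation opportunity can be shown with a toy model. The sketch below folds constants through a tiny two-input gate netlist; the netlist format and gate set are invented for illustration, and real synthesis tools of course operate on far richer representations.

```python
# Toy illustration of cross-block constant propagation. The netlist
# encoding (output net -> (gate, input_a, input_b)) is hypothetical.

def propagate_constants(netlist, constants):
    """Fold known-constant nets through AND/OR gates, deleting dead logic.

    `constants` maps nets known to be constant (e.g. a tied-off output of
    another sub-block) to 0 or 1. Gates whose output becomes constant are
    removed from the netlist, modelling the logic that a context-aware
    synthesis tool could strip out entirely.
    """
    consts = dict(constants)
    changed = True
    while changed:
        changed = False
        for out, (gate, a, b) in list(netlist.items()):
            va, vb = consts.get(a), consts.get(b)
            value = None
            if gate == "AND" and (va == 0 or vb == 0):
                value = 0                      # AND with 0 is always 0
            elif gate == "OR" and (va == 1 or vb == 1):
                value = 1                      # OR with 1 is always 1
            elif va is not None and vb is not None:
                value = (va & vb) if gate == "AND" else (va | vb)
            if value is not None:
                consts[out] = value            # output is now a constant...
                del netlist[out]               # ...so the gate is dead logic
                changed = True
    return netlist, consts
```

For example, if one sub-block drives net `en` permanently low and another sub-block gates `data` with it (`y = en AND data`), the AND gate folds away; without visibility into the first block, the tool would have to keep it.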

Additionally, the tools should be flexible enough to allow selective boundary optimization when critical paths do appear between sub-blocks of the design. A precise, surgical approach to limiting changes to only the logic that's immediately involved with the critical path is desired, preventing the disruption of the rest of the design that may have already been verified.
