What will it take for FPGAs to become as ubiquitous as processors?

In which we mull the rising number of FPGA design starts, the falling number of ASIC design starts, and the impact that this is having on the EDA industry...

The inspiration for this blog came from an article I read recently that talked about the rising number of FPGA design starts, the falling number of ASIC design starts, and the impact that this is having on the EDA industry.

It also pointed out that FPGAs still account for only a tiny fraction of semiconductor revenues, so I thought I would add my own 2 cents to this discussion, not by regurgitating the data that has already been put out there, but by looking at why FPGAs have not become as prevalent as the ever so humble microprocessor.

Let’s look at the microprocessor. It is a very inefficient device for almost every task: it is slow, it consumes a lot of power per unit of computation, and it is one of the largest possible implementations, in terms of chip area, of the desired functions. (That bears a lot of similarity to the negative aspects of the FPGA, except that the FPGA is better on almost all of those counts.) So why on earth did this device ever become so popular? I think it comes down to a few simple reasons – simplicity, independence, and abstraction. Let me explain what I mean by each of those and point out why FPGAs, at the moment, do not meet the necessary expectations.

Simplicity – to program a processor you need little or no knowledge of how the processor actually works, only the paradigm set forth by Von Neumann. Instructions are fetched from memory and executed in order. Registers inside the device hold temporary values, values can be moved between memory and registers, and certain instructions can change where the next instruction will come from – and let’s face it, that is about it.
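That entire mental model fits in a few lines. Here is a toy sketch (a made-up instruction set, not any real ISA) of the fetch-execute loop the paragraph describes:

```python
# Toy Von Neumann machine: fetch an instruction, execute it, move on.
# Instructions, registers, and memory layout here are invented for illustration.
def run(program, memory):
    regs = {"r0": 0, "r1": 0}
    pc = 0  # program counter: where the next instruction comes from
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "load":        # memory -> register
            regs[args[0]] = memory[args[1]]
        elif op == "add":       # register arithmetic
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "store":     # register -> memory
            memory[args[1]] = regs[args[0]]
        elif op == "jump":      # change where the next instruction comes from
            pc = args[0]
    return memory

mem = run([("load", "r0", 0), ("load", "r1", 1),
           ("add", "r0", "r0", "r1"), ("store", "r0", 2)],
          {0: 2, 1: 3, 2: 0})
print(mem[2])  # 5
```

The programmer only needs the paradigm, not the gate-level details of how fetch, decode, and execute are implemented.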

With the instruction set understood, it then just becomes a matter of logically thinking through the process. Just think what would have happened if reverse Polish notation had become the way in which everything was done. My first calculator used reverse Polish, and while it was more efficient, it was also a lot more complex to get right. You had to constantly think about what was on the stack and about the ordering of operations several steps into the future – or, to look at it another way, you had to constantly think about the processing pipeline.
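To make the contrast concrete, here is a minimal sketch of evaluating a reverse Polish expression. Even in this tiny evaluator, the programmer has to keep the stack state in their head and plan operand order steps in advance:

```python
# Minimal RPN evaluator: "3 4 + 2 *" means (3 + 4) * 2.
def rpn(tokens):
    stack = []
    for tok in tokens.split():
        if tok in "+-*/":
            b = stack.pop()   # note the order: the second operand pops first
            a = stack.pop()
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[tok])
        else:
            stack.append(float(tok))
    return stack.pop()

print(rpn("3 4 + 2 *"))  # 14.0
```

Getting `a` and `b` backwards for `-` or `/` is exactly the kind of stack-ordering mistake the paragraph is describing.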

How do FPGAs compare? A study was performed by BDTi in early 2010. The focus of the study was ESL high-level synthesis tools targeting FPGAs. Their conclusion was that the high-level synthesis tools were great, but that they were let down by the back-end FPGA tools, which required far too much hardware knowledge in order to achieve success. There we have problem #1. If we cannot insulate the user from the details of how the FPGA works, then it will never attain the same level of dominance as the processor.

Independence – with a processor, it is never necessary to verify the processor itself when you want to verify the software. Software can be developed in isolation, and in many cases it is even possible to develop the software without knowing which processor it will execute on. This allows total team size to grow without the problem that faces most engineering teams: that adding people adds to the communications and management overhead.

Now I agree that there are hardware/software boundary issues, but this is not with the processor, it is usually with everything else in the hardware. The software team can operate completely independently except for that interface between them. This independence comes from the encapsulation methods used for a processor.

This is very similar to the way in which hardware used to be designed using discrete logic. Anyone who has been around as long as I have will remember the 7400 series of devices. We did not have to worry about interoperability issues or integration problems so long as we followed a few simple rules.

Can we do the same for an FPGA? Can we have one team developing the code for the FPGA while an independent team develops the environment in which the FPGA will operate? I don’t think so. We can’t even integrate pieces of IP within an FPGA without problems. And we most certainly cannot develop and verify the code for the FPGA and not have to re-verify it after it has been mapped to the device. I talked about this problem a few blogs ago.

Abstraction – software development has not stood still since those early days. Programmers wanted to be able to move above the instruction level of coding for all but the most highly optimized parts of the system. They wanted even more independence such that code would be even more portable, but more than that - they wanted the efficiencies that come from abstraction.

Abstraction raises productivity in every aspect of the flow – design, development, debug, verification… Now I have seen the charts that show hardware productivity increasing at a higher rate than software productivity. There are two reasons for this. The first is that hardware started at such a low level of abstraction that it has been catching up. We are now beginning to see people able to develop hardware using C, C++ and SystemC - languages that have been the staple of the software industry for decades (well, not SystemC). That same BDTi report gave the high-level synthesis tools a fairly strong endorsement for usability, even though they do not cover the whole language. This will improve over time.

The second reason is that the tools created for the hardware portion are so much better than the tools available to software engineers. That is, of course, partially a reflection of cost. Software developers do not want to pay a lot of money for tools, whereas ASIC developers have traditionally paid very high dollar amounts per seat. This directly impacts the investment in the tools. The early tools for programmers came from the processor manufacturers, just as the FPGA vendors put out free tools today; the difference is that the processor tools were adequate for the task. The FPGA tools are not, and even most of the tools you can buy from other vendors still fall short in many respects.

So, I think that FPGAs have a long way to go if they are ever going to attain the same kind of dominance as the humble processor. We need to stop thinking about them as hardware devices and start thinking about them more as programmable, encapsulated pieces of functionality and simplify their usage model. I do see some of this happening, but not enough and not fast enough.

Apple turned the MP3 player from a techno geek device into one for the masses by removing complexity and even some capability. The same has to happen for the FPGA. Who will be the Steve Jobs for the FPGA?

I loved reading this post by the respected Brian and the comments from the various members of the hardware engineering community. I find myself interested in this topic and am researching how I can make the FPGA an easier platform to use for general-purpose activity. I am an M.Tech (Embedded Systems Design) student. I have less practical experience than the respected commenters, but I promise to show my dedication and hard work in this field. I humbly request all of you to help me by sharing and discussing the advancements taking place on this subject. Thank you. :)

What about approaches such as National Instruments LabVIEW FPGA? The FPGA essentially looks like a hardware accelerator. One could imagine architecting FPGAs so that they are optimized for a particular software development environment. The downside is that the resulting unit is like an SBC, not a chip.

Hi Brian!
It takes desire to realize the optimum performance of a system, and that can only be achieved through appropriate mapping of functions to processing elements. We will continue to see tighter integration of a variety of types of processing elements on the same die, giving the system designer choices for the optimal mapping of their functions.

Many domain-specific tools exist to assist the design, verification, and implementation of hardware and software. And some tools exist to assist the co-design of hardware and software. Most of the co-design tools are collections of tools from the domain-specific sets, simply bolted together with incremental value added. For many years we have created new languages and augmented languages to extend their domains. We have not done such a good job at creating new capabilities to enable the analysis of designs expressed abstractly and to successively refine the abstract into the explicit. We need tools that help us partition the system and enable trade-off analysis of key parameters such as Size, Weight and Power (SWaP), and performance, as implemented on the variety of processing elements. Such tools would accelerate the decisions system designers need to make to realize an optimum system and deliver orders-of-magnitude returns over current methods and tools.

So, FPGAs will become as ubiquitous as processors because they will be one and the same, and we will have tools to design at an abstract level, simplify the process, and reduce the time to deliver products that meet customer requirements. I am encouraged by the efforts of the big 3 and the start-ups to provide these tools.

Standardization will indeed help a lot: a huge IP (app store) infrastructure, easily deployable. I think the good news is that things are moving in a positive direction towards a mix; both FPGA vendors and processor vendors have realized it (it will take a while, indeed).
The FPGA guys (hard processors, IP leverage, HLLs) and the big guys (Intel's Stellarton direction) indicate the future is going to be a mix of both, and efforts are under way.
I agree with Jagdish that we need to look at the heterogeneous computing devices that are emerging rather than comparing them in absolute terms. There are opportunities for IP vendors, EDA vendors, new startups, and academia to leverage this trend (and find research/business opportunities), as things are indeed heading for a collision in the future.
Some more of this is in a changing-trends talk I gave a few months back at FPL-10:
http://conferenze.dei.polimi.it/FPL2010/presentations/W1_B_1.pdf

I agree with Frank that FPGAs do not really compete with microprocessors. In fact, if we let the two technologies complement each other, then we have a wonderful solution, and such a thing is indeed happening around us with the integration of processor cores into FPGAs.
I feel we need to look at the situation with an attitude of combining the two technologies, rather than trying to contrast them.

Given a system consisting of an embedded, verified processor and several verified peripherals, the software is the unverified part. It seems that a software simulator is needed for verification under this approach. How many embedded processors have software simulators and debuggers? Or is the RTL of the processor simulated and the software debugged against the RTL?
I think the latter is the case, and it is doomed to failure.

Brian makes some excellent points. As with most types of design these days, verification is the major challenge. If we break down his key points, you’ll understand why:
Simplicity: The complexity is not in the way the design comes together; it is in verifying that the design works as planned. Debugging of FPGA designs is in the stone age relative to other devices. Debug tools that enable direct diagnosis of implementation issues are required for FPGAs.
Independence: The reason software development and debug is so productive on hardwired microprocessors is that the source-level debuggers generally run on the actual processor itself, meaning the code is being tested in a “Device Native” manner. What is needed for FPGAs is the same “Device Native” debugging environment, so the user sees the behavior of the code (in whatever form it originates) while in the software environment (i.e., the simulator). A Device Native approach like ours from GateRocket brings the microprocessor debug and verification use model to FPGAs.
Abstraction: The ability to work at higher levels of abstraction is nice – but to truly leverage its benefits from a verification standpoint, designers need to be able to freely move blocks from the implementation (inside the FPGA) back to the software world (without having to re-run synthesis and place and route). In this way, teams can preserve design intent from HDL through to silicon. We have a feature called SoftPatch that allows this type of interaction and flexibility; it takes advantage of higher-level design techniques, but also allows designers to work on only portions of the design.

Brian, here's a thought on your question:
"But why cannot FPGAs become more like processors?" .. The FPGA tools focus on hardware description, placement, and routing. Computers focus on the sequence of steps to perform a function. To date, the main focus of synthesis has been RTL/data flow, not control flow.
FPGAs need ways to map sequential functions into hardware.
By parsing the C code and turning the sequencing conditions into Boolean logic, the control logic can be put into memory blocks, and changes can then be made to the memory contents rather than re-doing total synthesis, place, and route.
Less focus on how the hardware works is what is required.
Of course, synthesis will have to be trained to leave some spares lying around, like we used to do with the 7400s.
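The control-in-memory idea the comment describes can be sketched as a table-driven state machine: the "control store" is just data (standing in for a block RAM holding next-state/output words), so changing behavior means rewriting the table, not re-running synthesis, place, and route. This is a hypothetical illustration; the states and encodings are invented:

```python
# Sketch of control-in-memory: the state machine's behavior lives in a
# table (standing in for an FPGA block RAM), not in fixed synthesized logic.
# Entries map (current_state, input_bit) -> (next_state, output_bit).
control_store = {
    ("idle", 0): ("idle", 0),
    ("idle", 1): ("busy", 1),
    ("busy", 0): ("busy", 1),
    ("busy", 1): ("idle", 0),
}

def step(state, inp, table):
    """One clock tick: look up the next state and output in the table."""
    return table[(state, inp)]

# Run a sequence of inputs through the controller.
state, outputs = "idle", []
for bit in [1, 0, 1, 1]:
    state, out = step(state, bit, control_store)
    outputs.append(out)
print(outputs)  # [1, 1, 0, 1]
```

Editing `control_store` changes the controller's behavior without touching the datapath, which is the analogue of rewriting memory contents instead of re-running the full FPGA flow.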

This analogy works well even if the level of platform complexity is different. I think the key difference is in the maturity of the third-party apps/IP ecosystem of the iPhone and FPGA platforms.
Before the advent of the iPod/iPhone, there was an existing, mature ecosystem of third-party IP providers in place (Sony Music selling “star IP” like Céline Dion), but delivering the IP on other media (CDs). Apple’s success was in making them converge on the same platform, replacing the CD with a file and the music store with a website (roughly). This IP ecosystem was already mature to the point where the IP (the CD) was completely decoupled from the platform (the CD player), and had been for a very long time (since before the cassette/Walkman). This IP/platform decoupling happened because people were asking for better performance (professional singers) and ever cheaper, more private concerts (portable, personal music). This sounds similar to the challenges facing embedded systems design today (performance, costs, TTM).
In the embedded systems space, no such third-party IP ecosystem (I mean system-level, not component-level) is yet in place. From a system-level IP point of view, we are still at the age of people buying their chips and tools (instruments) to create their own songs (internally produced IP) and to sell them through their own product lines (private concerts?). However, this way of proceeding has reached its limits in delivering more performance, cheaper and faster.
FPGAs are well qualified to overcome those challenges, but I agree with Dr.DSP: the key part resides in having a broader third-party system-level IP ecosystem that allows apps to be provided on those FPGAs. Would the iPhone have been so popular if Apple had expected people to program their apps themselves? Its success relies on the capacity to economically bring powerful third-party apps/IP to easily customize a common HW platform (iPhone/FPGA). Everybody has the same HW, but nobody has the same phone (apps/IP set).
More on my blog: www.pe-fpga.com