Monthly Archives: March 2015

Getting funding for a semiconductor startup is not getting easier. According to CBInsights, there have been fewer than 20 Angel/Seed and Series A deals in the US in each year since 2009.

So where is all the funding channeled? In 2013, close to a quarter of all VC funding was directed toward internet companies (Preqin, June 2013). If you were a VC, you'd probably do the same. Internet companies are more agile: they can easily change their product and business model, and transform the entire company direction overnight. On top of that, the investment is a fraction of what's needed for a semiconductor startup, and let's face it: internet companies are a bit sexier.

With maskset prices only going up, raising money is not going to get any easier. So what are your options as a fabless startup? While we can't solve the funding challenge, here are a few tips that may help you limit your expenditures.

1. Collaborate with Foundries and Assembly Houses

As a new startup you must be constantly looking for a new competitive edge, whether it's price, performance, or power consumption, and there is a lot to gain in the production phase as well. New packaging technologies can reduce the total device cost, and closer collaboration with the foundry can give you a better understanding of how to optimize your design for better performance.

On the other hand, assembly houses and foundries are looking for new requirements to drive new products and technologies; without direct contact with the market, they miss the real-life demands that enable new features.

2. Partner with IDMs

While Qualcomm, Broadcom, and other large IDMs are looking to gain market share and beat the competition, perhaps you can help them. By using an IDM's distribution channels you can get closer to your customers, and even closer to getting some cash.

The tricky part is to find the IDM that needs your technology and needs it now. Try to connect with business development folks to see if your offering makes sense for their sales strategy. There have been several startup-IDM engagements in the past that ended in a happy marriage.

3. Find Great Semiconductor Suppliers

To lower risk and reduce time to market, you need great partners. Use the AnySilicon search tool to identify vendors that can support you quickly. Some suppliers offer very professional solutions and services together with creative business models that can help reduce the upfront investment. A combination of technical and commercial engagement can be key to your success and a real value-add for a small startup.

AnySilicon's directory is a good place to start when you are kicking off a new project. The number of vendors in the database is increasing daily and offers huge diversity, from ASIC design, wafer supply, and package design to FIB services: everything you need to realize an ASIC.

4. Engage with Customers before you Start the Design

We have been involved in semiconductor products for many years and have seen the same big mistakes consistently repeated. One of the biggest concerns product definition: engineers and managers believe they know what customers want without ever meeting or talking to them.

In any company, and especially in hardware-based companies, there is a lot of money involved in product re-definition: changing a product means money. For fabless companies, adding a hardware-related feature (vs. a software-related one) can cost a lot. Adding functionality to the silicon involves design and verification of the ASIC, a new maskset, production test updates, and sometimes a new package type. This requires a huge capital investment and at least 6-12 months of delay.

5. Think Vertical

There are many examples of fabless companies changing their business models and ending up selling a product instead of a chip. The motivation behind this step was mostly that they could not find any customers for their silicon, so they decided to design, build, and sell the end product themselves.

There are great examples of chip companies selling end products. SanDisk is one of them. We all know them as the USB-stick company, but the core competence of the company is actually ASICs. The end product is different and obviously generates much more profit.

This means that instead of selling chips, you may want to think vertical: can you sell an entire product, or perhaps a module, based on your chip?

The corner-based timing signoff approach is a historical and traditional method that has justified the development and enhancement of conventional STA tools and signoff flows. The number of signoff corners grows exponentially with the increase in variation sources, their magnitude, and timing margins. It becomes a bottleneck in the design flow and leads to a risk of silicon failure, over-margining, over-design, and losses in System-on-Chip (SoC) performance, timing yield, cost, etc. It causes a timing signoff deadlock and still does not guarantee against a silicon failure. This paper examines the situation and outlines possible solutions.

Chapter 2 discusses the corner-based timing signoff methodology and the number of corners it uses. It explains why the corner number grows exponentially and is becoming a challenge: it increases the duration of timing signoff, makes timing closure difficult, and worsens most design metrics. The corner-based timing signoff is a justification for the current design flow and contemporary STA/SSTA signoff tools. It has multiple impacts on the design flow, Time-to-Market (TTM), cost, SoC performance F, timing yield Y, etc. It becomes a problem for getting the most benefit from moving to the next advanced technology nodes.
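The combinatorial growth described above comes from multiplying the settings of each independent variation source. A quick sketch illustrates the effect (the source and setting counts below are hypothetical, not taken from the paper):

```python
from math import prod

def corner_count(settings_per_source):
    """Number of signoff corners = product of the number of settings
    chosen for each independent variation source."""
    return prod(settings_per_source)

# Classic PVT signoff: 2 process x 2 voltage x 2 temperature settings
print(corner_count([2, 2, 2]))        # 8 corners

# Advanced node: more sources (e.g. add RC-extraction and aging
# settings) and more settings per source, so the count explodes
print(corner_count([3, 3, 3, 5, 2]))  # 270 corners
```

Each extra source multiplies, rather than adds to, the number of STA runs, which is why corner count quickly dominates signoff duration.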

Chapter 3 discusses the conventional timing signoff methodology in detail. It starts with a definition of the current timing closure and the timing yield. It shows that the conventional timing signoff does not support the timing yield as a design signoff requirement, which becomes a challenge. Then, the timing derating (margin) methods of contemporary STA tools, which should cover for variations, are considered. An increase in variation sources and their magnitude leads to losses in SoC performance and diminishes other design metrics. Some limitations and drawbacks of current derating methods are considered, and it is then shown that Statistical STA (SSTA) tools provide a partial solution but are not a panacea. Later in this chapter, we consider signoff optimism and conservatism (pessimism), different variability sources and, finally, the timing signoff deadlock.
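As a rough illustration of the derating (margin) idea discussed above, a setup check with flat OCV derates might be sketched as follows. The derate factors and delays are illustrative values, not taken from any tool or from the paper:

```python
def setup_slack(data_path, clock_path, period,
                late_derate=1.10, early_derate=0.90):
    """Pessimistic setup check: the launching data path is derated
    late (slower), the capturing clock path is derated early (faster).
    All delays and the period are in ns; positive slack means the
    check passes under these margins."""
    data_arrival = sum(data_path) * late_derate
    capture_edge = period + sum(clock_path) * early_derate
    return capture_edge - data_arrival

# Hypothetical nominal cell delays in ns
data_path  = [0.30, 0.25, 0.45]   # launch clock + combinational logic
clock_path = [0.20, 0.15]         # capture clock tree
print(setup_slack(data_path, clock_path, period=1.5))
```

Widening the derates (larger margins) shrinks the reported slack, which is exactly the over-margining and performance loss the chapter describes.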

Chapter 4 outlines new advanced timing signoff paradigms and methods that have been developed mainly at Abelite Corp. (POCV is Synopsys' upcoming method). Namely, it discusses the following four options that may be adopted by the EDA industry: Option 1, enhancing the AOCV derating method; Option 2, switching to Parametric OCV (POCV); Option 3, developing pseudo-statistical tools; Option 4, developing statistical Monte Carlo-based tools. Options (1) and (2) may be combined with minimizing the corner number and with a detailed examination of variations in the risky (timing-critical) paths found. Options (1), (2) and, partially, (3) may be considered if a company is using the corner-based signoff and this is a must-use method for the company no matter what it takes. Options (3) and (4) may be more beneficial, as shown in the chapter. However, Options (3) and (4) are computationally expensive, especially Option 4, and there is a challenge with their validation. It is not likely they will replace AOCV/POCV in the best-in-class PrimeTime [1] and ICC [2] tools, but they can be used on top of PrimeTime (after a PT run).

During the last decades, important advances in microelectronic techniques and technologies have been fueling the introduction of new wireless-enabled products accessible to a large number of people around the world. Without the competitive price offered by CMOS, the widespread use of complex wireless-enabled devices, e.g. smartphones and tablets, would certainly have been delayed. At the same time, these advances have been inspiring research and industrial institutions to develop the next generation of such products. Thanks to large efforts in research and development, the digital part of a radio transceiver has been growing to accommodate the complex signal synthesis necessary to transmit the constantly growing amount of data exchanged by the most recent wireless-connected devices.
While advanced-node CMOS processes have helped integrate systems-on-chip (SoC) hosting complex digital basebands, these processes have shown their limits in generating the analog signal to be radiated. The generation and radiation of this signal present many challenges. A large power level is required to guarantee a resilient and reliable connection between two devices located from centimeters to kilometers away from one another. At the same time, this signal should not interfere with, or suffer interference from, any other signal, external or internal to the device. Furthermore, the generation of such a signal should be as efficient as possible to guarantee long battery life for mobile or remote user convenience.

This paper provides an overview of the different wireless transceiver analog architectures and the related radio frequency front-end (RF FE) modules. It describes some of the challenges associated with the integration of the transceiver blocks in CMOS processes and provides a survey of the current efforts to address these challenges. It is structured as follows: after this short introduction, Section II provides a short description of the most common analog architectures. Section III describes the challenges associated with the integration of the key blocks of the RF FE for each of these architectures and details some examples of successful implementations in different CMOS processes. Section IV provides some conclusions based on the previous sections.

ANALOG FRONT-END ARCHITECTURES

A radio transmitter generates a signal modulated by the information to be transmitted. A radio receiver recovers and interprets this information after demodulating the signal. Both blocks are part of a transceiver, which has four complementary functions: signal modulation/demodulation, frequency translation (upconversion/downconversion), power amplification (high power/low noise), and radiation/reception.

There are two different transceiver analog architectures, see Fig. 1.a and 1.b. The first one, the heterodyne, is more robust to interference thanks, in part, to a single or double intermediate frequency (fI) translation stage. The second one, the homodyne, has no fI stage, making it more susceptible to interference and other issues but easier to integrate. In modern wireless devices such as mobile phones, many radio transceivers must coexist, one for each specific communication standard. The homodyne architecture is therefore preferred due to its reduced die area and low off-chip passive component count (e.g. SAW filters).

Depending on the way the transmitted/received signal is synthesized/decomposed, homodyne architectures can be Cartesian or polar. If the signal is synthesized/decomposed from/to the in-phase (I) and quadrature-phase (Q) baseband information, the architecture is Cartesian (Fig. 1.a); it is polar if the signal is synthesized/decomposed from/to the envelope and phase information (Fig. 1.b). Even though the polar architecture needs extra digital blocks, it has recently been gaining a lot of momentum among industrial and research institutions, mainly because many of the signal emission/reception constraints fall on the digital blocks and not on the RF ones, which therefore become easier to integrate. The following section describes some of the integration challenges for the key RF blocks of modern wireless transceivers.

CMOS RF FRONT-END

In order to optimize power consumption and reduce area and cost, the ultimate goal is to integrate the whole transceiver, including its RF front-end elements. On the receiver side, after frequency band filtering, the signal should be amplified to allow the following blocks to extract the embedded data with little or no error. The amplification stage (low noise amplifier, LNA) should therefore add as little noise as possible to the received signal. Since that noise is mainly a function of the LNA's active devices, a reduced transistor noise figure relaxes design constraints. Fig. 2 shows the 1/f noise of two identically sized transistors fabricated in the Altis ATS-130RF process, one with a low-noise extra process mask and the other without it. From this figure, it is easy to understand the advantage of using the first device for LNA design. A largely successful receiver module integrated in top-selling 3G/3G+ mobile phones was designed using this extra process mask: the LNA noise figure was reduced by up to 1.2dB with a minor gain penalty of 0.5dB. Low-noise transistors also support the design of, among other things, low-noise phase-locked loops (PLLs) for the wideband frequency synthesizers required in modern wireless transceivers. Mobile phones, tablets, and other devices must receive/transmit data and voice at different frequencies under multiple standards: GSM, WCDMA, LTE, Wi-Fi, Bluetooth, NFC, GPS, etc.

Fig. 2. Flicker noise minimization
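The reason the LNA's noise figure matters so much for the whole receive chain can be seen from Friis' cascade formula. The sketch below uses hypothetical gain and noise-figure values (only the 1.2dB NF / 0.5dB gain deltas come from the text above):

```python
from math import log10

def db_to_lin(db):
    """Convert a dB power ratio to a linear ratio."""
    return 10 ** (db / 10)

def cascade_nf_db(stages):
    """Friis cascade formula. `stages` is a list of (gain_dB, nf_dB)
    pairs, the first element being the stage closest to the antenna."""
    total_f, total_gain = 1.0, 1.0
    for gain_db, nf_db in stages:
        total_f += (db_to_lin(nf_db) - 1.0) / total_gain
        total_gain *= db_to_lin(gain_db)
    return 10 * log10(total_f)

# Hypothetical two-stage chain: LNA followed by a mixer
standard  = [(15.0, 3.0), (10.0, 10.0)]   # standard transistor LNA
low_noise = [(14.5, 1.8), (10.0, 10.0)]   # low-noise mask: NF -1.2dB, gain -0.5dB
print(cascade_nf_db(standard))    # chain NF dominated by the LNA
print(cascade_nf_db(low_noise))   # most of the 1.2dB improvement survives
```

Because later stages' noise is divided by the LNA gain, almost all of the transistor-level improvement shows up directly in the system noise figure.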

On the transmitter side, a lot of effort has been devoted to power amplifier (PA) integration. As CMOS process nodes shrink, supply voltages decrease, making it more difficult for an integrated solution to generate the large amount of power required. This issue is compounded by the lack of active devices able to handle the large voltage swings typical of PA output stages. Furthermore, since the PA module draws a large amount of current from the battery of a mobile device, power conversion efficiency is a key factor for long battery autonomy. The larger leakage, lower supply voltage, and lack of robust high-voltage active devices in deep-submicron processes compared with 130nm and/or 180nm CMOS make the latter more suited for PA integration. The same is true for the very last stages of the transmitter RF front end: the switch and the antenna tuning.

Fig. 3. RF front end (a) ET and (b) EER topologies

Modern wireless communication standards aim to transmit large amounts of data modulated in complex signal waveforms. A highly linear amplification stage is required to minimize distortion. Highly linear PAs are less efficient than nonlinear ones; nevertheless, some techniques help to partly overcome the nonlinearity while keeping high efficiency levels. Meant to be integrated with polar architectures, envelope tracking (ET), see Fig. 3.a, and envelope elimination and restoration (EER), see Fig. 3.b, are two RF front-end topologies that help minimize the distortion introduced by highly efficient nonlinear PAs. In both, the information contained in the envelope of the signal is sampled (ET) or extracted (EER) before amplification and reintroduced by modulating the PA supply. In the ET topology the signal is amplified by a nonlinear PA (e.g. deep class AB), while in the EER topology the signal, from which the envelope information has been removed, can be amplified by a switch-mode PA. Switch-mode PAs could theoretically attain an efficiency of 100%, but in practice this efficiency ranges up to 40-50% [1], [2]. Deep class AB PAs typically have efficiencies ranging up to 30-40% [2], [3].
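The efficiency benefit of supply modulation can be sketched with an idealized class-B model, in which drain efficiency scales with the ratio of output swing to supply voltage. The numbers below are illustrative, not measurements from the text:

```python
from math import pi

def class_b_efficiency(v_out, v_dd):
    """Idealized class-B drain efficiency: (pi/4) * (Vout / Vdd).
    Peak efficiency pi/4 (~78.5%) is reached only at full swing."""
    return (pi / 4) * (v_out / v_dd)

envelope = [0.5, 1.0, 2.0, 3.0]   # instantaneous output swing in V
v_dd_fixed = 3.3

# Fixed supply: efficiency collapses at low envelope levels
fixed = [class_b_efficiency(v, v_dd_fixed) for v in envelope]

# Envelope tracking: the supply follows the envelope plus a small
# (hypothetical) 0.3V headroom, keeping efficiency near its peak
tracked = [class_b_efficiency(v, v + 0.3) for v in envelope]

for v, f, t in zip(envelope, fixed, tracked):
    print(f"Vout={v:.1f}V  fixed={f:.0%}  ET={t:.0%}")
```

Since real modulated waveforms spend most of their time well below peak envelope, tracking the supply raises the average efficiency far more than the peak numbers alone suggest.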

At the last stage of the RF transmitter, after the PA module and before the antenna filter, a switch is required to commute between the receiver and transmitter paths. This switch should be able to withstand the large voltage swing generated by the PA while remaining highly linear and adding low noise. Between the antenna and the switch, an antenna tuning module is desirable to minimize the power loss due to antenna mismatch. The PA, antenna switch, and antenna tuner have remained three separate module chips in most modern transceivers. Furthermore, their requirements from a technology process point of view differ from one another. For PA integration, the most important features are low leakage and low-parasitic, sufficiently fast RF transistors able to sustain a large enough voltage swing, together with good-quality passive elements (inductors, capacitors, resistors). For switch integration, the main process figure of merit (FOM) is the RON·COFF product of the active devices. For antenna tuning integration, the most important parameters are the RON and the linearity of the active devices and substrate, together with good-quality capacitors of different capacitance densities. A few companies have introduced pure CMOS PA modules for the wireless mobile device market [4], [5] with moderate success. None has offered pure CMOS switch and/or antenna tuning modules. While most of the three modules are designed and fabricated in compound processes such as GaAs or SiGe BiCMOS, a promising solution has been gaining a lot of attention recently: CMOS silicon-on-insulator (SOI). Innovative solutions integrated in CMOS-SOI, such as the one recently introduced by a major integrated wireless communication solutions provider [6], offer highly integrated modular RF front ends, customizable by world region. While this solution remains modular, it represents a big step in the development of fully integrated CMOS transceivers, RF FE included.

CONCLUSIONS

From what has been presented, a high degree of process customization and optimization is required to help design and integrate the different blocks of modern CMOS RF front ends. Specialty foundries, by facilitating this customization and optimization, help develop the next generation of transceivers. By accompanying innovative small, medium, and large RF solution providers from the specification of their technical needs to final production, specialty foundries provide what is frequently out of reach for large foundry providers: circuit-process co-design, essential to achieve the high integration levels that allow the latest technical developments to be offered affordably to a large public.

Bob Colwell, Director of the Microsystems Technology Office (MTO) at the Defense Advanced Research Projects Agency (DARPA), recently made a compelling statement about the end of Moore’s Law: “When Moore’s Law ends, it will be economics that stops it, not physics. Follow the money.”

If we move beyond Moore's Law, the progress of other industries that depend on the semiconductor industry will also slow. The interrelation of the chip industry and this doubling of computer processing power has allowed for half a century of exponential growth. Colwell sees 7 nm as the end of the road (although not all experts agree) and predicts that Moore's Law will hit its limits around 2020 or 2022. It should be noted that an end of Moore's Law at 7 nm would come from physical limits. The semiconductor industry is likely to meet economic limits earlier than physical ones because of a transformation of the US economy from free-market capitalism to monopoly capitalism.

As a result of monopoly capitalism, the consumer purchasing power of the majority in the economy has shrunk. There is a growing gap between wages and employee productivity in the global economy, which has resulted in a loss of economic balance. When capitalism is reformed into a free-market enterprise that works for all citizens in an economy, the result is an economic democracy.

To sustain the progress of the semiconductor industry through the progress of Moore's Law, economic reforms become critical to allow semiconductor companies to justify the ever-increasing capital-intensive investments required. Consider the features of mass capitalism that would help address the present economic limits of Moore's Law:

1. Mass capitalism would ensure an economic and monetary policy in which there is no valueless hoarding of wealth by a few individuals and any valueless hoarding gets converted into valuable investments for sustaining the progress of Moore’s law.
2. It would ensure maximum utilization and rational distribution of all available resources in an economy.
3. It would optimize business operations in the semiconductor industry in such a way that the potential of all employees would be properly utilized toward the progress of Moore's Law.
4. It would redesign corporate human resources policies in order to encourage optimum utilization of all employees’ potential. However, organizations would also have to adjust properly to utilize that potential.
5. It would also ensure that the process of utilizing employees’ potential is not the same for all employees of the semiconductor industry. While it would encourage better methods of utilization to be continually developed, the process of utilization would be progressive in nature.

If mass capitalism becomes a reality, the result would be robust growth of consumer purchasing power in the economy. By bringing back free markets, supply and demand would grow in proportion, resulting in balanced economic growth, low income taxes on individuals, higher investments, increased motivation for employees to work hard, and growth of the overall economy. I believe that mass capitalism is the path forward for the US and global semiconductor industry to reach the next level of innovation and financial success.

When these reforms become a reality, even if the improvements from one process generation to the next are smaller, macroeconomic growth in the overall economy would be very high. Through such profound macroeconomic reforms, consumer purchasing power, and hence the prosperity of the overall economy, would be very high. With high economic demand, the demand for the latest and greatest electronic products will continue to grow.

Such robust consumer demand would force the semiconductor industry to make investments and manufacture products to meet it. In this way, mass capitalism envisions sustaining the progress of Moore's Law to overcome the economic limits imposed by monopoly capitalism. These reforms would usher in an era of high prosperity and overturn Colwell's hypothesis: if Moore's Law comes to an end, it will be due to physics, not economics.

_____________________________________________________

This is a guest post by Apek Mulay that was originally published on EBN.

The corner-based timing signoff approach is a historical and traditional method that has justified the development and enhancement of conventional STA tools and signoff flows. The number of signoff corners grows exponentially with the increase in variation sources, their magnitude, and timing margins. It becomes a bottleneck in the design flow and leads to over-margining, over-design, and losses in System-on-Chip (SoC) performance, timing yield, costs, etc. It causes a timing signoff deadlock and still does not guarantee against a silicon failure. This paper examines the situation and outlines possible solutions.

The corner-based timing signoff methodology and the number of corners it uses increase the duration of timing signoff, make timing closure difficult, and worsen most design metrics. The corner-based timing signoff is a justification for the current design flow and contemporary signoff tools. It has multiple impacts on the design flow, Time-to-Market (TTM), cost, SoC performance F, timing yield Y, etc. It becomes a problem for getting the most benefit from moving to the next advanced technology nodes. You can find all the details in white paper [1]. The same paper also discusses the conventional timing signoff methodology in detail. It provides a definition of the current timing closure and the timing yield. It shows that the conventional timing signoff does not support the timing yield as a design signoff requirement, which becomes a challenge. Then, the timing derating (margin) methods of contemporary STA tools, which should cover for variations, are considered. An increase in variation sources and their magnitude leads to losses in SoC performance and diminishes other design metrics. Some limitations of current derating methods are considered, and it is then shown that Statistical STA (SSTA) tools provide a partial solution but are not a panacea. Later in the paper [1], we consider signoff optimism and conservatism (pessimism), different variability sources and, finally, the timing signoff deadlock.

Chapter 3 provides design and timing signoff recommendations and tips that will minimize delay variations and, in most cases, are the same for the corner-based methodology and the new statistical methodologies. They include discussions on corners and minimizing their number, using useful skew, and variations in paths with zero and useful skew.

Chapter 3 also provides important design recommendations and timing signoff tips on how to minimize delay variations in cells by reducing slew and load.

There has been substantial discussion about power supply noise and its harmful effects inside wireless electronic devices such as cell phones and WiFi- or Bluetooth-enabled portable devices. Due to increased pressure to extend battery life in portable devices, switching (DC-DC) regulators have been used extensively in their design because of their inherently high efficiency. But DC-DC regulators are often the primary source of power supply noise in any system. This noise must be filtered by additional circuitry, which often results in increased design complexity and cost.

Figure 1. Common implementation of a DC/DC and LDO combination to produce 1.2V from 3.6V.

One common method to reduce switching supply noise is to place a linear regulator immediately after the DC-DC regulator. Figure 1 illustrates this approach, where a 1.2V supply is created from a 3.6V battery using a step-down DC/DC regulator and an LDO. Efficiency, board space, and cost are all sacrificed simply to reduce the ripple voltage. Furthermore, the fact that a linear regulator behaves like a high-pass filter for input ripple, with rejection that falls off at higher frequencies, is often overlooked. A simple linear regulator may not be able to filter the switching noise of a modern DC/DC regulator designed for portable devices, so the extra cost and effort may be completely wasted if the power delivery system is not planned properly.

The ripple rejection, or Power Supply Rejection Ratio (PSRR), of a commercially available LDO is shown in Figure 2. The ripple rejection of this particular LDO is close to 60dB (1000X) at low frequencies of 10-1000Hz and less than 30dB (about 30X) at around 100kHz. Ripple rejection drops to 0dB (1X) above 600kHz (no attenuation at all). A 100mV ripple at 10Hz would be reduced to roughly 100μV by this LDO. However, the same 100mV ripple at 100kHz is only lessened to around 3mV, and at 1MHz it would simply pass through the LDO without any reduction at all.

Figure 2. Ripple Rejection from a Typical LDO.
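The arithmetic behind those figures is straightforward: PSRR is a voltage ratio, so the attenuation factor is 10^(PSRR/20). A small helper (using the PSRR values quoted above) makes the numbers easy to check:

```python
def output_ripple(ripple_in_v, psrr_db):
    """Attenuate an input ripple voltage by the LDO's PSRR (in dB).
    PSRR is a voltage ratio, hence the divide-by-20 in the exponent."""
    return ripple_in_v / (10 ** (psrr_db / 20))

# 100mV input ripple at three frequencies, PSRR values from Figure 2
print(output_ripple(0.100, 60))  # ~100 uV at 10 Hz   (60 dB = 1000X)
print(output_ripple(0.100, 30))  # ~3.2 mV at 100 kHz (30 dB ~ 32X)
print(output_ripple(0.100, 0))   # 100 mV at 1 MHz    (0 dB, no rejection)
```

This is why an LDO chosen on its low-frequency PSRR alone can do essentially nothing against the ripple of a regulator switching at 1MHz.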

Employing an LDO to reduce ripple voltage and noise in a system requires high ripple rejection (PSRR) at the frequencies of interest, not merely at low frequencies such as 10Hz or 100Hz. Ideally, the selected LDO should have high PSRR at the switching frequency of the chosen DC-DC regulator for it to be effective. Figure 3 shows the measured ripple voltage of a commercially available DC-DC switching regulator designed for portable applications (operating in low-power mode).

Figure 3. Output of a commercially available DC/DC regulator operating in low power mode.

A good power management solution for wireless portable electronic systems should be low noise, efficient, small, and cost-effective. Preferably, an efficient DC-DC switching regulator that can maintain a low ripple voltage would be the best choice. Still, using an LDO with today's DC-DC regulators can help lower high ripple voltages, at the price of lower system efficiency, higher cost, larger board space, more heat, and reduced overall battery life.

_______________________________________________________________

This is a guest post by Aivaka, a provider of tailored power management solutions.