semiconductor – HPCwire
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

RISC-V Startup Aims to Democratize Custom Silicon
https://www.hpcwire.com/2016/07/13/risc-v-startup-aims-democratize-silicon/ (July 13, 2016)

Momentum for open source hardware made a significant advance this week with the launch of startup SiFive and its open source chip platforms based on the RISC-V instruction set architecture. The founders of the fabless semiconductor company — Krste Asanovic, Andrew Waterman, and Yunsup Lee — invented the free and open RISC-V ISA at the University of California, Berkeley, six years ago.

The progression of RISC-V and the launch of SiFive open the door to a new way of chip building that skirts prohibitive licensing costs and lowers the barrier to entry for custom chip design. The traction around RISC-V and other open source hardware efforts, like the Facebook-initiated Open Compute Project (and, to some extent, the growing diversity in the processor space), reflects a demand for more openness and choice, and may indicate the beginnings of a revolution similar to the one Linux started on the software side.

Jack Kang, SiFive's vice president of product and business development, addressed the significance of an open instruction set architecture and the broader trend toward open hardware.

“The economic demise of Moore’s law can no longer be disputed,” he shared. “The cost per transistor is no longer decreasing. The fixed cost to start a new design continues to rise. Due to these factors, we have seen incredible change in the semiconductor industry. The industry has been set up for the past 30, 40 years based on Moore’s law. How they engineer chips, what products they build, how they work with customers, all of that is based on 30+ years of legacy. Last year, we saw over $100B in mergers & acquisition activity in the semiconductor space, due to these factors and the requirement to look for larger and larger customer volume sockets.”

Designing a custom chip can cost tens or even hundreds of millions of dollars, said SiFive Co-founder Yunsup Lee in an official statement. “It is simply impossible for smaller system designers to get a modern, high-performance chip, much less one customized to their unique requirements.”

SiFive sees custom silicon as an opportunity for markets that are not being adequately served by traditional semiconductor companies. The founders want to democratize access to custom silicon beyond the big players, extending it to inventors, makers, startups, and the smallest companies. Included here are fragmented or new markets that lack the volume or revenue required under the conventional proprietary semiconductor approach, Kang said.

Target markets for SiFive span machine learning, storage and networking, as well as the fast-growing IoT market, which the company is addressing with the launch of two platforms:


The Freedom U500 Series — part of the Freedom Unleashed family — includes a Linux-capable embedded application processor with multicore RISC-V CPUs running at 1.6 GHz or higher, with support for accelerators and cache coherency. The SoC is manufactured by TSMC on a 28nm process and targets the machine learning, storage and networking space. The U500 supports PCIe 3.0, USB 3.0, Gigabit Ethernet, and DDR3/DDR4.

The Freedom E300 Series, the first product in the Freedom Everywhere family, is aimed at the embedded microcontroller, IoT and wearables markets. The 180nm TSMC chip implements small and efficient RISC-V cores with RISC-V compressed instructions, shown to reduce code size by up to 30 percent, according to the company.

Kang said that he and his colleagues have been witnessing the benefits of a growing RISC-V ecosystem. To this point, the RISC-V Foundation has more than doubled its membership since January: at the last RISC-V workshop that month there were only 16 member companies, Kang reports, and the roster now includes 40, among them heavyweights Google, Microsoft, IBM, NVIDIA, Hewlett Packard Enterprise, AMD, Qualcomm, Western Digital and Oracle.

SiFive timed its launch to coincide with the 4th RISC-V workshop, happening this week in Boston, where the founders demoed both platforms.

While SiFive is focusing on the embedded and industrial space, the opportunity exists to use RISC-V for other purposes, including server-class silicon. The ISA's designers sought to ensure that it would support implementation in an ASIC, FPGA or full-custom architecture. Earlier this year at the Stanford HPC Conference, MIT's Kurt Keville said that RISC-V addresses several of the exascale challenges included in the DOE's oft-cited Exascale report. RISC-V also works well as a teaching tool in academia, said Keville, having a fraction of the instructions of x86 (177 versus roughly 3,000) and about a fifth that of ARMv8 (which has about 1,000 instructions).

There is even a chapter in the RISC-V ISA manual covering a variant of the RISC-V ISA that supports a flat 128-bit address space, which has promise for future extreme-scale systems.

Here the manual notes:

“At the time of writing, the fastest supercomputer in the world as measured by the Top500 benchmark had over 1 PB of DRAM, and would require over 50 bits of address space if all the DRAM resided in a single address space. Some warehouse-scale computers already contain even larger quantities of DRAM, and new dense solid-state non-volatile memories and fast interconnect technologies might drive a demand for even larger memory spaces. Exascale systems research is targeting 100 PB memory systems, which occupy 57 bits of address space. At historic rates of growth, it is possible that greater than 64 bits of address space might be required before 2030.”
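
The arithmetic behind those bit counts is easy to verify; here is a minimal Python check (whether "PB" is read as 10^15 bytes or as 2^50 bytes, the resulting bit counts come out the same):

```python
import math

def address_bits(nbytes: int) -> int:
    """Smallest number of address bits that can index nbytes of byte-addressable memory."""
    return math.ceil(math.log2(nbytes))

PB = 2**50  # one pebibyte in bytes; 10**15 gives the same bit counts

print(address_bits(1 * PB))    # 50, matching the manual's figure for 1 PB of DRAM
print(address_bits(100 * PB))  # 57, matching the 100 PB exascale target
```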

At the time of launch, SiFive has one announced customer, Microsemi Corporation, which is also a partner for its FPGA dev boards. The company’s SoC business unit worked with SiFive to build a complete RISC-V sub-system and tool-chain targeting its low power SmartFusion2 SoC FPGA platform. FPGA Freedom platforms are available now.

“We think the industry needs to change,” Kang reflected. “Open-source hardware has the potential to be the solution this industry needs [and] RISC-V has the benefit of being designed for modern software stacks and modern circuit techniques. It’s simple, modern, and clean.”

IBM Reports Carbon Nanotube Transistor Breakthrough
https://www.hpcwire.com/2015/10/01/ibm-reports-carbon-nanotube-transistor-breakthrough/ (October 1, 2015)

Perhaps Moore's law isn't doomed just yet. Maybe. IBM Research (NYSE: IBM) reported in a paper in Science today a technique for making carbon nanotube transistors with tiny (~9nm) contacts that exhibit low, size-independent resistance. This overcomes a huge hurdle in shrinking transistor size beyond current limits.

“I think this is the first carbon nanotube transistor demonstration with such a small, low resistance contact,” said Shu-Jen Han, manager of the Nanoscale Science & Technology Group at IBM Research and an author on the paper (End-bonded contacts for carbon nanotube transistors with low, size-independent resistance).

“This is critically important for extending Moore’s law,” Han continued. “We all know the carbon nanotube has excellent electrical properties; the carriers move much faster in carbon nanotubes than silicon. That’s why we are all, including IBM, so interested in them. The big challenge has been contact size. I would argue it’s now more important than the channel [in the efforts to shrink transistors].”

Here’s a portion of the paper’s abstract:

“Carbon nanotubes provide high-performance channels below 10 nanometers, but as with silicon, the increase in contact resistance with decreasing size becomes a major performance roadblock. We report a single-walled carbon nanotube (SWNT) transistor technology with an end-bonded contact scheme that leads to size-independent contact resistance to overcome the scaling limits of conventional side-bonded or planar contact schemes. A high-performance SWNT transistor was fabricated with a sub–10-nanometer contact length, showing a device resistance below 36 kilohms and on-current above 15 microampere per tube. The p-type end-bonded contact, formed through the reaction of molybdenum with the SWNT to form carbide, also exhibited no Schottky barrier. This strategy promises high-performance SWNT transistors enabling future ultimately scaled device technologies.”
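
For context on the resistance figure in the abstract, it helps to know the theoretical floor (a hedged aside, not drawn from the paper as quoted here): a ballistic single-walled nanotube has four conducting channels, so its minimum two-terminal resistance is the quantum resistance h/4e², roughly 6.45 kilohms. A quick Python check shows the reported device sits within about a factor of six of that limit:

```python
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

r_q = h / (4 * e**2)  # quantum resistance of a ballistic SWNT
print(f"ballistic floor: {r_q / 1e3:.2f} kOhm")             # ~6.45 kOhm
print(f"reported 36 kOhm device: {36e3 / r_q:.1f}x floor")  # ~5.6x
```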

Earlier this summer, IBM unveiled the first 7 nanometer node silicon test chip, pushing the limits of silicon technology and ensuring further innovations for IBM Systems and the IT industry. By advancing carbon nanotube research as a replacement for traditional silicon, IBM hopes to pave the way for a post-silicon future and to deliver on the $3 billion chip R&D investment it announced in July 2014.

“These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems,” said Dario Gil, vice president of Science & Technology at IBM Research. “As technology nears the physical limits of silicon, new materials and circuit architectures must be ready to deliver the advanced technologies that will drive the Cognitive Computing era. This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected.”

Sooner than expected doesn't necessarily mean soon. Han says it may take 10 years or so to iron out all the problems.


“This is an important advance but there are many other challenges to be solved such as how to purify the nanotubes, how to place them properly, and we also made good progress there but when we are talking about new technology so many things have to be right. People tend to divide the technology into two parts, materials and the device. Solving the contact size is probably top challenge on the device side. There are still a bunch of issues on the materials side,” said Han.

Indeed the paper points out, “We have only demonstrated p-channel SWNT transistors using p-type end contacts. It will be difficult to form end-bonded n-type contacts to SWNTs in which electrons are directly injected into the conduction band of SWNTs with this carbide formation approach as metals with low enough work function tend to oxidize first rather than react with C. However, it is still possible to realize n-channel SWNT device operation even with end-bonded contacts to high work function metals through electrostatic doping in the vicinity of the source electrode.”

Caveats aside, this is an impressive advance. After decades of processor performance gains, clock rates have stalled in the 3-5 GHz range as silicon MOSFETs approach their physical limits. Carbon nanotubes are one of the most promising replacements for silicon in semiconductors. IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometers, roughly 10,000 times thinner than a strand of human hair and less than half the size of today's leading silicon technology.

“Single-walled carbon nanotubes (SWNTs) potentially offer the optimal performance as the channel material for ultrascaled FETs,” write Han and coauthors, “The SWNT saturation velocity is several times higher than that of Si, and the intrinsic thinness (~1 nm in diameter) of SWNTs provides the superior electrostatic control needed for devices with ultrashort Lch (channel length). Indeed, SWNT transistors with 9 nm Lch outperform the best Si MOSFETs with similar Lch.”

The key obstacle to ultrascaling carbon nanotube transistor technology has been forming low-resistance, scalable contacts; the recent work achieves that. Earlier work relied on so-called side-bonded contacts (conducting metal deposited along the length of the nanotube channel), which exhibit contact-length-dependent resistance: the smaller the contact area, the greater the resistance.
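
That length dependence is commonly described by the transfer-length model, in which current crowds into the leading edge of a side-bonded contact over a characteristic transfer length, so resistance climbs steeply once the contact shrinks below that length. A toy Python illustration of the trend, using a generic textbook form with made-up parameters rather than the paper's data:

```python
import math

def side_bonded_resistance(lc_nm: float, lt_nm: float = 20.0,
                           r_floor_kohm: float = 10.0) -> float:
    """Transfer-length-model trend: R ~ coth(Lc/LT), rising as ~1/Lc for short contacts."""
    return r_floor_kohm / math.tanh(lc_nm / lt_nm)

for lc in (100, 40, 20, 10, 5):
    print(f"Lc = {lc:>3} nm -> R ~ {side_bonded_resistance(lc):5.1f} kOhm")
```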

Schematics showing the conversion from a side-bonded contact (left), where the SWNT is partially covered by Mo, to an end-bonded contact (right), where the SWNT is attached to the bulk Mo electrode through carbide bonds while the carbon atoms from the originally covered portion of the SWNT diffuse uniformly out into the Mo electrode. Source: IBM

The IBM Research group overcame the challenge by developing an end-bonded contact in which “the SWNT channel abruptly ends at the metal electrodes through a solid-state reaction between the nanotube and deposited molybdenum (Mo) electrodes. Although the carrier injection area is limited to ~2 nm², no barrier was observed for hole transport and resistance remained low.”

“For any advanced transistor technology, the increase in contact resistance due to the decrease in the size of transistors becomes a major performance bottleneck,” said Han. “Our novel approach is to make the contact from the end of the carbon nanotube, which we show does not degrade device performance. This brings us a step closer to the goal of a carbon nanotube technology.”

Besides helping to extend Moore’s law, Han foresees many other interesting applications for carbon nanotube transistors such as the base material for flexible electronics and transparent electronics.

Unmasking the Speed Limit of Modern Electronics
https://www.hpcwire.com/2014/12/11/unveiling-speed-limit-modern-electronics/ (December 11, 2014)

For the first time, scientists have captured the essence of semiconductor computing on film by taking snapshots of the electron transfer from valence to conduction band states. It is this leap that forms the basis for the entire semiconductor industry, digital electronics and modern computing as we know it.

Using attosecond extreme ultraviolet (XUV) spectroscopy much like a stopwatch, the team of physicists and chemists based at UC Berkeley was able to time the step rise at roughly 450 attoseconds, shedding light on the fundamental speed limit of modern electronic circuitry.

Just how fast is this microscopic event? Consider that an attosecond is equal to one quintillionth of a second. Put another way, an attosecond is to a second what a second is to approximately 31.7 billion years.
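
The analogy checks out numerically; a one-line sanity check in Python:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds

attoseconds_per_second = 1e18              # one quintillion attoseconds per second
years = attoseconds_per_second / SECONDS_PER_YEAR
print(f"{years / 1e9:.1f} billion years")  # ~31.7, as stated above
```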

As explained by Berkeley science writer Robert Sanders, the age of digital electronics is based on mobile electrons making a semiconductor material conductive so that the application of light or voltage results in a flowing current. In a computer chip, electronic current flowing across transistors facilitates the switch between two binary states, zero and one, giving rise to the fundamental language of computers.

The key event occurs when electrons attached to atoms in the crystal lattice jump from the valence shell of the silicon atom across the band gap into the conduction band. Previous generations of femtosecond lasers were unable to glimpse this event, which takes place within a quadrillionth of a second after laser excitation, far faster than the slower lattice motion of the silicon atomic nuclei.

“Though this excitation step is too fast for traditional experiments, our novel technique allowed us to record individual snapshots that can be composed into a ‘movie’ revealing the timing sequence of the process,” said Stephen Leone, UC Berkeley professor of chemistry and physics.

The attosecond extreme ultraviolet (XUV) spectroscopy responsible for the breakthrough recording was developed in the Attosecond Physics Laboratory, which is operated by Leone and Daniel Neumark, UC Berkeley professor of chemistry.

The experimental data was supported by supercomputer simulations of the excitation process and the subsequent interaction of X-ray pulses with the silicon crystal. A team from the University of Tsukuba and the Molecular Foundry at the Department of Energy’s Lawrence Berkeley National Laboratory performed the computing using resources provided by Lawrence Berkeley National Laboratory, the National Energy Research Scientific Computing Center (NERSC) and the Institute of Solid State Physics, University of Tokyo. Funding for the project was provided by the US Department of Defense and the Defense Advanced Research Projects Agency’s PULSE program.

The UC Berkeley researchers, together with colleagues from Ludwig-Maximilians-Universität in Munich, Germany, the University of Tsukuba in Japan, and the Molecular Foundry at Lawrence Berkeley National Laboratory, describe their findings in the Dec. 12 issue of the journal Science.

Deconstructing Moore's Law's Limits
https://www.hpcwire.com/2014/08/18/deconstructing-moores-laws-limits/ (August 18, 2014)

For the past five decades, computers have progressed on a predictable trajectory, doubling in speed roughly every two years in tune with Gordon Moore's oft-cited observation-turned-prophecy. Although semiconductor scaling continues to yield performance gains, many perceive a tipping point is nigh, where the cost-benefit analysis of further miniaturization breaks down.

The latest researcher to weigh in on this tipping point, commonly referred to as the death of Moore's law, is University of Michigan computer scientist Igor Markov. In a recent article in the journal Nature, Markov tackles the issue not just in terms of the physical limits of integrated-circuit scaling, but as a culmination of limiting factors in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. Weighing these limitations, as well as emerging alternative technologies, Markov outlines “what is achievable in principle and in practice.”

“What are these limits, and are some of them negotiable?” he asks. “On which assumptions are they based? How can they be overcome?”

“Given the wealth of knowledge about limits to computation and complicated relations between such limits, it is important to measure both dominant and emerging technologies against them.”

Advanced techniques such as “structured placement,” shown here and developed by Markov’s group, are currently being used to wring out optimizations in chip layout. Different circuit modules on an integrated circuit are shown in different colors. Algorithms for placement optimize both the locations and the shapes of modules; some nearby modules can be blended when this reduces the length of the connecting wires.

The Nature article addresses the more obvious physical limitations in materials and manufacturing as well as limits related to design and validation, power and heat, time and space, and information and computational complexity. Markov recounts how certain previous limits were circumvented, and compares loose and tight limits. An overview of emerging technologies includes the reminder that these can also indicate as yet unknown limits.

“When a specific limit is approached and obstructs progress, understanding the assumptions made is key to circumventing it,” remarks an NSF writeup of the research. “Chip scaling will continue for the next few years, but each step forward will meet serious obstacles, some too powerful to circumvent.”

“Understanding these important limits,” says Markov, “will help us to bet on the right new techniques and technologies.”

Putting the 'Silicon' Back in Silicon Valley
https://www.hpcwire.com/2013/09/13/putting_the_silicon_back_in_silicon_valley/ (September 13, 2013)

A blog article, titled “Silicon Valley Must Reinvigorate the Semiconductor Industry,” points to a steadily shrinking fab industry and calls on Silicon Valley to refocus its efforts on the transformation of this sector forthwith.

Angel Orrantia, who penned the article, is the Director of Business Development at SKTA InnoPartners LLC, a startup accelerator that secures seed funding for early stage companies.

He writes that “investments in fabless semiconductor startups have been steadily decreasing in both number and dollars since 2000.”

He attributes the drop-off to two primary reasons. There are fewer companies bowing out via mergers and acquisitions. The number has fallen from a high of more than 120 in 2000 to a low in 2009 of just over 40. The number of IPOs has likewise dropped. While there were 26 in 2000, there were none in the years 2002, 2008, and 2009. In 2012, fewer than five fabless semiconductor companies went public.

During the same time, there’s been a parallel rise in software startups. It’s no coincidence, remarks Orrantia.

“With limited exit options, venture capitalists and other investors are drawn to the higher and faster returns of software companies,” he writes. “The semiconductor industry has largely been abandoned by all except the most ardent believers.”

As we enter this age of software-defined everything, it's easy to forget that hardware and physical networking undergird the entire computing platform. Yet other countries have not fallen prey to this way of thinking. China, Taiwan, India – they all continue to invest in semiconductor research, design and manufacturing, says Orrantia.

The issue hasn’t yet reached the “brain-drain” tipping point, in Orrantia’s view, but he notes that the current model is not sustainable if Silicon Valley and by extension the US is to maintain its leadership position in semiconductors and enterprise hardware.

Orrantia proposes a different model to help usher in a re-energized semiconductor sector, one the Global Semiconductor Alliance (GSA) also espouses: the “capital-lite” structure. The foundation of this model involves matching a startup with a strategic partner who provides guidance on a range of issues. The startup is assured funding, facilities, and professional services, as well as a guaranteed exit. The strategic partner can use the startup's solution to meet a product portfolio need or enter a new market.

“Properly executed, this model drastically shortens development times and increases the number of entrepreneurial pipe-dreams that become industry creating innovations,” notes Orrantia. “It’s a win-win for the Silicon Valley semiconductor industry.”

STARnet Alliance Seeks Revolution in Chip Design
https://www.hpcwire.com/2013/01/23/darpa-led_starnet_alliance_seeks_revolution_in_chip_design/ (January 23, 2013)

The Defense Advanced Research Projects Agency (DARPA) and the Semiconductor Research Corporation (SRC) have launched a new consortium to advance the pace of semiconductor innovation in the US as the technology approaches the limits of miniaturization.

The main thrust of the project is the creation of the Semiconductor Technology Advanced Research Network, aka STARnet, a network of six Semiconductor Technology Advanced Research centers, tasked with providing “long-term breakthrough research that results in paradigm shifts and multiple technology options.”

At each of the six STARnet university hubs – University of Illinois at Urbana-Champaign, University of Michigan, University of Minnesota, Notre Dame, University of California at Los Angeles and University of California at Berkeley – researchers will pursue CMOS-and-beyond technologies with an emphasis on design, software, system-level verification, and validation. By assessing and eliminating technological barriers identified by the International Technology Roadmap for Semiconductors (ITRS) and engaging in pre-competitive exploratory research, the teams will help secure the continued success of the nation’s microelectronics and defense industries.

DARPA and contributing companies have allocated $194 million in joint funding. Although the specific dollar amount varies according to their individual contracts, each STARnet center will receive more than $6 million annually for up to five years. The project is administered by Microelectronics Advanced Research Corporation (MARCO), a subsidiary of SRC.

The multi-disciplinary, collaborative effort draws upon the expertise of 148 faculty researchers and 400 graduate students from 39 universities. In addition to DARPA and SRC, members include the U.S. Air Force Research Laboratory, the Semiconductor Industry Association (SIA), and eight industry partners: Applied Materials, GLOBALFOUNDRIES, IBM, Intel Corporation, Micron Technology, Raytheon, Texas Instruments and United Technologies.

The semiconductor industry, a $144 billion market in the US, has so far benefited from a seemingly endless cycle of transistor shrinks, but Moore’s Law is waning. While researchers will likely find a way to squeeze silicon for another decade or so, there are undeniable physical limitations associated with the nanoscale frontier.

“The dimensions of the transistors of today are in the tens of atoms,” explains Todd Austin, professor of electrical engineering and computer science and C-FAR director. “We can still make them smaller, but not without challenges that threaten the progress of the computing industry.”

With microelectronics so tied to the nation’s security and economy, it’s imperative that these challenges are addressed. In the words of SRC Executive Director Gilroy Vandentop, “STARnet is a collaborative network of stellar research centers finding paths around the fundamental physical limits that threaten the long term growth of the microelectronics industry.”

A breakdown of the six multi-university teams and their primary areas of research:

The Center for Future Architectures Research (C-FAR), led by the University of Michigan, is focused on computer systems architectures for the 2020-2030 timeframe. They anticipate that application-driven architectures that can leverage emerging circuit fabrics will be key to extending the life of CMOS technology. Participating universities include Columbia, Duke, Georgia Tech, Harvard, MIT, Northeastern, Stanford, UC Berkeley, UCLA, UC San Diego, Illinois, Washington and Virginia.

The Center for Function Accelerated nanoMaterial Engineering (FAME), led by the University of California, Los Angeles, is studying nonconventional materials, including nanostructures with quantum-level properties. The research seeks to support analog, logic and memory devices for “beyond-binary computation.” Participating universities include Columbia, Cornell, UC Berkeley, MIT, UC Santa Barbara, Stanford, UC Irvine, Purdue, Rice, UC Riverside, North Carolina State, Caltech, Penn, West Virginia and Yale.

The Center for Low Energy Systems Technology (LEAST), led by the University of Notre Dame, will investigate new materials and devices for their potential to enable low-power electronics. Participating universities include Carnegie Mellon, Georgia Tech, Penn State, Purdue, UC Berkeley, UC San Diego, UC Santa Barbara, UT Austin and UT Dallas.

The Center for Systems on Nanoscale Information Fabrics (SONIC), led by the University of Illinois at Urbana-Champaign, is exploring the benefits of transitioning from a deterministic to a statistical model of computation. Participating universities include UC Berkeley, Stanford, UC Santa Barbara, UC San Diego, Michigan, Princeton and Carnegie Mellon.

The TerraSwarm Research Center (TerraSwarm), hosted by the University of California, Berkeley, seeks to develop city-scale capabilities using distributed applications on shared swarm platforms. Participating universities include Michigan, Washington, UT Dallas, Illinois at Urbana-Champaign, Penn, Caltech, Carnegie Mellon and UC San Diego.

The Center for Spintronic Materials, Interfaces and Novel Architectures (C-SPIN), led by the University of Minnesota, is investigating spin-based devices that store and process information using electron spin rather than charge.

“Each of these six centers is composed of several university teams jointly working toward a single goal: knocking down the barriers that limit the future of electronics,” comments DARPA program manager Jeffrey Rogers.

“With such an ambitious task, we have implemented a nonstandard approach. Instead of several different universities competing against each other for a single contract, we now have large teams working collaboratively, each contributing their own piece toward a large end goal.”

The project founders believe that long-term research is necessary to bolster semiconductor innovation and ensure the future of US military and industry competitiveness. They state that while short-term programs are suitable for sustaining an evolutionary pace, longer-term efforts are necessary to spur revolutionary advances, especially in light of impending technology constraints.

“STARnet will perform longer-term, more broad-based research, with the goal of expanding the knowledge base of the semiconductor industry, [and] researchers at STARnet centers will generate ideas for technology solutions,” notes the program literature.

Industry partners gain access to bleeding-edge research subsidized through Department of Defense funding. And while SRC estimates that STARnet research technology likely won’t be commercially viable for at least another 10-15 years, members will be able to sub-license the resulting IP.

STARnet continues the work of the Focus Center Research Program (FCRP), a similar program that has been in place since 1997 but is set to conclude on Jan. 31, 2013.

Keeping Moore's Law Alive
https://www.hpcwire.com/2012/07/05/keeping_moores_law_alive/ (July 5, 2012)

Processor speed and power consumption are now at odds, which will force chipmakers to rethink their designs.

There has been a lot of discussion regarding the end of Moore's Law, almost since its inception. Renowned leaders in high performance computing and physics have predicted scenarios detailing how chip advancements will eventually come to a halt. Last week, IEEE Spectrum dedicated a podcast to the subject and talked about a number of design changes aimed at extending silicon's viability.

In a recent IEEE Spectrum article, associate editor Rachel Courtland explained that silicon has become increasingly difficult to work with as semiconductor manufacturers continue to push the physical limits of the technology. Transistors have become so small that they have begun to leak electrical current. This problem has led to a search for new technologies that may eventually replace or enhance conventional chip designs.

Courtland met up with Bernd Hoefflinger, editor of Chips 2020, a book written by experts in the field explaining their thoughts regarding the future of computing. In Courtland’s interview, Hoefflinger noted that computational performance is not the only issue at hand. The power consumed by these technologies has a profound impact on their practicality. Said Hoefflinger:

“They expect 1000 times more computations per second within a decade. If we were to try to accomplish this with today’s technology, we would eat up the world’s total electric power within five years. Total electric power!”

He was referring to Dennard scaling, which is related to Moore's Law: essentially, as transistors get smaller, they increase in speed and consume less power. Unfortunately, this phenomenon is losing steam, and overcoming the limitation has become a primary focus of semiconductor designers. Hoefflinger believes that if the energy needed to compute a simple multiplication could be reduced to 1 femtojoule, silicon will keep Moore's Law alive for the next decade. A femtojoule is roughly 10 percent of the energy fired from a human synapse.
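
A quick calculation shows why the 1-femtojoule target matters (a minimal sketch; the exascale-class rate is an illustrative assumption, not a figure from the article):

```python
ENERGY_PER_MULTIPLY = 1e-15   # joules: Hoefflinger's 1 fJ target

ops_per_second = 1e18         # a quintillion multiplies per second (illustrative)
watts = ENERGY_PER_MULTIPLY * ops_per_second
print(f"{watts:.0f} W")       # 1000 W: exascale-class multiply rates on a kilowatt
```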

To reach these low-power benchmarks, new 3-D circuit designs have emerged. Currently, 3-D chips have entered the market, using wires to connect multiple dies together. In addition, tri-gate or FinFET transistors have been developed, but Hoefflinger thinks that another design holds more promise.

According to him, 3-D merged transistors can be developed that combine two transistors into a single device. Instead of giving the p-doped and n-doped transistors their own gates, the two share a single gate, with a PMOS transistor on one side and an NMOS transistor on the other. These have sometimes been referred to as “hamburger transistors.”

Another method to reduce power has to do with how calculations are performed. For example, if multiplication were performed starting with the most significant bits (rather than the least significant bits), it could reduce the number of transistors required for a calculation. While the reduction might not drop the energy to one femtojoule, it may bring consumption down “by an order of magnitude or two”.
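
The intuition is that when only the top bits of a result matter, an MSB-first scheme can stop early and never form the low-order partial products. A toy Python illustration of that idea (a sketch of truncated multiplication in general, not Hoefflinger's specific circuit):

```python
def truncated_multiply(a: int, b: int, bits: int, keep: int) -> int:
    """Approximate a*b using only the top `keep` bits of each `bits`-wide operand."""
    drop = bits - keep
    a_hi = (a >> drop) << drop   # low-order bits zeroed: their partial
    b_hi = (b >> drop) << drop   # products never need to be computed
    return a_hi * b_hi

a, b = 48213, 61337              # two 16-bit operands
exact = a * b
approx = truncated_multiply(a, b, bits=16, keep=8)
print(f"relative error: {abs(exact - approx) / exact:.4%}")  # under 0.5% here
```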

Lastly, Hoefflinger suggested making chip circuit architecture more like communication circuitry. The change in design would allow for integrated error correction, which in turn permits lower operating voltages.

If all of these suggestions for power reduction are implemented, it may extend Moore’s Law beyond 2020. Hoefflinger believes it could go either way, but is encouraged by the fact that these issues are getting a lot of attention right now.

Novel Chip Technology to Power GRAPE-8 Supercomputer
https://www.hpcwire.com/2012/05/10/novel_chip_technology_to_power_grape-8_supercomputer/ (May 10, 2012)

With the fastest supercomputers on the planet sporting multi-megawatt appetites, green HPC has become all the rage. The IBM Blue Gene/Q machine is currently number one in energy-efficient flops, but a new FPGA-like technology brought to market by semiconductor startup eASIC is providing an even greener computing solution. And one HPC project in Japan, known as GRAPE, is using the chips to power its newest supercomputer.

GRAPE, which stands for Gravity Pipe, is a Japanese computing project that is focused on astrophysical simulation. (More specifically, the application uses Newtonian physics to compute the interaction of particles in N-body systems). The project, which began in 1989, has gone through eight generations of hardware, all of which were built as special-purpose supercomputer systems.

Each of the GRAPE machines was powered by a custom-built chip, specifically designed to optimize the astrophysical calculations that form the basis of the simulation work. The special-purpose processors were hooked up as external accelerators, using more conventional CPU-based host systems, in the form of workstations or servers, to drive the application.

The first-generation machine, GRAPE-1, managed just 240 single-precision megaflops in 1989. The following year, the team built a double-precision processor, which culminated in the 40-megaflop GRAPE-2. In 1998, they fielded GRAPE-4, their first teraflop system. The most recent system, GRAPE-DR, was designed to be a petascale machine, although its TOP500 entry showed up in 2009 as an 84.5-teraflop cluster.

Even though the GRAPE team was able to squeeze a lot more performance out of specially built hardware than they would have gotten from general-purpose HPC machinery, it's an expensive proposition. Each GRAPE iteration was based on a different ASIC design, necessitating the costly and time-consuming process of chip design, verification, and production. And as transistor geometries shrank, development costs soared.

As the GRAPE team at Hitotsubashi University and the Tokyo Institute of Technology began planning the next generation, they decided that chip R&D could take up no more than a quarter of system’s cost. But given the escalating expense of processor development, they would overshoot that by a wide margin. In 2010, they estimated it would take on the order of $10 million to develop a new custom ASIC on 45nm technology. So when it came time for GRAPE-8, the engineers were looking for alternatives.

The natural candidates were GPUs and FPGAs, which offer a lot of computational horsepower in an energy-efficient package. Each had its advantages: FPGAs in customization capability, GPUs in raw computing power. Ultimately though, they opted for a technology developed by eASIC, a fabless semiconductor company that offered a special kind of purpose-built ASIC, based on an FPGA workflow.

The technology had little grounding in high performance computing, being used mostly in embedded platforms, like wireless infrastructure and enterprise storage hardware. But the GRAPE designers were impressed by the efficiency of the technology. With an eASIC chip, they could get the same computational power as an FPGA for a tenth of the size and at about a third of the cost. And although the latest GPUs were slightly more powerful flop-wise than what eASIC could deliver, power consumption was an order of magnitude higher.

In a nutshell, the company offers something between an FPGA and a conventional ASIC. According to Niall Battson, eASIC’s Senior Product Manager, it looks like a field-programmable gate array, but “all the programming circuitry has been taken out.” That saves on both chip real estate and power since that circuitry doesn’t end up on the die.

In essence, the company is able to take an FPGA design (in RTL, for example) and produce an ASIC from it. But not a conventional one. Battson says their real secret sauce is that the custom logic is laid down in a single layer, rather than the four or five layers customized for conventional ASICs. That simplification greatly speeds up chip validation and manufacturing, so much so that they can turn around a production chip in 4 to 6 months, depending upon the complexity of the design.

While the logic density and power efficiency are less than that of a standard ASIC, the up-front costs are considerably lower. For customers whose volumes eventually warrant a “true” ASIC (like for disk drive controllers), eASIC provides a service that takes the customer’s design through that final step of hardening.

For the astrophysics simulation supercomputer, no such step was necessary. The 45nm chip eASIC built and delivered for the new GRAPE-8 system achieves close to 500 gigaflops (at 250 MHz) with a power draw of just 10 watts. The GRAPE-8 accelerator board houses two of these custom chips, plus a standard processor, delivering 960 gigaflops in 46 watts. When hooked up to a PC host, another 200 watts is added. Even in this makeshift configuration, the system achieves 6.5 gigaflops per watt, about three times better than the 2.1 gigaflops per watt held by IBM's Blue Gene/Q, the current Green500 champ.
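
The efficiency arithmetic is worth spelling out, since the per-board and per-system figures differ (a sketch; the assumption that two boards share one 200-watt host is inferred from the numbers rather than stated in the article):

```python
board_gflops, board_watts = 960, 46
host_watts = 200

# One board plus its host falls well short of the quoted system figure...
print(f"1 board:  {board_gflops / (board_watts + host_watts):.1f} GF/W")  # ~3.9

# ...so 6.5 GF/W implies roughly two boards sharing the host.
n = 2
print(f"{n} boards: {n * board_gflops / (n * board_watts + host_watts):.1f} GF/W")  # ~6.6
```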

Of course, the Blue Gene/Q is a general-purpose supercomputer, so the comparison is a bit apples-to-oranges. But the generality of computer designs exists on a continuum, not as a binary taxonomy. In general, better performance and power efficiency can be achieved as more specialization is incorporated into the hardware. The downside is that such single-application machines are notoriously expensive, which explains why there are so few of them. Besides GRAPE, only the Anton supercomputer (for molecular dynamics simulations) is still using application-specific ASICs.

The GRAPE designers are actually interested in building a more ambidextrous machine to handle a greater variety of science applications. In fact, the GRAPE-DR machine was a bit of a departure from its predecessors and was intended for applications outside of astrophysics simulations, including genome analysis, protein modeling and molecular dynamics.

According to Battson, a more general-purpose SIMD chip is certainly possible under an eASIC scheme, and they’re considering how they might be able to tweak their technology to make that happen. The company’s next generation 28nm product is slated to deliver twice the performance, while halving power consumption, so there is some headroom for added capabilities. The main problem he says is that a general-purpose SIMD ASIC would probably need to run twice as fast as the GRAPE-8 chip to deliver reasonable performance, and that drives up power consumption.

Of course, with the prospect of energy-sucking exascale machines on the horizon, application-specific supercomputing could make a comeback, especially if spinning out purpose-built accelerators was made fast and affordable. In that case, eASIC and its technology might find itself with a lot of eager suitors.

Cloud is History: The Sum of Trust
https://www.hpcwire.com/2010/07/05/cloud_is_history_the_sum_of_trust/ (July 5, 2010)

It is critical that there is a healthy cloud ecosystem for vendors and those who use their services, and such an ecosystem can only be propelled by trust. The longevity of the cloud computing paradigm as a form of outsourcing critical applications depends on this ecosystem, but trust can be easily compromised.

To continue where we left off with the last blog, this time we are focusing the discussion on trust. In considering cloud, this is probably the largest barrier we will encounter.

If we look at history, the issue of trusting someone else to perform what we view as a critical element of our business has been faced and successfully addressed before. Semiconductor companies once had to keep the entire manufacturing process under their direct oversight and control, because portions of that process were considered business-differentiating and proprietary, and because successful ASIC development required close coupling between the design and manufacturing processes (lots of iterative back-and-forth). As time marched on, capacity needs increased, complexity climbed, and costs rose with both, creating an ever higher barrier to entry for maintaining existing fabrication facilities or creating new ones. In the mid-1980s, we witnessed the birth of the first foundry, with TSMC coming onto the scene to create a differentiated business model (the fabless semiconductor model), in which engineering companies could focus on design alone and hand their designs off to TSMC to be manufactured. The fabless semiconductor industry is a $50B market today, and growing.

So, are the issues we face with datacenters today any different? Not really; it's a slightly different view of the same picture. The dynamics are the same: a non-linear cost increase, driven by growing capacity and complexity, forces a re-evaluation of the current position. The function is considered critical, sometimes differentiating and/or proprietary to the business, and is therefore maintained internally at present. And finally, the function deals directly with the company's core product, so security is a paramount concern. What we witnessed with fabrication facilities is that many companies were able to realize the cost benefits of outsourcing that function without damaging the business, so we should be able to follow that model to realize the cost benefits that cloud computing offers with respect to the datacenter. We even have a recipe for success to use as a template for what to do and how to do it.

What customers of cloud will be looking for from service providers is multi-faceted:

Budget control – making sure they can continue to do the right thing for their company from a cost perspective and continue to come up with creative ways to keep budgets under control. This includes avoiding lock-in to exclusive relationships: there need to be multiple vendor options so that there can be competition. In the same light, the solution they consume should be standards-based, so that moving to another provider is simple, straightforward, and not costly.

Do it my way – points to customer intimacy. The consumer company must understand the solution they are leveraging and the supplier must provide the solution in such a way that it makes sense to the customer. This sounds obvious, but in many cases, companies have been held hostage even by their own internal IT organizations through confusing terminology, overly complex descriptions of solutions, and territorial behavior. The customer should understand the solution on their terms, which implies that the service provider must intimately understand the customer’s core business. Customers should get the services and solution they need, which is something specific to their business, not something bootstrapped from another industry or something built for a different or generic purpose. And it is not sufficient to have really smart technology people on staff, and have the customer tell the service provider exactly what they need so the supplier can do the right thing – many times the customer doesn’t know what they need, they just want it to work right. That is why this needs to be domain specific, performed by domain experts in the customer’s space.

Honesty – do I believe you? The customer needs to have faith and confidence that the supplier is driven by the consumer's best interest. This means understanding intent, and distinguishing positive behavioral characteristics from negative ones. Any competitive or adversarial behavior is a tip-off that trust should be called into question.

Focus on my business, not yours (counter-intuitive concept). This is really the crux of the issue. If the customer can really believe that the supplier is looking out for customer interests first, and not only trying to tell the customer whatever they think they want to hear, only then will the customer allow the supplier to absorb responsibility from them for their infrastructure to help make them successful. This is key because if the customer has to continue to drive success and own all the responsibility, then nothing has really changed, and it is probably easier for the customer to continue keeping all the resource in-house where they have much more direct control over hire/fire, retention, resource caliber, etc.

As a result, cloud service providers will need to demonstrate many things in order to establish trustworthiness. From an intent standpoint, make sure the focus is on the end customer. In the EDA space, that would be the engineer. Understand the customer’s business to the point that you can help them do their job. This implies an intimate understanding of the tools, what they do, how they work, and where they fit as well as business model, economic drivers, and a solid grasp of the industry dynamics. Also, the supplier should maintain a long term view (strategic) in addition to a short term perspective (tactical). Always do the right thing now, but how solutions are designed to scale into the future can have significant cost impacts over time. Finally, it should always be relationship focused. The ability to judge trustworthiness is measured over time, and your every action defines the integrity and character of your organization.

The behavioral portion for the supplier is fairly straightforward. Deal with customers in a transparent, honest fashion. Don't hide things, don't play the poker game of masking your agenda, worrying about what you're leaving on the table, masking how much anyone is getting, or trying to optimize one variable in the whole equation (profit, one-sided benefit, etc.). Don't create win/lose scenarios, and don't try to get some undeserved benefit. Exchanges should always be appropriate and fair; avoid developing adversarial relationships. If a relationship turns adversarial, be open to walking away. Customers need to learn to conduct themselves in a trustworthy manner just as service providers do, and they have an equal hand in creating a trusting relationship. Make sure your relationships are cooperative, not competitive. If you compete with your customers about who is smarter or who is the better negotiator, or only believe a deal is good if you win and the customer loses, you are building a bomb, not a partnership.

There is an equal amount of responsibility on the consumer side of the equation in order to get a partner. From an intent standpoint the customer should make sure the focus is on the business problem (not departmental issues, not policy issues, not contract issues, etc.), and help the service provider navigate the customer internal process in order to keep the focus on the business problem. The customer also needs to make sure there is strong communication with regard to intended future direction for the company to ensure that plans are strategic and not only focused only on the present. The concept of relationship implies a mutual dependence, and it is recognized that interdependence creates risk/exposure, but also accomplishes the desired efficiencies, economies of scale, superior solutions, and optimizes economic benefit.

Behaviorally, the customer should also demonstrate transparency and honesty, not hiding information from the provider. Create an environment where the supplier can feel safe being open and honest. The customer wants to understand that they are not being taken advantage of, and that can happen in good ways or bad ways. We will talk more about the good ways in our next blog on organization changes. The good way is to have done all the homework necessary to know roughly what the right answer looks like prior to getting that answer (whether price, technical solution, or technology direction). There is a tremendous amount of work that goes into the development of instincts. The wrong answer, adversarial behavior – just pounding vendors for a better price or a better discount or more resources so that you feel you got a deal, without any comprehension of what an appropriate price or solution looks like, will have fatal results for trust and your relationship with your vendors. Competitive or adversarial behavior will result in an adversarial response, which causes a lack of honesty leading to no trust.

You should not worry about “am I getting a better deal than anyone else in the world,” or mask a lack of understanding by treating vendors brutally. Do your homework, know how much something is worth, and make sure you are getting an appropriate price and an appropriate solution. Don't try to optimize one variable in the whole equation (overly custom for no benefit, focusing only on cost, etc.), and don't create win/lose scenarios or expect to get something undeserved. Everyone needs to care about the health of the ecosystem. A lack of trust means you will not get good deals or appropriate solutions in the long run.

In conclusion, businesses should focus on their core competency. All non-core portions of the business should be considered for outsourcing, provided good business practices are in place. If there exists a trustworthy, cost-effective, customer-focused provider of non-core, non-strategically-differentiated functions of the business, those providers should be patronized. If not, create them; examples include the Global Foundries spin-off from AMD and the Jazz Semiconductor spin-off from Conexant. Outsourcing needs to be structured and contracted in a way that facilitates trustworthiness. Make sure the solution can be moved to an alternate provider without significant modification or cost. Avoid getting committed to vendor-locked-in solutions (hardware, software, people, or process). Make sure the solution is standards-based and non-proprietary. Make sure the solution can take advantage of new innovations immediately. Ensure that you negotiate built-in growth ramps for normal business evolution while maintaining flat (predictable) cost to the business (budget control). And make sure the solution scales with the business use case (up or down).

In a move that could revolutionize nanoelectronics manufacturing and the semiconductor industry, scientists at the Tyndall National Institute (Cork, Ireland) have designed and fabricated what they claim is the world's first junctionless transistor. The breakthrough is based on the deployment of a control gate around a silicon wire that measures just a few dozen atoms across.