HPCwire » silicon (http://www.hpcwire.com)
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

One Atom Thin Silicon Transistors Are Here
http://www.hpcwire.com/2015/02/07/one-atom-thin-silicon-transistors/
Sat, 07 Feb 2015

Move over graphene, there’s a new 2D wonder material being hailed as a potential Moore’s law extender, called silicene. This one-atom-thick two-dimensional crystal of silicon could be the ultimate miniaturization enabler, setting the stage for future generations of faster, more energy-efficient microchips.

A cousin to graphene, silicene consists of a single layer of atoms arranged in a honeycomb pattern, but where graphene is carbon-based, silicene is made from silicon atoms. It thus offers an easier path to integration for a microelectronics industry already dominated by silicon.

Traditional silicon-based transistors can only get down to about five atoms thick before becoming unpredictable and losing performance; the invention of a silicene transistor pushes that limit back to just one atom. And in computing, smaller means faster.

Silicene has had a rather remarkable journey. First theorized in 1994, it wasn’t until 2010 that researchers began making headway in developing the silicon analogue to graphene. Two years later, several teams around the world independently succeeded in creating silicene in the lab.

Now researchers at the University of Texas at Austin’s Cockrell School of Engineering are unveiling details of the first silicene transistor, a device crucial for logic operations.

The devices were developed by Deji Akinwande, an assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering, and his team, which includes lead researcher Li Tao. Their demonstration that silicene can be made into transistors is a major advancement in the search for alternative CMOS materials.

“Nobody could have expected that in such a short time, something that didn’t exist could make a transistor,” says Guy Le Lay, a materials scientist at Aix-Marseille University in France, who was one of the first scientists to create silicene.

Despite its promise, there are major challenges associated with silicene, such as its instability when exposed to air.

To reduce exposure to air, the researchers formed a silicene sheet by allowing a hot vapor of silicon atoms to condense onto a crystalline block of silver inside a vacuum chamber, then added a 5-nanometer-thick layer of alumina on top. Thus protected, the team was able to safely peel the sheet off its base and transfer it to an oxidized-silicon substrate. By scraping off some of the silver, the team exposed two islands of metal (acting as electrodes) with a strip of silicene between them.

The exposed silicene still degrades in about two minutes, but that window of time was sufficient to measure its properties. While its electrons were “sluggish” in comparison with graphene, silicene’s buckled structure endows it with a tuneable band gap, which graphene, being flat, lacks. Since band gaps are what give semiconductors the ability to switch currents on and off, they are the foundation of transistors.

The technique shows promise for other paper-thin, air-sensitive materials too, but its silicon nature makes silicene a serious contender for commercial adoption.

“Apart from introducing a new player in the playground of 2-D materials, silicene, with its close chemical affinity to silicon, suggests an opportunity in the road map of the semiconductor industry,” Akinwande said. “The major breakthrough here is the efficient low-temperature manufacturing and fabrication of silicene devices for the first time.”

Unmasking the Speed Limit of Modern Electronics
http://www.hpcwire.com/2014/12/11/unveiling-speed-limit-modern-electronics/
Fri, 12 Dec 2014

For the first time, scientists have captured the essence of semiconductor computing on film by taking snapshots of the electron transfer from valence to conduction band states. It is this leap that forms the basis for the entire semiconductor industry, digital electronics and modern computing as we know it.

Using attosecond extreme ultraviolet (XUV) spectroscopy much like a stopwatch, the team of physicists and chemists based at UC Berkeley was able to time the step rise at roughly 450 attoseconds, shedding light on the fundamental speed limit of modern electronic circuitry.

Just how fast is this microscopic event? Consider that an attosecond is equal to one quintillionth of a second. Put another way, an attosecond is to a second what a second is to approximately 31.7 billion years.
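That comparison is easy to verify with a quick back-of-the-envelope calculation (not from the article; the constants below are standard values):

```python
# Sanity check: an attosecond (1e-18 s) is to one second
# roughly what one second is to ~31.7 billion years.
ATTOSECONDS_PER_SECOND = 1e18

seconds_per_year = 365.25 * 24 * 3600        # ~3.156e7 seconds
ratio_in_years = ATTOSECONDS_PER_SECOND / seconds_per_year

print(round(ratio_in_years / 1e9, 1))        # prints 31.7 (billion years)
```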

As explained by Berkeley science writer Robert Sanders, the age of digital electronics is based on mobile electrons making a semiconductor material conductive so that the application of light or voltage results in a flowing current. In a computer chip, electronic current flowing across transistors facilitates the switch between two binary states, zero and one, giving rise to the fundamental language of computers.

The key event occurs when electrons attached to atoms in the crystal lattice jump from the valence shell of the silicon atom across the band gap into the conduction electron region. The previous generation of femtosecond lasers was unable to glimpse this event, which takes place less than a quadrillionth of a second after laser excitation, far faster than the slower lattice motion of the silicon atomic nuclei.

“Though this excitation step is too fast for traditional experiments, our novel technique allowed us to record individual snapshots that can be composed into a ‘movie’ revealing the timing sequence of the process,” said Stephen Leone, UC Berkeley professor of chemistry and physics.

The attosecond extreme ultraviolet (XUV) spectroscopy responsible for the breakthrough recording was developed in the Attosecond Physics Laboratory, which is operated by Leone and Daniel Neumark, UC Berkeley professor of chemistry.

The experimental data was supported by supercomputer simulations of the excitation process and the subsequent interaction of X-ray pulses with the silicon crystal. A team from the University of Tsukuba and the Molecular Foundry at the Department of Energy’s Lawrence Berkeley National Laboratory performed the computing using resources provided by Lawrence Berkeley National Laboratory, the National Energy Research Scientific Computing Center (NERSC) and the Institute of Solid State Physics, University of Tokyo. Funding for the project was provided by the US Department of Defense and the Defense Advanced Research Projects Agency’s PULSE program.

The UC Berkeley colleagues together with researchers from Ludwig-Maximilians Universität in Munich, Germany, the University of Tsukuba, Japan, and the Molecular Foundry at Lawrence Berkeley National Laboratory describe their findings in the Dec. 12 issue of the journal Science.

IBM Bets on Nanotubes to Succeed Silicon in 2020
http://www.hpcwire.com/2014/07/02/ibm-bets-nanotubes-succeed-silicon-2020/
Wed, 02 Jul 2014

The effect of five decades of exponential progress, with silicon chips doubling in speed every couple of years as observed by Intel cofounder Gordon Moore in 1965, cannot be overstated. As silicon-based transistors push against the limits of physics, the death of Moore’s law could pack a devastating blow to the industry and even the global economy. It’s a big problem that has chip makers, like IBM, Intel and others, scrambling for a workaround. One of the most promising strategies for extending Moore’s law involves carbon nanotube-based transistors.

Currently, Intel makes most of its CPUs on a 22nm manufacturing process, and its smallest silicon transistor measures 14 nanometers. The industry’s International Technology Roadmap for Semiconductors (ITRS) anticipates that the five-nanometer “node” will debut in 2019, a point that may very well spell the death of silicon from a practical standpoint. That’s the opinion of Wilfried Haensch, who heads up IBM’s nanotube project at the T.J. Watson research center in Yorktown Heights, New York.

“That’s where silicon scaling runs out of steam, and there really is nothing else,” says Haensch in an article in MIT Technology Review.

When this day comes, IBM wants to have its carbon nanotube-based processors ready to roll out. It’s a plan that’s been many years in the making.

IBM’s history with carbon nanotube transistors dates back to 1998, when company researchers showed that it was a viable approach by building one of the first working prototypes. Now IBM is working to bring the technology to commercialization.

According to simulations carried out at T.J. Watson research center, the design that IBMers are implementing will be five times faster than silicon-based microprocessors using the same amount of power. The technology, while very real, is still in the design stage, however, and there are no guarantees it will pan out.

IBM obviously has a lot of investment sunk into the silicon-based manufacturing process, so naturally the company is focusing on building a carbon-based transistor using similar design and manufacturing methods. The research group recently made chips with 10,000 nanotube transistors, using six-packs of nanotubes, each 1.4 nanometers wide and 30 nanometers long. The ends of the tubes make contact with electrodes that supply current, while a third electrode runs underneath and acts as a switch.

At this stage of design, the researchers cannot get the nanotubes close enough together, because existing chip technology doesn’t operate at that scale. They are working on a chemical solution in which helper compounds coax the tubes to self-assemble into position; those compounds would then be removed, leaving the nanotubes in the proper configuration, ready for the electrodes and other circuitry to be added.

A lot is riding on the research. If the nanotube transistors are not ready in time to meet the post-silicon demand, they may miss their market opportunity, according to IBM’s James Hannon, head of the company’s molecular assemblies and devices group. But there are not many other options out there. Possibilities like spintronics exist, but they’re less mature, and they don’t have the advantage of behaving like silicon transistors, so they wouldn’t be compatible with existing semiconductor manufacturing techniques.

Moore’s Law in a Post-Silicon Era
http://www.hpcwire.com/2014/01/10/moores-law-post-silicon-era/
Sat, 11 Jan 2014

When it comes to ushering in the next generation of computer chips, Moore’s Law is not dead, it is just evolving. So say some of the more optimistic scientists and engineers cited in a recent New York Times article from science writer John Markoff. Despite numerous proclamations foretelling Moore’s Law’s imminent demise, there are those who remain confident that a new class of nanomaterials will save the day. Materials designers are investigating metals, ceramics, polymers and composites that organize via “bottom up” rather than “top down” processes as the substrate for future circuits.

Moore’s Law refers to the observation put forth by Intel cofounder Gordon E. Moore in 1965 that the number of transistors on a silicon chip would double approximately every 24 months. The prediction has held through five decades of faster and cheaper CPUs, but it is running out of steam as silicon-based circuits near the limits of miniaturization. While future process shrinks are possible and 3D stacking will buy some additional time, pundits say these tweaks are not economically feasible past a certain point. In fact, the high cost of building next-generation semiconductor factories has been called “Moore’s Second Law.”
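The compounding implied by that cadence is worth making concrete. A small sketch (the 50-year span and 24-month doubling period come from the paragraph above; the rest is simple arithmetic):

```python
# Cumulative growth from doubling transistor counts every 24 months
# over five decades of silicon scaling.
years = 50
months_per_doubling = 24

doublings = years * 12 // months_per_doubling   # 25 doublings
growth_factor = 2 ** doublings

print(doublings, growth_factor)  # prints 25 33554432 (~33.5 million-fold)
```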

With the advantages of Moore’s Law-type progress hanging in the balance, semiconductor designers have been forced to innovate. A lot of the buzz lately is around “self-assembling” circuits. Industry researchers are experimenting with new techniques that combine nanowires with conventional manufacturing processes, setting the stage for a new class of computer chips that continues the price/performance progression established by Moore’s law. Manufacturers are hopeful that such bottom-up self-assembly techniques will eliminate the need to invest in costly new lithographic machines.

“The key is self assembly,” said Chandrasekhar Narayan, director of science and technology at IBM’s Almaden Research Center in San Jose, Calif. “You use the forces of nature to do your work for you. Brute force doesn’t work any more; you have to work with nature and let things happen by themselves.”

Moving from silicon-based manufacturing to an era of computational materials will require a concerted effort and a lot of computing power to test candidate materials. Markoff notes that materials researchers in Silicon Valley are using powerful new supercomputers to advance the science. “While semiconductor chips are no longer made here,” says Markoff referring to Silicon Valley, “the new classes of materials being developed in this area are likely to reshape the computing world over the next decade.”

Stanford Debuts First Carbon Nanotube Computer
http://www.hpcwire.com/2013/09/27/stanford_debuts_first_carbon_nanotube_computer/
Fri, 27 Sep 2013

As silicon-based electronics come up against the physical limitations of the nanoscale, researchers are scrambling to find a viable replacement that would breathe new life into Moore’s law and satisfy the demand for ever faster, cheaper and more energy-efficient computers. A new computer made of carbon nanotubes, created by a team of Stanford engineers, may be the first serious silicon challenger.

A scanning electron microscopy image of a section of the first ever carbon nanotube computer. Credit: Butch Colyear

Carbon nanotubes, long chains of carbon atoms, have remarkable material and electronic properties which make them attractive as a potential electronics substrate. The Stanford team, led by Stanford professors Subhasish Mitra and H.-S. Philip Wong, contends that this new semiconductor material holds enormous potential for faster and more energy-efficient computing.

“People have been talking about a new era of carbon nanotube electronics moving beyond silicon,” said Mitra, an electrical engineer and computer scientist at Stanford. “But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.”

According to a paper in the journal Nature, the simple computer is composed of 142 low-power transistors, each of which contains carbon nanotubes that are about 10 to 200 nanometers long. The prototype has about the same processing power as a 1970s-era chip, the Intel 4004, Intel’s first microprocessor.

“The system is a functional universal computer, and represents a significant advance in the field of emerging electronic materials,” write the authors in the Nature article.

The device employs a simple operating system that is capable of multitasking and can perform four tasks (instruction fetch, data fetch, arithmetic operation and write-back). The inclusion of 20 different instructions from the commercial MIPS instruction set highlights the general nature of this computer. For the demonstration, the team ran counting and integer-sorting workloads simultaneously.

Professor Jan Rabaey, a world expert on electronic circuits and systems at the University of California-Berkeley, noted that carbon had long been a promising candidate to replace silicon, but scientists weren’t sure if CNTs would be able to overcome certain hurdles.

While the first carbon nanotube-based transistors came on the scene about 15 years ago, the Stanford team showed that they could be used as the basis for more complex circuits.

“First, they put in place a process for fabricating CNT-based circuits,” explained Professor Giovanni De Micheli, director of the Institute of Electrical Engineering at École Polytechnique Fédérale de Lausanne in Switzerland. “Second, they built a simple but effective circuit that shows that computation is doable using CNTs.”

By showing that CNTs have a role in designing complex computing systems, other researchers will be more motivated to take the next step, potentially leading to the development of industrial-scale production of carbon nanotube semiconductors.

“There is no question that this will get the attention of researchers in the semiconductor community and entice them to explore how this technology can lead to smaller, more energy-efficient processors in the next decade,” observed Rabaey.

New Hope for Graphene-based Logic Circuits
http://www.hpcwire.com/2013/09/06/new_hope_for_graphene-based_logic_circuits/
Fri, 06 Sep 2013

For more than a half century, computer processors have increased in power and shrunk in size at a phenomenal rate, but the exponential advances described by Moore’s law are winding down. Electronics based on silicon complementary metal–oxide–semiconductor (CMOS) technology are coming up against the physical limitations of the nanoscale. Currently, there is no technology ready to take the place of CMOS, but a number of candidates are on the table, including graphene, a one-atom-thick layer of graphite. Research suggests this incredibly strong and lightweight material could provide the foundation for a new generation of nanometer-scale devices.

Scanning electron microscopy image of graphene device used in the study. The scale bar is one nanometer.

As an excellent conductor of heat and electricity, graphene is a promising electronics substrate, yet other characteristics of this material have stymied its progress as a silicon alternative. To address these limitations, researchers at the University of California Riverside have taken a completely new approach.

Semiconductor materials have an energy band gap, which separates the valence band from the conduction band and allows a transistor to be completely switched off. This on/off switch enables Boolean logic, the foundation of modern computing.

Graphene does not have an energy band gap, so a transistor implemented with graphene will be very fast but will experience high leakage currents and prohibitive power dissipation. So far, efforts to induce a band-gap in graphene have been unsuccessful, leaving scientists to question the feasibility of graphene-based computational circuits.

But Boolean logic is not the only way to process information. The UC Riverside team showed that it is possible to construct viable non-Boolean computational architectures with gapless graphene. Their solution relies on a specific current-voltage characteristic of graphene, a manifestation of negative differential resistance. The researchers demonstrated that this intrinsic property of graphene appears not only in microscopic-size graphene devices but also at the nanometer scale, a finding that could set the stage for the next generation of extremely small and low-power circuits.

“Most researchers have tried to change graphene to make it more like conventional semiconductors for applications in logic circuits,” Alexander Balandin, a professor of Electrical Engineering, said. “This usually results in degradation of graphene properties. For example, attempts to induce an energy band gap commonly result in decreasing electron mobility while still not leading to sufficiently large band gap.”

“We decided to take alternative approach,” Balandin continued. “Instead of trying to change graphene, we changed the way the information is processed in the circuits.”

Moore’s Law We Miss You Already
http://www.hpcwire.com/2013/08/29/moore_s_law_we_miss_you_already/
Thu, 29 Aug 2013

As transistors reach the limits of miniaturization, it is only a matter of time until Moore’s Law runs out of steam. The latest expert to weigh in says Moore’s Law will expire in 2020 at the 7nm node.

The prediction was made by Robert Colwell, director of the microsystems group at Defense Advanced Research Projects Agency (DARPA), during the Hot Chips conference.

“For planning horizons, I pick 2020 as the earliest date we could call it dead. You could talk me into 2022, but whether it will come at 7 or 5nm, it’s a big deal,” Colwell observed, as quoted in an EETimes piece.

The doubling of transistor density every 18-24 months was a boon to the entire computing industry, one that Colwell is already mourning. In his opinion, such exponentials represent “unsustainable heady growth,” and “such rides are rare.”

The observation made by Gordon Moore described a growth factor that boosted clock speeds from 1 MHz to 5 GHz over a 30-year span, a 3,500-fold improvement. Architectural innovations during the same timeframe delivered only about a 50x increase by comparison.

As Gordon E. Moore himself said in 2005: “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.”

DARPA has its eye on lots of alternative technologies, but Colwell is dismissive of the blind faith in an as-good-or-better replacement, saying that “you can’t fix the loss of an exponential.”

The list of contenders to maintain the pace of progress post-CMOS includes 3D stacking, and new architectures and switching technologies, as well as better human interfaces and even creative marketing, according to Colwell. DARPA is also investing in approximate computing and the use of spin-torque oscillators.

While it’s been proposed that single-atom transistors could extend Moore’s Law, Colwell believes that economics, not physics, will sound the final death knell. “So keep your eye on the money,” he counsels.

IBM Takes Step Toward Nanotube-Based Computing
http://www.hpcwire.com/2012/10/30/ibm_takes_step_toward_nanotube-based_computing/

From germanium to silicon, and finally down to carbon. Back in the 1940s, scientists at Bell Labs purified germanium, a heavier element in the carbon/silicon family, which was used to make the first transistors. For the last several decades, scientists have been making smaller and smaller transistors out of silicon, doubling the transistor density every twelve to eighteen months, in accordance with Moore’s Law.

Computing advances rely on the continued exponential growth of transistors per chip. However, there is a limit to how far silicon transistors can shrink, and there are signs that the trend has already slowed. While transistor density continues to increase, “clock speed,” the rate at which transistors can switch on and off, has not. As such, processing in parallel to overcome this limitation has gained increased importance.
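A minimal sketch of that shift, using Python’s standard library (the worker count and workload here are hypothetical, chosen only to illustrate spreading work across cores instead of waiting for a faster one):

```python
# With clock speeds flat, throughput gains come from parallelism:
# four worker processes each sum one quarter of the range.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    step = n // 4
    chunks = [(i, i + step) for i in range(0, n, step)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # Same result as a single serial sum over range(n)
    print(total == sum(range(n)))  # prints True
```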

Carbon nanotubes would hypothetically kill two birds with one stone. Earlier this year, IBM presented a paper that demonstrated the ability of carbon to act as a transistor across just ten nanometers, half the size of silicon. Further, semiconducting carbon provides a friendlier environment for electron movement. From a computing standpoint, that means data can be transported at higher speeds and that transistors can be switched more easily. This ultimately increases the efficiency of the device by a factor of five to ten.

The issue that has prevented carbon nanotubes from being consistently turned into transistors is that they comprise both semiconducting and metallic material. While semiconducting carbon nanotubes allow electrons to move along them easily, the metallic ones do not. As such, great pains have to be taken to sift out the metallic tubes. Further, like their silicon brethren, carbon nanotubes have to be placed delicately and in a controlled fashion on the chip.

If those obstacles can be overcome, the technology has the potential to surpass silicon as the preferred transistor material. “The motivation to work on carbon nanotube transistors is that at extremely small nanoscale dimensions, they outperform transistors made from any other material,” said Supratik Guha, director of Physical Sciences at IBM Research.

According to him, carbon could outperform conventional silicon by a factor of five. “However,” he said, “there are challenges to address, such as ultra-high purity of the carbon nanotubes and deliberate placement at the nanoscale.”

To address those challenges, IBM came up with a chemical method to attract the semiconducting material. When a substrate consisting of hafnium oxide and silicon oxide is immersed in a liquid solution of carbon nanotubes, the semiconducting tubes attach themselves to the hafnium oxide. As a result, IBM was able to place ten thousand carbon-based transistors onto a wafer using standard semiconductor processes.

Of course, today’s silicon chips harbor transistor counts in the billions, so carbon nanotubes still have a long way to go.

That said, IBM beat previous efforts, which were only able to mount a few hundred working carbon transistors. The advancements from here will come from being able to separate out the metallic from the semiconducting material, a process which continues to be refined. According to Guha, IBM is fairly confident that by the end of the decade it will be able to ensure 99.99 percent purity, reaching a point where carbon may take over as the element of choice for next-generation computing.

Project Denver Within Reach
http://www.hpcwire.com/2011/07/18/project_denver_within_reach/
Mon, 18 Jul 2011

Reporting on information overheard from a number of unnamed sources, Theo Valich claimed today that NVIDIA has begun finalizing design targets for its upcoming Project Denver release.

Although the writer reminds readers to take the news with a grain of salt given the unverified sources, the Project Denver CPU core is “looking to be very much aligned with T40 i.e. ‘Tegra 4’ i.e. Wayne and, according to schedule, Wayne silicon is going to be taped out in the next couple of weeks.” The rumor is that developers will be able to get their first look at the four-ARM-core prototype by Christmas.

If Valich’s sources are correct, December will be the month NVIDIA tapes out the first silicon based on the Denver design, which pairs up to eight custom NVIDIA 64-bit ARM CPU cores with NVIDIA’s GeForce 600-class GPU. He notes that the company has hit some roadblocks on the CPU side of development and is taking a more cautious approach by sticking with its Fermi roots in the design.

He writes that another source close to the development process claimed that the GPU portion will consist of at least 256 cores. Valich goes on to write that this:

“…would put the product on pair with AMD’s Trinity APU which will pair a Bulldozer-Enhanced CPU core with “Northern Islands” VLIW4 architecture and will be the key APU for AMD in 2012. Compute power-wise, NVIDIA doesn’t want to clock it to heavens’ high, but rather to squeeze each IPC (Instruction Per Clock) as possible. Still, it is realistic to expect 2.0-2.5GHz for CPU and similar clock for the GPU part, with memory controller and the rest of the silicon working at a lower rate to keep everything well fed.”

He claims that a December tape-out could position NVIDIA for the blade server market, letting the company pair a robust GPU with its own CPU cores and eliminating the need for Intel or AMD x86 parts in its Tesla line.

Researchers Scale Silicon Wall
http://www.hpcwire.com/2011/05/27/researchers_scale_silicon_wall/
Fri, 27 May 2011

In order to work around some of the performance limitations of silicon at the nanoscale, researchers are looking for ways to improve on existing architectures and engineer new materials to prevent performance degradation. A rising tide of interest and funding has spilled into work to discover high-performance nanoscale materials that will replace silicon transistors in the next decade.

Dr. Bhagawan Sahu at the Microelectronic Research Center in Austin, Texas, is one of several scientists looking for silicon replacements at SWAN (the SouthWest Academy of Nanoelectronics), a research center exploring next-generation nanotransistors.

SWAN is one of four nanoelectronics centers that is funded by the Semiconductor Research Corporation’s Nanoelectronics Research Initiative. This effort is backed by international semiconductor firms, including Intel, Texas Instruments, IBM and others, with vested interest in “safeguarding and going beyond Moore’s Law.”

According to a report today from the Texas Advanced Computing Center, Dr. Sahu and his team have made significant progress in their nanoscale materials research. As Aaron Dubrow reported:

“Today’s smallest semiconductor transistors are about 32 nanometers (nm) long. Dr. Sahu and the SWAN team aim to make 10nm transistors, with a thickness of less than one nanometer, using graphene. Since it was discovered in the mid-2000s, graphene has been lauded as the savior of the semiconductor industry. In 2010, Andre Geim and Konstantin Novoselov, of the University of Manchester, UK, were awarded the Nobel Prize in Physics “for groundbreaking experiments regarding the two-dimensional material.”

Made up of a single layer of graphite, graphene is the thinnest material in the world and possesses electron mobilities (a measure of how fast electrons in a material can move in response to external voltages) higher than silicon. These characteristics are attractive features and have generated tremendous interest from the semiconductor industry. However, as scientists learned more about graphene and proved it could be used as a potential material in transistors, initial excitements gave way to a greater appreciation of the design and fabrication challenges ahead.”